Laurens Moonens

Gameplay programmer

Neural network

Details

Technical

  • C++
  • Windows
  • Built using my own game engine

Project details

  • Goal: build a C++ framework to create, edit and train neural networks, and gain insight into their inner workings by building one from scratch.
  • Application: used to make an AI capable of playing Pac-Man.
  • Multiple training algorithms can be used:
    • backpropagation
    • a genetic evolution algorithm
  • Accompanied by a paper explaining the inner workings of the program in detail.
  • Time scope: 2.5 months

About the neural network

This project involved making a Pac-Man AI using neural networks. A neural network framework was written from scratch in C++: the basic structure of neural networks was implemented, along with a simple editor. These networks can be trained for a specific application using two algorithms: a genetic algorithm and backpropagation.

The genetic algorithm relies solely on trial and error, applying random mutations to the neural networks. After many generations, it produced a working AI, although one with some rough edges. The backpropagation algorithm instead uses training examples to make the network learn. To avoid having to play Pac-Man for hours and record my own moves, I let the existing AI (made with the genetic algorithm) play the game and recorded the moves it made. To increase the quality of the training data, I filtered out useless moves. After training the existing AI further on this improved data, I ended up with an AI that plays the game well and, at times, gets some pretty high scores.
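
As a rough illustration of the genetic algorithm described above, here is a minimal sketch of one mutation-and-selection step. This is not the project's actual code: the Candidate struct, the mutation parameters and the keep-the-best-half scheme are all assumptions made for the example.

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// One member of the population: a flattened copy of a network's weights plus
// the score that network achieved in the game
struct Candidate
{
    std::vector<double> weights;
    double fitness{};
};

// Create a mutated copy of a parent: each weight has a small chance of
// receiving a random nudge
Candidate Mutate(const Candidate& parent, double mutationRate, std::mt19937& rng)
{
    std::uniform_real_distribution<double> chance{ 0.0, 1.0 };
    std::normal_distribution<double> nudge{ 0.0, 0.1 };

    Candidate child = parent;
    for (double& weight : child.weights)
    {
        if (chance(rng) < mutationRate)
        {
            weight += nudge(rng);
        }
    }
    child.fitness = 0.0;    // has to be re-evaluated by letting the network play
    return child;
}

// One generation: sort by fitness, keep the best half and refill the rest of
// the population with mutated copies of the survivors
void NextGeneration(std::vector<Candidate>& population, std::mt19937& rng)
{
    std::sort(population.begin(), population.end(),
        [](const Candidate& a, const Candidate& b) { return a.fitness > b.fitness; });

    const std::size_t survivorCount{ population.size() / 2 };
    for (std::size_t i{ survivorCount }; i < population.size(); ++i)
    {
        population[i] = Mutate(population[i - survivorCount], 0.05, rng);
    }
}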

Code snippet

void NeuralNetwork::FeedForward(const vector<double>& input, vector<double>& output)
{
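    // Note: size() - 1 because the layer presumably holds an extra bias neuron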
    if (input.size() != m_InputLayer.GetNeurons().size() - 1)
    {
        cout << "WARNING: FeedForward input size does NOT match the nr of input neurons!\n";
        return;
    }

    vector<double> layerOutput{};
    vector<double> nextLayerOutput{};

    // Feed forward through first layer
    m_InputLayer.FeedForward(input, layerOutput);

    // Feed forward through hidden layers
    for (unsigned int i{}; i < m_HiddenLayers.size(); ++i)
    {
        m_HiddenLayers[i].FeedForward(layerOutput, nextLayerOutput);
        layerOutput = nextLayerOutput;
    }

    // Feed forward through output layer
    m_OutputLayer.FeedForward(layerOutput, output);
}

void NeuralNetwork::BackPropagate(const vector<double>& expectation)
{
    if (expectation.size() != m_OutputLayer.GetNeurons().size())
    {
        cout << "WARNING: BackPropagate expectation size does NOT match the nr of output neurons!\n";
        return;
    }

    m_Cost = 0.0;
    vector<Neuron>& outputNeurons{ m_OutputLayer.GetNeurons() };

    //Calculate the output's cost and add it to the overall cost.
    for (unsigned int i{}; i < expectation.size(); ++i)
    {
        outputNeurons[i].ResetCost();

        double neuronCost{ 2.0 * (outputNeurons[i].GetOutput() - expectation[i]) };    // Derivative of the networkCost formula ( x^2 -> 2*x )
        outputNeurons[i].AddCost(neuronCost);

        double networkCost{ pow(outputNeurons[i].GetOutput() - expectation[i], 2.0) };
        m_Cost += networkCost;
    }

    vector<Neuron>* prevLayer{ &m_InputLayer.GetNeurons() };

    if (!m_HiddenLayers.empty())
    {
        prevLayer = &m_HiddenLayers.back().GetNeurons();
    }

    //Back propagate through output layer
    m_OutputLayer.BackPropagate(*prevLayer);

    //Back propagate through hidden layers ( from last to first )
    for (int i{ (int)m_HiddenLayers.size() - 1 }; i >= 0; --i)
    {
        if (i == 0)	// If it is the first hidden layer
        {
            m_HiddenLayers[i].BackPropagate(m_InputLayer.GetNeurons());
        }
        else
        {
            m_HiddenLayers[i].BackPropagate(m_HiddenLayers[i - 1].GetNeurons());
        }
    }

    //Get the network's average cost
    m_Cost /= expectation.size();
}

void NeuralNetwork::ApplyDeltaWeights()
{
    //Loop through all output neurons, apply the delta weights and reset them
    for (auto& neuron : m_OutputLayer.GetNeurons())
    {
        for (auto& weight : neuron.GetWeights())
        {
            weight.weight -= weight.deltaweight * m_TrainingRate;
            weight.deltaweight = 0.0;
        }
    }

    //Loop through all hidden neurons, apply the delta weights and reset them
    for (auto& layer : m_HiddenLayers)
    {
        for (auto& neuron : layer.GetNeurons())
        {
            for (auto& weight : neuron.GetWeights())
            {
                weight.weight -= weight.deltaweight * m_TrainingRate;
                weight.deltaweight = 0.0;
            }
        }
    }
}
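
To show how the functions above fit together, here is a rough sketch of a supervised training pass. The TrainingExample struct and the Train function are hypothetical; only FeedForward, BackPropagate and ApplyDeltaWeights come from the snippet. Applying the delta weights after every example gives stochastic gradient descent; they could equally be applied once per batch.

#include <vector>

// Hypothetical helper type: one recorded example is an encoded game state plus
// the move the genetic-algorithm AI chose (e.g. one-hot over the 4 directions)
struct TrainingExample
{
    std::vector<double> gameState;
    std::vector<double> recordedMove;
};

void Train(NeuralNetwork& network, const std::vector<TrainingExample>& trainingSet)
{
    std::vector<double> prediction{};

    for (const TrainingExample& example : trainingSet)
    {
        network.FeedForward(example.gameState, prediction);   // run the inputs through the network
        network.BackPropagate(example.recordedMove);          // accumulate the delta weights from the output error
        network.ApplyDeltaWeights();                          // apply them, scaled by the training rate
    }
}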