Neural Net Fail

I recently tried to make my first neural network. I was planning on using it for some simple character recognition: the user turns lights in a grid on/off and then hits space to check whether they drew a one (it only recognizes ones). I finished the first truly testable version of the program (right now I'm training it) and thought it had turned out alright. I was able to train it to recognize ones, but the problem is that its output is the same no matter how the lights are arranged. I tested that like this:

    vector<float> Temp;
    Temp.resize(25);
    for ( int n = 0; n < 1000; n++ )
    {
        for ( int i = 0; i < 25; i++ )
            Temp[i] = rand()%2;
        cout << Net->Fire(Temp) << endl; // Net is a pointer to a neural net class and Fire() returns a float
    }
    exit(33);


I'm not sure exactly where in the code the problem originates, so I can't post just the part that needs fixing. Instead, I uploaded all my code online (http://www.mediafire.com/download/q8qociiaq9rdidv/CR.rar) for anyone to look at, and I'm hoping someone who has run into a similar problem can tell me what caused it for them. Thanks in advance.

Edit:
I made my net based on this video:
http://www.youtube.com/watch?v=zpykfC4VnpM
Your test is running on an untrained/randomized neural net, and the output for different inputs is not exactly the same, so I'm not sure there is a problem.

Removing the test, it looks like it's learning, but very slowly. I'm not sure if it matters that you use targets of 0.6 and 0.8 instead of the 0 and 1 that I have seen more commonly. I guess you could increase the learning rate by making Aida larger, but that might make it forget old patterns too quickly, so it is something you'll have to experiment with.

Training the neural net manually is going to take a very long time. I recommend that you automate the training process somehow.
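
Something like the loop below is what I mean by automating it. This is only a minimal, self-contained sketch: it stands in a single sigmoid neuron for your Net class (your Fire/training functions would take its place), hard-codes a 5x5 "one" pattern as the positive example, uses random grids as negatives, and has a learningRate variable playing the role of Aida. None of the names come from your code.

    #include <cmath>
    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <vector>

    float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    int main()
    {
        std::srand(static_cast<unsigned>(std::time(0)));

        // 5x5 "one": a vertical bar down the middle column.
        std::vector<float> one(25, 0.0f);
        for (int row = 0; row < 5; row++)
            one[row * 5 + 2] = 1.0f;

        std::vector<float> weights(25, 0.0f);
        float bias = 0.0f;
        const float learningRate = 0.5f; // plays the role of Aida

        for (int epoch = 0; epoch < 2000; epoch++)
        {
            // Alternate between the "one" pattern and a random negative grid.
            std::vector<float> input(25);
            float target;
            if (epoch % 2 == 0)
            {
                input = one;
                target = 1.0f;
            }
            else
            {
                for (int i = 0; i < 25; i++)
                    input[i] = std::rand() % 2;
                target = 0.0f;
            }

            // Forward pass through the single stand-in neuron.
            float sum = bias;
            for (int i = 0; i < 25; i++)
                sum += weights[i] * input[i];
            float output = Sigmoid(sum);

            // Gradient-descent update (squared error, sigmoid derivative).
            float delta = (target - output) * output * (1.0f - output);
            for (int i = 0; i < 25; i++)
                weights[i] += learningRate * delta * input[i];
            bias += learningRate * delta;
        }

        // After training, the "one" pattern should score close to 1.
        float sum = bias;
        for (int i = 0; i < 25; i++)
            sum += weights[i] * one[i];
        std::cout << Sigmoid(sum) << std::endl;
        return 0;
    }

Run like that you get thousands of training presentations per second instead of entering them by hand, and it's easy to swap the stand-in neuron for calls into your own Net class.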
Thanks for the advice. I used 0.6 and 0.8 because they were above and below the threshold, but I could try using 0 and 1. I've also realized that since I'm applying the sigmoid function to my input, instead of sending a 0 or a 1 to the hidden layer I'm sending about a 0.5 or a 0.7, so I'm going to fix that as well. I think I can find a way to train it automatically, so I'll try that too.
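
For reference, here is a quick check of what the standard logistic sigmoid does to raw 0/1 inputs (assuming that is the squashing function your code uses), which matches the 0.5 and roughly 0.7 values mentioned above:

    #include <cmath>
    #include <iostream>

    int main()
    {
        // Standard logistic sigmoid applied to the two possible grid values.
        float zero = 1.0f / (1.0f + std::exp(-0.0f)); // 0.5
        float one  = 1.0f / (1.0f + std::exp(-1.0f)); // about 0.73
        std::cout << zero << " " << one << std::endl;
        return 0;
    }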
For the training data: http://yann.lecun.com/exdb/mnist/ (handwritten digits)
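
If you go with MNIST, note that the files on that page use the IDX format it describes: a few big-endian 32-bit header integers followed by raw pixel bytes. A rough sketch of reading the image file might look like this (the file name is just whatever you unpack from the site, and scaling pixels to 0..1 is an assumption about what your net expects):

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <vector>

    // MNIST headers are stored as big-endian 32-bit integers.
    std::uint32_t ReadBigEndian32(std::ifstream& in)
    {
        unsigned char b[4];
        in.read(reinterpret_cast<char*>(b), 4);
        return (std::uint32_t(b[0]) << 24) | (std::uint32_t(b[1]) << 16) |
               (std::uint32_t(b[2]) << 8) | std::uint32_t(b[3]);
    }

    int main()
    {
        // File name is whatever you unpacked from the MNIST page.
        std::ifstream in("train-images-idx3-ubyte", std::ios::binary);
        if (!in)
        {
            std::cerr << "could not open the image file" << std::endl;
            return 1;
        }

        std::uint32_t magic = ReadBigEndian32(in); // 2051 for the image files
        std::uint32_t count = ReadBigEndian32(in);
        std::uint32_t rows = ReadBigEndian32(in);
        std::uint32_t cols = ReadBigEndian32(in);
        std::cout << magic << " " << count << " " << rows << "x" << cols << std::endl;

        // Read the first image and scale the 0..255 pixels to 0..1 floats.
        std::vector<unsigned char> raw(rows * cols);
        in.read(reinterpret_cast<char*>(&raw[0]), static_cast<std::streamsize>(raw.size()));
        std::vector<float> image(raw.size());
        for (std::size_t i = 0; i < raw.size(); i++)
            image[i] = raw[i] / 255.0f;

        return 0;
    }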

> Net is a pointer (...)
Why? You are leaking memory all over the place.
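
(For reference, the two usual fixes look something like the sketch below; NeuralNet here is just a stand-in name for whatever the uploaded code actually calls the class.)

    #include <memory>
    #include <vector>

    // Stand-in for the real net class from the uploaded code.
    struct NeuralNet
    {
        float Fire(const std::vector<float>&) { return 0.0f; } // placeholder
    };

    int main()
    {
        std::vector<float> input(25, 0.0f);

        // Option 1: a plain automatic object, destroyed when it goes out of scope.
        NeuralNet net;
        net.Fire(input);

        // Option 2: if the net really has to be deleted and rebuilt at runtime,
        // a unique_ptr still frees the old one without a manual delete.
        std::unique_ptr<NeuralNet> netPtr(new NeuralNet);
        netPtr->Fire(input);
        netPtr.reset(new NeuralNet); // the previous net is released here

        return 0;
    }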
The reason Net is a pointer is that when I first started working on the program I had ideas that would have required it to be deleted and rebuilt, but those ideas were scrapped (similarly with the layers), and I've changed that now. Also, I have gotten the program to behave the way I want by adjusting some of the net's parameters and fixing some strange behaviors.