Simple 2-D Neural Network for Image Processing

Excuse the long post. This project matters a lot to me, and I will appreciate any help.

I am researching 2-D neural networks and I really like the topic!

I want to get the most out of it by implementing a simple C++ image-processing Windows application in Visual Studio 2015. The task itself is flexible: maybe digit recognition, or maybe just identifying the dominant color in an image (whichever is easier for learning 2-D neural networks, their implementations, and their approaches).

I have never implemented a neural network before, let alone a 2-D one. However, I have read a lot and I understand the idea behind it.

I feel lost about how and where to start with the 2-D case, and I would really appreciate any guidance, discussion, and/or ideas for getting started.

I researched implementations online but couldn't find a 2-D guide for C++. I do now know that I need to implement a Neuron class, a Layer class, and a Network class, plus a training function that feeds samples forward through the network and a test function that tests an image; a rough sketch follows.
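
For anyone following along, here is a minimal, compile-only sketch of that structure, assuming a 2-D layer is stored as a grid (vector of rows) of neurons. All names are illustrative, not a fixed design, and the learning logic is left to be filled in:

#include <vector>

// Minimal skeleton of the three classes described above, assuming a
// 2-D layer is a grid (rows x columns) of neurons.
class Neuron
{
public:
    void   setOutputValue(double v) { m_outputValue = v; }
    double getOutputValue() const   { return m_outputValue; }
private:
    double m_outputValue = 0.0; // plus per-connection weights, gradient, etc.
};

typedef std::vector<std::vector<Neuron>> Layer; // one 2-D grid of neurons

class Net
{
public:
    void feedForward(const std::vector<std::vector<double>> &inputVals); // forward pass
    void backProp(const std::vector<std::vector<double>> &targetVals);   // weight updates
    void getResults(std::vector<double> &resultVals) const;              // read the outputs
private:
    std::vector<Layer> m_layers; // m_layers[layer][row][col]
};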

I went through these:

1-D neural network in C# (library): http://www.codeproject.com/Articles/16447/Neural-Networks-on-C

1-D neural network in C# (simple OCR application): http://www.codeproject.com/Articles/11285/Neural-Network-OCR

Technical paper about artificial neural networks: http://www.codeproject.com/Articles/15304/Unicode-Optical-Character-Recognition

2-D neural network in C++ and Python for Linux: https://github.com/davidrmiller/neural2d#2D

1-D neural network implementations and tutorials: https://takinginitiative.wordpress.com/2008/04/23/basic-neural-network-tutorial-c-implementation-and-source-code/

1-D neural network course in Python (digit recognition): https://www.coursera.org/course/neuralnets

Now I am absorbing this: http://stats.stackexchange.com/questions/39037/how-does-neural-network-recognise-images

Concerns:

- I have prepared my base code (.cpp and header files for each class; the classes/functions are empty at the moment). Is this the right way to begin?
- Does the input layer need exactly as many neurons as the input image has pixels (one pixel per input neuron)?
- Do I need a 2-D output layer, or is a flat 5-neuron layer enough for the 5 possible colors (black, white, red, green, blue), and a 10-neuron layer for the 10 possible digits in the other application? (See the readout sketch after this list.)
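
For the third concern, a common convention (an assumption here, not the only option) is a flat output layer with one neuron per class, trained against one-hot targets; the predicted class is then simply the index of the largest output:

#include <vector>
#include <algorithm>

// Hypothetical readout for a flat output layer with one neuron per
// class (5 for the colors, 10 for the digits): the prediction is the
// index of the largest output value (argmax).
int predictedClass(const std::vector<double> &outputs)
{
    return static_cast<int>(
        std::max_element(outputs.begin(), outputs.end()) - outputs.begin());
}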

Any reply is appreciated.
closed account (oGN8b7Xj)
I may be able to help, but I need a small initial test case first. It is difficult to do the whole thing in one go.
Hi,

I am sorry for my late reply; the notification email was in my spam folder. I built a 2-D neural network in C++, more or less from scratch: I followed some 1-D implementations and converted/extended most of the code.

I am now at the phase where I feed the network its training data (one sample for now, for testing), but I am getting wrong results: a weird error value (something like 2e+234324, when it should range between 0 and 1) and extremely large outputs (which should only be 0 or 1). And on that same training sample, the output stays the same even on the 1000th pass.

I am using back-propagation with the hyperbolic tangent (tanh) as the transfer function.
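
For reference, a minimal sketch of tanh and its derivative, assuming the derivative is evaluated on the already-activated output y = tanh(x), as the code later in this thread appears to do:

#include <cmath>

// tanh transfer function and its derivative. Since
// d/dx tanh(x) = 1 - tanh(x)^2, the derivative can be computed
// directly from the activated output y = tanh(x). Note that y (and
// therefore every neuron output) must stay within (-1, 1).
double transferFunction(double x)           { return std::tanh(x); }
double transferFunctionDerivative(double y) { return 1.0 - y * y; }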

Can you help at this point? I can explain the exact scenario/results in more detail if needed.

It's a time-consuming debugging phase, though. On each execution new random weights are created, so I plan to follow the calculations for each neuron by hand on paper, then step through the code (step into / step over) and check exactly what is happening in every variable; see the seeding note below.
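
One small suggestion that makes this plan workable (assuming the weights are drawn with rand(); adapt if another generator is used): fix the seed, so every run produces identical weights and the paper calculation only has to be done once:

#include <cstdlib>

int main()
{
    // A fixed seed makes the "random" initial weights identical on
    // every run, so a hand-checked pass stays valid across sessions.
    std::srand(42); // any constant works; call once, before building the net

    // ... construct the network and train as usual ...
}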

Thanks in advance
Can you show your back propagation function?
Hi naraku, I hope you are doing well.

Sure. Here is my backProb() along with all the functions I call from it.

Also included below are the network dump and the training output for a test case with a [3][3] -> [2][2] -> [1][2] network (a 3x3 input layer, a 2x2 hidden layer, and a 1x2 output layer).
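
For clarity, that topology could be written down as {rows, columns} per layer (illustrative only; bias neurons are appended on top of these counts):

#include <vector>
#include <utility>

// The test network described above: 3x3 input, 2x2 hidden, 1x2 output.
const std::vector<std::pair<int, int>> topology = { {3, 3}, {2, 2}, {1, 2} };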

Thank you, guys; this is really great, and I believe it will turn into solid work if we tackle it together.


Global variables at the moment:

double Neuron::eta = 0.001; // can and should be adjusted
double Neuron::alpha = 0.99; // can and should be adjusted 
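
For comparison, typical 1-D back-propagation tutorials use eta around 0.1-0.2 and alpha around 0.5; since momentum amplifies accumulated steps by up to a factor of 1/(1 - alpha), alpha = 0.99 allows up to 100x amplification, which may be related to the runaway outputs shown below. A more conservative starting point (illustrative values, not a verified fix):

double Neuron::eta = 0.15;  // learning rate, typically 0.1 - 0.3
double Neuron::alpha = 0.5; // momentum, typically around 0.5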


void Net::backProb(const vector<vector<double>> &targetVals) // note: the output layer is a single row ([1][N])
{
	// Calculate the overall network error:
	// RMS = sqrt( mean( (target - output)^2 ) )
	Layer &outputLayer = m_layers.back();
	this->m_error = 0.0;

	for (size_t i = 0; i < outputLayer[0].size(); i++)
	{
		// index zero because the output layer is 1-D (a single row)
		double delta = targetVals[0][i] - outputLayer[0][i].getOutputValue();
		this->m_error += delta * delta;
	}
	this->m_error /= outputLayer[0].size(); // mean squared error
	this->m_error = sqrt(this->m_error);    // root mean square (RMS)

	// Keep a running (recent) average of the RMS error
	this->m_recentAverageError
		= (this->m_recentAverageError * this->m_recentAverageSmoothingFactor + this->m_error)
		/ (this->m_recentAverageSmoothingFactor + 1.0);

	// Calculate output layer gradients
	for (size_t i = 0; i < outputLayer[0].size(); i++)
	{
		outputLayer[0][i].calOutputGradients(targetVals[0][i]);
	}

	// Calculate hidden layer gradients, from the last hidden layer back to the first
	for (size_t LayerNum = this->m_layers.size() - 2; LayerNum > 0; LayerNum--)
	{
		Layer &hiddenLayer = this->m_layers[LayerNum];
		Layer &nextLayer = this->m_layers[LayerNum + 1];

		for (size_t i = 0; i < hiddenLayer.size(); i++)
			for (size_t j = 0; j < hiddenLayer[i].size(); j++)
				hiddenLayer[i][j].calHiddenGradients(nextLayer);
	}

	// Update the connection weights of every layer, from the output back to the first hidden layer
	for (size_t LayerNum = this->m_layers.size() - 1; LayerNum > 0; LayerNum--)
	{
		Layer &currentLayer = this->m_layers[LayerNum];
		Layer &prevLayer = this->m_layers[LayerNum - 1];

		for (size_t i = 0; i < currentLayer.size(); i++)
			for (size_t j = 0; j < currentLayer[i].size(); j++)
			{
				// A bias neuron has no incoming weights to update. Use
				// continue rather than break, so any neuron that comes
				// after a bias in the same row is still updated.
				if (currentLayer[i][j].getDescription() == "Bias")
					continue;
				currentLayer[i][j].updateInputWeights(prevLayer);
			}
	}
}


void Neuron::calOutputGradients(double targetVal) // computes the error delta and multiplies it by the derivative of the transfer function
{
	double delta = targetVal - this->m_OutputValue;
	this->m_gradient = delta * this->transferFunctionDerivative(this->m_OutputValue);
}
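
As a quick sanity check of that formula (assuming transferFunctionDerivative(y) = 1 - y*y, the usual tanh derivative when y is stored post-activation):

// Worked example:
//   targetVal  = 1.0, m_OutputValue = 0.5
//   delta      = 1.0 - 0.5       = 0.5
//   derivative = 1.0 - 0.5 * 0.5 = 0.75
//   m_gradient = 0.5 * 0.75      = 0.375
// (always finite and modest, since |tanh| < 1)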


void Neuron::calHiddenGradients(Layer &nextLayer) // like the output gradients, but a hidden neuron has no target value, so its error term is the weighted sum of the next layer's gradients (sumDOW)
{
	double dow = sumDOW(nextLayer);
	this->m_gradient = dow * this->transferFunctionDerivative(this->m_OutputValue);
}
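
sumDOW() is not shown in the post; for completeness, here is a hedged sketch of what it typically computes (the sum of gradient contributions flowing back through this neuron's outgoing weights), written against the same member names used above (m_OutputWeights, m_myIndex, m_gradient) and therefore an assumption about the surrounding class:

// Sketch only: sums weight * gradient over all next-layer neurons
// this neuron feeds, skipping bias neurons (no error flows back
// through a bias).
double Neuron::sumDOW(Layer &nextLayer)
{
	double sum = 0.0;
	for (size_t i = 0; i < nextLayer.size(); i++)
		for (size_t j = 0; j < nextLayer[i].size(); j++)
		{
			Neuron &n = nextLayer[i][j];
			if (n.getDescription() == "Bias")
				continue;
			sum += m_OutputWeights[n.m_myIndex].weight * n.m_gradient;
		}
	return sum;
}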


void Neuron::updateInputWeights(Layer &prevLayer) // updates the weights feeding this neuron, which are stored in the previous layer's neurons
{
	for (size_t i = 0; i < prevLayer.size(); i++)
	{
		for (size_t j = 0; j < prevLayer[i].size(); j++)
		{
			Neuron &currentIterationNeuron = prevLayer[i][j];
			double oldDeltaWeight = currentIterationNeuron.m_OutputWeights[this->m_myIndex].deltaWeight;
			double newDeltaWeight =
				eta                                       // overall net learning rate
				* currentIterationNeuron.getOutputValue() // the source neuron's output
				* this->m_gradient                        // scaled by this neuron's gradient
				+ alpha                                   // plus momentum: a fraction
				* oldDeltaWeight;                         // of the previous delta weight

			currentIterationNeuron.m_OutputWeights[this->m_myIndex].deltaWeight = newDeltaWeight;
			currentIterationNeuron.m_OutputWeights[this->m_myIndex].weight += newDeltaWeight;
		}
	}
}
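
One cheap way to shorten the debugging loop described earlier (using the getOutputValue() accessor from the snippets above): assert that every output stays finite and inside tanh's (-1, 1) range, so divergence is caught on the first bad pass rather than at pass 9001:

#include <cassert>
#include <cmath>

// Sanity check for the debugging phase: with a tanh transfer
// function every neuron output must lie within [-1, 1].
void checkNeuronOutput(Neuron &n)
{
	assert(std::isfinite(n.getOutputValue()));
	assert(std::fabs(n.getOutputValue()) <= 1.0);
}

The dump below appears to list, for each neuron, its type, its index within its row, its number of outgoing connections, and the initial weight of each connection; the training log follows it.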




Type    ID   Cnx(s):    1st     2nd     3rd     ...etc
------------------------------------------------------
Regular 0       4       0.3     0.9     0.9     0.1
Regular 1       4       0.7     0.8     1       0.6
Regular 2       4       0.1     0.8     0.7     0.5
Regular 3       4       0.5     1       0.8     0.2
Regular 4       4       0.6     0.04    0.3     0.6
Regular 5       4       0.5     0.7     0.4     1
Regular 6       4       0.4     0.002   0.3     0.1
Regular 7       4       0.4     0.7     0.1     0.8
Regular 8       4       0.3     0.5     0.02    0.8
Bias    9       4       0.5     0.4     0.4     0.9
Regular 0       2       0.6     0.03
Regular 1       2       0.8     0.01
Regular 2       2       0.2     0.7
Regular 3       2       0.4     0.6
Bias    4       2       0.7     0.2
Regular 0       0
Regular 1       0

Pass  1         Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  1001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  2001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  3001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  4001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  5001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  6001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  7001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  8001      Error: 1e+02    Output Neurons: -6e+66  -6e+66
Pass  9001      Error: 1e+02    Output Neurons: -6e+66  -6e+66