The program that we chose to expand upon was Self-Organizing Feature Maps by Bashir Magomedov. This was in part due to Kuido having extensive knowledge of C#, the language in which the program was written. It also proved to be a good basis to expand upon, as it already had limited SOM functionality and was released under the GPLv3 licence.
The program was written in C# using Visual Studio 2010. The previous program used a 2-dimensional coordinate system; the updated version migrates to a 3-dimensional coordinate system. However, as no free 3-dimensional visualization component for C# could be found, all graphs will still be presented in 2 dimensions.
Four different neighborhood functions are implemented. A neighborhood function calculates which nodes are within the radius of the BMU and how much their weights should be changed.
The discrete function uses Euclidean distance to calculate its neighborhood. The resulting distance is compared to a constant: h(d) = 1 if d ≤ b and h(d) = 0 otherwise, where b is the selected constant. A graph describing the function is shown on Figure 5.
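The discrete neighborhood described above could be sketched as follows. The method and parameter names are assumptions for illustration and do not come from the original program; b is the selected cutoff constant.

```csharp
using System;

class DiscreteNeighborhood
{
    // Returns 1.0 if the node lies within Euclidean distance b of the BMU,
    // and 0.0 otherwise (the node is not updated at all).
    public static double Influence(double[] node, double[] bmu, double b)
    {
        double distSq = 0.0;
        for (int i = 0; i < node.Length; i++)
        {
            double diff = node[i] - bmu[i];
            distSq += diff * diff;
        }
        return Math.Sqrt(distSq) <= b ? 1.0 : 0.0;
    }

    static void Main()
    {
        double[] bmu = { 0.0, 0.0, 0.0 };
        Console.WriteLine(Influence(new double[] { 1.0, 1.0, 1.0 }, bmu, 2.0)); // inside the radius
        Console.WriteLine(Influence(new double[] { 3.0, 0.0, 0.0 }, bmu, 2.0)); // outside the radius
    }
}
```

A node either receives the full weight update or none, which is what produces the step shape seen in Figure 5.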
Figure 5- Graph of a discrete neighborhood function
The Gauss function is defined as h(d) = α · exp(−d² / (2σ²)), where α is unity, d is the Euclidean distance of the neuron from the position of the winner neuron, and σ is a measure of the width of the bell shape. A graph describing the function is shown on Figure 6.
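With α equal to unity, the Gauss neighborhood reduces to a single expression. A minimal sketch (names assumed, not taken from the original program):

```csharp
using System;

class GaussNeighborhood
{
    // d is the Euclidean distance from the winner neuron, sigma is the
    // width of the bell shape; the leading factor alpha is unity.
    public static double Influence(double d, double sigma)
    {
        return Math.Exp(-(d * d) / (2.0 * sigma * sigma));
    }

    static void Main()
    {
        Console.WriteLine(Influence(0.0, 1.0)); // the BMU itself receives full influence
        Console.WriteLine(Influence(3.0, 1.0)); // distant nodes receive influence near 0
    }
}
```

Unlike the discrete function, the influence decays smoothly with distance and never cuts off abruptly.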
Figure 6 - Graph of a Gauss neighborhood function 
The Mexican hat function is defined as h(d) = α · (1 − d²/σ²) · exp(−d² / (2σ²)), where α is unity, d is the Euclidean distance of the neuron from the position of the winner neuron, and σ is a measure of the width of the bell shape. A graph describing the function is shown on Figure 7.
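The Mexican hat formula above could be sketched like this (names are assumptions for illustration). Note that for d > σ the influence becomes negative, i.e. nodes in that ring are pushed away from the input rather than toward it:

```csharp
using System;

class MexicanHatNeighborhood
{
    // d is the Euclidean distance from the winner neuron, sigma is the
    // width of the bell shape; the leading factor alpha is unity.
    public static double Influence(double d, double sigma)
    {
        double r = (d * d) / (sigma * sigma);
        return (1.0 - r) * Math.Exp(-r / 2.0);
    }

    static void Main()
    {
        Console.WriteLine(Influence(0.0, 1.0)); // full influence at the BMU
        Console.WriteLine(Influence(2.0, 1.0)); // negative (inhibitory) side lobe
    }
}
```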
Figure 7 - Graph of a Mexican hat neighborhood function
The French hat neighborhood is somewhat similar to the discrete neighborhood. It also uses a predefined distance and Euclidean distance as the distance measurement. If the neuron is within the predefined distance from the BMU, it is automatically regarded the same as the BMU. If it is within 3 lengths of the predefined distance, the value drops to −1/3. After that, the value is 0.
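The three-step shape described above could be sketched as follows. The names are assumptions, and the middle value −1/3 is the conventional value of the French hat function:

```csharp
using System;

class FrenchHatNeighborhood
{
    // d is the Euclidean distance from the BMU, a is the predefined distance.
    public static double Influence(double d, double a)
    {
        if (d <= a) return 1.0;               // regarded the same as the BMU
        if (d <= 3.0 * a) return -1.0 / 3.0;  // inhibitory ring
        return 0.0;                           // no influence beyond 3 lengths
    }

    static void Main()
    {
        Console.WriteLine(Influence(0.5, 1.0)); // inside the predefined distance
        Console.WriteLine(Influence(2.0, 1.0)); // within 3 lengths
        Console.WriteLine(Influence(4.0, 1.0)); // beyond 3 lengths
    }
}
```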
There are a few different functions for calculating the learning rate, which is a function of time. The one used in this program is the simplest: the learning rate decreases linearly with the number of iterations, the decrease being determined by dividing the iteration number by a certain predefined value.
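One common form of such a linear decay is sketched below; the starting rate, the predefined total, and the method names are assumptions, not values from the original program:

```csharp
using System;

class LearningRate
{
    // Dividing the iteration number by the predefined total gives the
    // fraction by which the starting rate has decayed so far.
    public static double Rate(int iteration, int totalIterations, double startRate)
    {
        return startRate * (1.0 - (double)iteration / totalIterations);
    }

    static void Main()
    {
        Console.WriteLine(Rate(0, 100, 0.5));   // full rate at the first iteration
        Console.WriteLine(Rate(50, 100, 0.5));  // half the rate midway through
        Console.WriteLine(Rate(100, 100, 0.5)); // no further learning at the end
    }
}
```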
Delta is the average change of the node weights. Every iteration, each node's weight changes are averaged; these averages are then summed and divided by the total number of nodes. This value is used to determine whether further learning is necessary, meaning that the algorithm runs until the changes in weights become small and insignificant.
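The two-stage averaging described above can be sketched as follows (the data layout and names are assumptions for illustration):

```csharp
using System;

class DeltaCheck
{
    // weightChanges[node][weight] holds the absolute change of each weight
    // of each node during one iteration.
    public static double Delta(double[][] weightChanges)
    {
        double sumOfAverages = 0.0;
        foreach (double[] nodeChanges in weightChanges)
        {
            double sum = 0.0;
            foreach (double c in nodeChanges) sum += c;
            sumOfAverages += sum / nodeChanges.Length; // average change per node
        }
        return sumOfAverages / weightChanges.Length;   // averaged over all nodes
    }

    static void Main()
    {
        double[][] changes =
        {
            new double[] { 0.2, 0.4 }, // this node's average change is 0.3
            new double[] { 0.1, 0.1 }, // this node's average change is 0.1
        };
        // Learning would continue while delta stays above some small threshold.
        Console.WriteLine(Delta(changes));
    }
}
```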