
4 Implementation


The program that we chose to expand upon was Self-Organizing Feature Maps by Bashir Magomedov [8]. This was in part due to Kuido having extensive knowledge of C#, the language in which the program is written. It also proved to be a good basis to expand upon, as it already had limited SOM functionality and was released under the GPL3 licence.

The program was written in C# using Visual Studio 2010. The previous program used a 2-dimensional coordinate system [8]. The updated version will migrate to a 3-dimensional coordinate system. However, as no free 3-dimensional visualization component for C# could be found, all the graphs will still be presented in 2 dimensions.

Four different neighborhood functions are implemented. A neighborhood function calculates which nodes are within the radius of the BMU and how much their weights should be changed.
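As a rough illustration of how a neighborhood value is used, the sketch below shows the standard SOM weight update for a single node; the class, method and parameter names are illustrative and are not taken from the program.

static class SomUpdate
{
    // Standard SOM update rule: every weight of a node is moved towards the
    // input vector, scaled by the learning rate and by the neighborhood value
    // computed for that node relative to the best matching unit (BMU).
    public static void UpdateNode(double[] weights, double[] input,
                                  double learningRate, double neighborhood)
    {
        for (int i = 0; i < weights.Length; i++)
            weights[i] += learningRate * neighborhood * (input[i] - weights[i]);
    }
}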

Discrete function

The discrete function uses Euclidean distance to calculate its neighborhood. The resulting distance $d$ to the BMU is compared to a constant, and the neighborhood value is calculated as $h(d) = 1$ if $d \le b$ and $h(d) = 0$ otherwise, where $b$ is the selected constant. A graph describing the function is shown in Figure 5.



Figure 5 - Graph of a discrete neighborhood function
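A minimal sketch of such a discrete neighborhood is given below; the class, method and parameter names are illustrative and do not come from the actual program.

using System;

static class DiscreteNeighborhood
{
    // Returns 1 if the node lies within radius b of the BMU, otherwise 0.
    public static double Value(double[] node, double[] bmu, double b)
    {
        // Euclidean distance between the node and the best matching unit
        double sum = 0.0;
        for (int i = 0; i < node.Length; i++)
        {
            double d = node[i] - bmu[i];
            sum += d * d;
        }
        return Math.Sqrt(sum) <= b ? 1.0 : 0.0;
    }
}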



Gauss function

The Gauss function is defined as $h(r) = h_0 \exp\left(-\frac{\lVert r - r_c \rVert^2}{2\sigma^2}\right)$, where $h_0$ is unity, $r_c$ is the position of the winner neuron and $\sigma$ is a measure of the width of the bell shape. A graph describing the function is shown in Figure 6.

Figure 6 - Graph of a Gauss neighborhood function [3]
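A minimal sketch of the Gaussian neighborhood, under the assumption that the peak value $h_0$ is 1; the names are illustrative, not taken from the program.

using System;

static class GaussNeighborhood
{
    // h = exp(-d^2 / (2 * sigma^2)), where d is the Euclidean distance to the
    // BMU and sigma controls the width of the bell shape (peak value h0 = 1).
    public static double Value(double[] node, double[] bmu, double sigma)
    {
        double distSq = 0.0;
        for (int i = 0; i < node.Length; i++)
        {
            double d = node[i] - bmu[i];
            distSq += d * d;
        }
        return Math.Exp(-distSq / (2.0 * sigma * sigma));
    }
}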



Mexican hat

The Mexican hat function is defined as $h(r) = h_0 \left(1 - \frac{\lVert r - r_c \rVert^2}{\sigma^2}\right) \exp\left(-\frac{\lVert r - r_c \rVert^2}{2\sigma^2}\right)$, where $h_0$ is unity, $r_c$ is the position of the winner neuron and $\sigma$ is a measure of the width of the bell shape. A graph describing the function is shown in Figure 7.

Figure 7 - Graph of a Mexican hat neighborhood function [4]
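A minimal sketch of the Mexican hat neighborhood under the same assumption that $h_0 = 1$; the names are illustrative, not taken from the program.

using System;

static class MexicanHatNeighborhood
{
    // h = (1 - d^2 / sigma^2) * exp(-d^2 / (2 * sigma^2)); positive near the
    // BMU, slightly negative further away, approaching 0 at large distances.
    public static double Value(double[] node, double[] bmu, double sigma)
    {
        double distSq = 0.0;
        for (int i = 0; i < node.Length; i++)
        {
            double d = node[i] - bmu[i];
            distSq += d * d;
        }
        return (1.0 - distSq / (sigma * sigma))
               * Math.Exp(-distSq / (2.0 * sigma * sigma));
    }
}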



French (Bergere) hat

The French hat neighborhood is somewhat similar to the discrete neighborhood. It also uses a predefined distance and Euclidean distance as the distance measurement. If the neuron is within the predefined distance of the BMU, it is automatically regarded the same as the BMU. If it is within 3 lengths of the predefined distance, the value drops to $-\frac{1}{3}$. After that, the value is 0.
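A minimal sketch of this French hat neighborhood, assuming the value in the middle band is -1/3 as in the standard definition of the function; the names are illustrative, not taken from the program.

using System;

static class FrenchHatNeighborhood
{
    // 1 within distance a of the BMU, -1/3 between a and 3a (assumed from the
    // standard French hat definition), and 0 beyond 3a.
    public static double Value(double[] node, double[] bmu, double a)
    {
        double sum = 0.0;
        for (int i = 0; i < node.Length; i++)
        {
            double d = node[i] - bmu[i];
            sum += d * d;
        }
        double distance = Math.Sqrt(sum);

        if (distance <= a) return 1.0;
        if (distance <= 3.0 * a) return -1.0 / 3.0;
        return 0.0;
    }
}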



Learning rate

There are a few different functions for calculating the learning rate, which is a function of time. The one used in this program is the simplest: the learning rate decreases as the number of iterations grows. It is a linear function, calculated by dividing the iteration number by a certain predefined value.
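A minimal sketch of such a linearly decaying learning rate, assuming the predefined value is the total number of iterations and that the rate decays from an initial value towards zero; the names and the exact formula are illustrative, not taken from the program.

static class LearningRate
{
    // Decays linearly from initialRate towards 0: the iteration number is
    // divided by the predefined total number of iterations and the resulting
    // fraction is subtracted from 1.
    public static double Value(int iteration, int totalIterations, double initialRate)
    {
        return initialRate * (1.0 - (double)iteration / totalIterations);
    }
}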



Delta

Delta is the average change of the node weights. In every iteration, the weight changes of each node are averaged; these per-node averages are then summed and divided by the total number of nodes. The result is used to determine whether further learning is necessary, meaning that the algorithm runs until the changes in the weights become small and insignificant.
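A minimal sketch of how such a delta could be computed and used as a stopping criterion; the use of absolute values and the threshold parameter are assumptions, and the names are illustrative rather than taken from the program.

using System;

static class Delta
{
    // weightChanges[n][w] holds the change of weight w of node n in the
    // current iteration.
    public static double Average(double[][] weightChanges)
    {
        double total = 0.0;
        foreach (double[] nodeChanges in weightChanges)
        {
            // Average the weight changes of a single node (using magnitudes
            // is an assumption; signed changes could also be averaged).
            double nodeAverage = 0.0;
            foreach (double change in nodeChanges)
                nodeAverage += Math.Abs(change);
            nodeAverage /= nodeChanges.Length;
            total += nodeAverage;
        }
        // Sum of per-node averages divided by the total number of nodes.
        return total / weightChanges.Length;
    }

    // Learning continues until delta becomes small and insignificant.
    public static bool ShouldStop(double delta, double threshold)
    {
        return delta < threshold;
    }
}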

