Self Organizing Maps, or SOM, were created in 1982 by Teuvo Kohonen, a professor of the Academy of Finland, and provide a way to represent multidimensional data (vectors) in lower-dimensional spaces, usually 2D. In general, techniques that reduce the dimensionality of vectors are data compression techniques, also known as Vector Quantization. One important fact about Kohonen's technique is that we also obtain some conservation of the topological properties of the original data.

A typical example to see SOM at work is based on the projection of colors (associated with 3D vectors built, for example, from the RGB components) onto a 2D space. The next figure shows a SOM trained to recognize the 8 colors shown on the left. In the obtained 2D representation, similar colors are located in adjacent regions.

As a machine learning algorithm, SOM falls into the smaller bag of unsupervised methods, which means we have no feedback about a target to reach; instead, it provides a distribution of the vectors according to their similarities.

From a technical point of view, SOM considers a network of nodes in a similar way to artificial neural networks. The SOM algorithm has a two-layer network architecture: a layer of learning nodes, for which we care about the geometrical-topological relations between them and which will finally contain the information about the resulting representation, and a layer of input nodes, where the original vectors are fed to the network during the training process. There are connections between all the nodes of both layers. The next figure shows a possible 2D architecture for SOM training.

As in other similar models, the idea of the algorithm is to find the correct weights for these connections, so that the geometrical structure of the learning nodes gives a correct representation of the input data.
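The color example above can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the author's implementation: a 10×10 grid of nodes (an assumed size), each holding a 3D RGB weight vector, is trained on the 8 colors by repeatedly finding the Best Matching Unit (BMU) for a random input and pulling the BMU and its grid neighbors toward that input.

```python
import numpy as np

rng = np.random.default_rng(0)

# The 8 training colors (RGB components in [0, 1]), as in the example.
colors = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
    [1, 0, 1], [0, 1, 1], [0, 0, 0], [1, 1, 1],
], dtype=float)

grid_h, grid_w = 10, 10                       # assumed grid size
weights = rng.random((grid_h, grid_w, 3))     # one 3D weight vector per node

# Each node's (row, col) position, used for neighborhood distances on the grid.
ys, xs = np.mgrid[0:grid_h, 0:grid_w]
positions = np.stack([ys, xs], axis=-1).astype(float)

n_iters = 500
sigma0, lr0 = max(grid_h, grid_w) / 2.0, 0.5  # initial radius and learning rate

for t in range(n_iters):
    decay = np.exp(-t / (n_iters / 4.0))      # shrink radius and rate over time
    sigma, lr = sigma0 * decay, lr0 * decay
    x = colors[rng.integers(len(colors))]     # pick a random input vector

    # Best Matching Unit: the node whose weights are closest to the input.
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)

    # Gaussian neighborhood around the BMU, measured on the 2D grid.
    grid_d2 = np.sum((positions - np.array(bmu, dtype=float)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))

    # Pull every node's weights toward x, scaled by neighborhood and rate.
    weights += lr * h[..., None] * (x - weights)
```

Because neighboring nodes are updated together, nearby grid positions end up with similar weight vectors, which is exactly the topology conservation described above: rendering `weights` as an image shows similar colors in adjacent regions.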