Az Eszterházy Károly Tanárképző Főiskola Tudományos Közleményei [Scientific Publications of the Eszterházy Károly Teacher Training College]. 1997. Sectio Mathematicae. (Acta Academiae Paedagogicae Agriensis : Nova series ; Tom. 24)

HOFFMANN, M. and VÁRADY, L., Free-form curve design by neural networks

S_i(u) = \sum_{r=-1}^{2} P_{i+r} \, b_r(u),   u \in [0,1],   i = 2, \dots, n-2,

where b_r are the well-known B-spline basis functions.

The Kohonen neural network

Neural networks can be divided into two classes: supervised learning networks and non-supervised learning (self-organizing) networks. Supervised learning neural nets have to be trained with training or test data sets, where the result of the task to be done has to be provided in advance. After training, the net is adapted to the problem by the test set and is able to generalize its behavior. Self-organizing networks, however, organize the data during the learning phase, where the result of the task is not required. Following the training rules, the network adapts its internal knowledge to the task. The Kohonen neural network is a two-layered non-supervised learning neural network.

Adaptation of the Kohonen net to the problem

Let a set of points P_i (i = 1, ..., n) (scattered data) be given in the plane. Our purpose is to fit (by interpolation or approximation) a B-spline curve to them. Thus our first task is to determine the order of the points for the interpolating or approximating methods. The Kohonen net is used to order the points. The first layer of neurons is called the input layer and contains the two input neurons, which pick up the data. The input neurons are entirely interconnected to a second, competitive layer, which contains m neurons (where m > n). The weights associated with the connections are adjusted during training. Only a single neuron can be active at a time, and this neuron represents the cluster to which the input data belong.

Let a set of two-dimensional vectors P_i(x_1, x_2) be given. These vectors are called input vectors. The coordinates of these vectors are submitted to the input layer, which contains two neurons. When all the input vectors have been presented to the input neurons, we restart at the first vector. Let the output vectors o_1, ..., o_m be two-dimensional vectors with the coordinates (w_{1j}, w_{2j}), j = 1, ..., m, where w_{ij} denotes the weight between input neuron i and output neuron j. We use the terms "output vector" and "weights of the output neuron" interchangeably. Let the output map be one-dimensional (see Figure 1).
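The architecture just described (a two-neuron input layer fully connected to a one-dimensional competitive layer of m neurons, where only the winning neuron is active at a time) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the Gaussian neighbourhood function, and the linearly decaying learning rate and radius are assumptions of the sketch.

```python
import math
import random

def train_som_1d(points, m, epochs=200, lr0=0.5, radius0=None, seed=0):
    """Train a one-dimensional Kohonen map on 2-D input vectors.

    points : list of (x1, x2) tuples -- the scattered data P_i
    m      : number of neurons in the competitive layer (m > n)
    Returns the m weight vectors (the "output vectors" o_j); their
    index order along the map induces an ordering of the data.
    """
    rng = random.Random(seed)
    if radius0 is None:
        radius0 = m / 2.0
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # initialise the weights randomly inside the bounding box of the data
    w = [[rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys))]
         for _ in range(m)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                    # decaying learning rate
        radius = max(1.0, radius0 * (1.0 - frac))  # shrinking neighbourhood
        # present every input vector once per epoch, in random order
        for x1, x2 in rng.sample(points, len(points)):
            # competition: only the neuron closest to the input "wins"
            win = min(range(m),
                      key=lambda j: (w[j][0] - x1) ** 2 + (w[j][1] - x2) ** 2)
            # the winner and its neighbours on the 1-D map move toward the input
            for j in range(m):
                h = math.exp(-((j - win) ** 2) / (2.0 * radius ** 2))
                w[j][0] += lr * h * (x1 - w[j][0])
                w[j][1] += lr * h * (x2 - w[j][1])
    return w

def order_points(points, weights):
    """Order the data by the map index of each point's winning neuron."""
    def winner(p):
        return min(range(len(weights)),
                   key=lambda j: (weights[j][0] - p[0]) ** 2
                                 + (weights[j][1] - p[1]) ** 2)
    return sorted(points, key=winner)
```

After training, reading the data through `order_points` gives the point sequence along the one-dimensional map, which can then be fed to a B-spline interpolation or approximation routine.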
