A large number of alternative transfer functions have been proposed and exploited in modern research efforts. Universal transfer functions, parameterized to change from a localized to a delocalized type, are of greatest interest (Duch and Jankowski, 1999). For example, Hoffmann (2004) discussed the development of universal basis functions (UBFs) with flexible activation functions parameterized to change their shape smoothly from one functional form to another. This allows the coverage of bounded and unbounded subspaces, depending on the data distribution. UBFs have been shown to produce parsimonious models that tend to generalize more efficiently than comparable approaches (Hoffmann, 2004). Other types of neural transfer functions being considered include functions with activations based on non-Euclidean distance measures, bicentral functions, biradial functions formed from products or linear combinations of pairs of sigmoids, and extensions of such functions that make rotations of localized decision borders in highly dimensional spaces practical (Duch and Jankowski, 1999). In summary, a variety of activation functions are used to control the amplitude of the output of the neuron. Chapter 2 will extend the discussion of artificial neuron models, including network connectivity and architecture considerations.
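To make the idea of biradial-style transfer functions concrete, the sketch below forms a localized, window-like response in each input dimension as a product of a pair of sigmoids and then multiplies the dimensions together. It is only a minimal illustration of the general construction described above; the function and parameter names (biradial, center, width, slope) are illustrative assumptions, and the exact parameterization used by Duch and Jankowski (1999) differs in its details.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def biradial(x, center, width, slope):
        # Illustrative biradial-style transfer function: for each input
        # dimension, a pair of sigmoids forms a soft window around `center`
        # with half-width `width` and steepness `slope`; the per-dimension
        # windows are multiplied to give a localized response in the full space.
        x = np.asarray(x, dtype=float)
        left = sigmoid(slope * (x - center + width))    # rises at the left edge of the window
        right = sigmoid(slope * (x - center - width))   # rises at the right edge of the window
        return np.prod(left * (1.0 - right))            # window = left * (1 - right), per dimension

    # Example: the response is near 1 inside the window and decays toward 0 outside it.
    center = np.array([0.0, 0.0])
    width = np.array([1.0, 1.0])
    slope = np.array([4.0, 4.0])
    print(biradial([0.0, 0.0], center, width, slope))   # close to 1 (at the center)
    print(biradial([3.0, 3.0], center, width, slope))   # close to 0 (far from the center)

Adjusting slope toward large values sharpens the window into a nearly rectangular region, while small values of slope delocalize the response, which hints at how such parameterized families can shift smoothly between localized and more sigmoid-like behavior.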
