the derivative of the sigmoid transfer function (see Chapter 1, Equation 1.4 and Figure 1.5), which monotonically maps its inputs into [0,1]. A full derivation of this sigmoid function is given by, for example, Gallant (1993). The back-propagation learning algorithm can be decomposed into five distinct steps, described in the following text. More detailed computational descriptions can be found in several literature sources (Gallant, 1993; Chauvin and Rumelhart, 1995; Braspenning et al., 1995). The five steps are as follows:
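The sigmoid transfer function and its derivative referenced above can be sketched as follows. This is a minimal illustration, not the book's own code; the function names are my own:

```python
import math

def sigmoid(x):
    """Sigmoid transfer function: monotonically maps any real input
    into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative of the sigmoid, sigma'(x) = sigma(x) * (1 - sigma(x)),
    the quantity used when back-propagating errors through a unit."""
    s = sigmoid(x)
    return s * (1.0 - s)
```

Note that the derivative attains its maximum value of 0.25 at x = 0, where the sigmoid itself equals 0.5; this convenient closed form, requiring only the unit's output, is one reason the sigmoid is popular in back-propagation.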