University of Lincoln

Some practical aspects on incremental training of RBF network for robot behavior learning

conference contribution
posted on 2024-02-09, 17:01 authored by Li Jun, Tom Duckett
<p>The radial basis function (RBF) neural network with Gaussian activation functions and the least-mean-squares (LMS) learning algorithm is a popular function approximator, widely used in many applications for its simplicity, robustness, and optimal approximation properties. In practice, however, making the RBF network (and other neural networks) work well can be more of an art than a science, especially where parameter selection and adjustment are concerned. In this paper we address three issues that arise when the RBF network is trained incrementally to map sensory inputs to motor outputs for robot behavior learning: the normalization of raw sensory-motor data, the choice of receptive fields for the RBFs, and the adjustment of the learning rate. Though these issues are less theoretical than practical, they are sometimes crucial for applying the RBF network to the problem at hand. We believe that awareness of these practical issues enables better use of the RBF network in real-world applications.</p>

1 Introduction

<p>The radial basis function (RBF) network [3, 16] has found a wide range of applications due to its simplicity, local learning, robustness, optimal approximation, etc. For example, in an autonomous robot control system the RBF network can directly map sensory inputs to motor outputs [23, 21, 9, 15] to acquire the required behaviors. In these successful applications, however, there has been little description of how the parameters are chosen and adjusted, and why they are adjusted so for the applications of interest. In this paper, we address three practical aspects of incremental training of the RBF network for robot behavior learning: normalizing the raw sensor input, choosing the receptive fields of the RBFs, and adjusting the learning rate.
We restrict our investigation of these issues to the following situation. For simplicity of notation, consider a multi-input, single-output (MISO) system in which $\mathbf{x} = [x_1, x_2, \ldots, x_m]^T$ is an $m$-dimensional input vector and $y$ is the scalar output. The RBF neural network can be defined as:

$$\hat{y} = F(\mathbf{x}) = \sum_{k=1}^{K} w_k \phi_k(\mathbf{x}) + b, \qquad \phi_k(\mathbf{x}) = e^{-\frac{1}{(\rho\sigma_k)^2}\|\mathbf{x}-\boldsymbol{\mu}_k\|^2}, \quad k = 1, 2, \ldots, K, \tag{1}$$

where $w_k$ is the weight of the $k$-th Gaussian function $\phi_k(\mathbf{x})$, $\boldsymbol{\mu}_k = [\mu_{k1}, \mu_{k2}, \ldots, \mu_{km}]^T$ is the $m$-dimensional position vector of the $k$-th radial basis function, and $\sigma_k$ is the receptive field of the $k$-th radial basis function. In addition, $K$ is the number of RBFs, $b$ is the bias, and $\rho$ is the optimal factor introduced for optimising the receptive field $\sigma_k$, as in [20]. We assume that the number of RBFs $K$ is either designated in advance of training, in which case clustering algorithms such as MacQueen's K-means or Kohonen's SOM [10] can be used to determine the position vectors $\boldsymbol{\mu}_k$, or obtained automatically in real time during the training process by a dynamically adaptive clustering algorithm such as GWR [14]. In both cases, the receptive field $\sigma_k$ can be determined by some empirical estimation method (see Section 3). We also assume that the RBF network's weights $w_k$ and bias $b$ are updated by the least-mean-squares (LMS) algorithm:

$$w_k \leftarrow w_k + \eta_t (y_t - \hat{y})\,\phi_k(\mathbf{x}_t), \quad k = 1, 2, \ldots, K, \qquad b \leftarrow b + \eta_t (y_t - \hat{y}). \tag{2}$$</p>
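The model of Eq. (1) and the incremental LMS update of Eq. (2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the centres, receptive fields, optimal factor and learning rate used below are arbitrary assumptions chosen for the example, and the centres are simply given rather than found by K-means, SOM or GWR as the paper discusses.

```python
import numpy as np

class RBFNetwork:
    """RBF network of Eq. (1), trained with the incremental LMS rule of Eq. (2)."""

    def __init__(self, centres, sigmas, rho=1.0):
        self.mu = np.asarray(centres, dtype=float)    # (K, m) position vectors mu_k
        self.sigma = np.asarray(sigmas, dtype=float)  # (K,) receptive fields sigma_k
        self.rho = rho                                # optimal factor rho
        self.w = np.zeros(len(self.mu))               # weights w_k, initialised to zero
        self.b = 0.0                                  # bias b

    def _phi(self, x):
        # Gaussian activations: phi_k(x) = exp(-||x - mu_k||^2 / (rho * sigma_k)^2)
        d2 = np.sum((self.mu - x) ** 2, axis=1)
        return np.exp(-d2 / (self.rho * self.sigma) ** 2)

    def predict(self, x):
        # Eq. (1): y_hat = sum_k w_k * phi_k(x) + b
        return float(self.w @ self._phi(np.asarray(x, dtype=float)) + self.b)

    def lms_step(self, x, y, eta):
        # Eq. (2): one incremental update from a single training pair (x_t, y_t)
        phi = self._phi(np.asarray(x, dtype=float))
        err = y - (self.w @ phi + self.b)   # prediction error y_t - y_hat
        self.w += eta * err * phi           # w_k <- w_k + eta * err * phi_k(x_t)
        self.b += eta * err                 # b   <- b + eta * err
        return err
```

Because the output is linear in the weights, repeatedly cycling `lms_step` over a fixed set of input-output pairs drives the prediction error down whenever the learning rate is small enough, which is the incremental training regime the paper analyses.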

History

School affiliated with

  • School of Computer Science (Research Outputs)

Publisher

IEEE

ISBN

9781424421145

Date Submitted

2014-01-06

Date Accepted

2014-01-06

Date of First Publication

2014-01-06

Date of Final Publication

2014-01-06

Event Name

7th World Congress on Intelligent Control and Automation, 2008. WCICA 2008

Event Dates

25-27 June 2008

ePrints ID

12853
