Copyright © 1995 T. RayChaudhuri, L. Hamey and R. Bell.
Appears in Control 95, vol 2, pp 369-373.
Neural Network Control Using Active Learning
Tirthankar RayChaudhuri Leonard G.C. Hamey Rodney D. Bell
School of MPCE, Macquarie University, New South Wales, Australia
Neural networks have been shown to give considerably better results than conventional control methods when controlling complex non-linear systems. Most neural network controllers today are built around `passive' learning methods, whereby the network, once trained, is expected to perform repeatedly with equal accuracy on fresh sets of input-output data. This is not always suitable in real-world situations, where external environmental parameter variations cause changes in the plant and in controller performance. In the current paper we propose the use of an autonomous `active' learning technique which causes retraining to occur precisely when these parameter variations happen, yielding enhanced controller performance.
Keywords: Neural Network Controllers, Active Learning, Optimal Experiment Design, Autonomous Intelligent Neurocontroller.
1 Introduction

It is known that a neural network controller (NNC) is able to learn from data that incorporates knowledge about the plant, and to produce from such knowledge control actions that usually outperform the more conventional methods of control. The generalisation ability of such learning can be limited, however, and it is often found that while a NNC causes the plant to produce the desired output most of the time, there can be significant variations in the accuracy of performance on occasion. Such variations are normally due to changes in plant parameters, usually caused by ambient environmental changes or unknown alterations in plant characteristics. The initial `passive' training of the network may therefore not equip the NNC to handle all situations. Continuous retraining can be troublesome and expensive. If, however, a means can be devised whereby the learning algorithm constantly interacts with the environment and updates the training data set with new components which are significantly different, as and when they are encountered, then the uncertainty and error levels of the network would be noticeably reduced. We call this kind of training `active' learning.

In this paper we propose to apply the concept of active learning to neurocontrol. The remainder of the paper discusses the proposal in greater detail. Section 2 gives a background of the state of the art in neurocontrol research and describes Model Reference Adaptive Systems in some detail. Section 3 explains the fundamental concepts of active and reinforcement learning systems. In Section 4 the overall design concepts of an active learning neurocontrol system are presented, and the system is compared with an adaptive control system. Finally, in Section 5 a discussion on Optimal Experiment Design (OED) is included. OED provides a method to implement active learning in neural networks and is a subject of ongoing investigation.
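The essence of the `active' update described above can be sketched in a few lines. This is our own minimal illustration, not the paper's algorithm: the function name, the error measure and the novelty threshold are all assumptions. A new input-output observation triggers retraining only when the currently trained model's prediction error on it is significant.

```python
# Illustrative sketch of `active' learning: a fresh plant observation
# (x, y) warrants retraining only when the current model's prediction
# error on it exceeds a novelty threshold.  All names are hypothetical.

def should_retrain(predict, x, y, threshold=0.1):
    """True when observation (x, y) is `significantly different' from
    what the current model predicts, i.e. retraining is warranted."""
    return abs(predict(x) - y) > threshold

# Hypothetical usage: a model fitted during initial passive training.
predict = lambda x: 2.0 * x                  # current learned plant map
print(should_retrain(predict, 1.0, 2.05))    # small error: stay passive -> False
print(should_retrain(predict, 1.0, 3.0))     # parameter drift detected -> True
```

In a controller, the observations flagged this way would be appended to the training set before retraining, so that data from the new operating regime is retained alongside the original passive training data.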
2 Background: State of the Art
Design of a control system concerns two basic aspects: plant identification (or modelling) and controller design. In both of these areas neural network methods have been applied successfully.
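The identification side can be illustrated with a small sketch. The plant, network size and training details below are our own assumptions, not from the paper: a one-hidden-layer network is fitted by gradient descent to recorded input-output data of a nonlinear plant y(k+1) = f(y(k), u(k)).

```python
import numpy as np

# Illustrative sketch of neural-network plant identification (our own
# construction): learn y(k+1) = f(y(k), u(k)) from input-output records.
rng = np.random.default_rng(0)

def plant(y, u):
    # Example nonlinear plant, assumed purely for illustration.
    return 0.8 * np.sin(y) + 0.4 * u

# Collect identification data by exciting the plant with random inputs.
u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = plant(y[k], u[k])
X = np.column_stack([y[:-1], u])   # network inputs: (y(k), u(k))
T = y[1:].reshape(-1, 1)           # targets: y(k+1)

# One-hidden-layer tanh network, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)       # hidden activations
    P = H @ W2 + b2                # predicted y(k+1)
    E = P - T
    # Backpropagate the mean-squared identification error.
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2))
print(f"identification MSE: {mse:.4f}")
```

After training, the network serves as a forward model of the plant; in an active learning scheme the same network would be refitted whenever freshly observed data reveals parameter drift.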
2.1 The Identification Issue
It is commonly accepted that an accurate model of the plant to be controlled has to be made available before attempting to design a controller for the plant. A model is basically a set of mathematical equations that describe the input-output and internal state relationships within the plant. Such a model is often difficult to identify accurately, especially if the plant is highly non-linear and there are parameters which vary over time due to external and environmental factors. Neural networks can be a highly efficacious tool for such identification. Plant identification techniques using neural networks (see figure 1) have been found to be far more accurate than conventional methods, especially in nonlinear systems with varying parameters. Hence a neural network model would be