EE 7280 Homework # 4
1. In this problem we will try to identify the scalar nonlinear dynamical system
\dot{x} = f(x) + u
where u is the input, x is the output, and f is a nonlinear function that is assumed unknown.
We will consider two different approximation models:
(a) Sigmoidal Neural Networks (SNN)
(b) Radial Basis Function (RBF) Networks
To make a fair comparison, use 12 adjustable weights for each approximation model. Therefore

\hat{f}_s(x; \theta_1, \ldots, \theta_{12}) = \sum_{i=1}^{4} \theta_i \, \sigma(\theta_{i+4} x + \theta_{i+8})   (SNN)

\hat{f}_r(x; \theta_1, \ldots, \theta_{12}) = \sum_{i=1}^{12} \theta_i \exp\left( -(x - c_i)^2 / \sigma^2 \right)   (RBF)

where the sigmoidal function \sigma in the SNN is given by the logistic function \sigma(p) = \frac{1}{1 + e^{-p}}, and

the centers c_i of the RBF are chosen uniformly distributed in the region of interest, which is -1 ≤ x ≤ 1; let the width \sigma of the RBF be \sigma = 0.3 (you may want to vary it to check whether it improves performance).
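As a starting point, the two approximators above can be sketched in NumPy, using zero-based indexing (\theta_1 corresponds to `theta[0]`, and so on); the helper names `f_snn` and `f_rbf` are illustrative, not prescribed by the assignment:

```python
import numpy as np

def sigma(p):
    """Logistic sigmoid sigma(p) = 1 / (1 + e^{-p})."""
    return 1.0 / (1.0 + np.exp(-p))

def f_snn(x, theta):
    # SNN: sum_{i=1}^{4} theta_i * sigma(theta_{i+4} x + theta_{i+8}),
    # written with zero-based indices (theta_1 -> theta[0], etc.)
    return sum(theta[i] * sigma(theta[i + 4] * x + theta[i + 8]) for i in range(4))

# 12 RBF centers uniformly distributed over the region of interest [-1, 1]
centers = np.linspace(-1.0, 1.0, 12)

def f_rbf(x, theta, c=centers, width=0.3):
    # RBF: sum_{i=1}^{12} theta_i * exp(-(x - c_i)^2 / sigma^2), width sigma = 0.3
    return sum(theta[i] * np.exp(-(x - c[i]) ** 2 / width ** 2) for i in range(12))
```

Both models are linear in the output weights \theta_1, \ldots, \theta_{12} for the RBF, while the SNN is nonlinear in its inner weights, which is what makes the comparison interesting.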
For simulation purposes take the unknown function f(x) to be

f(x) = \sin(2.5x) - \frac{0.4 x (8 + x^2)}{0.5 (7 + x^2)}

and suppose that we are interested in the region -1 ≤ x ≤ 1.
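For the simulations, this "unknown" function can be coded directly; note that the operator between the sine term and the fraction was garbled in this copy, so the minus sign below is an assumed reading:

```python
import numpy as np

def f_true(x):
    # f(x) = sin(2.5 x) - 0.4 x (8 + x^2) / (0.5 (7 + x^2))
    # (the minus sign is an assumed reading of the garbled original)
    return np.sin(2.5 * x) - 0.4 * x * (8 + x ** 2) / (0.5 * (7 + x ** 2))
```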
Static Learning: First consider the approximation of the function f(x) in the region x \in [-1, 1]. Use the gradient method to update the network weights, i.e.,

\dot{\theta} = \gamma \left( f(x) - \hat{f}(x; \theta) \right) \frac{\partial \hat{f}}{\partial \theta}(x; \theta).
Give random values to the initial weights and work in continuous time (i.e., continuous adjustment of the weights). To generate x(t), we need a function that covers the whole region of interest; for example, x(t) = sin t, x(t) = cos t, or a random number uniformly distributed between -1 and 1 will do. Train the network/approximator for some time and then use the final weights obtained after training to check whether you have learned the function f(x).
Dynamic Learning: Now try to identify the dynamical system

\dot{x} = f(x) + u

where u(t) = \sin(t) and the initial condition is x(0) = 0.3. Use a_m = 1. Compare and discuss the two approximation models.
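The assignment does not spell out the identifier structure, so the sketch below assumes the standard series-parallel model \dot{\hat{x}} = -a_m(\hat{x} - x) + \hat{f}(x; \theta) + u with a gradient adaptation law driven by the identification error e = x - \hat{x}; if your course uses a different identifier, adapt accordingly:

```python
import numpy as np

np.random.seed(1)

def sigma(p):
    return 1.0 / (1.0 + np.exp(-p))

def f_true(x):
    # assumed reading of the homework's f(x)
    return np.sin(2.5 * x) - 0.4 * x * (8 + x ** 2) / (0.5 * (7 + x ** 2))

def f_hat(x, th):
    return sum(th[i] * sigma(th[i + 4] * x + th[i + 8]) for i in range(4))

def grad_f_hat(x, th):
    g = np.zeros(12)
    for i in range(4):
        s = sigma(th[i + 4] * x + th[i + 8])
        g[i] = s
        g[i + 4] = th[i] * s * (1 - s) * x
        g[i + 8] = th[i] * s * (1 - s)
    return g

a_m, gamma, dt, T = 1.0, 2.0, 1e-3, 100.0
x, x_hat = 0.3, 0.3                 # plant state and identifier state, x(0) = 0.3
th = 0.1 * np.random.randn(12)
t = 0.0
while t < T:
    u = np.sin(t)
    e = x - x_hat                                   # identification error
    dx = f_true(x) + u                              # true plant
    dxh = -a_m * (x_hat - x) + f_hat(x, th) + u     # series-parallel identifier (assumed)
    dth = gamma * e * grad_f_hat(x, th)             # gradient adaptation law
    x, x_hat, th = x + dt * dx, x_hat + dt * dxh, th + dt * dth
    t += dt
```

Repeating the same loop with the RBF model (same 12 weights) gives the comparison the problem asks for.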

EE 7280 Homework # 4 – Addendum
The Sigmoidal Neural Network (SNN) for this homework consists of 4 nodes, which implies
there are 12 adjustable weights. Therefore it is given by:
\hat{f}_s(x; \theta_1, \ldots, \theta_{12}) = \theta_1 \sigma(\theta_5 x + \theta_9) + \theta_2 \sigma(\theta_6 x + \theta_{10}) + \theta_3 \sigma(\theta_7 x + \theta_{11}) + \theta_4 \sigma(\theta_8 x + \theta_{12})
where

\sigma(p) = \frac{1}{1 + e^{-p}}
The update of the weights (for the static learning case) is given by
\dot{\theta}_i = \gamma \left( f(x) - \hat{f}_s(x; \theta) \right) \frac{\partial \hat{f}_s}{\partial \theta_i}(x; \theta)
For example, for the 7th parameter:
\dot{\theta}_7 = \gamma \left( f(x) - \hat{f}_s(x; \theta) \right) \theta_3 \, \frac{d\sigma}{dp}(p) \, \frac{\partial p}{\partial \theta_7} \quad \text{(where } p = \theta_7 x + \theta_{11} \text{)}

= \gamma \left( f(x) - \hat{f}_s(x; \theta) \right) \theta_3 \, \frac{e^{-(\theta_7 x + \theta_{11})}}{\left( 1 + e^{-(\theta_7 x + \theta_{11})} \right)^2} \, x
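A quick way to sanity-check this chain-rule computation is to compare the closed-form partial derivative against a central finite difference (zero-based indexing: \theta_7 is `th[6]`, \theta_{11} is `th[10]`, \theta_3 is `th[2]`):

```python
import numpy as np

def sigma(p):
    return 1.0 / (1.0 + np.exp(-p))

def f_s(x, th):
    # 4-node SNN from the addendum (zero-based: theta_1 -> th[0], ...)
    return sum(th[i] * sigma(th[i + 4] * x + th[i + 8]) for i in range(4))

def dfs_dtheta7(x, th):
    # closed form from the derivation above: p = theta_7 x + theta_11
    p = th[6] * x + th[10]
    return th[2] * np.exp(-p) / (1.0 + np.exp(-p)) ** 2 * x

np.random.seed(2)
th = np.random.randn(12)
x, eps = 0.4, 1e-6
th_p, th_m = th.copy(), th.copy()
th_p[6] += eps
th_m[6] -= eps
numeric = (f_s(x, th_p) - f_s(x, th_m)) / (2 * eps)   # central difference
analytic = dfs_dtheta7(x, th)
```

The two values should agree to several decimal places; the same check works for any of the other eleven partial derivatives.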