**a)**
Tlearn can train general recurrent networks using the backpropagation
through time (BPTT) technique: it automatically makes K copies of the recurrent
net and ensures that the corresponding weights stay equal. Our first task is
to get Tlearn to learn the simple WTA network of Problem 4. You should start
with an architecture like the one you used for Problem 4, but of course some of
the weights will now need to be learned. We still need the crucial weights
1 -> 2 and 2 -> 1 to be equal, and Tlearn's "group" feature supports this.
A subtle point is that we used linear units in Problem 4, whereas Tlearn is set
up to do backpropagation with sigmoid units. The suggestion is that you leave
units 3-4 linear. You can also introduce weights from bias units to nodes 1-2
to see whether that helps. You will probably not be able to get Tlearn to learn
a WTA network nearly as good as the one you built by hand. Why is that?
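To see what the "group" feature is buying you, here is a minimal sketch (in NumPy, not Tlearn's actual code) of BPTT with a tied weight: the mutual connections 1 -> 2 and 2 -> 1 are a single parameter `w` appearing in two places, the net is unrolled for K steps, and one gradient step moves both copies together. The loss, initial activations, and step size are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unroll_loss(w, x0, K, target):
    """Unroll two mutually connected sigmoid units for K steps.
    The tied parameter w serves as both the 1 -> 2 and 2 -> 1 weight."""
    a = x0.copy()
    for _ in range(K):
        a = sigmoid(np.array([w * a[1], w * a[0]]))
    # illustrative loss: unit 0 should win, unit 1 should lose
    return 0.5 * ((a[0] - target) ** 2 + a[1] ** 2)

# Numerical gradient of the single shared parameter: because w is one number,
# the update automatically keeps the two weight copies equal, which is what
# Tlearn's "group" feature enforces across the K unrolled copies.
w, eps = -1.5, 1e-5
x0, K, target = np.array([0.9, 0.4]), 5, 1.0
g = (unroll_loss(w + eps, x0, K, target)
     - unroll_loss(w - eps, x0, K, target)) / (2 * eps)
w -= 0.5 * g  # one gradient step; both copies of the weight move together
print(w)
```

Note that the sigmoid squashing is exactly what makes this harder than the hand-built linear WTA: the unrolled gradients shrink through each sigmoid, so the large mutual inhibition a good WTA needs is slow to learn.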

**b)**
The even-ones task of Problem 3 is closely related to the "parity problem"
in the neural-network literature. Try training a net to classify strings of
length 7 as containing an odd or even number of ones. Now try to do the same
with a Simple Recurrent Network and serial presentation of the strings
(refer to Chapter 8 of Kim Plunkett and Jeffrey L. Elman,
*Rethinking Innateness: A Handbook for Connectionist Simulations*,
MIT Press, 1997).
What general conclusions can you draw from this experiment?
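The serial version can be sketched as follows (a NumPy toy, not the handbook's simulator): a Simple Recurrent Network sees one bit per time step, the context layer is a copy of the previous hidden layer, and the target at each step is the running parity. Training uses Elman-style truncated backpropagation, stopping gradients at the context copy; the hidden size, learning rate, and epoch count are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                 # hidden units (assumed size)
Wxh = rng.normal(0, 0.5, (H,))        # input bit -> hidden
Whh = rng.normal(0, 0.5, (H, H))      # context (previous hidden) -> hidden
Why = rng.normal(0, 0.5, (H,))        # hidden -> output
bh = np.zeros(H); by = 0.0
sig = lambda x: 1.0 / (1.0 + np.exp(-x))
lr = 0.2

def run_string(bits, learn=True):
    """Present the bits serially; the step-t target is the parity so far.
    Returns the net's answer (0/1) after the last bit."""
    global Wxh, Whh, Why, bh, by
    h = np.zeros(H)                   # context starts empty
    parity = 0
    for bit in bits:
        parity ^= bit
        context = h
        h = sig(Wxh * bit + Whh @ context + bh)
        y = sig(Why @ h + by)
        if learn:
            err = y - parity                  # cross-entropy output gradient
            dh = err * Why * h * (1 - h)      # truncated at the context copy
            Why -= lr * err * h;  by -= lr * err
            Wxh -= lr * dh * bit
            Whh -= lr * np.outer(dh, context)
            bh -= lr * dh
    return int(y > 0.5)

strings = [list(map(int, f"{i:07b}")) for i in range(128)]
for epoch in range(200):
    for s in strings:
        run_string(s, learn=True)
acc = np.mean([run_string(s, learn=False) == (sum(s) % 2) for s in strings])
print(f"accuracy on all 128 length-7 strings: {acc:.2f}")
```

Whether, and how quickly, this net reaches high accuracy is precisely what the experiment probes: serial parity forces the hidden state to implement a flip-flop, and comparing this with the fixed-length feedforward version should shape your general conclusions.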

*This assignment is due in class on October 2, or by earlier*
*electronic submission.*