NOTE: this is only a coding exercise. It is not one of the official assignments required for admission to the oral examination. Hence you are NOT required to do it, and you are not required to hand it in.
Completing this assignment will not earn you any extra points; it is simply meant to give you an opportunity to better understand how Deep RBMs work.
Write a MATLAB script DeepRbm.m implementing the pretraining of a Deep Restricted Boltzmann Machine with two layers. I suggest that you reuse the code you already have for the RBM: remember that in the first layer you have to double the input going from the visible units to the hidden units, while in the last layer you have to double the input going from the hidden units to the reconstruction of the intermediate layer. A minimal sketch of what this can look like is given below.
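As a concrete starting point, here is a minimal sketch of the two pretraining stages with one-step Contrastive Divergence (CD-1). All names and values (data, nHid1, nHid2, learning rate, epochs, batch size) are placeholder assumptions rather than parts of your existing RBM code; the stand-in data line should be replaced by loading the MNIST matrix from the archive.

% --- DeepRbm.m: sketch of greedy pretraining for a two-layer Deep RBM ---
% Stand-in data; replace with the MNIST images from the course archive (N x 784, values in [0,1]).
data = double(rand(1000, 784) > 0.5);
[nCases, nVis] = size(data);
nHid1 = 500;  nHid2 = 500;                 % layer sizes (arbitrary choices)
lrate = 0.05;  nEpochs = 10;  batchSize = 100;
sig = @(x) 1 ./ (1 + exp(-x));             % logistic sigmoid

% First RBM (visible -> hidden 1): the bottom-up input is DOUBLED.
W1 = 0.01 * randn(nVis, nHid1);  b1 = zeros(1, nVis);  c1 = zeros(1, nHid1);
for epoch = 1:nEpochs
  for i = 1:batchSize:(nCases - batchSize + 1)
    v0 = data(i:i+batchSize-1, :);
    h0 = sig(2 * (v0 * W1) + repmat(c1, batchSize, 1));   % doubled visible-to-hidden input
    hs = double(h0 > rand(size(h0)));                     % sample binary hidden states
    v1 = sig(hs * W1' + repmat(b1, batchSize, 1));        % reconstruction of the visible layer
    h1 = sig(2 * (v1 * W1) + repmat(c1, batchSize, 1));
    W1 = W1 + lrate * (v0' * h0 - v1' * h1) / batchSize;  % CD-1 weight update
    b1 = b1 + lrate * mean(v0 - v1);
    c1 = c1 + lrate * mean(h0 - h1);
  end
end

% Second RBM (hidden 1 -> hidden 2): the top-down input used for the
% reconstruction of the intermediate layer is DOUBLED.
H = sig(2 * (data * W1) + repmat(c1, nCases, 1));          % layer-1 activities used as data
W2 = 0.01 * randn(nHid1, nHid2);  b2 = zeros(1, nHid1);  c2 = zeros(1, nHid2);
for epoch = 1:nEpochs
  for i = 1:batchSize:(nCases - batchSize + 1)
    v0 = H(i:i+batchSize-1, :);
    h0 = sig(v0 * W2 + repmat(c2, batchSize, 1));
    hs = double(h0 > rand(size(h0)));
    v1 = sig(2 * (hs * W2') + repmat(b2, batchSize, 1));   % doubled hidden-to-intermediate input
    h1 = sig(v1 * W2 + repmat(c2, batchSize, 1));
    W2 = W2 + lrate * (v0' * h0 - v1' * h1) / batchSize;
    b2 = b2 + lrate * mean(v0 - v1);
    c2 = c2 + lrate * mean(h0 - h1);
  end
end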
Use the same data that you used for the RBM assignment (available in this archive), i.e., the MNIST images.
If you feel brave enough, try using the trained two-layer model: clamp one image to the visible units of the lower layer, perform the Contrastive Divergence algorithm traversing the whole network upwards to the last layer, and identify the most active hidden unit in that layer. Try images from different classes to see whether they activate different units.
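A minimal sketch of this check, assuming a single deterministic upward pass (the positive phase of CD) is enough to identify the winning unit, and reusing the placeholder names (data, W1, c1, W2, c2) from the pretraining sketch above:

% Upward pass for a single clamped image (names as in the sketch above).
sig = @(x) 1 ./ (1 + exp(-x));
img = data(1, :);                      % clamp one MNIST image to the visible units
h1 = sig(2 * (img * W1) + c1);         % bottom-up input doubled, as in pretraining
h2 = sig(h1 * W2 + c2);                % activity of the top hidden layer
[topAct, topUnit] = max(h2);           % most active unit in the last layer
fprintf('Most active top-layer unit: %d (activation %.3f)\n', topUnit, topAct);
% Repeat with images from different classes and compare which units win.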