DeeL@BiCi: Deep Learning: Theory, Algorithms, and Applications
Language of the conference title:
Rectified factor networks (RFNs) are generative unsupervised models that learn robust, very sparse, and non-linear codes with many code units.
RFN learning can be considered a variational expectation maximization (EM) algorithm with an unknown prior, which enforces
(i) rectified posterior means and
(ii) normalized signals of the hidden units.
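As a rough illustration of such an update, the following minimal Python sketch computes factor-analysis posterior means, rectifies them, and normalizes each hidden unit's signal across the batch before a factor-analysis-style M-step. All function and variable names here are illustrative assumptions, not taken from the RFN reference implementation, and the exact RFN algorithm (based on posterior regularization) differs in detail.

import numpy as np

def rfn_em_step(X, W, Psi, eps=1e-8):
    """Illustrative EM-style update with rectified, normalized posterior means.
    X   : (n_samples, n_visible) centered data
    W   : (n_visible, n_hidden) loading matrix
    Psi : (n_visible,) diagonal noise variances
    Sketch only; not the reference RFN algorithm."""
    # E-step of a factor-analysis-like model: posterior means of the hidden units
    PsiInvW = W / Psi[:, None]                        # Psi^{-1} W
    Sigma_h = np.linalg.inv(np.eye(W.shape[1]) + W.T @ PsiInvW)
    H = X @ PsiInvW @ Sigma_h                         # (n_samples, n_hidden)

    # (i) rectify the posterior means
    H = np.maximum(H, 0.0)

    # (ii) normalize each hidden unit's signal across the batch
    H = H / (np.sqrt(np.mean(H**2, axis=0)) + eps)

    # M-step: refit loadings and noise variances as in factor analysis
    W_new = np.linalg.lstsq(H, X, rcond=None)[0].T    # (n_visible, n_hidden)
    Psi_new = np.mean((X - H @ W_new.T)**2, axis=0) + eps
    return W_new, Psi_new, H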
Like factor analysis, RFNs explain the data variance by their parameters.
On unsupervised benchmark tasks, RFNs achieve a comparable reconstruction error while yielding sparser codes and better explained covariance than
(1) denoising autoencoders with rectified linear units,
(2) restricted Boltzmann machines,
(3) factor analysis with Jeffreys prior,
(4) factor analysis with Laplace prior,
(5) independent component analysis,
(6) sparse factor analysis,
(7) standard factor analysis,
(8) principal component analysis.
Most importantly, on biclustering tasks RFNs outperformed all existing biclustering methods, including our previously suggested FABIA method.
For pretraining of deep networks, RFNs were superior to restricted Boltzmann machines (RBMs) and autoencoders.
Language of the abstract:
Main lecture / invited lecture at a conference