Long Short-Term Memory for Uniform Credit Assignment
DeeL@BiCi: Deep Learning: Theory, Algorithms, and Applications
The success of LSTM networks stems from their memory cells, which avoid vanishing gradients.
The advantage of LSTM in speech and language processing is not the extraction of long-term dependencies, but rather its capability to perform "uniform credit assignment" to inputs.
Uniform credit assignment to inputs means that all input signals obtain a similar error signal and are treated on the same level.
For example, when processing a sentence, the first word is as important as the last word for LSTM network learning.
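A minimal sketch can illustrate this point. The code below (all weights and the input sequence are illustrative, not from the talk) uses a simplified memory cell with the forget gate fixed to 1, so the cell state is an additive sum of gated inputs, and compares the numerical gradient of the final state with respect to early versus late inputs against a vanilla RNN:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_final_state(xs, w_in=0.5, w_gate=1.0):
    # Simplified LSTM memory cell with the forget gate fixed to 1:
    # c_t = c_{t-1} + input_gate * candidate (the additive "carousel" update).
    c = 0.0
    for x in xs:
        c = c + sigmoid(w_gate * x) * np.tanh(w_in * x)
    return c

def rnn_final_state(xs, w=0.5, u=0.5):
    # Vanilla RNN for comparison: h_t = tanh(w * h_{t-1} + u * x_t).
    h = 0.0
    for x in xs:
        h = np.tanh(w * h + u * x)
    return h

def input_gradients(f, xs, eps=1e-6):
    # Numerical gradient of the final state w.r.t. each input in the sequence.
    grads = []
    for t in range(len(xs)):
        bumped = xs.copy()
        bumped[t] += eps
        grads.append((f(bumped) - f(xs)) / eps)
    return np.array(grads)

xs = np.full(20, 0.3)  # toy input sequence
g_lstm = input_gradients(lstm_cell_final_state, xs)
g_rnn = input_gradients(rnn_final_state, xs)

# Uniform credit assignment: the first input's gradient matches the last one's.
print(g_lstm[0] / g_lstm[-1])  # ~1.0
# Vanilla RNN: the first input's gradient has decayed to nearly nothing.
print(g_rnn[0] / g_rnn[-1])    # much smaller than 1
```

Because later updates to the cell state are purely additive, every timestep's contribution reaches the final state unattenuated, which is what gives all inputs a similar error signal.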
LSTM networks can be used for uniform credit assignment to deep networks that process images, speech, or chemical compounds.
Such networks can be applied to the classification of actions in videos.
They can analyze high-resolution images from high-content imaging of cells in drug design, where subimages are presented sequentially to the network.
These networks can predict the toxicity or the biological effects of a mixture of chemical compounds that are presented sequentially to the network.
Such compound mixtures are typically found in samples of soil or air, but also in traditional medicine, which uses plant extracts.
Credit assignment to deep networks via LSTM networks has several advantages in sequence classification:
(a) uninformative inputs are not penalized for a misclassification if informative inputs lead to a correct classification,
(b) information required for a classification can be distributed across the input sequence,
(c) outputs of deep networks can be weighted and processed in a context-dependent manner.
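The overall architecture can be sketched as follows. In this toy forward pass (all shapes, weights, and the `deep_net` encoder stub are illustrative assumptions, not the talk's actual models), a deep network encodes each sequentially presented subimage or compound into a feature vector, an LSTM aggregates the feature sequence, and a final linear layer classifies the whole sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_feat, d_hid, n_classes = 64, 16, 32, 2
# Stand-in weights, random for illustration.
W_enc = rng.normal(0, 0.1, (d_feat, d_in))
# One weight matrix per LSTM gate, acting on [features, previous hidden state].
W_i, W_f, W_o, W_g = (rng.normal(0, 0.1, (d_hid, d_feat + d_hid)) for _ in range(4))
W_out = rng.normal(0, 0.1, (n_classes, d_hid))

def deep_net(subimage):
    # Stand-in for a deep network encoding one subimage (or compound)
    # into a feature vector.
    return np.tanh(W_enc @ subimage.ravel())

def classify_sequence(subimages):
    h = np.zeros(d_hid)
    c = np.zeros(d_hid)
    for img in subimages:
        z = np.concatenate([deep_net(img), h])
        i, f, o = sigmoid(W_i @ z), sigmoid(W_f @ z), sigmoid(W_o @ z)
        c = f * c + i * np.tanh(W_g @ z)  # memory cell update
        h = o * np.tanh(c)
    logits = W_out @ h  # classify only after the whole sequence was seen
    return np.argmax(logits)

# E.g. a high-resolution image split into 10 subimages of 8x8 pixels:
subimages = rng.normal(size=(10, 8, 8))
print(classify_sequence(subimages))  # 0 or 1
```

Since the loss is applied only to the final output, backpropagation through the LSTM distributes the error signal across all sequence positions, which is what allows informative inputs anywhere in the sequence to carry the classification.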
Keynote / invited talk at a conference