<!-- navigation toc: --><li><a href="#gating-mechanism-long-short-term-memory-lstm" style="font-size: 80%;">Gating mechanism: Long Short Term Memory (LSTM)</a></li>
<!-- navigation toc: --><li><a href="#implementing-a-memory-cell-in-a-neural-network" style="font-size: 80%;">Implementing a memory cell in a neural network</a></li>
@@ -304,15 +304,16 @@ <h2 id="plans-for-the-week-march-10-14" class="anchor">Plans for the week March 10-14</h2>
 <h2 id="reading-recommendations-rnns-and-lstms" class="anchor">Reading recommendations: RNNs and LSTMs</h2>
 <div class="panel panel-default">
 <div class="panel-body">
 <!-- subsequent paragraphs come in larger fonts, so start with a paragraph -->
 <ol>
-<li> For RNNs see Goodfellow et al chapter 10.</li>
+<li> For RNNs see Goodfellow et al chapter 10, see <a href="https://www.deeplearningbook.org/contents/rnn.html" target="_self"><tt>https://www.deeplearningbook.org/contents/rnn.html</tt></a></li>
 <li> Reading suggestions for implementation of RNNs in PyTorch: Raschka et al's text, chapter 15</li>
-<li> Reading suggestions for implementation of RNNs in TensorFlow: <a href="https://github.com/CompPhysics/MachineLearning/blob/master/doc/Textbooks/TensorflowML.pdf" target="_self">Aurelien Geron's chapter 14</a>.</li>
+<li> RNN video at <a href="https://youtu.be/PCgrgHgy26c?feature=shared" target="_self"><tt>https://youtu.be/PCgrgHgy26c?feature=shared</tt></a></li>
+<li> New xLSTM, see Beck et al, <a href="https://arxiv.org/abs/2405.04517" target="_self"><tt>https://arxiv.org/abs/2405.04517</tt></a>. Exponential gating and modified memory structures let xLSTM compare favorably with state-of-the-art Transformers and State Space Models, in both performance and scaling.</li>
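The gating mechanism covered in the LSTM readings above can be sketched in a few lines of plain Python. This is a deliberately minimal scalar, single-cell toy, not an implementation from any of the referenced texts; all weight values and the tiny input sequence are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM cell step with scalar input, hidden state and cell state.

    W maps each gate name to illustrative scalar weights (w_x, w_h, b).
    """
    def affine(key):
        w_x, w_h, b = W[key]
        return w_x * x + w_h * h_prev + b

    f = sigmoid(affine("f"))    # forget gate: how much old memory to keep
    i = sigmoid(affine("i"))    # input gate: how much new content to write
    o = sigmoid(affine("o"))    # output gate: how much memory to expose
    g = math.tanh(affine("g"))  # candidate memory content
    c = f * c_prev + i * g      # updated cell state (the "memory cell")
    h = o * math.tanh(c)        # new hidden state
    return h, c

# Illustrative weights, identical for all four gates
W = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:      # a tiny input sequence
    h, c = lstm_step(x, h, c, W)
```

The additive update of the cell state `c`, scaled by the forget gate rather than pushed through a squashing nonlinearity at every step, is what lets gradients survive over long sequences, which is the central point of the gating-mechanism discussion in the readings.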
 <!-- subsequent paragraphs come in larger fonts, so start with a paragraph -->
 <ol>
-<li> Goodfellow et al chapter 14.</li>
+<li> Goodfellow et al chapter 14, see <a href="https://www.deeplearningbook.org/contents/autoencoders.html" target="_self"><tt>https://www.deeplearningbook.org/contents/autoencoders.html</tt></a></li>
 <li> Raschka et al. Their chapter 17 contains only a brief introduction.</li>
-<li><a href="http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/" target="_self">Deep Learning Tutorial on AEs from Stanford University</a></li>
-<li><a href="https://blog.keras.io/building-autoencoders-in-keras.html" target="_self">Building AEs in Keras</a></li>
-<li><a href="https://www.tensorflow.org/tutorials/generative/autoencoder" target="_self">Introduction to AEs in TensorFlow</a></li>
-<li><a href="http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/slides/lec20.pdf" target="_self">Grosse, University of Toronto, Lecture on AEs</a></li>
-<li><a href="https://arxiv.org/abs/2003.05991" target="_self">Bank et al on AEs</a></li>
+<li>Deep Learning Tutorial on AEs from Stanford University at <a href="http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/" target="_self"><tt>http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/</tt></a></li>
+<li>Building AEs in Keras at <a href="https://blog.keras.io/building-autoencoders-in-keras.html" target="_self"><tt>https://blog.keras.io/building-autoencoders-in-keras.html</tt></a></li>
+<li>Introduction to AEs in TensorFlow at <a href="https://www.tensorflow.org/tutorials/generative/autoencoder" target="_self"><tt>https://www.tensorflow.org/tutorials/generative/autoencoder</tt></a></li>
+<li>Grosse, University of Toronto, Lecture on AEs at <a href="http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/slides/lec20.pdf" target="_self"><tt>http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/slides/lec20.pdf</tt></a></li>
+<li>Bank et al on AEs at <a href="https://arxiv.org/abs/2003.05991" target="_self"><tt>https://arxiv.org/abs/2003.05991</tt></a></li>
 <li> Baldi and Hornik, Neural networks and principal component analysis: Learning from examples without local minima, Neural Networks 2, 53 (1989)</li>
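The Baldi and Hornik paper cited above shows that the squared reconstruction error of a linear autoencoder has no suboptimal local minima and that its optimum performs principal component analysis. A minimal pure-Python sketch of such a linear autoencoder with a one-dimensional code, trained by plain batch gradient descent; the data set and hyperparameters are invented for illustration.

```python
# Linear autoencoder: encoder code = w . x, decoder x_hat = v * code.
# Trained on the mean squared reconstruction error.

data = [(2.0, 1.0), (-2.0, -1.0), (4.0, 2.0), (-4.0, -2.0)]  # all on one line

w = [0.3, 0.1]   # encoder weights (illustrative initial values)
v = [0.5, 0.5]   # decoder weights
lr = 0.01        # learning rate

def mse(w, v):
    """Mean squared reconstruction error over the data set."""
    total = 0.0
    for x in data:
        code = w[0] * x[0] + w[1] * x[1]
        err0 = v[0] * code - x[0]
        err1 = v[1] * code - x[1]
        total += err0 ** 2 + err1 ** 2
    return total / len(data)

initial_loss = mse(w, v)

for _ in range(500):
    gw = [0.0, 0.0]
    gv = [0.0, 0.0]
    for x in data:
        code = w[0] * x[0] + w[1] * x[1]
        err = [v[0] * code - x[0], v[1] * code - x[1]]
        # dL/dv_j = 2 * err_j * code
        gv[0] += 2.0 * err[0] * code
        gv[1] += 2.0 * err[1] * code
        # dL/dw_k = 2 * (err . v) * x_k, chain rule through the code
        s = err[0] * v[0] + err[1] * v[1]
        gw[0] += 2.0 * s * x[0]
        gw[1] += 2.0 * s * x[1]
    for k in range(2):
        w[k] -= lr * gw[k] / len(data)
        v[k] -= lr * gv[k] / len(data)

final_loss = mse(w, v)
```

Because the four points lie exactly on one line, a single code dimension suffices and gradient descent drives the reconstruction error toward zero, mirroring the PCA connection the paper establishes.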