
# Background reading

This is a list of background reading for the paper "Linear logic and recurrent neural networks" and, more generally, for program induction by neural methods. For the research papers, see DPsurvey.md; in this file we concentrate on more basic background material.

- For general background on neural networks see Goodfellow-Bengio-Courville. At a minimum, try to understand deep feedforward networks and backpropagation for them (II.6), plus ordinary RNNs and their backpropagation through time (II.10). A minimal backpropagation sketch appears after this list.

- Grefenstette has a nice talk on augmented RNNs (RNNs extended with external memory structures such as stacks).

- The particular package we use for training neural networks is Google's TensorFlow. For a clear exposition of the underlying concepts, see the official paper; a minimal gradient-computation sketch also appears after this list.

- For RNNs in TensorFlow see this and this; an unrolled RNN sketch appears after this list as well.

- The theory behind our approach is differential linear logic, which is a fancy version of automatic differentiation. For the role of automatic differentiation in machine learning, and of backpropagation specifically, see these lecture notes and the INRIA guide *What is automatic differentiation?* A dual-number sketch below makes automatic differentiation concrete.
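
As a companion to the Goodfellow-Bengio-Courville reading, here is a minimal sketch (not code from this repository) of a one-hidden-layer feedforward network trained by hand-derived backpropagation. All sizes, data, and hyperparameters are illustrative.

```python
# A one-hidden-layer feedforward network with manual backpropagation,
# in the spirit of Goodfellow-Bengio-Courville II.6. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# Parameters of a 1 -> 16 -> 1 network with tanh hidden units.
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)          # hidden activations
    Yhat = H @ W2 + b2                # network output
    loss = np.mean((Yhat - Y) ** 2)   # mean squared error

    # Backward pass: the chain rule applied layer by layer.
    dYhat = 2 * (Yhat - Y) / len(X)   # dL/dYhat
    dW2 = H.T @ dYhat; db2 = dYhat.sum(0)
    dH = dYhat @ W2.T                 # dL/dH
    dZ = dH * (1 - H ** 2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ; db1 = dZ.sum(0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```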
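To make the TensorFlow concepts concrete, here is a minimal sketch of its core idea: record a differentiable computation and ask the framework for gradients. This assumes the TensorFlow 2 eager API (`tf.GradientTape`); the graph-based API described in the original TensorFlow paper expresses the same idea differently.

```python
# Record a forward computation, then differentiate through it.
import tensorflow as tf

w = tf.Variable(3.0)
b = tf.Variable(1.0)
x = tf.constant(2.0)

with tf.GradientTape() as tape:
    y = w * x + b          # forward computation, recorded on the tape
    loss = (y - 5.0) ** 2  # scalar loss

# Reverse-mode differentiation through the recorded computation.
dw, db = tape.gradient(loss, [w, b])
print(dw.numpy(), db.numpy())  # dL/dw = 2*(y-5)*x = 8.0, dL/db = 2*(y-5) = 4.0
```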
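For the RNN material, here is a sketch (not taken from the linked tutorials) of a vanilla RNN unrolled over time in TensorFlow 2, showing the two things that distinguish RNNs from feedforward networks: the same weights are reused at every step, and gradients flow backwards through all the steps (backpropagation through time).

```python
# A vanilla RNN, h_t = tanh(x_t Wx + h_{t-1} Wh + b), unrolled over time.
import tensorflow as tf

T, batch, d_in, d_h = 5, 4, 3, 8  # illustrative sizes

Wx = tf.Variable(tf.random.normal([d_in, d_h], stddev=0.1))
Wh = tf.Variable(tf.random.normal([d_h, d_h], stddev=0.1))
b  = tf.Variable(tf.zeros([d_h]))

xs = tf.random.normal([T, batch, d_in])  # a toy input sequence
h = tf.zeros([batch, d_h])               # initial hidden state

with tf.GradientTape() as tape:
    for t in range(T):
        # The same weights are reused at every time step.
        h = tf.tanh(xs[t] @ Wx + h @ Wh + b)
    loss = tf.reduce_mean(h ** 2)  # toy loss on the final state

# Backpropagation through time: gradients flow back through every step.
grads = tape.gradient(loss, [Wx, Wh, b])
```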
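Finally, to make "automatic differentiation" concrete, here is a tiny forward-mode implementation using dual numbers: each value carries its derivative, and arithmetic propagates both exactly. This illustrates automatic differentiation in general; it is not an implementation of differential linear logic.

```python
# Forward-mode automatic differentiation via dual numbers.
import math

class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps  # value and its derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.eps * other.val + self.val * other.eps)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.eps)

# Differentiate f(x) = x*sin(x) + 2x at x = 1.5 by seeding eps = 1.
x = Dual(1.5, 1.0)
f = x * sin(x) + 2 * x
print(f.val, f.eps)  # f(1.5) and f'(1.5) = sin(x) + x*cos(x) + 2
```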