Bookmarks
Bookmarks Bar
DL
Articles
- 6 areas of AI & ML to watch (come back to this routinely)
- What a Deep Neural Network thinks about your #selfie
- Neuron explained using simple algebra – Chingu – Medium
- ML is Fun (World's easiest intro to ML)
- Identifying rare diseases, lung cancer & more with Deep Learning – Transmission Newsletter – Medium
- An overview of gradient descent optimization algorithms
- Write an AI to win at Pong from scratch with Reinforcement Learning – Medium
- Artificial Intelligence, Deep Learning, and Neural Networks, Explained
- Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images
- Beginner's Guide To Convolutional Neural Networks – Adit Deshpande – CS Undergrad at UCLA ('19)
Books
- AdvBk: Deep Learning (Ian Goodfellow)
- IntroBk: NN and DL (M. Nielsen)
- COURSE TEXTBOOK: Manning | Grokking Deep Learning
Competitions
- Bag of Words Meets Bags of Popcorn | Kaggle
CNNs
- 2014: Olah: Conv Nets: A Modular Perspective
- 2012: Krizhevsky et al: ImageNet classification w/ deep CNNs
- Convolution Animations
- ConvNetJS demo: Classify toy 2D data
- Understanding Convolutions - colah's blog
- Understanding Convolutional Neural Networks for NLP – WildML
- Convolutional Neural Networks (LeNet)
- Visualizing and Understanding CNNs
- 2014: Lin et al: Network in Network (1x1 Convolutions)
- 2014: Szegedy et al: Going deeper w/ convolutions (Google)
Courses
- OpenAI Gym
- bayareadlschool | presentations
- Sutton & Barto Book: Reinforcement Learning: An Introduction
- John Schulman: Deep Reinforcement Learning (YouTube)
Udacity__DL-NDF
- Udacity-DL NanoDegree Main Page
- Syllabus Overview (blog)
- Udacity-DL Slack
- Udacity-DL Forums
- Udacity-DL Student Handbook
- A Neural Network Playground
- An overview of ... Momentum
- Google TensorFlow Series
- Live Q&A with the Deep Learning Foundations Team - YouTube
- How to Use Tensorflow for Time Series (Live) - YouTube
MIT Self-Driving Car
- DeepTesla Tutorial
- GitHub - lexfridman/deeptesla
- DeepTraffic
Udacity__Intro-to-DL
- Take this 1st: DL
cs231n: CNNs for VizRec
- 2016 Main Page
- 2016 Notes on GitHub
- UCL Course on Reinforcement Learning (2015)
- 2015: Machine Learning (DL, CNN, RNN, RL, etc)
- 2017: UCB-cs294: Deep Reinforcement Learning
- 2015: UCB-cs294: Deep Reinforcement Learning (abbv version of 2017 course)
- Quoc Le’s Lectures on Deep Learning
- TensorFlow and deep learning, without a PhD
- Practical tutorials and labs for TensorFlow used by Nvidia, FFN, CNN, RNN, Kaggle, AE
- Watch this: Ng Overview of DL
- Rec: Intro to Comp Vision
- Rec: AI for Robotics
- Stanford: ConvNN for Visual Recognition
- Stat212b: Topics Course on Deep Learning by joanbruna
- Neural Networks for Machine Learning - University of Toronto | Coursera
- CS231n Convolutional Neural Networks for Visual Recognition
- CS231n Winter 2016 - YouTube
- CS 294 Deep Reinforcement Learning, Spring 2017
DataNews
- The Wild Week in AI
- DeepLearning.Net
- Transmission Newsletter – Medium
- DataTau (ML/DL News)
- Deep Learning Weekly
DataSets
- MNIST
- SVHN (Street View House Numbers)
- CIFAR-10 and CIFAR-100
de/trans CNNs
- 2010: Zeiler et al (NYU): Deconvolutional Networks
- Zeiler's Slides on Deconvolutional Networks
- CS231n Winter 2016: Lecture 13: Segmentation, soft attention, spatial transformers - YouTube
- What are deconvolutional layers? (Stack Exchange)
GANs
- [1406.2661] Generative Adversarial Networks
- [1606.03498] Improved Techniques for Training GANs
- openai/improved-gan: code for the paper "Improved Techniques for Training GANs"
- [1606.03657] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
- openai/InfoGAN
- Newmu/dcgan_code: Deep Convolutional Generative Adversarial Networks
- [1606.03476] Generative Adversarial Imitation Learning
- openai/imitation
Hardware
- NVIDIA TITAN X Graphics Card with Pascal (what OpenAI uses)
- Andrew's Linux Rig
- Build a super fast deep learning machine for under $1,000 - O'Reilly Media
Home Pages
- Bengio Home Page
- Andrej Karpathy Home Page
- Richard Socher - Home Page
- Greg Brockman (@gdb) | Twitter
- Ilya Sutskever's home page
- Trevor Blackwell | Home Page
- Diederik P. Kingma | Home Page
- John Schulman's Homepage
- Hugo Larochelle | Home Page
- Wojciech Zaremba | Home Page
- Adam Coates PhD Dissertation
- Home - colah's blog
Kernel Methods for DL
- 2009: Cho: Kernel Methods for Deep Learning
- 2011: Montavon: Kernel Analysis of Deep Networks
NLP
- Word2Vec & Friends (YouTube)
- 2013: Mikolov et al: Efficient Estimation of Word Representations in Vector Space (Word2Vec)
- 2014: Goldberg & Levy: word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method
- Word2vec - Wikipedia
- Word embedding - Wikipedia
- 2014: Olah: Deep Learning, NLP, and Representations
- 2010: Turian et al: Word Representations: A simple and general method for semi-supervised learning
- 2001: Bengio et al: A Neural Probabilistic Language Model (LISA publications page)
- 2003: Bengio et al: A Neural Probabilistic Language Model
- 2013: Luong et al: Better word representations w/ RNNs for Morphology (pdf)
- 2014: Norouzi et al: Zero-shot learning by convex combination of semantic embeddings
- 2011: Collobert et al: NLP (almost) from Scratch
- Stanford Course: Deep Learning for NLP
- Stanford Course: NLP with Deep Learning (most recent)
- 2013: Mikolov et al: Linguistic Regularities in Continuous Space Word Representations
- 2014: Hannun et al: Deep Speech: Scaling up end-to-end speech recognition
- 2015: Amodei et al: Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
- [1611.04558] Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
- Lexicon-Based Methods for Sentiment Analysis
- Sentiment Analysis w/ Deeply Learned Distributed Representations of Variable Length Texts
- Distributed Representations of Sentences and Documents
- LSTM Networks for Sentiment Analysis
OpenAI
- OpenAI Gym
- Ant-v1
- OpenAI Gym: Documentation
- Humanoid-v1
- Blog
- Requests for Research
- Jobs at OpenAI
- Gaming Environments
- OpenAI Forum
- NIPS 2016 OpenAI Schedule
- openai/universe-starter-agent
- Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free | WIRED
Q/Reinforcement Learning
- Deep Reinforcement Learning: Pong from Pixels
- UCL Course on Reinforcement Learning
- [1605.09674] VIME: Variational Information Maximizing Exploration
- openai/vime
Quasi-RNNs
- 2016: Bradbury et al: Quasi-Recurrent Neural Networks
RecEngPapers
- Proceedings of the RecSys 2011 Workshop on Human Decision Making in Recommender Systems (Decisions@RecSys’11) and User-Centric Evaluation of Recommender Systems and Their Interfaces - 2 (UCERSTI 2), affiliated with the 5th ACM Conference on Recommender Systems
- download
- a13-gomez-uribe.pdf
- bingham-walker.pdf
- Conference_UMAP_2016_re.pdf
- SIG-039-Perfect.indd
- 1409.2944.pdf
- End-to-end learning for music audio
- Improving Content-based and Hybrid Music Recommendation using Deep Learning
- Piczak2015-ESC-ConvNet.pdf
- Analyzing Spotify Data
- Workshop_RSWEB_2014.pdf
- FIVE_APPROACHES_TO_COLLECTING_TAGS_FOR_MUSIC_ISMIR08.pdf
- Gomez-Uribe: The Netflix recommender system: Algorithms,... - Google Scholar
Relations to Physics
- Path Integrals of Information (pdf)
- Path integral guided policy search
- A neural network wave formalism - ScienceDirect
- What are the connections between machine learning and physics? - Quora
- Deep Learning Relies on Renormalization, Physicists Find | Quanta Magazine
- Why Deep Learning Works II: the Renormalization Group – CALCULATED CONTENT
- Understanding Convolution in Deep Learning (It's all fluid dynamics, QM, etc)
- machine learning - Why the sudden fascination with tensors? - Cross Validated
- Neural Networks, Manifolds, and Topology
- Paper: Why does deep/cheap learning work so well? (Lin & Tegmark, 2016)
- [1410.3831] An exact mapping between the Variational Renormalization Group and Deep Learning
RNNs
- Understanding LSTM Networks
- The Unreasonable Effectiveness of Recurrent Neural Networks
- 1994: Bengio et al: Learning long-term dependencies with gradient descent is difficult (synopsis: why standard RNNs are good in theory, but suck in practice)
- 1997: Hochreiter & Schmidhuber: Long Short-term Memory (i.e., the new, non-sucky RNN)
- 2016: Olah & Carter: Attention and Augmented Recurrent Neural Networks
- [1601.06759] Pixel Recurrent Neural Networks
- Exploring the Limits of Language Modeling
- CS231n Lecture 10 - Recurrent Neural Networks, Image Captioning, LSTM - YouTube
- DLBook: Ch10: Sequence Modeling w/ Recurrent and Recursive Nets
- 1999: Gers, Schmidhuber & Cummins: Learning to Forget
- 2008: Graves: Supervised Sequence Labelling w/ RNNs (Dissertation)
- A Critical Review of RNNs for Sequence Learning
- 2016: Karpathy, Johnson, Fei-Fei: Visualizing and Understanding RNNs
- Written Memories: Understanding, Deriving and Extending the LSTM - R2RT
Style Transfer
- A Neural Algorithm of Artistic Style
- Perceptual Losses for Real-Time Style Transfer and Super-Resolution
- Instance Normalization
- GitHub - lengstrom/fast-style-transfer: Fast Style Transfer in TensorFlow! ⚡🖥🎨🖼
- Fast Style Transfer Models - Google Drive
VAEs
- High-Level Explanation of Variational Inference
- Blei2004.pdf
- [1505.05770] Variational Inference with Normalizing Flows
- [1312.6114] Auto-Encoding Variational Bayes
- [1502.04623] DRAW: A Recurrent Neural Network For Image Generation
- [1603.08575] Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
- 2011: Rifai et al: Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
- Exponential expressivity in deep neural networks through transient chaos (PDF)
SOMs
- SOM: Fundamentals (from jxb notes)
- SOMs: Algorithms & Applications (from jxb notes)
- SOM tutorial part 1
- Self-organizing map - Wikipedia
- Self-Organizing Maps with Google’s TensorFlow | Sachin Joglekar's blog
Scale/Rot-Inv DL
- 2014: Kanazawa et al: Locally Scale-Invariant Convolutional Neural Networks
- Quantifying translation-invariance in CNNs
- 2016: Marcos et al: Learning rotation invariant convolutional filters for texture classification
- Encoded invariance in CNNs
- Rotation-invariant neoperceptron
- akanazawa/si-convnet: Implementation of the [Locally Scale-Invariant Convolutional Neural Network](http://www.umiacs.umd.edu/~kanazawa/papers/sicnn_workshop2014.pdf)
- Transform-Invariant Convolutional Neural Networks for Image Classification and Search
- 2016: Duvenaud: Avoiding pathologies in very deep networks
- 2013: Szegedy et al: Intriguing properties of neural networks
- Can you beat a computer? (Karpathy's image test)
- Google Translate
- Amazon Mechanical Turk
- CrowdFlower (AI for your Biz)
- Project Malmo (Microsoft)
- TensorBoard
- MS COCO (Common Objects in Context)
- Gab41
- WildML – AI, Deep Learning, NLP
- ConvNetJS Deep Q Demo
- ConvNetJS: Deep Learning in your browser
- Google's ML Style Guide
- DL (small review paper: LeCun, Bengio, & Hinton)
- List of Most-Cited DL/ML papers
- Floyd Zero Setup Deep Learning
- Dropout: A Simple Way to Prevent NNs from Overfitting
- [0908.4425] Geometry of the restricted Boltzmann machine
- Practical Guide to Training Restricted Boltzmann Machines
- Recommending music on Spotify with deep learning – Sander Dieleman