Statistical Learning Research Group (Sailing)
2024
- November 8, Mathematical Discoveries from Program Search with Large Language Models, Pawan Kumar (Google DeepMind)
- November 1, How should we evaluate long-context language models? Mohit Iyyer (UMass Amherst)
Fall break
- October 11, What robots have taught me about machine learning, Chelsea Finn (Stanford University)
- October 4, Abide by the law and follow the flow: conservation laws for gradient flows (paper), Sibylle Marcotte (ENS)
- September 27, Causal Imputation and Causal Disentanglement, Chandler Squires (MIT)
- September 17, Optimal Quantile Estimation for Streams, Mihir Singhal (UC Berkeley)
Summer break
- June 28, Data Contribution Estimation for Machine Learning, Stephanie Schoch (University of Virginia)
- June 21, Do You Prefer Learning with Preferences? Aditya Gopalan (IISc)
- June 13, Transformers for Bootstrapped Amplitudes, François Charton (Meta)
- June 7, Chaining: a long story (Abel lecture), Michel Talagrand (CNRS)
- May 31, Nobel Prize lectures in physics (2023), Pierre Agostini (Ohio State University), Ferenc Krausz (University of Munich) and Anne L’Huillier (Lund University)
- May 24, Sparsification of Gaussian Processes, Anindya De (University of Pennsylvania)
- May 3, Inverse Reinforcement Learning, Pascal Poupart (University of Waterloo)
- April 26, Recreational Lunch
- April 19, No lunch seminar (BIOT’24 conference)
- April 12, Learning-Based Solutions for Inverse Problems, Stella Yu (University of Michigan)
- March 22, The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits, Ma et al. (Microsoft Research), paper
- March 15, Whiteboard Seminar: Mathematics of Machine Learning, part 3 (online learning), Félicien Hêche, PhD candidate.
- March 8, Whiteboard Seminar: Mathematics of Machine Learning, part 2 (convex optimization), Félicien Hêche, PhD candidate.
- March 1, Whiteboard Seminar: Mathematics of Machine Learning, part 1, Félicien Hêche, PhD candidate.
- February 23, The Many Faces of Responsible AI, Lora Aroyo (Google)
- February 14, Pretrained diffusion is all we need: a journey beyond training distribution, Tali Dekel (Weizmann Institute of Science)
- February 8, Heavy Tails in ML: Structure, Stability, Dynamics, Adam Wierman (Caltech)
- February 2, Unsupervised Pre-Training: Contrastive Learning, Chelsea Finn (Stanford University), class link
- January 25, Direct Preference Optimization: Your Language Model is Secretly a Reward Model, Rafael Rafailov et al. (Stanford University), paper
- January 18, Scaling Data-Constrained Language Models (at NeurIPS), (long version), Niklas Muennighoff (Peking University), paper
- January 11, Are Emergent Abilities of Large Language Models a Mirage? Brando Miranda (Stanford University), paper
2023
- December 14, Statistical Applications of Wasserstein Gradient Flows, Philippe Rigollet (MIT)
- December 7, Pareto Invariant Risk Minimization: Towards Mitigating The Optimization Dilemma in Out-of-Distribution Generalization, Yongqiang Chen (Chinese University Hong Kong)
- November 30, Artificial Intelligence, Ethics, and a Right to a Human Decision, John Tasioulas (University of Oxford)
- November 16, Analyzing Transfer Learning Bounds through Distributional Robustness, Jihun Hamm (Tulane University)
- November 9, Designing High-Dimensional Closed-Loop Optimal Control Using Deep Neural Networks, Jiequn Han (Flatiron Institute, NY)
- November 2, Climate modeling with AI: Hype or Reality?, Laure Zanna (NYU, Courant Institute)
- October 19, Generative Models and Physical Processes, Tommi Jaakkola (MIT)
- October 12, Quantifying causal influence in time series and beyond, Dominik Janzing (Amazon, Tübingen)
- October 5, Reasoning and Abstraction as Challenges for AI, Cezary Kaliszyk (University of Innsbruck)
- September 28, Steering AI for the Public Good: A Dialogue for the Future, Institute for Advanced Study (Princeton)
- September 21, Topological Modeling of Complex Data, Gunnar Carlsson (Stanford University)
- September 14, Watermarking of Large Language Models, Scott Aaronson (UT Austin / OpenAI)
- Summer break
- June 12, Transformers United, Andrej Karpathy (OpenAI), CS Seminar (Stanford University)
- June 5, Variational Autoencoder, Anand Avati (Apple), CS lecture (Stanford University)
- May 22, Could a Large Language Model be Conscious? David Chalmers (NYU)
- May 15, Transformers and Pretraining, NLP lecture (Stanford University)
- May 8, Self-Attention and Transformers, NLP lecture (Stanford University)
- May 1, Introduction to self-attention and transformers, NLP lecture (Stanford University)
- April 17, GPT-3 & Beyond, Christopher Potts (Stanford University)
- April 3, Reinforcement Learning 10 (Classic Games Case Study), Hado van Hasselt (DeepMind)
- March 27, How to increase certainty in predictive modeling, Emmanuel Candès (Stanford University)
- March 20, Reinforcement Learning 8 (Advanced Topics in Deep RL), Hado van Hasselt (DeepMind)
- March 13, Reinforcement Learning 7 (Planning and Models), Hado van Hasselt (DeepMind)
- March 6, Reinforcement Learning 6 (Policy Gradients and Actor Critics), Hado van Hasselt (DeepMind)
- February 20, Welcome lunch
- February 13, Reinforcement Learning 4 (Model-Free Prediction and Control), Hado van Hasselt (DeepMind)
- February 6, Offline Reinforcement Learning, Sergey Levine (UC Berkeley)
- January 30, Reinforcement Learning 2 (Exploration and Exploitation), Hado van Hasselt (DeepMind). HW: watch the rest of the video
- January 23, Reinforcement Learning 1, Hado van Hasselt (DeepMind). HW: watch the rest of the video
- Reference: RLbook2020.
- January 16, Introduction to Algebraic Topology, N. J. Wildberger (University of New South Wales, Australia)
- January 9, Signal Recovery with Generative Priors, Paul Hand (Northeastern University, Boston)
2022
- December 12, Learning-Based Low-Rank Approximations, Piotr Indyk (MIT), paper
- December 5, General graph problems with neural networks, Soledad Villar (New York University, Courant Institute)
- November 28, The Transformer Network for the Traveling Salesman Problem, Xavier Bresson (NTU)
- November 25, Artificial Intelligence in Acute Medicine: From Theory to Applications, Centre St-Roch, Room R102, Avenue des Sports 20, Yverdon-les-Bains, Switzerland
- November 21, Attention, Learn to Solve Routing Problems! Wouter Kool (University of Amsterdam)
- November 14, Why Did Quantum Entanglement Win the Nobel Prize in Physics? Sixty Symbols – Spooky Action at a Distance (Bell’s Inequality)
- November 7, EigenGame: PCA as a Nash Equilibrium, Ian Gemp (DeepMind), and Deep Semi-Supervised Anomaly Detection
- October 31, Discovering faster matrix multiplication algorithms with reinforcement learning, Yannic Kilcher
- October 17, Compressing Variational Bayes, Stephan Mandt (UC Irvine)
- October 10, From Machine Learning to Autonomous Intelligence, Yann LeCun (NYU, Meta)
- October 3, Diffusion Probabilistic Models, Jascha Sohl-Dickstein (Google Brain)
- September 12 & 26, Attention and Memory in Deep Learning, Alex Graves (DeepMind)
- September 5, Transformers and Self-Attention, Ashish Vaswani (Adept AI Labs)
- August 29, Ensuring Safety in Online Reinforcement Learning by Leveraging Offline Data, Sergey Levine (UC Berkeley)
- August 22, Geometric Deep Learning: The Erlangen Programme of ML, Michael Bronstein (Imperial College, London)
- August 15, The Devil is in the Tails and Other Stories of Interpolation, Niladri Chatterji (Stanford University)
- Summer break
- July 11, Gaussian multiplicative chaos: applications and recent developments, Nina Holden (ETH, Werner Wendelin Group)
- July 1, Meeting with UFC-LNIT.
- June 27, Statistical mechanics arising from random matrix theory, Thomas Spencer (IAS)
- June 20, Stop Explaining Black Box Machine Learning Models, Cynthia Rudin (Duke University)
- May 30 & June 13, Network Calculus, Jean-Yves Le Boudec (EPFL)
- May 16 & 23, Synthetic Healthcare Data Generation and Assessment: Challenges, Methods, and Impact on Machine Learning, Mihaela van der Schaar (University of Cambridge) and Ahmed Alaa (MIT).
- Reference: Régis Houssou et al., Generation and Simulation of Synthetic Datasets with Copulas, arXiv preprint arXiv:2203.17250
- May 9, Combining Reinforcement Learning & Constraint Programming for Combinator…, Louis-Martin Rousseau (Ecole Polytechnique de Montreal)
- May 2, Deep Reinforcement Learning at the Edge of the Statistical Precipice, Rishabh Agarwal (Google Brain)
- April 25, Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies, Paul Vicol (University of Toronto)
- April 11, I Can’t Believe Latent Variable Models Are Not Better, Chris Maddison (University of Toronto)
- April 4, From System 1 Deep Learning to System 2 Deep Learning, Yoshua Bengio (University of Montreal)
- March 14 and 21, Latent Dirichlet Allocation, Philipp Hennig (University of Tübingen). Interesting paper (Test of Time Award NeurIPS 2021): Online Learning for Latent Dirichlet Allocation, Matthew D. Hoffman, David M. Blei (Princeton University)
- March 7, On the Expressivity of Markov Reward, David Abel et al. (DeepMind), paper
- February 28, Continuous Time Dynamic Programming — The Hamilton-Jacobi-Bellman Equation, Neil Walton (University of Manchester)
- February 21, Computational Barriers in Statistical Estimation and Learning, Andrea Montanari (Stanford University)
- February 14, Offline Deep Reinforcement Learning Algorithms, Sergey Levine (UC Berkeley)
- February 7, Infusing Physics and Structure into Machine Learning, Anima Anandkumar (Caltech)
- January 31, Robust Predictable Control, Benjamin Eysenbach (CMU), web page, paper
- January 10, 17 and 24, Recent Advances in Integrating Machine Learning and Combinatorial Optimization – Tutorial at AAAI-21
- Tutorial webpage with slides: https://sites.google.com/view/ml-co-aaai-21/
Part 1: Introduction to combinatorial optimization & tutorial overview
Part 2: The pure ML approach: predicting feasible solutions
Part 3: The hybrid approach: improving exact solvers with ML
Part 4: Machine learning for MIP solving: challenges & literature
Part 5: Ecole: A python framework for learning in exact MIP solvers
Part 6: Decision-focused learning
Part 7: Concluding remarks
- This tutorial will provide an overview of the recent impact machine learning is having on combinatorial optimization, particularly under the Mixed Integer Programming (MIP) framework. Topics covered will include ML and reinforcement learning for predicting feasible solutions, improving exact solvers with ML, a software framework for learning in exact MIP solvers, and the emerging paradigm of decision-focused learning.
- The tutorial targets both junior and senior researchers in two prominent areas of interest to the AAAI community: (1) machine learning researchers looking for a challenging application domain, namely combinatorial optimization; (2) optimization practitioners and researchers who may benefit from learning about recent advances in ML methods for improving combinatorial optimization algorithms.
- Presented by: Elias B. Khalil (University of Toronto), Andrea Lodi (Polytechnique Montréal), Bistra Dilkina (University of Southern California), Didier Chételat (Polytechnique Montréal), Maxime Gasse (Polytechnique Montréal), Antoine Prouvost (Polytechnique Montréal), Giulia Zarpellon (Polytechnique Montréal) and Laurent Charlin (HEC Montréal)
2021
- December 13 and 20, Attention and Transformer Networks, Pascal Poupart (University of Waterloo, Canada)
- December 6, Yes, Generative Models Are The New Sparsity, Alex Dimakis (University of Texas at Austin)
- November 29, The Knockoffs Framework: New Statistical Tools for Replicable Selections, Emmanuel Candès (Stanford University)
- November 15 and 22, Advanced Machine Learning Day 3: Neural Architecture Search, Debadeepta Dey (Microsoft Research AI)
- November 8, Vapnik’s learning theory and the recent progress of AI (in French: La théorie de l’apprentissage de Vapnik et les progrès récents de l’IA), Yann LeCun (New York University, Facebook)
- November 1, Compositional Dynamics Modeling for Physical Inference and Control, Yunzhu Li (MIT)
- October 25, Safe and Efficient Exploration in Reinforcement Learning, Andreas Krause (ETH)
- October 18, Special guest today: Luis von Ahn, CEO and co-founder of Duolingo (former professor at CMU)
- October 11, Contrastive Learning: A General Self-supervised Learning Approach, Yonglong Tian (MIT)
- September 27 & October 4, Adversarial Robustness – Theory and Practice, J. Z. Kolter (CMU-Bosch) and A. Madry (MIT)
- No Lunch Seminar during the summer break
- June 21 & 28, Recent Developments in Over-parametrized Neural Networks, Jason Lee (University of Southern California) (from 43:42 to end)
- June 14, Feedback Control Perspectives on Learning, Jeff Shamma (University of Illinois at Urbana-Champaign)
- June 7, Self-Supervised Learning & World Models, Yann LeCun (NYU – Courant institute, Facebook)
- May 31, Theoretical Foundations of Graph Neural Networks, Petar Veličković (University of Cambridge)
- May 17 & 24, Deep Implicit Layers, David Duvenaud (University of Toronto), J. Zico Kolter (CMU), Matt Johnson (Google Brain)
- May 3 & 10, Bayesian Deep Learning and Probabilistic Model Construction, Andrew Gordon Wilson (Courant Institute, New York University)
- April 26, Learning Ising Models from One, Ten or a Thousand Samples, Constantinos Daskalakis (MIT)
- April 19, Deconstructing the Blockchain to Approach Physical Limits, Sreeram Kannan (University of Washington)
- April 12, Federated Learning and Analytics at Google and Beyond, Peter Kairouz (Google)
- March 22, Equivariant Networks and Natural Graph Networks, Taco Cohen (Qualcomm)
- March 8 and 15, Parameter-free Online Optimization, Francesco Orabona (Boston University), Ashok Cutkosky (Boston University), part 1, 2, 3, 4
- February 22 and March 1, Machine Learning with Signal Processing, Arno Solin (Aalto University)
- February 15, A Function Approximation Perspective on Sensory Representations, Cengiz Pehlevan (Harvard). References: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent, Neural Tangent Kernel: Convergence and Generalization in Neural Networks, On Exact Computation with an Infinitely Wide Neural Net, Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
- February 8: Hopfield Networks in 2021, Fireside chat between Sepp Hochreiter (Johannes Kepler University of Linz) and Dmitry Krotov (IBM Watson)
- February 1: Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator, Mihailo Jovanovic (USC)
- January 25: TIC Department Conference
- January 18, Influence: Using Disentangled Representations to Audit Model Predictions, Charlie Marx (Haverford College)
- January 11, Offline Reinforcement Learning, Sergey Levine (UC Berkeley)
2020
- December 14, Stanford Seminar (part 2) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem. New Theory Cracks Open the Black Box of Deep Learning, Quanta Magazine
- December 7, Stanford Seminar (part 1) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem
- November 30, Computer vision: who is harmed and who benefits? Timnit Gebru, Google. Links related to the talk: James Landay, Smart Interfaces for Human-Centered AI, Ali Alkhatib, Anthropological/Artificial Intelligence & the HAI, Faception, HireVue , Our Data Bodies,
- November 23, Network Telemetry and Analytics for tomorrow’s Zero Touch Operation Network, online talk by Swisscom Digital Lab
- November 16, Representation Learning Without Labels, Ali Eslami, DeepMind
- November 9, Active Learning: From Theory to Practice, Robert Nowak, University of Wisconsin-Madison
- November 2, Stanford Seminar – Machine Learning for Creativity, Interaction, and Inclusion, Rebecca Fiebrink, Goldsmiths, University of London
- October 26 (pdf), LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained, ICLR 2021 submission)
- October 12, Artificial Stupidity: The New AI and the Future of Fintech, Andrew W. Lo (Massachusetts Institute of Technology)
- September 28, LSTM is dead. Long Live Transformers! Leo Dirac, Amazon. LSTM paper, LSTM Diagrams – Understanding LSTM, Attention is all you need, Illustrated Attention, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Deep contextualized word representations, huggingface/transformers
- September 14, Machine Learning Projects Against COVID-19, Yoshua Bengio, Université de Montréal
- No Lunch Seminar during the summer break
- June 22, Kernel and Deep Regimes in Overparameterized Learning, Suriya Gunasekar (TTI-Chicago, Microsoft Research)
- June 15, Energy-based Approaches to Representation Learning, Yann LeCun (New York University, Facebook)
- June 8, Learning probability distributions; What can, What can’t be done, Shai Ben-David (University of Waterloo)
- References: Shai Ben-David et al., Learnability can be undecidable, Sushant Agarwal et al., On Learnability with Computable Learners
- May 25, Generalized Resilience and Robust Statistics, Jacob Steinhardt (UC Berkeley).
- References: Slides, Banghua Zhu et al., Generalized Resilience and Robust Statistics, Charu C. Aggarwal, Outlier Analysis
- May 18, From Classical Statistics to Modern Machine Learning, Mikhail Belkin (The Ohio State University).
- References: Mikhail Belkin, Siyuan Ma, Soumik Mandal To Understand Deep Learning We Need to Understand Kernel Learning, Luc Devroye et al., The Hilbert Kernel Regression Estimate, Cover and Hart, Nearest Neighbor Pattern Classification, Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate, Adityanarayanan Radhakrishnan et al., Overparameterized Neural Networks Can Implement Associative Memory, Mikhail Belkin et al., Reconciling modern machine-learning practice and the classical bias–variance trade-off, Madhu S. Advani, Andrew M. Saxe, High-dimensional dynamics of generalization error in neural networks
- May 11, Automatic Machine Learning, part 3, (from minute 90) Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides part 3.
- May 4, Automatic Machine Learning, part 2, (from minute 46-90) Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
- References: Thomas Elsken et al., Neural Architecture Search: A Survey. J. Bergstra et al., Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures, Hector Mendoza et al., Towards Automatically-Tuned Neural Networks, Barret Zoph et al., Neural Architecture Search with Reinforcement Learning, Thomas Elsken et al., Neural Architecture Search: A Survey, Peter J. Angeline et al., An Evolutionary Algorithm that Constructs Recurrent Neural Networks, Kenneth O. Stanley et al., Evolving Neural Networks through Augmenting Topologies, Risto Miikkulainen et al., Evolving Deep Neural Networks, Esteban Real et al., Regularized Evolution for Image Classifier Architecture Search, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Kirthevasan Kandasamy et al., Neural Architecture Search with Bayesian Optimisation and Optimal Transport, Chenxi Liu et al., Progressive Neural Architecture Search, Arber Zela et al., Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search, Catherine Wong et al., Transfer Learning with Neural AutoML, Tianqi Chen et al., Net2Net: Accelerating Learning via Knowledge Transfer, Tao Wei et al., Network Morphism, Han Cai et al., Path-Level Network Transformation for Efficient Architecture Search, Han Cai et al., Efficient Architecture Search by Network Transformation, Thomas Elsken et al., Simple and Efficient Architecture Search for CNNs, Corinna Cortes et al., AdaNet: Adaptive Structural Learning of Artificial Neural Networks, Shreyas Saxena, Convolutional Neural Fabrics, Gabriel Bender et al., Understanding and Simplifying One-Shot Architecture Search, Hieu Pham et al., Efficient Neural Architecture Search via Parameter Sharing, Andrew Brock et al., SMASH: One-Shot Model Architecture Search through HyperNetworks, Hanxiao Liu et al., DARTS: Differentiable Architecture Search, Mingxing Tan, MnasNet: Platform-Aware Neural Architecture Search for Mobile, Rui Leite et al., Selecting Classification Algorithms with Active Testing, Salisu Mamman Abdulrahman et al., Speeding up algorithm selection using average ranking and active testing by introducing runtime, Martin Wistuba et al., Learning Hyperparameter Optimization Initializations, J. N. van Rijn et al., Hyperparameter Importance Across Datasets, Philipp Probst et al., Tunability: Importance of Hyperparameters of Machine Learning Algorithms, Martin Wistuba et al., Hyperparameter Search Space Pruning – A New Component for Sequential Model-Based Hyperparameter Optimization, C. E. Rasmussen et al., Gaussian Processes for Machine Learning, Martin Wistuba et al., Scalable Gaussian process-based transfer surrogates for hyperparameter optimization, Matthias Feurer et al., Scalable Meta-Learning for Bayesian Optimization
- April 27, Automatic Machine Learning, part 1, Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
- References: Part 1: Book, chapter 1, J. Močkus, On bayesian methods for seeking the extremum, Nando de Freitas et al., Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations, Kenji Kawaguchi et al., Bayesian Optimization with Exponential Convergence, Ziyu Wang et al., Bayesian Optimization in High Dimensions via Random Embeddings, Frank Hutter et al., Sequential Model-Based Optimization for General Algorithm Configuration, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Leo Breiman, Random Forests, Jasper Snoek et al., Scalable Bayesian Optimization Using Deep Neural Networks, Jost Tobias Springenberg, Bayesian Optimization with Robust Bayesian Neural Networks, James Bergstra, Algorithms for Hyper-Parameter Optimization, Hans-Georg Beyer, Hans-Paul Schwefel, Evolution strategies – A comprehensive introduction, Nikolaus Hansen, The CMA Evolution Strategy: A Tutorial, Ilya Loshchilov et al., CMA-ES for hyperparameter optimization of neural networks, Tobias Domhan et al., Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, Luca Franceschi et al., Bilevel Programming for Hyperparameter Optimization and Meta-Learning, Jelena Luketina et al., Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters, Aaron Klein et al., Learning curve prediction with Bayesian neural networks, Kevin Swersky, Multi-Task Bayesian Optimization, Kevin Swersky, Freeze-Thaw Bayesian optimization, Kirthevasan Kandasamy, Multi-fidelity Bayesian Optimisation with Continuous Approximations, Stefan Falkner et al., BOHB: Robust and Efficient Hyperparameter Optimization at Scale, GitHub link, Lisha Li et al., Hyperband: Bandit-based configuration evaluation for hyperparameter optimization, Kevin Jamieson, Non-stochastic Best Arm Identification and Hyperparameter Optimization, Chris Thornton et al., Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, Brent Komer et al., Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, Matthias Feurer et al., Efficient and Robust Automated Machine Learning, Auto-sklearn, GitHub link, Randal S. Olson et al., Automating Biomedical Data Science Through Tree-Based Pipeline Optimization
- April 20, Using Knockoffs to Find Important Variables with Statistical Guarantees, Lucas Janson (Harvard University)
- April 6, Efficient Deep Learning with Humans in the Loop, Zachary Lipton (Carnegie Mellon University)
- References: Davis Liang et al., Learning Noise-Invariant Representations for Robust Speech Recognition, Zachary C. Lipton et al., BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems, Yanyao Shen et al., Deep Active Learning for Named Entity Recognition, Aditya Siddhant et al., Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study, David Lowell et al., Practical Obstacles to Deploying Active Learning, Peiyun Hu et al., Active Learning with Partial Feedback, Ashish Khetan et al., Learning From Noisy Singly-labeled Data, Jonathon Byrd et al., What is the Effect of Importance Weighting in Deep Learning? Jason Yosinski et al., Understanding Neural Networks Through Deep Visualization
- March 30, Studying Generalization in Deep Learning via PAC-Bayes, Gintare Karolina Dziugaite (Element AI)
- A few references: G. K. Dziugaite, D. Roy, Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Huang et al., Stochastic Neural Network with Kronecker Flow, Zhou et al., Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach, Abadi et al., Deep Learning with Differential Privacy, R. Herbrich, T. Graepel, C. Campbell, Bayes point machines, Neyshabur et al., The role of over-parametrization in generalization of neural networks, K. Miyaguchi, PAC-Bayesian Transportation Bound
- A little bit of background on probably approximately correct (PAC) learning: Probably Approximately Correct Learning, A primer on PAC-Bayesian learning
- March 23, Integrating Constraints into Deep Learning Architectures with Structured Layers, J. Zico Kolter (Carnegie Mellon University)
- References: Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng, Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. Brandon Amos, J. Zico Kolter, OptNet: Differentiable Optimization as a Layer in Neural Networks. Po-Wei Wang, Priya L. Donti, Bryan Wilder, Zico Kolter, SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud, Neural Ordinary Differential Equations. Shaojie Bai, J. Zico Kolter, Vladlen Koltun, Trellis Networks for Sequence Modeling. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Attention Is All You Need
- March 9 & March 16, From Deep Learning of Disentangled Representations to Higher-level Cognition, Yoshua Bengio (Université de Montréal). Lesson on Variational Auto-Encoders, based on Irina Higgins et al., β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.
- References: Stanislas Dehaene, Hakwan Lau, Sid Kouider, What is consciousness, and could machines have it? Yoshua Bengio, The Consciousness Prior. Yoshua Bengio et al., Better Mixing via Deep Representations. Valentin Thomas et al., Independently Controllable Factors. Donald W. Mathis and Michael C. Mozer, On the Computational Utility of Consciousness. Becker & Hinton, Self-organizing neural network that discovers surfaces in random-dot stereograms, Nature
- March 2, Rebooting AI, Gary Marcus (Robust AI)
- February 24, Is Optimization the Right Language to Understand Deep Learning? Sanjeev Arora (Princeton University)
- February 17, Adversarial Machine Learning, Ian Goodfellow (Google)
- February 10, Our Mathematical Universe, Max Tegmark (MIT)
- February 3, Nobel Lecture: Michel Mayor, Nobel Prize in Physics 2019
- January 29, How to Successfully Harness Machine Learning to Combat Fraud and Abuse, Elie Bursztein, Anti-Abuse Research Lead (Google)
2019
- December 16 & January 13, 2019-20, Variational Inference: Foundations and Innovations (Part 2, 46′), David Blei (Columbia University)
- December 2 & 9, 2019, Variational Inference: Foundations and Innovations (Part 1), David Blei (Columbia University)
- November 18, On Large Deviation Principles for Large Neural Networks, Joan Bruna (Courant Institute of Mathematical Sciences, NYU)
- November 11, 2019, Anomaly Detection using Neural Networks, Dean Langsam (BlueVine)
- October 28 & November 4, 2019, Extreme Value Theory, Paul Embrechts (ETH)
- October 7, 2019, On the Optimization Landscape of Matrix and Tensor Decomposition Problems, Tengyu Ma (Princeton University)
- September 30, 2019, Recurrent Neural Networks, Ava Soleimany (MIT)
- September 23, 2019, When deep learning does not learn, Emmanuel Abbe (EPFL and Princeton)
- July 15, 2019, Optimality in Locally Private Estimation and Learning, John Duchi (Stanford)
- July 1, 2019. Capsule Networks, Geoffrey Hinton (University of Toronto – Google Brain – Vector institute)
- June 24, 2019, A multi-perspective introduction to the EM algorithm, William M. Wells III.
- June 17, 2019, Theoretical Perspectives on Deep Learning, Nati Srebro (TTI Chicago)
- May 27, 2019. 2018 ACM Turing Award. Stanford Seminar – Human in the Loop Reinforcement Learning. Emma Brunskill (Stanford)
- May 20, 2019, How Graph Technology Is Changing Artificial Intelligence and Machine Learning, Amy E. Hodler (Neo4j), Jake Graham (Neo4j).
- May 13, 2019, 2017 Nobel Lectures in Physics, awarded “for decisive contributions to the LIGO detector and the observation of gravitational waves”. Rainer Weiss (MIT), Barry C. Barish (Caltech) and Kip S. Thorne (Caltech)
- May 6, 2019, Accessorize to a Crime: Real and Stealthy Attacks on State-Of-The-Art Face Recognition, Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer (Carnegie Mellon University) and Michael K. Reiter (University of North Carolina Chapel Hill), paper
- April 29, 2019, Build Intelligent Fraud Prevention with ML and Graphs, Nav Mathur, Graham Ganssle
- April 15, 2019, Active Learning: Why Smart Labeling is the Future of Data Annotation, Jennifer Prendki (Figure Eight)
- April 8, 2019, Generalization, Interpolation, and Neural Nets, Alexander Rakhlin (MIT)
- April 1, 2019, Similarity learning using deep neural networks – Jacek Komorowski (Warsaw University of Technology)
- March 18/25, 2019, Deep Reinforcement Learning (First lecture of MIT course 6.S091), Lex Fridman (MIT)
- March 11, 2019, Ensembles: Boosting, Alexander Ihler (University of California, Irvine)
- March 4, 2019, Dataset shift in machine learning, Peter Prettenhofer (DataRobot)
- February 25, 2019, Could Machine Learning Ever Cure Alzheimer’s Disease? – Winston Hide (Sheffield University)
- February 18, 2019, 2015 IAAI Winner: Intelligent Surgical Scheduling System
- February 11, 2019, Artificial Intelligence Machine Learning Big Data, Exponential Finance – Neil Jacobstein (Singularity University)
- February 4, 2019, Bayesian Deep Learning with Edward (and a trick using Dropout) – Andrew Rowan (PrismFP)
- January 28, 2019, Ouroboros, Aggelos Kiayias (University of Edinburgh)
- January 21, 2019, Cosmos Proof of Stake – Sunny Aggarwal
- January 14, 2019, Geometric Deep Learning – Michael Bronstein (University of Lugano and Tel Aviv University)
- January 7, 2019, Deep Generative Networks as Inverse Problems – Stéphane Mallat, Ecole Normale Supérieure (ENS)
2018
- December 3/17, 2018, Convex Optimization and Applications – Stephen Boyd (Stanford University)
- November 26, 2018, The mathematics of machine learning and deep learning – Sanjeev Arora (Princeton University)
- November 15, 2018, Reinforcement Learning in Healthcare: Challenges and Promise – Finale Doshi-Velez (Harvard University)
2023
- December 14, Statistical Applications of Wasserstein Gradient Flows, Philippe Rigollet (MIT)
- December 7, Pareto Invariant Risk Minimization: Towards Mitigating The Optimization Dilemma in Out-of-Distribution Generalization, Yongqiang Chen (Chinese University Hong Kong)
- November 30, Artificial Intelligence, Ethics, and a Right to a Human Decision, John Tasioulas (University of Oxford)
- November 16, Analyzing Transfer Learning Bounds through Distributional Robustness, Jihun Hamm (Tulane University)
- November 9, Designing High-Dimensional Closed-Loop Optimal Control Using Deep Neural Networks, Jiequn Han (Flatiron Institute, NY)
- November 2, Climate modeling with AI: Hype or Reality?, Laure Zanna (NYU, Courant Institute)
- October 19, Generative Models and Physical Processes, Tommi Jaakkola (MIT)
- October 12, Quantifying causal influence in time series and beyond, Dominik Janzing (Amazon, Tübingen)
- October 5, Reasoning and Abstraction as Challenges for AI, Cezary Kaliszyk (University of Innsbruck)
- September 28, Steering AI for the Public Good: A Dialogue for the Future, Institute for Advanced Study (Princeton)
- September 21, Topological Modeling of Complex Data, Gunnar Carlsson (Stanford University)
- September 14, Watermarking of Large Language Models, Scott Aaronson (UT Austin / OpenAI)
- Summer break
- June 12, Transformers United, Andrej Karpathy (OpenAI), CS Seminar (Stanford University)
- June 5, Variational Autoencoder, Anand Avati (Apple), CS lecture (Stanford University)
- May 22, Could a Large Language Model be Conscious? David Chalmers (NYU)
- May 15, Transformers and Pretraining, NLP lecture (Stanford University)
- May 8, Self-Attention and Transformers, NLP lecture (Stanford University)
- May 1, Introduction to self-attention and transformers, NLP lecture (Stanford University)
- April 17, GPT-3 & Beyond, Christopher Potts (Stanford University)
- April 3, Reinforcement Learning 10 (Classic Games Case Study), Hado van Hasselt (DeepMind)
- March 27, How to increase certainty in predictive modeling, Emmanuel Candès (Stanford University)
- March 20, Reinforcement Learning 8 (Advanced Topics in Deep RL), Hado van Hasselt (DeepMind)
- March 13, Reinforcement Learning 7 (Planning and Models), Hado van Hasselt (DeepMind)
- March 6, Reinforcement Learning 6 (Policy Gradients and Actor Critics), Hado van Hasselt (DeepMind)
- February 20, Welcome lunch
- February 13, Reinforcement Learning 4 (Model-Free Prediction and Control), Hado van Hasselt (DeepMind)
- February 6, Offline Reinforcement Learning, Sergey Levine (UC Berkeley)
- January 30, Reinforcement Learning 2 (Exploration and Exploitation), Hado van Hasselt (DeepMind). HW: watch the rest of the video
- January 23, Reinforcement Learning 1, Hado van Hasselt (DeepMind). HW: watch the rest of the video
- Reference: RLbook2020.
- January 16. Introduction to Algebraic Topology, N J Wildberger (University of New South Wales, Australia)
- January 9, Signal Recovery with Generative Priors, Paul Hand (Northeastern University, Boston)
2022
- December 12, Learning-Based Low-Rank Approximations, Piotr Indyk (MIT), paper
- December 5, General graph problems with neural networks, Soledad Villar (New York University, Courant Institute)
- November 28, The Transformer Network for the Traveling Salesman Problem, Xavier Bresson (NTU)
- November 25, Artificial Intelligence in Acute Medicine: from theory to applications, Centre St-Roch, salle R102, Avenue des Sports 20, Yverdon-les-Bains, Switzerland
- November 21, Attention, Learn to Solve Routing Problems! Wouter Kool (University of Amsterdam)
- November 14, Why Did Quantum Entanglement Win the Nobel Prize in Physics? Sixty Symbols – Spooky Action at a Distance (Bell’s Inequality)
- November 7, EigenGame PCA as a Nash Equilibrium, Ian Gemp (DeepMind), Deep Semi-Supervised Anomaly Detection
- October 31, Discovering faster matrix multiplication algorithms with reinforcement learning, Yannic Kilcher
- October 17, Compressing Variational Bayes, Stephan Mandt (UC Irvine)
- October 10, From Machine Learning to Autonomous Intelligence, Yann LeCun (NYU, Meta)
- October 3, Diffusion Probabilistic Models, Jascha Sohl-Dickstein (Google Brain)
- September 12 & 26, Attention and Memory in Deep Learning, Alex Graves (DeepMind)
- September 5, Transformers and Self-Attention, Ashish Vaswani (Adept AI Labs)
- August 29, Ensuring Safety in Online Reinforcement Learning by Leveraging Offline Data, Sergey Levine (UC Berkeley)
- August 22, Geometric Deep Learning: The Erlangen Programme of ML, Michael Bronstein (Imperial College, London)
- August 15, The Devil is in the Tails and Other Stories of Interpolation, Niladri Chatterji (Stanford University)
- Summer break
- July 11, Gaussian multiplicative chaos: applications and recent developments, Nina Holden (ETH, Wendelin Werner group)
- July 1, Meeting with UFC-LNIT.
- June 27, Statistical mechanics arising from random matrix theory, Thomas Spencer (IAS)
- June 20, Stop Explaining Black Box Machine Learning Models, Cynthia Rudin (Duke University)
- May 30 & June 13, Network Calculus, Jean-Yves Le Boudec (EPFL)
- May 16 & 23, Synthetic Healthcare Data Generation and Assessment: Challenges, Methods, and Impact on Machine Learning, Mihaela van der Schaar (University of Cambridge) and Ahmed Alaa (MIT).
- Reference: Régis Houssou et al., Generation and Simulation of Synthetic Datasets with Copulas, arXiv preprint arXiv:2203.17250
- May 9, Combining Reinforcement Learning & Constraint Programming for Combinator…, Louis-Martin Rousseau (Ecole Polytechnique de Montreal)
- May 2, Deep Reinforcement Learning at the Edge of the Statistical Precipice, Rishabh Agarwal (Google Brain)
- April 25, Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies, Paul Vicol (University of Toronto)
- April 11, I Can’t Believe Latent Variable Models Are Not Better, Chris Maddison (University of Toronto)
- April 4, From System 1 Deep Learning to System 2 Deep Learning, Yoshua Bengio (University of Montreal)
- March 14 and 21, Latent Dirichlet Allocation, Philipp Hennig (University of Tübingen). Interesting paper (Test of Time Award NeurIPS 2021): Online Learning for Latent Dirichlet Allocation, Matthew D. Hoffman, David M. Blei (Princeton University)
- March 7, On the Expressivity of Markov Reward, David Abel (DeepMind) et al. paper
- February 28, Continuous Time Dynamic Programming — The Hamilton-Jacobi-Bellman Equation, Neil Walton (University of Manchester)
- February 21, Computational Barriers in Statistical Estimation and Learning, Andrea Montanari (Stanford University)
- February 14, Offline Deep Reinforcement Learning Algorithms, Sergey Levine (UC Berkeley)
- February 7, Infusing Physics and Structure into Machine Learning, Anima Anandkumar (Caltech)
- January 31, Robust Predictable Control, Benjamin Eysenbach (CMU), web page, paper
- January 10, 17 and 24, Recent Advances in Integrating Machine Learning and Combinatorial Optimization – Tutorial at AAAI-21
- Tutorial webpage with slides: https://sites.google.com/view/ml-co-aaai-21/
- Part 1: Introduction to combinatorial optimization & tutorial overview
- Part 2: The pure ML approach: predicting feasible solutions
- Part 3: The hybrid approach: improving exact solvers with ML
- Part 4: Machine learning for MIP solving: challenges & literature
- Part 5: Ecole: A Python framework for learning in exact MIP solvers
- Part 6: Decision-focused learning
- Part 7: Concluding remarks
- This tutorial provides an overview of the recent impact machine learning is having on combinatorial optimization, particularly under the Mixed Integer Programming (MIP) framework. Topics covered include ML and reinforcement learning for predicting feasible solutions, improving exact solvers with ML, a software framework for learning in exact MIP solvers, and the emerging paradigm of decision-focused learning.
- The tutorial targets both junior and senior researchers in two prominent areas of interest to the AAAI community: (1) machine learning researchers looking for a challenging application domain, namely combinatorial optimization; (2) optimization practitioners and researchers who may benefit from learning about recent advances in ML methods for improving combinatorial optimization algorithms.
- Presented by: Elias B. Khalil (University of Toronto), Andrea Lodi (Polytechnique Montréal), Bistra Dilkina (University of Southern California), Didier Chételat (Polytechnique Montréal), Maxime Gasse (Polytechnique Montréal), Antoine Prouvost (Polytechnique Montréal), Giulia Zarpellon (Polytechnique Montréal) and Laurent Charlin (HEC Montréal)
2021
- December 13 and 20, Attention and Transformer Networks, Pascal Poupart (University of Waterloo, Canada)
- December 6, Yes, Generative Models Are The New Sparsity, Alex Dimakis (University of Texas at Austin)
- November 29. The Knockoffs Framework: New Statistical Tools for Replicable Selections, Emmanuel Candès (Stanford University)
- November 15 and 22. Advanced Machine Learning Day 3: Neural Architecture Search, Debadeepta Dey (Microsoft Research AI)
- November 8, « La théorie de l’apprentissage de Vapnik et les progrès récents de l’IA » (Vapnik’s learning theory and recent progress in AI), Yann LeCun (New York University, Facebook)
- November 1, Compositional Dynamics Modeling for Physical Inference and Control, Yunzhu Li (MIT)
- October 25, Safe and Efficient Exploration in Reinforcement Learning, Andreas Krause (ETH)
- October 18, Special guest today: Luis von Ahn, CEO and co-founder of Duolingo (former professor at CMU)
- October 11, Contrastive Learning: A General Self-supervised Learning Approach, Yonglong Tian (MIT)
- September 27 & October 4, Adversarial Robustness – Theory and Practice, J. Z. Kolter (CMU-Bosch) and A. Madry (MIT)
- No Lunch Seminar during the summer break
- June 21 & 28, Recent Developments in Over-parametrized Neural Networks, Jason Lee (University of Southern California) (from 43:42 to the end)
- June 14, Feedback Control Perspectives on Learning, Jeff Shamma (University of Illinois at Urbana-Champaign)
- June 7, Self-Supervised Learning & World Models, Yann LeCun (NYU – Courant institute, Facebook)
- May 31, Theoretical Foundations of Graph Neural Networks, Petar Veličković (University of Cambridge)
- May 17 & 24, Deep Implicit Layers, David Duvenaud (University of Toronto), J. Zico Kolter (CMU), Matt Johnson (Google Brain)
- May 3 & 10, Bayesian Deep Learning and Probabilistic Model Construction, Andrew Gordon Wilson (Courant Institute, New York University)
- April 26, Learning Ising Models from One, Ten or a Thousand Samples, Constantinos Daskalakis (MIT)
- April 19, Deconstructing the Blockchain to Approach Physical Limits, Sreeram Kannan (University of Washington)
- April 12, Federated Learning and Analytics at Google and Beyond, Peter Kairouz (Google)
- March 22, Equivariant Networks and Natural Graph Networks, Taco Cohen (Qualcomm)
- March 8 and 15, Parameter-free Online Optimization, Francesco Orabona (Boston University), Ashok Cutkosky (Boston University), part 1, 2, 3, 4.
- February 22 and March 1, Machine Learning with Signal Processing, Arno Solin (Aalto University)
- February 15. A Function Approximation of Perspective on Sensory Representations, Cengiz Pehlevan (Harvard). References: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent, Neural Tangent Kernel: Convergence and Generalization in Neural Networks, On Exact Computation with an Infinitely Wide Neural Net, Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
- February 8: Hopfield Networks in 2021, Fireside chat between Sepp Hochreiter (Johannes Kepler University of Linz) and Dmitry Krotov (IBM Watson)
- February 1: Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator, Mihailo Jovanovic (USC)
- January 25: TIC Department Conference
- January 18, Influence: Using Disentangled Representations to Audit Model Predictions, Charlie Marx (Haverford College)
- January 11, Offline Reinforcement Learning, Sergey Levine, UC Berkeley
2020
- December 14, Stanford Seminar (part 2) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem. New Theory Cracks Open the Black Box of Deep Learning, Quanta Magazine
- December 7, Stanford Seminar (part 1) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem
- November 30, Computer vision: who is harmed and who benefits? Timnit Gebru, Google. Links related to the talk: James Landay, Smart Interfaces for Human-Centered AI, Ali Alkhatib, Anthropological/Artificial Intelligence & the HAI, Faception, HireVue, Our Data Bodies
- November 23, Network Telemetry and Analytics for tomorrow’s Zero Touch Operation Network, online talk by Swisscom Digital Lab
- November 16, Representation Learning Without Labels, Ali Eslami, DeepMind
- November 9, Active Learning: From Theory to Practice, Robert Nowak, University of Wisconsin-Madison
- November 2, Stanford Seminar – Machine Learning for Creativity, Interaction, and Inclusion, Rebecca Fiebrink, Goldsmiths, University of London
- October 26 (pdf), LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained, ICLR 2021 submission)
- October 12, Artificial Stupidity: The New AI and the Future of Fintech, Andrew W. Lo (Massachusetts Institute of Technology)
- September 28, LSTM is dead. Long Live Transformers! Leo Dirac, Amazon. LSTM paper, LSTM Diagrams – Understanding LSTM, Attention is all you need, Illustrated Attention, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Deep contextualized word representations, huggingface/transformers
- September 14, Machine Learning Projects Against COVID-19, Yoshua Bengio, Université de Montréal
- No Lunch Seminar during the summer break
- June 22, Kernel and Deep Regimes in Overparameterized Learning, Suriya Gunasekar (TTI-Chicago, Microsoft Research)
- June 15, Energy-based Approaches to Representation Learning, Yann LeCun (New York University, Facebook)
- June 8, Learning probability distributions; What can, What can’t be done, Shai Ben-David (University of Waterloo)
- References: Shai Ben-David et al., Learnability can be undecidable, Sushant Agarwal et al., On Learnability with Computable Learners
- May 25, Generalized Resilience and Robust Statistics, Jacob Steinhardt (UC Berkeley).
- References: Slides, Banghua Zhu et al., Generalized Resilience and Robust Statistics, Charu C. Aggarwal, Outlier Analysis
- May 18, From Classical Statistics to Modern Machine Learning, Mikhail Belkin (The Ohio State University).
- References: Mikhail Belkin, Siyuan Ma, Soumik Mandal To Understand Deep Learning We Need to Understand Kernel Learning, Luc Devroye et al., The Hilbert Kernel Regression Estimate, Cover and Hart, Nearest Neighbor Pattern Classification, Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate, Adityanarayanan Radhakrishnan et al., Overparameterized Neural Networks Can Implement Associative Memory, Mikhail Belkin et al., Reconciling modern machine-learning practice and the classical bias–variance trade-off, Madhu S. Advani, Andrew M. Saxe, High-dimensional dynamics of generalization error in neural networks
- May 11, Automatic Machine Learning, part 3 (from minute 90), Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides part 3.
- May 4, Automatic Machine Learning, part 2 (minutes 46-90), Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
- References: Thomas Elsken et al., Neural Architecture Search: A Survey. J. Bergstra et al., Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures, Hector Mendoza et al., Towards Automatically-Tuned Neural Networks, Barret Zoph et. al., Neural Architecture Search with Reinforcement Learning, Thomas Elsken et al., Neural Architecture Search: A Survey, Peter J. Angeline et al., An Evolutionary Algorithm that Constructs Recurrent Neural Networks, Kenneth O. Stanley et al., Evolving Neural Networks through Augmenting Topologies, Risto Miikkulainen et al., Evolving Deep Neural Networks, Esteban Real et al.,Regularized Evolution for Image Classifier Architecture Search, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Kirthevasan Kandasamy et al., Neural Architecture Search with Bayesian Optimisation and Optimal Transport, Chenxi Liu et al., Progressive Neural Architecture Search, Arber Zela et al., Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search, Catherine Wong et al., Transfer Learning with Neural AutoML, Tianqi Chen et al., Net2Net: Accelerating Learning via Knowledge Transfer, Tao Wei et al., Network Morphism, Han Cai et al., Path-Level Network Transformation for Efficient Architecture Search, Han Cai et al., Efficient Architecture Search by Network Transformation, Thomas Elsken et al., Simple and Efficient Architecture Search for CNNs, Corinna Cortes et al., AdaNet: Adaptive Structural Learning of Artificial Neural Networks, Shreyas Saxena, Convolutional Neural Fabrics, Gabriel Bender et al., Understanding and Simplifying One-Shot Architecture Search, Hieu Pham et al., Efficient Neural Architecture Search via Parameter Sharing, Andrew Brock et al., SMASH: One-Shot Model Architecture Search through HyperNetworks, Hanxiao Liu et al., DARTS: Differentiable Architecture Search, Mingxing Tan, 
MnasNet: Platform-Aware Neural Architecture Search for Mobile, Rui Leite et al., Selecting Classification Algorithms with Active Testing, Salisu Mamman Abdulrahman et al., Speeding up algorithm selection using average ranking and active testing by introducing runtime, Martin Wistuba et al., Learning Hyperparameter Optimization Initializations, J. N. van Rijn et al., Hyperparameter Importance Across Datasets, Philipp Probst et al., Tunability: Importance of Hyperparameters of Machine Learning Algorithms, Martin Wistuba et al., Hyperparameter Search Space Pruning – A New Component for Sequential Model-Based Hyperparameter Optimization, C. E. Rasmussen et al.,Gaussian Processes for Machine Learning, Martin Wistuba et al., Scalable Gaussian process-based transfer surrogates for hyperparameter optimization, Matthias Feurer et al., Scalable Meta-Learning for Bayesian Optimization
- April 27, Automatic Machine Learning, part 1, Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
- References: Part 1: Book, chapter 1, J. Močkus, On Bayesian methods for seeking the extremum, Nando de Freitas et al., Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations, Kenji Kawaguchi et al., Bayesian Optimization with Exponential Convergence, Ziyu Wang et al., Bayesian Optimization in High Dimensions via Random Embeddings, Frank Hutter et al., Sequential Model-Based Optimization for General Algorithm Configuration, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Leo Breiman, Random Forests, Jasper Snoek et al., Scalable Bayesian Optimization Using Deep Neural Networks, Jost Tobias Springenberg, Bayesian Optimization with Robust Bayesian Neural Networks, James Bergstra, Algorithms for Hyper-Parameter Optimization, Hans-Georg Beyer, Hans-Paul Schwefel, Evolution strategies – A comprehensive introduction, Nikolaus Hansen, The CMA Evolution Strategy: A Tutorial, Ilya Loshchilov et al., CMA-ES for hyperparameter optimization of neural networks, Tobias Domhan et al., Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, Luca Franceschi et al., Bilevel Programming for Hyperparameter Optimization and Meta-Learning, Jelena Luketina et al., Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters, Aaron Klein et al., Learning curve prediction with Bayesian neural networks, Kevin Swersky, Multi-Task Bayesian Optimization, Kevin Swersky, Freeze-Thaw Bayesian optimization, Kirthevasan Kandasamy, Multi-fidelity Bayesian Optimisation with Continuous Approximations, Stefan Falkner et al., BOHB: Robust and Efficient Hyperparameter Optimization at Scale, GitHub link, Lisha Li et al., Hyperband: Bandit-based configuration evaluation for hyperparameter optimization, Kevin Jamieson, Non-stochastic Best Arm Identification and Hyperparameter Optimization, Chris Thornton et al., Auto-WEKA:
Combined Selection and Hyperparameter Optimization of Classification Algorithms, Brent Komer et al., Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, Matthias Feurer et al., Efficient and Robust Automated Machine Learning, Auto-sklearn, GitHub link, Randal S. Olson et al., Automating Biomedical Data Science Through Tree-Based Pipeline Optimization
- April 20, Using Knockoffs to Find Important Variables with Statistical Guarantees, Lucas Janson (Harvard University)
- April 6, Efficient Deep Learning with Humans in the Loop, Zachary Lipton (Carnegie Mellon University)
- References: Davis Liang et al., Learning Noise-Invariant Representations for Robust Speech Recognition, Zachary C. Lipton et al., BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems, Yanyao Shen et al., Deep Active Learning for Named Entity Recognition, Aditya Siddhant et al., Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study, David Lowell et al., Practical Obstacles to Deploying Active Learning, Peiyun Hu et al., Active Learning with Partial Feedback, Ashish Khetan et al., Learning From Noisy Singly-labeled Data, Jonathon Byrd et al., What is the Effect of Importance Weighting in Deep Learning? Jason Yosinski et al., Understanding Neural Networks Through Deep Visualization
- March 30, Studying Generalization in Deep Learning via PAC-Bayes, Gintare Karolina Dziugaite (Element AI)
- References: G.K. Dziugaite, D. Roy, Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Huang et al., Stochastic Neural Network with Kronecker Flow, Zhou et al., Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach, Abadi et al., Deep Learning with Differential Privacy, R. Herbrich, T. Graepel, C. Campbell, Bayes point machines, Neyshabur et al., The role of over-parametrization in generalization of neural networks, K. Miyaguchi, PAC-Bayesian Transportation Bound
- A little bit of background on probably approximately correct (PAC) learning: Probably Approximately Correct Learning, A primer on PAC-Bayesian learning
- March 23, Integrating Constraints into Deep Learning Architectures with Structured Layers, J. Zico Kolter (Carnegie Mellon University)
- References: Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng, Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. Brandon Amos, J. Zico Kolter, OptNet: Differentiable Optimization as a Layer in Neural Networks. Po-Wei Wang, Priya L. Donti, Bryan Wilder, Zico Kolter, SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud, Neural Ordinary Differential Equations. Shaojie Bai, J. Zico Kolter, Vladlen Koltun, Trellis Networks for Sequence Modeling. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Attention Is All You Need
- March 9 & March 16, From Deep Learning of Disentangled Representations to Higher-level Cognition, Yoshua Bengio (Université de Montréal). Lesson on Variational Autoencoders, based on Irina Higgins et al., β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework.
- References: Stanislas Dehaene, Hakwan Lau, Sid Kouider, What is consciousness, and could machines have it? Yoshua Bengio, The Consciousness Prior. Yoshua Bengio et al., Better Mixing via Deep Representations. Valentin Thomas et al., Independently Controllable Factors. Donald W. Mathis and Michael C. Mozer, On the Computational Utility of Consciousness. Becker & Hinton, Self-organizing neural network that discovers surfaces in random-dot stereograms, Nature
- March 2, Rebooting AI, Gary Marcus (Robust AI)
- February 24, Is Optimization the Right Language to Understand Deep Learning? Sanjeev Arora (Princeton University)
- February 17, Adversarial Machine Learning, Ian Goodfellow (Google)
- February 10, Our Mathematical Universe, Max Tegmark (MIT)
- February 3, Nobel Lecture: Michel Mayor, Nobel Prize in Physics 2019
- January 29, How to Successfully Harness Machine Learning to Combat Fraud and Abuse, Elie Bursztein, Anti-Abuse Research Lead (Google)
2019
- December 16 & January 13, 2019-20, Variational Inference: Foundations and Innovations (Part 2, 46′), David Blei (Columbia University)
- December 2 & 9, 2019, Variational Inference: Foundations and Innovations (Part 1), David Blei (Columbia University)
- November 18, On Large Deviation Principles for Large Neural Networks, Joan Bruna (Courant Institute of Mathematical Sciences, NYU)
- November 11, 2019, Anomaly Detection using Neural Networks, Dean Langsam (BlueVine)
- October 28 & November 4, 2019, Extreme Value Theory. Paul Embrechts (ETH)
- October 7, 2019, On the Optimization Landscape of Matrix and Tensor Decomposition Problems, Tengyu Ma (Princeton University)
- September 30, 2019, Recurrent Neural Networks, Ava Soleimany (MIT)
- September 23, 2019, When deep learning does not learn, Emmanuel Abbe (EPFL and Princeton)
- July 15, 2019, Optimality in Locally Private Estimation and Learning, John Duchi (Stanford)
- July 1, 2019. Capsule Networks, Geoffrey Hinton (University of Toronto – Google Brain – Vector institute)
- June 24, 2019, A multi-perspective introduction to the EM algorithm, William M. Wells III.
- June 17, 2019, Theoretical Perspectives on Deep Learning, Nati Srebro (TTI Chicago)
- May 27, 2019. 2018 ACM Turing Award. Stanford Seminar – Human in the Loop Reinforcement Learning. Emma Brunskill (Stanford)
- May 20, 2019. How Graph Technology Is Changing Artificial Intelligence and Machine Learning. Amy E. Hodler (Neo4j), Jake Graham (Neo4j).
- May 13, 2019, 2017 Nobel Lectures in Physics. Awarded « for decisive contributions to the LIGO detector and the observation of gravitational waves ». Rainer Weiss (MIT), Barry C. Barish (Caltech) and Kip S. Thorne (Caltech)
- May 6, 2019, Accessorize to a Crime: Real and Stealthy Attacks on State-Of-The-Art Face Recognition, Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer (Carnegie Mellon University) and Michael K. Reiter (University of North Carolina Chapel Hill), paper
- April 29, 2019, Build Intelligent Fraud Prevention with ML and Graphs, Nav Mathur, Graham Ganssle
- April 15, 2019, Active Learning: Why Smart Labeling is the Future of Data Annotation, Jennifer Prendki (Figure Eight)
- April 8, 2019, Generalization, Interpolation, and Neural Nets, Alexander Rakhlin (MIT)
- April 1, 2019, Similarity learning using deep neural networks – Jacek Komorowski (Warsaw University of Technology)
- March 18/25, 2019, Deep Reinforcement Learning (First lecture of MIT course 6.S091), Lex Fridman (MIT)
- March 11, 2019, Ensembles: Boosting, Alexander Ihler (University of California, Irvine)
- March 4, 2019, Dataset shift in machine learning, Peter Prettenhofer (DataRobot)
- February 25, 2019, Could Machine Learning Ever Cure Alzheimer’s Disease? – Winston Hide (Sheffield University)
- February 18, 2019, 2015 IAAI Winner: Intelligent Surgical Scheduling System
- February 11, 2019, Artificial Intelligence Machine Learning Big Data, Exponential Finance – Neil Jacobstein (Singularity University)
- February 4, 2019, Bayesian Deep Learning with Edward (and a trick using Dropout) – Andrew Rowan (PrismFP)
- January 28, 2019, Ouroboros, Aggelos Kiayias (University of Edinburgh)
- January 21, 2019, Cosmos Proof of Stake – Sunny Aggarwal
- January 14, 2019, Geometric Deep Learning – Michael Bronstein (University of Lugano and Tel Aviv University)
- January 7, 2019, Deep Generative Networks as Inverse Problems – Stéphane Mallat, Ecole Normale Supérieure (ENS)
2018
- December 3/17, 2018, Convex Optimization and Applications – Stephen Boyd (Stanford University)
- November 26, 2018, The mathematics of machine learning and deep learning – Sanjeev Arora (Princeton University)
- November 15, 2018, Reinforcement Learning in Healthcare: Challenges and Promise – Doshi-Velez (Harvard University)