Statistical Learning Research Group (Sailing)
2023
 February 6, Reinforcement Learning 4 (Model-Free Prediction and Control), Hado van Hasselt (DeepMind)
 January 30, Reinforcement Learning 2 (Exploration and Exploitation), Hado van Hasselt (DeepMind). HW: watch the rest of the video
 January 23, Reinforcement Learning 1, Hado van Hasselt (DeepMind). HW: watch the rest of the video
 January 16, Introduction to Algebraic Topology, N J Wildberger (University of New South Wales, Australia)
 January 9, Signal Recovery with Generative Priors, Paul Hand (Northeastern University, Boston)
2022
 December 12, Learning-Based Low-Rank Approximations, Piotr Indyk (MIT), paper
 December 5, General graph problems with neural networks, Soledad Villar (New York University, Courant Institute)
 November 28, The Transformer Network for the Traveling Salesman Problem, Xavier Bresson (NTU)
 November 25, Artificial Intelligence in Acute Medicine: From Theory to Applications, Centre St-Roch, room R102, Avenue des Sports 20, Yverdon-les-Bains, Switzerland
 November 21, Attention, Learn to Solve Routing Problems! Wouter Kool (University of Amsterdam)
 November 14, Why Did Quantum Entanglement Win the Nobel Prize in Physics? Sixty Symbols – Spooky Action at a Distance (Bell’s Inequality)
 November 7, EigenGame: PCA as a Nash Equilibrium, Ian Gemp (DeepMind), Deep Semi-Supervised Anomaly Detection
 October 31, Discovering faster matrix multiplication algorithms with reinforcement learning, Yannic Kilcher
 October 17, Compressing Variational Bayes, Stephan Mandt (UC Irvine)
 October 10, From Machine Learning to Autonomous Intelligence, Yann LeCun (NYU, Meta)
 October 3, Diffusion Probabilistic Models, Jascha Sohl-Dickstein (Google Brain)
 September 12 & 26, Attention and Memory in Deep Learning, Alex Graves (DeepMind)
 September 5, Transformers and Self-Attention, Ashish Vaswani (Adept AI Labs)
 August 29, Ensuring Safety in Online Reinforcement Learning by Leveraging Offline Data, Sergey Levine (UC Berkeley)
 August 22, Geometric Deep Learning: The Erlangen Programme of ML, Michael Bronstein (Imperial College, London)
 August 15, The Devil is in the Tails and Other Stories of Interpolation, Niladri Chatterji (Stanford University)
 Summer break
 July 11, Gaussian multiplicative chaos: applications and recent developments, Nina Holden (ETH, Wendelin Werner group)
 July 1, Meeting with UFCLNIT.
 June 27, Statistical mechanics arising from random matrix theory, Thomas Spencer (IAS)
 June 20, Stop Explaining Black Box Machine Learning Models, Cynthia Rudin (Duke University)
 May 30 & June 13, Network Calculus, Jean-Yves Le Boudec (EPFL)
 May 16 & 23, Synthetic Healthcare Data Generation and Assessment: Challenges, Methods, and Impact on Machine Learning, Mihaela van der Schaar (University of Cambridge) and Ahmed Alaa (MIT).
 May 9, Combining Reinforcement Learning & Constraint Programming for Combinator…, Louis-Martin Rousseau (École Polytechnique de Montréal)
 May 2, Deep Reinforcement Learning at the Edge of the Statistical Precipice, Rishabh Agarwal (Google Brain)
 April 25, Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies, Paul Vicol (University of Toronto)
 April 11, I Can’t Believe Latent Variable Models Are Not Better, Chris Maddison (University of Toronto)
 April 4, From System 1 Deep Learning to System 2 Deep Learning, Yoshua Bengio (University of Montreal)
 March 14 and 21, Latent Dirichlet Allocation, Philipp Hennig (University of Tübingen). Interesting paper (Test of Time Award NeurIPS 2021): Online Learning for Latent Dirichlet Allocation, Matthew D. Hoffman, David M. Blei (Princeton University)
 March 7, On the Expressivity of Markov Reward, David Abel (DeepMind) et al., paper
 February 28, Continuous Time Dynamic Programming – The Hamilton-Jacobi-Bellman Equation, Neil Walton (University of Manchester)
 February 21, Computational Barriers in Statistical Estimation and Learning, Andrea Montanari (Stanford University)
 February 14, Offline Deep Reinforcement Learning Algorithms, Sergey Levine (UC Berkeley)
 February 7, Infusing Physics and Structure into Machine Learning, Anima Anandkumar (Caltech)
 January 31, Robust Predictable Control, Benjamin Eysenbach (CMU), web page, paper
 January 10, 17 and 24, Recent Advances in Integrating Machine Learning and Combinatorial Optimization – Tutorial at AAAI-21

 Tutorial webpage with slides: https://sites.google.com/view/mlcoaaai21/
 Part 1: Introduction to combinatorial optimization & tutorial overview
Part 2: The pure ML approach: predicting feasible solutions
Part 3: The hybrid approach: improving exact solvers with ML
Part 4: Machine learning for MIP solving: challenges & literature
Part 5: Ecole: A python framework for learning in exact MIP solvers
Part 6: Decision-focused Learning
Part 7: Concluding remarks
 This tutorial will provide an overview of the recent impact machine learning is having on combinatorial optimization, particularly under the Mixed Integer Programming (MIP) framework. Topics covered will include ML and reinforcement learning for predicting feasible solutions, improving exact solvers with ML, a software framework for learning in exact MIP solvers, and the emerging paradigm of decision-focused learning.
 The tutorial targets both junior and senior researchers in two prominent areas of interest to the AAAI community: (1) Machine learning researchers looking for a challenging application domain, namely combinatorial optimization; (2) Optimization practitioners and researchers who may benefit from learning about recent advances in ML methods for improving combinatorial optimization algorithms.
 Presented by: Elias B. Khalil (University of Toronto), Andrea Lodi (Polytechnique Montréal), Bistra Dilkina (University of Southern California), Didier Chételat (Polytechnique Montréal), Maxime Gasse (Polytechnique Montréal), Antoine Prouvost (Polytechnique Montréal), Giulia Zarpellon (Polytechnique Montréal) and Laurent Charlin (HEC Montréal)
2021
 December 13 and 20, Attention and Transformer Networks, Pascal Poupart (University of Waterloo, Canada)
 December 6, Yes, Generative Models Are The New Sparsity, Alex Dimakis (University of Texas at Austin)
 November 29, The Knockoffs Framework: New Statistical Tools for Replicable Selections, Emmanuel Candès (Stanford University)
 November 15 and 22, Advanced Machine Learning Day 3: Neural Architecture Search, Debadeepta Dey (Microsoft Research AI)
 November 8, Vapnik's Theory of Learning and Recent Advances in AI, Yann LeCun (New York University, Facebook)
 November 1, Compositional Dynamics Modeling for Physical Inference and Control, Yunzhu Li (MIT)
 October 25, Safe and Efficient Exploration in Reinforcement Learning, Andreas Krause (ETH)
 October 18, Special guest today: Luis von Ahn, CEO and co-founder of Duolingo (former professor at CMU)
 October 11, Contrastive Learning: A General Self-supervised Learning Approach, Yonglong Tian (MIT)
 September 27 & October 4, Adversarial Robustness – Theory and Practice, J. Z. Kolter (CMU, Bosch) and A. Madry (MIT)
 No Lunch Seminar during the summer break
 June 21 & 28, Recent Developments in Overparametrized Neural Networks, Jason Lee (University of Southern California) (43:42 to end)
 June 14, Feedback Control Perspectives on Learning, Jeff Shamma (University of Illinois at Urbana-Champaign)
 June 7, Self-Supervised Learning & World Models, Yann LeCun (NYU – Courant Institute, Facebook)
 May 31, Theoretical Foundations of Graph Neural Networks, Petar Veličković (University of Cambridge)
 May 17 & 24, Deep Implicit Layers, David Duvenaud (University of Toronto), J. Zico Kolter (CMU), Matt Johnson (Google Brain)
 May 3 & 10, Bayesian Deep Learning and Probabilistic Model Construction, Andrew Gordon Wilson (Courant Institute, New York University)
 April 26, Learning Ising Models from One, Ten or a Thousand Samples, Constantinos Daskalakis (MIT)
 April 19, Deconstructing the Blockchain to Approach Physical Limits, Sreeram Kannan (University of Washington)
 April 12, Federated Learning and Analytics at Google and Beyond, Peter Kairouz (Google)
 March 22, Equivariant Networks and Natural Graph Networks, Taco Cohen (Qualcomm)
 March 8 and 15, Parameter-free Online Optimization, Francesco Orabona (Boston University), Ashok Cutkosky (Boston University), part 1, 2, 3, 4.
 February 22 and March 1, Machine Learning with Signal Processing, Arno Solin (Aalto University)
 February 15, A Function Approximation Perspective on Sensory Representations, Cengiz Pehlevan (Harvard). References: Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent, Neural Tangent Kernel: Convergence and Generalization in Neural Networks, On Exact Computation with an Infinitely Wide Neural Net, Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
 February 8: Hopfield Networks in 2021, Fireside chat between Sepp Hochreiter (Johannes Kepler University of Linz) and Dmitry Krotov (IBM Watson)
 February 1: Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator, Mihailo Jovanovic (USC)
 January 25: TIC Department Conference
 January 18, Influence: Using Disentangled Representations to Audit Model Predictions, Charlie Marx (Haverford College)
 January 11, Offline Reinforcement Learning, Sergey Levine, UC Berkeley
2020
 December 14, Stanford Seminar (part 2) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem. New Theory Cracks Open the Black Box of Deep Learning, Quanta Magazine
 December 7, Stanford Seminar (part 1) – Information Theory of Deep Learning, Naftali Tishby, Hebrew University of Jerusalem
 November 30, Computer vision: who is harmed and who benefits? Timnit Gebru, Google. Links related to the talk: James Landay, Smart Interfaces for Human-Centered AI, Ali Alkhatib, Anthropological/Artificial Intelligence & the HAI, Faception, HireVue, Our Data Bodies
 November 23, Network Telemetry and Analytics for tomorrow's Zero Touch Operation Network, Online talk by Swisscom Digital Lab
 November 16, Representation Learning Without Labels, Ali Eslami, DeepMind
 November 9, Active Learning: From Theory to Practice, Robert Nowak, University of WisconsinMadison
 November 2, Stanford Seminar – Machine Learning for Creativity, Interaction, and Inclusion, Rebecca Fiebrink, Goldsmiths, University of London
 October 26 (pdf), LambdaNetworks: Modeling Long-range Interactions without Attention (Paper Explained, ICLR 2021 submission)
 October 12, Artificial Stupidity: The New AI and the Future of Fintech, Andrew W. Lo (Massachusetts Institute of Technology)
 September 28, LSTM is dead. Long Live Transformers! Leo Dirac, Amazon. LSTM paper, LSTM Diagrams, Understanding LSTM, Attention is all you need, Illustrated Attention, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Deep contextualized word representations, huggingface/transformers
 September 14, Machine Learning Projects Against COVID19, Yoshua Bengio, Université de Montréal
 No Lunch Seminar during the summer break
 June 22, Kernel and Deep Regimes in Overparameterized Learning, Suriya Gunasekar (TTI-Chicago, Microsoft Research)
 June 15, Energy-based Approaches to Representation Learning, Yann LeCun (New York University, Facebook)
 June 8, Learning probability distributions; What can, What can’t be done, Shai Ben-David (University of Waterloo)
 May 25, Generalized Resilience and Robust Statistics, Jacob Steinhardt (UC Berkeley).
 May 18, From Classical Statistics to Modern Machine Learning, Mikhail Belkin (The Ohio State University).
 References: Mikhail Belkin, Siyuan Ma, Soumik Mandal, To Understand Deep Learning We Need to Understand Kernel Learning, Luc Devroye et al., The Hilbert Kernel Regression Estimate, Cover and Hart, Nearest Neighbor Pattern Classification, Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate, Adityanarayanan Radhakrishnan et al., Overparameterized Neural Networks Can Implement Associative Memory, Mikhail Belkin et al., Reconciling modern machine-learning practice and the classical bias–variance trade-off, Madhu S. Advani, Andrew M. Saxe, High-dimensional dynamics of generalization error in neural networks
 May 11, Automatic Machine Learning, part 3 (from minute 90), Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides part 3.
 May 4, Automatic Machine Learning, part 2 (minutes 46–90), Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
 References: Thomas Elsken et al., Neural Architecture Search: A Survey, J. Bergstra et al., Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures, Hector Mendoza et al., Towards Automatically-Tuned Neural Networks, Barret Zoph et al., Neural Architecture Search with Reinforcement Learning, Peter J. Angeline et al., An Evolutionary Algorithm that Constructs Recurrent Neural Networks, Kenneth O. Stanley et al., Evolving Neural Networks through Augmenting Topologies, Risto Miikkulainen et al., Evolving Deep Neural Networks, Esteban Real et al., Regularized Evolution for Image Classifier Architecture Search, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Kirthevasan Kandasamy et al., Neural Architecture Search with Bayesian Optimisation and Optimal Transport, Chenxi Liu et al., Progressive Neural Architecture Search, Arber Zela et al., Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search, Catherine Wong et al., Transfer Learning with Neural AutoML, Tianqi Chen et al., Net2Net: Accelerating Learning via Knowledge Transfer, Tao Wei et al., Network Morphism, Han Cai et al., Path-Level Network Transformation for Efficient Architecture Search, Han Cai et al., Efficient Architecture Search by Network Transformation, Thomas Elsken et al., Simple and Efficient Architecture Search for CNNs, Corinna Cortes et al., AdaNet: Adaptive Structural Learning of Artificial Neural Networks, Shreyas Saxena, Convolutional Neural Fabrics, Gabriel Bender et al., Understanding and Simplifying One-Shot Architecture Search, Hieu Pham et al., Efficient Neural Architecture Search via Parameter Sharing, Andrew Brock et al., SMASH: One-Shot Model Architecture Search through HyperNetworks, Hanxiao Liu et al., DARTS: Differentiable Architecture Search, Mingxing Tan, MnasNet: Platform-Aware Neural Architecture Search for Mobile, Rui Leite et al., Selecting Classification Algorithms with Active Testing, Salisu Mamman Abdulrahman et al., Speeding up algorithm selection using average ranking and active testing by introducing runtime, Martin Wistuba et al., Learning Hyperparameter Optimization Initializations, J. N. van Rijn et al., Hyperparameter Importance Across Datasets, Philipp Probst et al., Tunability: Importance of Hyperparameters of Machine Learning Algorithms, Martin Wistuba et al., Hyperparameter Search Space Pruning – A New Component for Sequential Model-Based Hyperparameter Optimization, C. E. Rasmussen et al., Gaussian Processes for Machine Learning, Martin Wistuba et al., Scalable Gaussian process-based transfer surrogates for hyperparameter optimization, Matthias Feurer et al., Scalable Meta-Learning for Bayesian Optimization
 April 27, Automatic Machine Learning, part 1, Frank Hutter (University of Freiburg) and Joaquin Vanschoren (Eindhoven University of Technology). Slides parts 1-2, Slides part 3.
 References: Part 1: Book, chapter 1, J. Močkus, On Bayesian methods for seeking the extremum, Nando de Freitas et al., Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations, Kenji Kawaguchi et al., Bayesian Optimization with Exponential Convergence, Ziyu Wang et al., Bayesian Optimization in High Dimensions via Random Embeddings, Frank Hutter et al., Sequential Model-Based Optimization for General Algorithm Configuration, Kevin Swersky et al., Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces, Leo Breiman, Random Forests, Jasper Snoek et al., Scalable Bayesian Optimization Using Deep Neural Networks, Jost Tobias Springenberg, Bayesian Optimization with Robust Bayesian Neural Networks, James Bergstra, Algorithms for Hyper-Parameter Optimization, Hans-Georg Beyer, Hans-Paul Schwefel, Evolution strategies – A comprehensive introduction, Nikolaus Hansen, The CMA Evolution Strategy: A Tutorial, Ilya Loshchilov et al., CMA-ES for hyperparameter optimization of neural networks, Tobias Domhan et al., Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, Luca Franceschi et al., Bilevel Programming for Hyperparameter Optimization and Meta-Learning, Jelena Luketina et al., Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters, Aaron Klein et al., Learning curve prediction with Bayesian neural networks, Kevin Swersky, Multi-Task Bayesian Optimization, Kevin Swersky, Freeze-Thaw Bayesian optimization, Kirthevasan Kandasamy, Multi-fidelity Bayesian Optimisation with Continuous Approximations, Stefan Falkner et al., BOHB: Robust and Efficient Hyperparameter Optimization at Scale, GitHub link, Lisha Li et al., Hyperband: Bandit-based configuration evaluation for hyperparameter optimization, Kevin Jamieson, Non-stochastic Best Arm Identification and Hyperparameter Optimization, Chris Thornton et al., Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms, Brent Komer et al., Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn, Matthias Feurer et al., Efficient and Robust Automated Machine Learning, Auto-sklearn, GitHub link, Randal S. Olson et al., Automating Biomedical Data Science Through Tree-Based Pipeline Optimization
 April 6, Efficient Deep Learning with Humans in the Loop, Zachary Lipton (Carnegie Mellon University)
 References: Davis Liang et al., Learning Noise-Invariant Representations for Robust Speech Recognition, Zachary C. Lipton et al., BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems, Yanyao Shen et al., Deep Active Learning for Named Entity Recognition, Aditya Siddhant et al., Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study, David Lowell et al., Practical Obstacles to Deploying Active Learning, Peiyun Hu et al., Active Learning with Partial Feedback, Ashish Khetan et al., Learning From Noisy Singly-labeled Data, Jonathon Byrd et al., What is the Effect of Importance Weighting in Deep Learning? Jason Yosinski et al., Understanding Neural Networks Through Deep Visualization
 March 30, Studying Generalization in Deep Learning via PAC-Bayes, Gintare Karolina Dziugaite (Element AI)

 A few references: G. K. Dziugaite, D. Roy, Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Huang et al., Stochastic Neural Network with Kronecker Flow, Zhou et al., Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach, Abadi et al., Deep Learning with Differential Privacy, R. Herbrich, T. Graepel, C. Campbell, Bayes point machines, Neyshabur et al., The role of over-parametrization in generalization of neural networks, K. Miyaguchi, PAC-Bayesian Transportation Bound
 A little bit of background on probably approximately correct (PAC) learning: Probably Approximately Correct Learning, A primer on PAC-Bayesian learning
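 As a pointer for the background note above, the standard PAC definition can be stated compactly; the notation below is the usual one from the learning-theory literature (not taken from the linked primers):

```latex
% A hypothesis class H is (agnostically) PAC-learnable if there exist an
% algorithm A and a sample-complexity function m_H(eps, delta) such that for
% all eps, delta in (0,1) and every distribution D, running A on
% m >= m_H(eps, delta) i.i.d. samples returns a hypothesis h with
\Pr\!\left[\, L_{\mathcal{D}}(h) \;\le\; \min_{h' \in \mathcal{H}} L_{\mathcal{D}}(h') + \epsilon \,\right] \;\ge\; 1 - \delta .
% For a finite class in the realizable setting, empirical risk minimization
% achieves this with sample complexity
m_{\mathcal{H}}(\epsilon, \delta) \;\le\; \left\lceil \frac{\ln\!\left(|\mathcal{H}| / \delta\right)}{\epsilon} \right\rceil .
```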
 March 23, Integrating Constraints into Deep Learning Architectures with Structured Layers, J. Zico Kolter (Carnegie Mellon University)

 References: Honglak Lee, Roger Grosse, Rajesh Ranganath, Andrew Y. Ng, Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. Brandon Amos, J. Zico Kolter, OptNet: Differentiable Optimization as a Layer in Neural Networks. Po-Wei Wang, Priya L. Donti, Bryan Wilder, Zico Kolter, SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud, Neural Ordinary Differential Equations. Shaojie Bai, J. Zico Kolter, Vladlen Koltun, Trellis Networks for Sequence Modeling. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Attention Is All You Need
 March 2, Rebooting AI, Gary Marcus (Robust AI)
 February 24, Is Optimization the Right Language to Understand Deep Learning? Sanjeev Arora (Princeton University)
 February 17, Adversarial Machine Learning, Ian Goodfellow (Google)
 February 10, Our Mathematical Universe, Max Tegmark (MIT)
 February 3, Nobel Lecture: Michel Mayor, Nobel Prize in Physics 2019
 January 29, How to Successfully Harness Machine Learning to Combat Fraud and Abuse, Elie Bursztein, Anti-Abuse Research Lead (Google)
2019
 December 16 & January 13, 2019-20, Variational Inference: Foundations and Innovations (Part 2, from minute 46), David Blei (Columbia University)
 December 2 & 9, 2019, Variational Inference: Foundations and Innovations (Part 1), David Blei (Columbia University)
 November 18, On Large Deviation Principles for Large Neural Networks, Joan Bruna (Courant Institute of Mathematical Sciences, NYU)
 November 11, 2019, Anomaly Detection using Neural Networks, Dean Langsam (BlueVine)
 October 28 & November 4, 2019, Extreme Value Theory, Paul Embrechts (ETH)
 October 7, 2019, On the Optimization Landscape of Matrix and Tensor Decomposition Problems, Tengyu Ma (Princeton University)
 September 30, 2019, Recurrent Neural Networks, Ava Soleimany (MIT)
 September 23, 2019, When deep learning does not learn, Emmanuel Abbe (EPFL and Princeton)
 July 15, 2019, Optimality in Locally Private Estimation and Learning, John Duchi (Stanford)
 July 1, 2019. Capsule Networks, Geoffrey Hinton (University of Toronto – Google Brain – Vector institute)
 June 24, 2019, A multiperspective introduction to the EM algorithm, William M. Wells III.
 June 17, 2019, Theoretical Perspectives on Deep Learning, Nati Srebro (TTI Chicago)
 May 27, 2019. 2018 ACM Turing Award. Stanford Seminar – Human in the Loop Reinforcement Learning. Emma Brunskill (Stanford)
 May 20, 2019, How Graph Technology Is Changing Artificial Intelligence and Machine Learning, Amy E. Hodler (Neo4j), Jake Graham (Neo4j).
 May 13, 2019, 2017 Nobel Lectures in Physics. Awarded “for decisive contributions to the LIGO detector and the observation of gravitational waves”. Rainer Weiss (MIT), Barry C. Barish (Caltech) and Kip S. Thorne (Caltech)
 May 6, 2019, Accessorize to a Crime: Real and Stealthy Attacks on StateOfTheArt Face Recognition, Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer (Carnegie Mellon University) and Michael K. Reiter (University of North Carolina Chapel Hill), paper
 April 29, 2019, Build Intelligent Fraud Prevention with ML and Graphs, Nav Mathur, Graham Ganssle
 April 15, 2019, Active Learning: Why Smart Labeling is the Future of Data Annotation, Jennifer Prendki (Figure Eight)
 April 8, 2019, Generalization, Interpolation, and Neural Nets, Alexander Rakhlin (MIT)
 April 1, 2019, Similarity learning using deep neural networks – Jacek Komorowski (Warsaw University of Technology)
 March 18/25, 2019, Deep Reinforcement Learning (First lecture of MIT course 6.S091), Lex Fridman (MIT)
 March 11, 2019, Ensembles: Boosting, Alexander Ihler (University of California, Irvine)
 March 4, 2019, Dataset shift in machine learning, Peter Prettenhofer (DataRobot)
 February 25, 2019, Could Machine Learning Ever Cure Alzheimer’s Disease? – Winston Hide (Sheffield University)
 February 18, 2019, 2015 IAAI Winner: Intelligent Surgical Scheduling System
 February 11, 2019, Artificial Intelligence, Machine Learning & Big Data, Exponential Finance – Neil Jacobstein (Singularity University)
 February 4, 2019, Bayesian Deep Learning with Edward (and a trick using Dropout) – Andrew Rowan (PrismFP)
 January 28, 2019, Ouroboros, Aggelos Kiayias (University of Edinburgh)
 January 21, 2019, Cosmos Proof of Stake – Sunny Aggarwal
 January 14, 2019, Geometric Deep Learning – Michael Bronstein (University of Lugano and Tel Aviv University)
 January 7, 2019, Deep Generative Networks as Inverse Problems – Stéphane Mallat, Ecole Normale Supérieure (ENS)
2018