LANE McINTOSH

Machine learning scientist.
Tesla AI. Google Brain. Stanford.

About Me

I am a Senior Staff Machine Learning Scientist at Tesla Autopilot, where I train production neural networks for Tesla's ~5M customers. I lead the team of engineers and scientists that builds Autopilot's foundation models. I like working across a wide range of dimensions - data, architecture, optimization, evaluation, deployment, latency - to achieve the maximum possible impact in the real world.

Previously I worked at Google Brain developing recurrent architectures that trade off latency and performance in segmentation tasks, and completed my PhD at Stanford in theoretical neuroscience and computer science.

Curriculum Vitae




Timeline

  • 2018-present

    Tesla Autopilot
    Senior Staff Machine Learning Scientist

    Training production neural networks for Tesla's active safety and full self-driving products.

  • 2012-2018

    Stanford University
    Ph.D. Neuroscience
    Ph.D. Minor Computer Science

    Advisors: Steve Baccus and Surya Ganguli
    NVIDIA Best Poster Award, SCIEN 2015
    Top 10% Poster Award, CS231n Convolutional Neural Networks
    Ruth L. Kirschstein National Research Service Award
    Mind, Brain, and Computation Traineeship
    NSF IGERT Graduate Fellowship

  • 2017

    Google Brain
    Software Engineer Intern

    Artificial intelligence research in computer vision
    Mentors: Jon Shlens and David Sussillo
    Publication: Recurrent segmentation for variable computational budgets

  • 2010-2012

    University of Hawaii
    M.A. Mathematics

    Advisor: Susanne Still, Machine Learning Group
    Departmental Merit Award
    NSF SUPER-M Graduate Fellowship
    Kotaro Kodama Scholarship
    Graduate Teaching Fellowship

  • 2006-2010

    University of Chicago
    B.A. Computational Neuroscience

    Research: MacLean Comp. Neuroscience Lab
    Research: Dept. of Economics Neuroecon. Group
    Research: Gallo Memory Lab
    Lerman-Neubauer Junior Teaching Fellowship
    NIH Neuroscience and Neuroengineering Fellowship
    Innovative Funding Strategy Award

  • 2009

    Institute for Advanced Study
    Undergraduate Research Fellow

    Bioinformatics research at the Simons Center for Systems Biology in Princeton, NJ

  • Past-2006

    Originally from
    San Diego

    Valedictorian
    Bank of America Mathematics Award
    President's Gold Educational Excellence Award
    California Scholarship Federation Gold Seal
    Advanced Placement Scholar with Distinction

Projects



Deep Learning Models of the Retina

A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In neural circuits, ubiquitous nonlinear processes present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. We demonstrate that deep convolutional neural networks capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than previous models. We are then able to probe the learned models to gain insights about the retina, for instance how it compresses natural scenes efficiently through feedforward inhibition and how it transforms potentially large sources of extrinsic and intrinsic noise into sub-Poisson variability. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
Lane McIntosh*, Niru Maheswaranathan*, Aran Nayebi, Surya Ganguli, Stephen Baccus
Accepted Paper, Advances in Neural Information Processing Systems (NIPS), 2016
Accepted Talk, Society for Neuroscience, 2016
Accepted Poster, Computational and Systems Neuroscience (COSYNE), 2016
NVIDIA Best Poster, SCIEN Industry Affiliates Meeting (image processing), 2015
Top 10% Poster Award, CS231n Convolutional Neural Networks, 2015

NIPS 2016 paper · COSYNE 2016 Poster · Stanford MBC talk · IEEE talk
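
As a rough illustration of the modeling approach (a minimal sketch, not the published architecture or training setup; the layer sizes, stimulus dimensions, and cell count below are placeholders), the idea is to map a short clip of the visual stimulus to per-cell firing rates with a small CNN trained under a Poisson spiking loss:

    # Minimal sketch of a CNN encoding model of retinal ganglion cells.
    # Assumptions (not from the paper): 40-frame stimulus history, 50x50 pixel
    # crops, 8 simultaneously recorded cells, and illustrative layer sizes.
    import torch
    import torch.nn as nn

    class RetinaCNN(nn.Module):
        def __init__(self, history=40, n_cells=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(history, 16, kernel_size=15),  # stimulus history enters as input channels
                nn.Softplus(),
                nn.Conv2d(16, 8, kernel_size=9),
                nn.Softplus(),
            )
            self.readout = nn.Sequential(
                nn.Flatten(),
                nn.LazyLinear(n_cells),
                nn.Softplus(),  # keeps predicted firing rates non-negative
            )

        def forward(self, stimulus):          # stimulus: (batch, history, H, W)
            return self.readout(self.features(stimulus))

    model = RetinaCNN()
    loss_fn = nn.PoissonNLLLoss(log_input=False)   # Poisson likelihood of spike counts

    stimulus = torch.randn(32, 40, 50, 50)         # stand-in natural-scene clips
    spikes = torch.poisson(torch.ones(32, 8))      # stand-in recorded spike counts
    rates = model(stimulus)
    loss = loss_fn(rates, spikes)
    loss.backward()

The trained model's predicted rates can then be compared against held-out responses, and its internal units probed for the circuit-level questions described above.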



Synchronous inhibitory pathways create both efficiency and diversity in the retina

Retinal ganglion cells, the bottleneck of all visual information to the brain, have linear response properties that appear to maximize the information between the visual world and their responses, subject to a variance constraint. In this paper I contribute a new theoretical finding: constructing the ganglion cell's linear receptive field from inhibitory interneurons with disparate spatial scales provides a basis that allows the receptive field to maximize information across environments whose signal-to-noise ratios vary by orders of magnitude.
Mihai Manu*, Lane McIntosh*, David Kastner, Benjamin Naecker, and Stephen Baccus
In Review, Nature Neuroscience 2017


bioRxiv 2017 · SfN 2015 Poster · GitHub
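
For concreteness, the information-maximization problem referenced above can be written as follows (a schematic formalization under standard efficient-coding assumptions, not the exact model of the paper): with stimulus s, linear receptive field w, additive noise n, and response r,

    \max_{w} \; I(s;\, r), \qquad r = w^{\top} s + n, \qquad \text{subject to } \operatorname{Var}(r) \le \sigma_{\max}^{2}.

For Gaussian signals and noise this is the classic water-filling problem, and the finding above is that a basis of inhibitory interneurons with different spatial scales lets the same circuit realize a near-optimal w across very different signal-to-noise regimes.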



Video-based Event Recognition

How can we automatically extract events from video? We used a database of surveillance videos and examined the performance of support vector machines (SVMs) and convolutional neural networks in detecting events such as people getting into and out of cars.
Ian Ballard* and Lane McIntosh*
CS221 Artificial Intelligence Poster, 2014


PDF · Poster
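
A minimal sketch of the SVM side of this comparison (illustrative only; the HOG feature pipeline, clip format, and data below are assumptions rather than the project code):

    # Toy event classifier: pool per-frame features over a clip, then fit an SVM.
    # The HOG features and the clip/label arrays here are placeholders.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def clip_features(frames):
        """Average HOG descriptors over the frames of one video clip."""
        return np.mean([hog(f) for f in frames], axis=0)

    # clips: list of (n_frames, H, W) grayscale arrays; labels: 1 = event, 0 = no event
    rng = np.random.default_rng(0)
    clips = [rng.random((10, 64, 64)) for _ in range(40)]   # stand-in surveillance clips
    labels = rng.integers(0, 2, size=40)

    X = np.stack([clip_features(c) for c in clips])
    clf = SVC(kernel="rbf", C=1.0)
    print(cross_val_score(clf, X, labels, cv=5).mean())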



Learning Predictive Filters

How should an intelligent system, intent on keeping only information that is predictive of the future, filter its data? We analytically find the optimal predictive filter for Gaussian input using recent theorems from the information bottleneck literature. Using numerical methods, we then show that these optimally predictive filters resemble the receptive fields found in the early visual pathways of vertebrates.
Lane McIntosh
CS229 Machine Learning Poster, 2013


PDF · Poster
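
A numerical sketch of the underlying computation (assuming the Gaussian information bottleneck solution of Chechik et al.; the AR(1) toy process, window lengths, and tradeoff parameter below are illustrative, not the project's setup):

    # Gaussian information bottleneck for prediction: compress a window of the
    # past (X) while preserving information about a window of the future (Y).
    # Optimal linear filters are eigenvectors of Sigma_{x|y} Sigma_x^{-1} whose
    # eigenvalues fall below the threshold set by the tradeoff parameter beta.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stationary input: AR(1) process, x_{t+1} = a x_t + noise.
    a, T = 0.9, 100_000
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + rng.standard_normal()

    past_len, future_len = 10, 5
    # Build (past, future) windows and estimate joint covariances.
    windows = np.lib.stride_tricks.sliding_window_view(x, past_len + future_len)
    X, Y = windows[:, :past_len], windows[:, past_len:]
    Sxx = np.cov(X.T)
    Syy = np.cov(Y.T)
    Sxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (len(X) - 1)

    # Conditional covariance of the past given the future.
    Sx_given_y = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)

    # Eigendecomposition of Sigma_{x|y} Sigma_x^{-1}; small eigenvalues = most predictive.
    eigvals, eigvecs = np.linalg.eig(Sx_given_y @ np.linalg.inv(Sxx))
    order = np.argsort(eigvals.real)

    beta = 10.0
    kept = [i for i in order if eigvals.real[i] < 1 - 1 / beta]
    predictive_filters = eigvecs.real[:, kept]   # columns = optimal predictive filters
    print(predictive_filters.shape)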



Thermodynamics of Prediction in Model Neurons

Recent theorems in nonequilibrium thermodynamics show that information-processing inefficiency provides a lower bound on energy dissipation in certain systems. We extend these results to model neurons and find that adapting neurons that match the timescale of their inputs perform predictive inference while minimizing energy inefficiency.
Lane McIntosh and Susanne Still
Master's Thesis, 2012


PDF · GitHub
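
One way to state the class of bounds being extended here (I am assuming the formulation from Still et al.'s "thermodynamics of prediction" result, which this thesis builds on, with s_t the neuron's state and x_t its input signal):

    \left\langle W_{\text{diss}}[x_t \to x_{t+1}] \right\rangle \;\ge\; k_B T \left[\, I(s_t;\, x_t) - I(s_t;\, x_{t+1}) \,\right]

The first mutual information term is the memory the system keeps about the current input, the second is its predictive power about the next input, and their difference (the non-predictive information) lower-bounds the average work dissipated per input step.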

Teaching



CS 231n Convolutional Neural Networks

Stanford University, Winter 2016 and 2017. Teaching assistant for this class on convolutional neural networks, taught by Fei-Fei Li, Andrej Karpathy, Justin Johnson, and Serena Yeung. Throughout the class, students learn to derive gradients for large computational graphs; implement, train, and debug their own neural networks; and gain an understanding of recent developments in deep learning. Over 600 students enrolled in 2017.

Math Tools for Neuroscience

Stanford University, Winter 2017 and 2016, Spring 2015. Co-taught this class with fellow graduate student Kiah Hardcastle, covering a wide variety of useful mathematical tools including dimensionality reduction, Fourier transforms, dynamical systems, statistics, information theory, and Bayesian probability. The audience was mostly graduate students and postdocs.

Intro to Perception



ExploreCourses Listing

Stanford University, Fall 2015 and 2014. Teaching assistant for this introductory undergraduate course surveying the perception literature, from the retina to high-level cortex and behavioral experiments.

Precalculus



Precalculus Course Website

University of Hawaii, 2010-12. First a teaching assistant, then lecturer, for this large undergraduate introductory mathematics course.

Biophysics and Chemical Biology


University of Chicago, Spring 2008. Teaching assistant for the third course in the advanced-track biology sequence for students who scored 5/5 on their AP Biology test. This course focused on how to read original research papers in biophysics and chemical biology, with weekly presentations.