Thien Le

PhD researcher
MIT Stata Center
32 Vassar St
Cambridge, MA 02139
Email: thienle [at] mit (dot) edu
Github: steven-le-thien

About me

I am a graduate student in the CSAIL/EECS department at MIT; I started in Fall 2019. I am fortunate to be advised by Stefanie Jegelka. I did my undergraduate degree in Mathematics and Computer Science from 2016 to 2019 at UIUC, where I was fortunate to work with Tandy Warnow and her students on computational phylogenetics. Before that, I worked briefly in systems biology with P.I. Imoukhuede.

Research

I am broadly interested in

  • Theory of (Geometric) Deep Learning

  • Learning under Invariances/Equivariances

  • Continuous Optimization

  • Mathematical and Algorithmic Biology

My current research focuses on applying graph limit techniques to better understand the generalization behavior of deep learning models tailored to graph data (graph neural networks). In particular, I am interested in the following questions:

  1. Continuity: To what extent, and under what assumptions, can we guarantee classification/prediction consistency of the deep learning model when the input graphs are “structurally” similar?

  2. Out-of-distribution size generalization: If a neural network is trained to accurately predict a phenomenon on datasets of graphs of size \(n\), can we expect it to behave decently on “structurally” similar datasets of size \(N > n\)?

  3. Optimization: Can gradient-based learning algorithms on deep learning architectures learn tasks on graph data? In light of the computational hardness of many graph problems, we do not expect to be able to train an efficient model for every task, so what exactly is learnt?

At the heart of these questions are different ways to endow the space of graphs with topological/geometric structure. This is where graph limit tools, built on decades of work in graph theory, come into play. Graphons are powerful mathematical objects that capture convergent sequences of dense graphs. Beyond graphons, I am very curious about answering these questions for sparse graphs, which are more prevalent in practice (our recent paper uses graphops to model sparse graph limits). There is still much work to be done in applying these tools to better understand deep learning models on graphs, and many ideas remain untested.
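As a quick sketch of the underlying notion: a graphon is a symmetric measurable function \(W : [0,1]^2 \to [0,1]\), and a sequence of dense graphs \((G_n)\) converges to \(W\) when every homomorphism density converges, i.e., for every fixed graph \(F\),

\[
t(F, G_n) \;\longrightarrow\; t(F, W) \;=\; \int_{[0,1]^{V(F)}} \prod_{(i,j) \in E(F)} W(x_i, x_j) \, \prod_{i \in V(F)} dx_i .
\]

This standard dense-graph formulation is the starting point; the sparse setting replaces \(W\) with more general limit objects such as graphops.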