Debiased Inference on Functionals of Inverse Problems and Applications to Long-Term Causal Inference – Prof. Nathan Kallus

Registration Required – REGISTER HERE

  • Date: May 29, 2024
  • Time: 12:00-2:30 pm
  • Format: In-person
  • Location: Data Sciences Institute, 10th floor Seminar Room, 700 University Avenue, Toronto

Talk Title: Debiased Inference on Functionals of Inverse Problems and Applications to Long-Term Causal Inference


In the presence of endogeneity, instruments and negative controls can still give us a view onto causal effects, but only indirectly, e.g., as a function whose residuals are orthogonal to instruments. Without imposing (unrealistic) parametric restrictions, these inverse problems are generally ill posed, making it difficult to reliably learn a solution from data. In this talk I discuss how to nonetheless make reliable inferences on linear functionals of (nonparametric) solutions to these inverse problems, such as average effects. Any such parameter admits a doubly robust representation involving the solution to a dual inverse problem that is specific to the functional of interest. We use this to develop debiased estimators that are root-n-asymptotically normal around the parameter as long as either the primal or dual inverse problem is sufficiently well posed compared to the functional complexity of the (generic, nonparametric) hypothesis classes for the solutions to the inverse problems, all without knowledge of which inverse problem is the more well posed or how well posed. The result is enabled by strong guarantees for a new iterated Tikhonov regularized adversarial learner for solutions to inverse problems over general hypothesis classes.

I will then discuss the particular problem of using the plethora of A/B tests undertaken on digital platforms for learning better surrogate indices for inference on long-term causal effects from short-term experiments. While this too can be phrased as a functional of an instrumental variable regression, since each A/B test has a fixed size, here we encounter the additional challenge of weak instruments, introducing a non-vanishing bias. We resolve this by learning the nuisances for our debiased estimator using a jackknifed loss function that eliminates this bias and recovers consistency if we have many, albeit weak, instruments.
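The doubly robust representation in the abstract can be illustrated in a toy setting: for a functional such as ψ = E[X·h(X)] of an instrumental variable regression h, we also have ψ = E[X·h(X) + q(Z)(Y − h(X))] for a dual solution q satisfying E[q(Z) | X] = X, so first-order errors in h are corrected by the q term. The following minimal numerical sketch demonstrates this; the linear-Gaussian design, the closed-form dual, and the deliberate misspecification of h are all hypothetical choices for illustration, not the estimators from the talk.

```python
# Toy illustration of a doubly robust representation for a linear
# functional of an IV regression.  All specifics below are assumptions
# made for this sketch, not the method presented in the talk.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Instrument Z, confounder V, endogenous regressor X, outcome Y:
#   X = 1 + Z + 0.5 V,   Y = 2 X + U,   U = V + eps,   E[U | Z] = 0.
Z = rng.standard_normal(n)
V = rng.standard_normal(n)
U = V + rng.standard_normal(n)
X = 1 + Z + 0.5 * V
Y = 2 * X + U

# Target: the linear functional psi = E[X h(X)] of the true solution
# h(x) = 2x, so psi = 2 E[X^2] = 2 * (1 + 1.25) = 4.5.
psi_true = 4.5

# Dual solution: q(Z) with E[q(Z) | X] = X.  In this Gaussian design
# E[Z | X] = 0.8 (X - 1), so q(z) = 1 + 1.25 z works in closed form.
q = 1 + 1.25 * Z

# Deliberately misspecified primal solution: h_hat(x) = 2.5 x.
h_hat = 2.5 * X

psi_plugin = np.mean(X * h_hat)                # plug-in: biased
psi_dr = np.mean(X * h_hat + q * (Y - h_hat))  # debiased: corrected

print(f"plug-in:  {psi_plugin:.3f}  (truth {psi_true})")
print(f"debiased: {psi_dr:.3f}")
```

Swapping the roles, i.e., using the exact h(x) = 2x with a misspecified q, also recovers ψ: the correction term then has mean zero. This is the double robustness the abstract refers to, whereby the estimate is consistent if either the primal or the dual inverse problem is solved well.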


There will be a lunch reception before the talk, and following the talk, there will be a student-led discussion.


This talk is part of the Causal Inference Emergent Data Science Program.


Nathan Kallus is an Associate Professor at the Cornell Tech campus of Cornell University in NYC and a Research Director at Netflix. Nathan’s research interests include the statistics of optimization under uncertainty, causal inference especially when combined with machine learning, sequential and dynamic decision making, and algorithmic fairness. He holds a PhD in Operations Research from MIT as well as a BA in Mathematics and a BS in Computer Science from UC Berkeley. Before coming to Cornell, Nathan was a Visiting Scholar at USC’s Department of Data Sciences and Operations and a Postdoctoral Associate at MIT’s Operations Research and Statistics group.

