IFML authors have a strong presence at this year's conference.


The Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS 2020) is being held this week, and we are excited to announce that the NSF Institute for the Foundations of Machine Learning (IFML) has a strong presence, with 37 accepted papers co-authored by IFML members.

Led by the University of Texas at Austin, IFML brings together researchers from the University of Washington, Wichita State University, and Microsoft Research. Together, they are working to develop entirely new classes of algorithms that will lead to more sophisticated and beneficial AI technologies. The accepted papers with IFML co-authors are listed below.

An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch. Siddharth Desai, Ishan Durugkar, Haresh Karnan, Garrett Warnell, Josiah Hanna and Peter Stone.

Applications of Common Entropy in Causal Inference. Murat Kocaoglu, Constantine Caramanis, Alex Dimakis and Sanjay Shakkottai.

Bayesian Robust Optimization for Imitation Learning. Daniel Brown, Scott Niekum, Marek Petrik.

Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework. Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu.

Certified Monotonic Neural Networks. Xingchao Liu, Xing Han, Na Zhang, Qiang Liu.

Exactly Computing the Local Lipschitz Constant of ReLU Networks. Matt Jordan, Alex Dimakis.

Finite-Sample Analysis of Stochastic Approximation Using Smooth Convex Envelopes. Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai and Karthikeyan Shanmugam.

Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks. Lemeng Wu, Bo Liu, Peter Stone and Qiang Liu.

FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs. Alekh Agarwal, Sham Kakade, Akshay Krishnamurthy, Wen Sun.

From Boltzmann Machines to Neural Networks and Back Again. Surbhi Goel, Adam Klivans and Frederic Koehler.

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough. Mao Ye, Lemeng Wu, Qiang Liu.

Implicit Regularization and Convergence for Weight Normalization. Xiaoxia (Shirley) Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu.

Information Theoretic Regret Bounds for Online Nonlinear Control. Sham Kakade, Akshay Krishnamurthy, Kendall Lowrey, Motoya Ohnishi, Wen Sun.

Is Long Horizon Reinforcement Learning More Difficult Than Short Horizon Reinforcement Learning? Ruosong Wang, Simon S. Du, Lin F. Yang, Sham M. Kakade.

Learning Affordance Landscapes for Interaction Exploration in 3D Environments. Tushar Nagarajan, Kristen Grauman.

Learning Differentiable Programs with Admissible Neural Heuristics. Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue, and Swarat Chaudhuri.

Learning Structured Distributions From Untrusted Batches: Faster and Simpler. Sitan Chen, Jerry Li, Ankur Moitra.

Learning to Improve Multi-Robot Hallway Navigation. Jin Soo Park, Brian Y Tsang, Harel Yedidsion, Garrett Warnell, Daehyun Kyoung, Peter Stone.

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions. Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis and Sanjay Shakkottai.

Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity. Kaiqing Zhang, Sham M. Kakade, Tamer Başar, Lin F. Yang.

Network Size and Weights Size for Memorization with Two-Layers Neural Networks. Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, and Dan Mikulincer.

Neurosymbolic Reinforcement Learning with Formally Verified Exploration. Greg Anderson, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri.

Off-Policy Interval Estimation with Lipschitz Value Iteration. Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng and Qiang Liu.

PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning. Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun.

Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method. Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh.

Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization. Samuel B. Hopkins, Jerry Li, Fred Zhang.

Robust Compressed Sensing Using Generative Models. Ajil Jalal, Liu Liu, Constantine Caramanis and Alex Dimakis.

Robust Covariance Estimation in Nearly-Matrix Multiplication Time. Jerry Li, Guanghao Ye.

Robust Meta-learning for Mixed Linear Regression with Small Batches. Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh.

Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing. Arun Jambulapati, Jerry Li, Kevin Tian.

Sample-Efficient Reinforcement Learning of Undercomplete POMDPs. Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, Qinghua Liu.

Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking. Isidoros Tziotis, Aryan Mokhtari and Constantine Caramanis.

SMYRF: Efficient attention using asymmetric clustering. Giannis Daras, Nikita Kitaev, Augustus Odena, Alex Dimakis.

Statistical-Query Lower Bounds via Functional Gradients. Surbhi Goel, Aravind Gollakota and Adam Klivans.

Stein Self-Repulsive Dynamics: Benefits From Past Samples. Mao Ye, Tongzheng Ren, Qiang Liu.

Task-Robust Model-Agnostic Meta-Learning. Liam Collins, Aryan Mokhtari and Sanjay Shakkottai.

The EMPATHIC Framework for Task Learning from Implicit Human Feedback. Yuchen Cui, Qiping Zhang, Alessandro Allievi, Peter Stone, Scott Niekum, W. Bradley Knox.