The Collaboration

The Trustworthy and Robust AI Collaboration (TRAC) between MIT CSAIL and Microsoft Research works to foster advances in robust and trustworthy AI, spanning safety and reliability, intelligibility, and accountability. The collaboration seeks to address concerns about the trustworthiness of AI systems, including rising concerns about the safety, fairness, and transparency of these technologies.

“By quantifying uncertainty, we’re getting closer to designing the novel transparent systems that can function in high-stakes environments. Our goal here is to create a principled approach for robust machine learning – in theory and practice.” Daniela Rus, MIT CSAIL Director

The collaboration leverages the mutual interests of Microsoft and MIT CSAIL in achieving AI systems across the autonomous, semi-autonomous, and collaborative realms, centered on the vision of extending and augmenting the abilities and intellect of people.

“We’re excited about bringing together leading intellects at MIT CSAIL and Microsoft Research to collaborate on intriguing and important opportunities ahead — and to develop trustworthy AI systems that are safe, reliable, understandable, and fair.” Eric Horvitz, MSR Labs Director

Program Committee

Eric Horvitz, Technical Fellow and Director, Microsoft Research

Aleksander Madry, Professor and TRAC Faculty Lead, MIT CSAIL

Daniela Rus, Director, MIT CSAIL

Evelyne Viegas, Senior Director – Research Engagement, Microsoft Research

Research projects

The research engagement includes funded projects between Microsoft researchers and MIT professors and students.

Projects funded in 2019

Distributed, Private and Efficient Machine Learning
Vinod Vaikuntanathan (MIT), Yael Kalai (Microsoft), Lisa Yang (MIT)

State-Based Approaches for Verifying and Testing Neural Networks
Martin Rinard (MIT), Shuvendu Lahiri (Microsoft), Madan Musuvathi (Microsoft), Kai Jia (MIT)

Safe Online Reinforcement Learning in Networked Systems
Mohammad Alizadeh (MIT), Siddhartha Sen (Microsoft), Hongzi Mao (MIT)

Off-policy evaluation for risk-aware autonomous systems
Cathy Wu (MIT), Alekh Agarwal (Microsoft), Adith Swaminathan (Microsoft), Vindula Jayawardana (MIT)

Exploration of robust machine learning for high-stakes predictions
John Guttag (MIT), Eric Horvitz (Microsoft), Maggie Makar (MIT), Agni Kumar (MIT)

Projects funded in 2018

ML with Theoretical Guarantees
Stefanie Jegelka (MIT), Matthew Staib (MIT), Hongzhou Lin (MIT)

Towards ML you can Rely on
Aleksander Madry (MIT), Dimitris Tsipras (MIT), Kai Xiao (MIT)

Robustness meets Algorithms
Ankur Moitra (MIT), Sitan Chen (MIT), Allen Liu (MIT)

Efficient and Explainable ML Algorithms using Coresets
Daniela Rus (MIT), Cenk Baykal (MIT), Lucas Liebenwein (MIT)

Bayesian ML: uncertainty and robustness at scale
Tamara Broderick (MIT), William Stephenson (MIT), Raj Agrawal (MIT), Lorenzo Masoero (MIT), Ryan Giordano (MIT)

Faculty & student collaborators

Professor Daniela Rus
Professor Aleksander Madry
Professor Stefanie Jegelka
Professor Ankur Moitra
Professor Tamara Broderick
Professor Vinod Vaikuntanathan
Professor Martin Rinard
Professor Mohammad Alizadeh
Professor Cathy Wu
Professor John Guttag

Compute resources

Microsoft is proud to provide the students and faculty at MIT CSAIL in this collaboration with Azure compute resources to support their work on addressing concerns about the trustworthiness and robustness of AI systems. For questions about resources, please contact Jessica Mastronardi.

TRAC Workshops

Microsoft researchers and MIT faculty and students come together throughout the year, in Cambridge and Redmond, to share recent research advancements and breakthroughs as we look ahead to further advances in trustworthy and robust AI.

View presentations from past TRAC workshops:

November 14–15, 2019 here

February 8, 2019 here

The next workshop is planned for July 2020.