“By quantifying uncertainty, we’re getting closer to designing the novel transparent systems that can function in high-stakes environments. Our goal here is to create a principled approach for robust machine learning – in theory and practice.” – Daniela Rus, MIT CSAIL Director
The collaboration leverages the mutual interests of Microsoft and MIT CSAIL in achieving AI systems across the autonomous, semi-autonomous, and collaborative realms, centered on the vision of extending and augmenting the abilities and intellect of people.
“We’re excited about bringing together leading intellects at MIT CSAIL and Microsoft Research to collaborate on intriguing and important opportunities ahead — and to develop trustworthy AI systems that are safe, reliable, understandable, and fair.” – Eric Horvitz, MSR Labs Director
The research engagement includes funded projects between Microsoft researchers and MIT professors and students.
ML with Theoretical Guarantees
Stefanie Jegelka (MIT), Matthew Staib (MIT), Hongzhou Lin (MIT)
Towards ML You Can Rely On
Aleksander Madry (MIT), Dimitris Tsipras (MIT), Kai Xiao (MIT)
Robustness meets Algorithms
Ankur Moitra (MIT), Sitan Chen (MIT), Allen Liu (MIT)
Efficient and Explainable ML Algorithms using Coresets
Daniela Rus (MIT), Cenk Baykal (MIT), Lucas Liebenwein (MIT)
Bayesian ML: uncertainty and robustness at scale
Tamara Broderick (MIT), William Stephenson (MIT), Raj Agrawal (MIT), Lorenzo Masoero (MIT), Ryan Giordano (MIT)
Professor Daniela Rus
Professor Aleksander Madry
Professor Stefanie Jegelka
Professor Ankur Moitra
Professor Tamara Broderick
Professor Vinod Vaikuntanathan
Professor Martin Rinard
Professor Mohammad Alizadeh
Professor Cathy Wu
Professor John Guttag
Microsoft is proud to provide the students and faculty at MIT CSAIL participating in this collaboration with Azure compute resources to support research on the trustworthiness and robustness of AI systems. For questions about resources, please contact Jessica Mastronardi.
Microsoft researchers and MIT faculty and students come together throughout the year, in Cambridge and Redmond, to share recent research advances and breakthroughs as we look ahead to further progress on trustworthy and robust AI.
View presentations from past TRAC workshops:
November 14–15, 2019
February 8, 2019
The next workshop is planned for July 2020.