Announcement, Solutions Alicia Demanuele

Future Votes: Safeguarding elections in the digital age

In October 2024, the SRI co-hosted a half-day event with The Dais and Rogers Cybersecure Catalyst to address election integrity, cybersecurity, and disinformation in the age of AI. The result was the Future Votes report, which distills key insights and practical recommendations for policymakers on how to protect our democratic elections.

Research Schwartz Reisman Institute

AI agents pose new governance challenges

How do we successfully govern AI systems that can act autonomously online, making decisions with minimal human oversight? SRI Faculty Affiliate Noam Kolt explores this challenge, highlighting the rise of AI agents, their risks, and the urgent need for transparency, safety testing, and regulatory oversight.

Events Olivia DiGiuseppe

Building trust in AI: A multifaceted approach

The Schwartz Reisman Institute for Technology and Society (SRI) hosted a roundtable discussion on February 11, 2025, as part of the official side events at the AI Action Summit in Paris. The discussion centered on insights from an upcoming SRI paper, Trust in Human-Machine Learning Interactions: A Multifaceted Approach, led by SRI Research Lead Beth Coleman.

Research Schwartz Reisman Institute

Unequal outcomes: Tackling bias in clinical AI models

A new study by SRI Graduate Affiliate Michael Colacci examines how often machine learning algorithms produce biased outcomes in healthcare contexts, and advocates for more comprehensive and standardized approaches to evaluating bias in clinical AI.

Research Schwartz Reisman Institute

Safeguarding the future: Evaluating sabotage risks in powerful AI systems

As AI systems grow more powerful, ensuring their safe development is critical. A recent paper led by David Duvenaud with contributions from Roger Grosse introduces new methods to evaluate AI sabotage risks, providing insights into preventing advanced models from undermining oversight, masking harmful behaviors, or disrupting human decision-making.
