Sep 1, 2024

Event Roundup: Evidence & International Justice In a Generative AI World

Artificial intelligence presents new challenges and opportunities in upholding accountability and human rights on the global stage. On September 20, the Starling Lab and Hala Systems convened an engaging group of experts – and developed a number of policy recommendations – to address these emerging intersections of international justice, digital evidence, and generative AI. The roundtable was hosted at Georgetown University’s new Capitol Campus by the Tech & Public Policy program at the McCourt School of Public Policy.

United States Ambassador-at-Large for Global Criminal Justice Beth Van Schaack opened the forum with compelling remarks on how generative AI and synthetic media are reshaping the landscape of international justice. She emphasized the dual-edged nature of AI—its potential to bolster forensic verification and reconstruct crime scenes versus its capacity to amplify misinformation and complicate evidence authentication.

Key Themes and Insights

  1. The “Liar’s Dividend” and the Crisis of Trust: Participants unpacked the risks generative AI poses, including its ability to cast doubt on authentic evidence by fostering plausible deniability. A recurring theme was the urgency of establishing robust protocols to distinguish genuine digital content from falsified media.
  2. Provenance and Digital Evidence Standards: The event highlighted cutting-edge technical standards for provenance, such as the Coalition for Content Provenance and Authenticity (C2PA), as transformative tools for ensuring the integrity of digital evidence. Experts shared innovative methods for embedding authenticity markers into multimedia assets to secure their credibility in judicial proceedings.
  3. AI-Driven Forensic Tools: From automating data analysis to 3D crime scene reconstructions, the symposium showcased AI’s ability to streamline evidence collection and processing. Such tools show promise in meeting the complex demands of war crimes investigations and international tribunals.
  4. Human Rights and Metadata Protections: With frontline documentarians and human rights defenders facing increased risks, participants underscored the need for metadata privacy safeguards and opt-in approaches to protect user safety.
  5. Collaborative Futures: A forward-looking session explored how civil society, government institutions, and the private sector could democratize access to AI tools while fostering inclusivity and ethical AI use.

Policy Recommendations

The discussions at the intersection of international justice and AI led to a range of recommendations aimed at fortifying the global justice ecosystem. These have been categorized based on their intended audience: civil society organizations and the Office of Global Criminal Justice (GCJ).

Recommendations for Civil Society

  1. Collaboration and Democratization: Promote deeper collaborations between the private and international justice sectors to democratize AI access. Decision-makers should support efforts ensuring all civil society members benefit equally and address the risk of overlooking underrepresented voices in the justice ecosystem. Shared access to knowledge and technology can ensure these partnerships’ benefits extend beyond a select few, fostering inclusive growth and broader societal impact.
  2. Open Technical Standards: Consider adopting open-source, interoperable digital provenance standards poised to improve the reliability of potential evidence and foster greater trust within court systems. The design of these standards should be informed by cross-disciplinary research and common legal practices. One such example is the Coalition for Content Provenance and Authenticity (C2PA) standard, which addresses the prevalence of misleading information online by certifying the source and history of digital content.
  3. Digital Evidence Protocols: Engage with the development and implementation of protocols for high-quality digital evidence. As a prominent example, the Hala Protocol for the Collection, Processing, and Transfer of Audio Data will outline standards for the use of audio data, bridging a gap between the technical aspects of the collection and the potential evidentiary admissibility and value of audio data. 
  4. Protection of Privacy and Expression: Support authentication technologies that prioritize the protection of human rights defenders by requiring explicit opt-in mechanisms for metadata collection, and empowering them to control when and how their metadata is shared. These protective design principles must be embedded at every stage of the information workflow.
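To make the provenance idea behind recommendations 2–4 concrete, here is a deliberately simplified sketch of how a provenance manifest can bind an asset's cryptographic hash to a signed claim about its origin. This is not the C2PA format: C2PA uses certificate-based signatures and a defined manifest structure, whereas this illustration substitutes a stdlib HMAC as a stand-in signature, and all names (`make_manifest`, `verify_manifest`, the claim fields) are hypothetical.

```python
# Conceptual sketch of content provenance (NOT the actual C2PA format):
# a "manifest" records the asset's hash plus an authenticated origin claim,
# so any alteration of the asset or the claim is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate/key


def make_manifest(asset: bytes, claim: dict) -> dict:
    """Bind an asset's SHA-256 hash and an origin claim under a signature."""
    record = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "claim": claim,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record


def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Check both the signature over the record and the asset hash."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    sig_ok = hmac.compare_digest(manifest.get("signature", ""), expected)
    hash_ok = manifest.get("asset_sha256") == hashlib.sha256(asset).hexdigest()
    return sig_ok and hash_ok


photo = b"raw image bytes"
m = make_manifest(photo, {"source": "field camera", "captured": "2024-09-20"})
assert verify_manifest(photo, m)              # untouched asset verifies
assert not verify_manifest(photo + b"x", m)   # any alteration breaks verification
```

In the real standard, the signing key is a credential issued to a known party, and manifests travel embedded in (or alongside) the media file, which is what lets a court trace a piece of content back to its capture device and editing history.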

Recommendations for the Office of Global Criminal Justice (GCJ)

  1. Training: Hands-on training, including technical trial advocacy skills, in the well-established principles of provenance, authenticity and verification – and methods of applying them to digital content. This will empower practitioners with an effective means of combating mis- and disinformation and malicious synthetic media.
  2. Report Card: The creation of a “consumer report card” for AI models and tools in the investigative and justice fields, to promote greater access and improve digital literacy. Using a standardized grading scale, the report card would evaluate each tool based on transparency, accuracy, and security. It would also emphasize the need for maintaining sustainability and open access, even as technologies evolve or are discontinued. This transparency would empower all to better understand AI’s potential and dangers, while incentivizing tool-makers to prioritize ethical standards, inclusivity, and sustainability.

The Starling Lab Perspective

Consistent with our mission, the Starling Lab participants emphasized practical applications for securing justice in the digital age. Our focus remains on implementing tools for capturing, preserving, and verifying digital evidence while fostering transparency and resilience in justice systems worldwide.

“This is more than a technological challenge—it’s a societal imperative,” said Basile Simon of Starling Lab during the event, adding:

“As generative AI technologies become more accessible, their implications for truth, trust, and accountability demand immediate and sustained attention. Our mission is to ensure that AI supports, rather than undermines, justice.”

Acknowledgments and Next Steps
The symposium featured contributions from global leaders in technology, law, and human rights, including representatives from the State Department’s Office of Global Criminal Justice and LOCATE team, the Prosecutor-General’s Office of Ukraine, the Atrocity Crimes Advisory Group for Ukraine, the Department of Homeland Security, the Georgetown McCourt School of Public Policy, the Georgetown Center for Security and Emerging Technology, the University of Colorado Boulder, Duke School of Law, the Guardian Project, Harvard’s Library Innovation Lab, HRDAG, the Security Force Monitor, EqualAI, Palantir, the Case Matrix Network, Microsoft’s Digital Crimes Unit, WilmerHale, the War Crimes Research Office, Georgetown Law’s International Criminal Justice Program, WITNESS, and Hala Systems.

We thank all the guests for their generous contributions and time.

Starling Lab is especially grateful to the Tech & Public Policy program at the Georgetown University McCourt School of Public Policy for so kindly hosting us in the center of Washington, DC, for this conversation.
