News & Events
Sharing trustworthy AI models
The OECD has recently released a report titled “Sharing Trustworthy AI Models with Privacy-Enhancing Technologies”, highlighting practical approaches to sharing AI models while safeguarding data privacy. The report presents ten real-world use cases, grouped under two main archetypes:
- Enhancing AI model performance through minimal and confidential use of input and test data.
- Enabling secure co-creation and sharing of AI models, without compromising data confidentiality.
To address these challenges, the report explores a range of Privacy-Enhancing Technologies (PETs), including synthetic data, homomorphic encryption, differential privacy, trusted execution environments (TEEs), federated learning, and secure multiparty computation (SMPC).
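To give a flavor of one of these techniques: differential privacy protects individuals by adding calibrated noise to aggregate statistics before release. The sketch below is illustrative only, not drawn from the report; it shows a minimal Laplace-mechanism mean, with the function names and parameters being our own assumptions.

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF sampling (illustrative helper)
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean via the Laplace mechanism.

    Values are clipped to [lower, upper] so the sensitivity of the
    mean is bounded, then noise scaled to sensitivity/epsilon is added.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # Changing one of n bounded values shifts the mean by at most this much
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier results; production systems would use a vetted library rather than a hand-rolled sampler.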
The report was informed by a series of expert workshops organized by the OECD, with participation from international stakeholders. Among them was Antonio Kung (Trialog), representing the International Organization for Standardization (ISO).
The OECD Expert Workshop on PETs and AI, held on June 16, 2025, in Ottawa, also included discussions on the development of a global repository of PETs use cases to support international cooperation and knowledge sharing.
The LICORICE project plans to build on these insights by exploring concrete privacy-related use cases involving Self-Sovereign Identity (SSI) and other innovative PET applications.
Read the full report here: https://www.oecd.org/en/publications/sharing-trustworthy-ai-models-with-privacy-enhancing-technologies_a266160b-en.html