We funded 10 teams from around the world to design ideas and tools to collectively govern AI. We summarize the innovations, outline our learnings, and call for researchers and engineers to join us as we continue this work.
As AI gets more advanced and widely used, it is essential to involve the public in deciding how AI should behave in order to better align our models to the values of humanity. In May, we announced the Democratic Inputs to AI grant program. We then awarded $100,000 to 10 teams out of nearly 1,000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems. Throughout, the teams tackled challenges like recruiting diverse participants across the digital divide, producing a coherent output that represents diverse viewpoints, and designing processes with sufficient transparency to be trusted by the public.
At OpenAI, we’ll build on this momentum by designing an end-to-end process for collecting inputs from external stakeholders and using those inputs to train and shape the behavior of our models. We’re excited to combine our research with ideas and prototypes developed by the grant teams in the coming months.
In this update, we will cover:
## How our grant recipients innovated on democratic technology
We received nearly 1,000 applications across 113 countries. There were far more than 10 qualified teams, but a joint committee of OpenAI employees and external experts in democratic governance selected the final 10 teams to span a set of diverse backgrounds and approaches: the chosen teams have members from 12 different countries and their expertise spans various fields, including law, journalism, peace-building, machine learning, and social science research.
During the program, teams received hands-on support and guidance. To facilitate collaboration, teams were encouraged to describe and document their processes in a structured way (via “process cards” and “run reports”). This enabled faster iteration and easier identification of opportunities to integrate with other teams’ prototypes. Additionally, OpenAI facilitated a special Demo Day in September for the teams to showcase their concepts to one another, OpenAI staff, and researchers from other AI labs and academia.
The projects spanned different aspects of participatory engagement, such as novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior. Notably, across nearly all projects, AI itself played a useful role as part of the processes, in the form of customized chat interfaces, voice-to-text transcription, data synthesis, and more.
Today, along with lessons learned, we share the code that teams created for this grant program, and present brief summaries of the work accomplished by each of the 10 teams:
## ⚖️ Case Law for AI Policy
Creating a robust case repository around AI interaction scenarios that can be used to make case-law-inspired judgments through a process that democratically engages experts, laypeople, and key stakeholders.
## 💬 Collective Dialogues for Democratic Policy Development
Developing policies that reflect informed public will using collective dialogues to efficiently scale democratic deliberation and find areas of consensus.
## 🤝 Deliberation at Scale: Socially democratic inputs to AI
Enabling democratic deliberation in small group conversations conducted via AI-facilitated video calls.
## 🦉 Democratic Fine-Tuning
Eliciting values from participants in a chat dialogue in order to create a moral graph of values that can be used to fine-tune models.
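The post does not specify how the moral graph is aggregated. One simple reading, assuming each participant comparison yields an edge `(a, b)` meaning “value b is wiser than value a” (the value names and scoring rule below are illustrative, not the team’s actual method), is a net-endorsement ranking:

```python
from collections import defaultdict

def rank_values(edges):
    """Rank values in a moral graph by net endorsements: each edge (a, b)
    records one participant judging value b wiser than value a."""
    score = defaultdict(int)
    for a, b in edges:
        score[b] += 1
        score[a] -= 1
    # Highest net score first; top values are candidates for fine-tuning data.
    return sorted(score, key=score.get, reverse=True)

edges = [
    ("blind obedience", "informed autonomy"),
    ("blind obedience", "informed autonomy"),
    ("informed autonomy", "care for wellbeing"),
    ("care for wellbeing", "informed autonomy"),
]
print(rank_values(edges))  # 'informed autonomy' ranks first
```

A real pipeline would elicit these comparisons via chat dialogue, as the team describes, rather than from a hand-written edge list.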
## ⚡ Energize AI: Aligned - a Platform for Alignment
Developing guidelines for aligning AI models with live, large-scale participation and a 'community notes' algorithm.
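The post names a “community notes” algorithm without detailing it. The defining idea of that family of methods is bridging: a proposed guideline ranks highly only when raters from different viewpoint clusters agree. A minimal sketch (cluster names and ratings are invented for illustration):

```python
def bridging_score(ratings_by_cluster):
    """Score a proposed guideline by its *lowest* average approval across
    viewpoint clusters, so it ranks highly only if every cluster supports it."""
    return min(sum(r) / len(r) for r in ratings_by_cluster.values())

# 1 = helpful, 0 = not helpful; two hypothetical viewpoint clusters.
broad = {"cluster_a": [1, 1, 1, 0], "cluster_b": [1, 0, 1, 1]}
partisan = {"cluster_a": [1, 1, 1, 1], "cluster_b": [0, 0, 1, 0]}
print(bridging_score(broad))     # 0.75 -- both clusters approve
print(bridging_score(partisan))  # 0.25 -- popular with one cluster, fails to bridge
```

Production systems use more sophisticated models (e.g., matrix factorization over rater–item matrices), but the min-across-clusters rule captures why a guideline favored by only one faction does not surface.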
## 👫 Generative Social Choice
Distilling a large number of free-text opinions into a concise slate that guarantees fair representation using mathematical arguments from social choice theory.
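The team’s actual pipeline queries language models and proves formal proportional-representation guarantees; as a loose illustration only, the core selection problem resembles greedy max-coverage: pick a slate of k statements so that as many participants as possible see a statement they endorse (statement names and approval sets below are invented):

```python
def select_slate(approvals, k):
    """Greedily pick k statements, each round choosing the statement
    endorsed by the most not-yet-covered participants (max-coverage)."""
    covered, slate = set(), []
    for _ in range(k):
        best = max(approvals, key=lambda s: len(approvals[s] - covered))
        slate.append(best)
        covered |= approvals[best]
    return slate

# Which participants (by id) endorse each candidate statement.
approvals = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {5, 6},
}
print(select_slate(approvals, 2))  # ['s1', 's3'] covers 5 of 6 participants
```

Greedy max-coverage alone does not provide the fairness guarantees the team derives from social choice theory; it only conveys the shape of the distillation task.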
## 🌎 Inclusive.AI: Engaging Underserved Populations in Democratic Decision-Making on AI
Facilitating decision-making processes related to AI using a platform with decentralized governance mechanisms (e.g., a DAO) that empower underserved groups.
## 📰 Making AI Transparent and Accountable by Rappler
Enabling discussion and understanding of participants' views on complex, polarizing topics via linked offline and online processes.
## 🎨 Ubuntu-AI: A Platform for Equitable and Inclusive Model Training
Returning value to those who help create it while facilitating LLM development and ensuring more inclusive knowledge of African creative work.
## 🔁 vTaiwan and Chatham House: Bridging the Recursive Public
Using an adapted vTaiwan methodology to create a recursive, connected participatory process for AI.
## Key learnings from the grant program so far
### Public opinion can change frequently
Teams captured participant views in multiple ways, and many found that those views changed frequently, suggesting that input collected at a single point in time may not fully represent public opinion.
### Bridging across the digital divide is still difficult and this can skew results
Reaching relevant participants across digital and cultural divides might require additional investments in better outreach and better tooling.
### Finding agreement within polarized groups
Finding a compromise can be hard when a small group has strong opinions on a particular issue.
### Reaching consensus vs. representing diversity
When trying to produce a single outcome or make a single decision to represent a group, there might be tension between trying to reach consensus and adequately representing the diversity of various opinions. It’s not just about siding with the majority, but also giving a platform to different viewpoints.
### Hopes and anxieties about the future of AI governance
Some participants felt nervous about the use of AI in writing policy and would like transparency regarding when and how AI is applied in democratic processes. After deliberation sessions, many teams found that participants became more hopeful about the public's ability to help guide AI.
## Our implementation plans
Our goal is to design systems that incorporate public inputs to steer powerful AI models while addressing the above challenges. To help ensure that we continue to make progress on this research, we are forming a “Collective Alignment” team of researchers and engineers dedicated to this work.
We are recruiting exceptional research engineers from diverse technical backgrounds to help build this work with us. If you’re excited about what we’re doing, please apply to join us!
Tyna Eloundou, Teddy Lee
AJ Ostrow, Alex Beutel, Andrea Vallone, Arka Dhar, Atoosa Kasirzadeh, Artemis Seaford, Aviv Ovadya, Chris Clark, Colin Megill, Erik Ritter, Gillian Hadfield, Greg Brockman, Gretchen Krueger, Hélène Landemore, Jason Kwon, Lama Ahmad, Miguel Manriquez, Miles Brundage, Natalie Cone, Ryan Lowe, Shibani Santurkar, Wojciech Zaremba, Yo Shavit