Recommendations for AI and platforms implementing AI
/Recommendations for AI and platforms implementing AI
Designing for public interest and democratic values
Whether you're developing your own AI or deploying an existing model, participatory platform developers play a critical role in shaping how this technology ultimately impacts participatory democracy. We're counting on you to design AI tools and platforms that meaningfully facilitate engagement and shape better societal outcomes. This includes ensuring that your AI model or features support existing goals such as helping institutions engage constituents more often, in more depth, and involving people in real decision-making.
We would also like to see civil society and democracy activists get exposure and access to trustworthy AI systems earlier in the technology adoption cycle, so they can prepare participatory processes and institutions for impending technology-driven disruption. They need your support developing capacity to test AI and avoiding the deployment of inhumane systems.
Working with governments and civic actors
As you probably already know, governments, institutions, and civil society organizations have specific needs that vary widely and differ from other markets. As you design, develop, and integrate AI tools to serve these markets, it would help to directly collaborate in order to understand how their unique contexts will affect your product (and vice versa).
Taking a collaborative approach ensures that your AI solutions are practical and relevant for real-world participatory programs. It can also introduce your cutting-edge solutions in a safe environment, like a regulatory sandbox, allowing you to work together to address and mitigate the very concerns that could otherwise prevent adoption of your platform.
Ethics-by-design: inclusion, bias, transparency
Your product design process should include ethical considerations, like inclusion, bias, and transparency, from the start. You'll need to consider:
Transparency and explainability: How can the workings of AI algorithms be made clear and understandable to participants and platform managers? If you're integrating AI into an existing platform, you should document and disclose where it's showing up and what it's intended to help with. (The sparkle symbol ✨ seems to have emerged as a universal indicator of an AI feature).
Modern language models can now show their "chain of thought" to users. This means they visually display the process by which they got to their answer.Bias mitigation: How can biases in the underlying training data be accounted for, and how can your algorithms be designed to prevent the unintentional reinforcement of inequalities or exclusion of marginalized voices? One way to identify solutions here is to be transparent about how your system works. While you might be the expert of your product, other people with different lived experiences are likely going to be able to help you spot a potential problem with how your system interacts with their community. Engaging them openly can help you address it before it does any harm.
Inclusion: Design for human dignity and equitable access to resources. By prioritizing inclusive and democratic approaches to training, testing, and deployment of your AI, you can directly support the broader objectives of participatory democracy.
Responsible data use: Can you avoid training AI on customers' data in general, or at least without their express permission? What alternatives can you offer? User privacy is an important priority, and increasingly a legally-required one. You can save yourself a lot of trouble by not collecting personally identifiable or other sensitive data in the first place.
Limit AI hallucination
One technique to address the hallucination issue is Retrieval-Augmented Generation. This approach basically focuses the AI model on a set of pre-approved verified documents. Doing so can limit the likelihood that the model hallucinates some totally original (mis)information. This technical approach is proving popular in legal contexts, where AI outputs must be based on established law, policy, or precedent.
Participatory platforms like Policy Synth, Parla, UrbanistAI, Konveio, Redbox, Local Minutes, and Congressional RAG use this method, among others.
Another method to reduce hallucinations is to set your AI model to "low-creativity" setting. OpenAI's API allows for this adjustment, for example. It was used by POPVOX Foundation in addition to RAG to design a tool for legislative staffers in the US Congress that "avoids AI hallucinations by grounding responses strictly in vetted documents."
Real-world testing and feedback loops
By incorporating real-world testing and feedback loops into your development process, you can navigate difficult scenarios and identify edge cases before they become a problem. This includes practices like co-design, red-teaming, and performing audits of your or your providers' AI models.
Approaching AI tools with an experimental mindset and sharing learnings, successes, and failures with customers, participants, and the broader community engenders empathy for your efforts and promotes continuous improvement. Like in other areas of software development, iterative development of participation AI features helps ensure your platform is truly serving users’ needs, and that any AI is well integrated into complex sociopolitical contexts.
Next: Looking Forward
Previous: Cybersecurity