Best Practices for Platform Managers Using AI

Do’s and don’ts for AI adoption

If you are just beginning to explore AI in digital participation, we recommend starting with the most common use cases, where participation platform developers have the most experience leveraging the technology and where the AI itself is often a bit more mature. Based on our recent research, the most frequently observed AI features on participation platforms are:

  1. Translation

  2. Sentiment analysis

  3. Topic clustering

  4. AI discussion moderation

  5. Parsing and summarizing large amounts of text

While these features can still produce errors, many participation platforms have already supported them for years.

People Powered’s own community-developed AI principles cautiously encourage the use of AI for tasks like text editing (such as fixing spelling and grammar, or improving content organization and brand voice consistency). This recommendation is offered within the context of existing digital platforms, and might not apply if your application requires training an AI model on significant amounts of custom, original data. While translation is already proving useful for many people, you should experiment with multiple offerings to test their accuracy across your target language(s), and be transparent about where you use AI in external communications.

Participation platforms have already demonstrated AI's utility for content summarization, as current Large Language Models (LLMs) often perform this task with reasonable accuracy. This application alone can greatly increase efficiency and help extract and name key themes that might otherwise be missed in the volumes of public feedback. LLMs still make mistakes, though, and you should validate the accuracy and reproducibility of any summaries they produce. You can do this through human spot-checks and by inviting public scrutiny of their work (as long as the community feedback is intended to be public).
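
If you want to formalize this kind of spot-checking, a minimal sketch is shown below. It assumes a placeholder summarize() function standing in for whichever LLM or summarization service you actually use, and an illustrative spot_check_rate parameter; it is not any particular platform's API.

```python
import random

def summarize(text: str) -> str:
    # Placeholder standing in for a call to whichever LLM or summarization
    # service your platform uses; kept trivial so the sketch runs as-is.
    return text[:200] + ("..." if len(text) > 200 else "")

def summarize_feedback(comments: list, batch_size: int = 50,
                       spot_check_rate: float = 0.1) -> dict:
    """Summarize public feedback in batches and sample batches for human review."""
    summaries = []
    for i in range(0, len(comments), batch_size):
        batch = comments[i:i + batch_size]
        summaries.append({
            "source_comments": batch,
            "summary": summarize("\n".join(batch)),
        })

    # Randomly sample a share of batches for a human spot-check: a reviewer
    # reads the source comments alongside the AI summary and confirms that
    # no key theme was invented or dropped.
    sample_size = min(max(1, int(len(summaries) * spot_check_rate)), len(summaries))
    review_queue = random.sample(summaries, sample_size)
    return {"summaries": summaries, "human_review_queue": review_queue}
```

Keeping the source comments attached to each summary makes the spot-check easy: the reviewer always sees the AI's output next to the text it claims to summarize.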

We generally encourage exploratory use of AI for automating workflows, but complex tasks can still increase the chance of errors. Here, and in many other AI applications, regular monitoring and evaluation by humans is crucial. You or your team members should verify the AI's outputs in the context of your own subject matter expertise and familiarity with the community. Given how much time AI can save, you should have capacity available to regularly check in on its performance.

The People Powered AI policy does flag certain applications as high-risk. For example, AI notetaking bots that transcribe online meetings have proliferated; we have prohibited them in our own non-public meetings in order to respect participants' privacy and data security rights. We also prohibit using AI to monitor employee activity or "productivity", as it poses a significant risk to trust (and often fails to measure the holistic value of someone's work, anyway).

This may be obvious, but we recommend AI not be used to create synthetic content, including images, videos, or human voices, unless it is clearly labeled as artificial. Generating an image that simulates a near-future version of your street in order to advocate for a more vibrant community is great. Attempting to pass off artificially generated content, such as deepfakes, as genuine community engagement stands against everything participatory democracy programs seek to accomplish.

Across the board, AI tools confidently assert their ability to accurately accomplish certain tasks, whether or not they are actually capable of doing the job well. Do not base important decisions on AI's authoritative tone or "predictive" abilities.

AI Moderation

Given the costs of human moderators, many platform developers have applied AI to the task of moderation. Even if you use AI to help moderate discussions, we recommend keeping a close eye on what it's finding and whose contributions are being flagged. While AI models can help identify "toxic" language and automate responses to it, they have historically struggled to assess participants' true sentiment or to interpret posts that rely on nuanced communication styles like sarcasm or irony.
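
As a concrete illustration of that "close eye", here is a minimal sketch that scores each comment with a hypothetical score_toxicity() function (standing in for whatever classifier or moderation API your platform provides) and flags, rather than removes, anything above a tunable threshold. The names and the threshold value are our own assumptions, not a specific product's interface.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    comment_id: str
    toxicity: float   # 0.0 (benign) to 1.0 (toxic), as reported by your classifier
    action: str       # "allow" or "flag_for_review"

def score_toxicity(text: str) -> float:
    # Placeholder standing in for whatever toxicity model or moderation API
    # your platform provides; returns a neutral score so the sketch runs.
    return 0.0

def moderate(comment_id: str, text: str, threshold: float = 0.7) -> ModerationResult:
    # Flag rather than auto-remove: sarcasm, irony, and reclaimed language are
    # exactly where classifiers misfire, so a human should make the final call.
    score = score_toxicity(text)
    action = "flag_for_review" if score >= threshold else "allow"
    return ModerationResult(comment_id, score, action)
```

Reviewing which comments (and whose) get flagged over time also gives you the evidence you need to adjust the threshold or retrain the model.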

You'll likely want to fine-tune the AI moderator as you see how it performs with your participants. Platforms like Decidim offer the ability to train the moderator in real time on the content appearing on your platform, rather than relying on a pre-trained model that might not include your community in its training data. We also recommend human facilitators to help guide discussions in productive directions, even if you rely on AI to weed out toxic content.

An important concern we have with AI-powered moderation features is that whoever sets the rules of the model can abuse that power to dictate what's permissible to say and what isn't. Whether it's the AI developer, the platform developer, or the host of the participatory process, the AI will automatically enforce their standards, whether or not those standards are just. For example, some governments have pressured AI companies to manipulate their data, and some major companies have complied. In addition to enshrining a false version of history, AI moderation models may respond by "rate limiting" or otherwise suppressing the speech of participants found to be violating the moderation rules, however those rules were set to begin with.

We wholeheartedly support general moderation rules that discourage individuals from attacking others based on legally protected characteristics like race, gender, or disability, or otherwise harassing people or violating their human rights. The interpretation of these principles into content moderation programs is an entire field often referred to as user "Trust and Safety," and it's now a hotly contested arena due to the power these rules represent. When in doubt, we side with creating healthy community spaces where diversity is welcome and individuals' rights are protected, and with allocating adequate human resources relative to the volume of your community's participation.

Human-in-the-loop

A "human-in-the-loop" process requires human oversight and intervention at various stages of AI-powered operations to help ensure accuracy, quality, and ethical considerations that the model might mess up. It's a very common feature of institutional AI principles because they've decided that humans remain ultimately responsible and accountable for final actions and decisions, including the use of AI outputs.

Take AI moderation, for example. Even with the technical ability to address harassment (like dialogs that nudge users towards more civil language as they're writing), maintaining human involvement at the administrator level is critical. Hybrid models that combine AI, crowdsourced filtering (where other participants can flag problematic language for review), and "expert" human review can be effective in large-scale engagements.

In this setup, the AI system might automatically flag potentially problematic comments in discussions. Human moderators would then review these flags to interpret the nuances of the conversations (especially complex posts that AI might misinterpret), and make the final decision about content removal and/or intervention with the potentially offending user. The human review helps ensure that conversations remain productive and civil, limiting the negative impacts of bad-faith participants, while reducing the chance that the automated system missteps in its interpretation and response.
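
One way to picture this hybrid setup is as a shared review queue that collects flags from both the AI classifier and participant reports, and only records an outcome when a human moderator decides it. The sketch below is illustrative; the class and field names are our own assumptions, not any particular platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FlaggedItem:
    comment_id: str
    text: str
    sources: list = field(default_factory=list)   # e.g. "ai_classifier", "participant_report"
    decision: Optional[str] = None                 # "keep", "remove", or "contact_author"
    decided_by: Optional[str] = None               # human moderator's identifier
    decided_at: Optional[datetime] = None

class ReviewQueue:
    """Collects flags from the AI classifier and from participants;
    only a human moderator can record the final decision."""

    def __init__(self) -> None:
        self._items: dict = {}

    def flag(self, comment_id: str, text: str, source: str) -> None:
        # The same comment can be flagged by multiple sources; keep one record.
        item = self._items.setdefault(comment_id, FlaggedItem(comment_id, text))
        if source not in item.sources:
            item.sources.append(source)

    def pending(self) -> list:
        return [i for i in self._items.values() if i.decision is None]

    def decide(self, comment_id: str, decision: str, moderator: str) -> FlaggedItem:
        item = self._items[comment_id]
        item.decision = decision
        item.decided_by = moderator
        item.decided_at = datetime.now(timezone.utc)
        return item   # the record doubles as an audit trail for transparency reporting
```

Keeping the decision record (who decided, what, and when) also makes it easier to disclose your moderation practices to the community later on.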

This type of model is especially important for participatory processes that involve meaningful decision-making power, as discussions among different groups can become contentious, and the process results can be consequential. AI outputs should always be verified by humans, ideally throughout the process as it runs, rather than just at the end.

Communicating clearly with participants

Transparency regarding AI use is another common principle. Your AI use, including that of your digital platform, should be reasonably documented and disclosed to your community. Any significant decisions that were made or supported by AI should also be disclosed. We recommend noting when you use AI translation so that participants speaking other languages can be alert to errors. Your commitment to transparency is vital for building and maintaining trust with participants.

Managing ethical concerns and public perception

Despite its benefits, AI adoption presents significant challenges, including bias and ethical use. AI-enabled tools risk reinforcing inequalities, eroding trust, or further excluding marginalized voices. With and without AI, it's crucial to be in conversation with your community throughout the participatory process, be ready to adjust, and be open to taking your community's pulse on whether and how they would like to see AI implemented.

Key ethical and societal concerns surrounding AI include:

  • Bias: AI models can and do reproduce social biases originating in their training data.

  • Lack of transparency and explainability: Many widely available AI models have proprietary and opaque training data and algorithms, making effective oversight difficult. The technology itself can make it difficult even for a model's creators to understand why it produces certain outputs.

  • Manipulation risks: AI can generate large volumes of text (bots) or realistic-seeming deepfake audio and video that could be used to pollute participatory processes. For example, an interest group could create a large number of credible fictional personas, diminishing the contributions of human participants and outvoting them.

  • User perception: AI changes the user experience in ways participants may not expect. People are understandably wary of AI, and may lose trust in a process that uses it, especially if it's used without clear disclosure.

  • Inclusion and equity: If not carefully managed and checked, AI could unintentionally reinforce existing power imbalances like over-representing majority cultures and language groups.

  • Data privacy: Protecting data privacy is crucial, as AI services might train on participants' data or accidentally "leak" users' data to other users or third-party services.

To manage these concerns, you should:

  • Prioritize applying AI in contexts that respect human dignity and promote equitable access to resources.

  • Prioritize inclusive and democratic approaches to training, testing, and deploying AI tools.

  • Be an active participant in shaping norms around your community's AI use and developing responsible practices for it.

  • Strive to use AI tools in the most ethical, sustainable, and transparent ways possible, while maintaining vigilance against their still emerging harmful effects.
