Using AI during a participatory process, part 1

Translation

Example platforms with this feature: Civocracy, Assembl, Your Priorities, Decidim

Translation is the most commonly used AI feature we observed, appearing in 19 out of 30 digital participation platforms. That frequency reflects its appeal: AI translation lets platforms reach as many communities as possible.

Human translation is generally acknowledged to be more accurate, especially between certain languages, but its cost can make it prohibitive for participatory programs. When a program can't afford human translation across multiple languages, AI (or "machine") translation can make a participatory process and its related materials (not to mention the large volumes of participants' inputs) far more accessible.

In countries like Indonesia, where over 700 languages are spoken, the time and cost required to fully localize a product and participants' contributions would be prohibitive. Progress is also being made on using AI to reinforce so-called "endangered" languages globally. The result is that people can participate in whichever language they are most comfortable using.
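
None of these platforms publish their translation pipelines, but the basic mechanics are easy to sketch. Below is a minimal, illustrative example of machine-translating participant comments with the open-source Hugging Face transformers library and an OPUS-MT English-to-Indonesian model; the comments and language pair are our own choices, not any platform's actual stack.

```python
# Minimal sketch: machine-translating participant comments.
# Assumes the open-source Hugging Face `transformers` package;
# the comments and language pair are illustrative.
from transformers import pipeline

# Helsinki-NLP's OPUS-MT models cover many language pairs;
# here English -> Indonesian as an example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-id")

comments = [
    "The park needs more shade near the playground.",
    "Please extend library hours on weekends.",
]

for comment in comments:
    result = translator(comment, max_length=256)
    print(result[0]["translation_text"])
```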

Although not the same as translating between two languages, AI is also being used to make government legalese and other dense texts more legible to more people. The Empurrando Juntas team in Brazil, for example, uses AI to "translate" institutional jargon and acronyms to more approachable language.

Parsing, summarizing, and classifying participant contributions

Example platforms with this feature: Polco, deliberAIde, Bang the Table, EngagementHQ, CartoDEBAT, Place Speak, coUrbanize, Fluicity (Efalia Engage), Insights, Konveio, PublicInput, 76engage, Sensemaker, Consult (i.AI UK)

Traditional survey methods often require citizens to choose from a finite set of potential answers, as in a multiple-choice question, because that makes it easier for the group running the survey to aggregate and tally the results. The surveyors get the analog equivalent of structured data back from participants. But asking participants to choose from limited answer sets too often restricts the diversity and depth of their responses, and the opportunity to learn more about their lived experience and expertise is lost.

As the Carnegie Endowment for International Peace put it:

"While multiple-choice polls can capture top-of-mind opinions on predefined options, they rarely surface deeper insights or emergent ideas. In contrast, open-ended surveys, personal reflections, and group deliberation offer far richer input—but have often gone underutilized, not because they lack value but because institutions lacked the time, expertise, or tools to apply them. Research, for example, shows that open-ended responses offer windows into public attitudes, but extracting that meaning requires conceptual grounding and interpretive care."

Generative AI's ability to efficiently parse large amounts of speech, text, and other forms of engagement could allow hosts of participatory processes to ask more open-ended, qualitative questions that let people share how they really feel. And thanks to some of the AI features here, administrators will be able to deftly handle large volumes of text, even if they receive a hundred thousand pages of participants' transcripts.

That figure is not an exaggeration. The UK government runs about 600 public consultations each year, and some individual consultations attract over 100,000 responses. Analyzing those responses takes hundreds of thousands of hours of staff and contractor time, and it delays the turnaround between participation and results.

AI has been used for years to address these challenges, even predating the current generative AI boom. More mature participation AI features apply Natural Language Processing and similar methods to mine large volumes of text (participants' contributions) and classify discussion topics or estimate sentiment (how people felt about a topic). They can cluster wide-ranging conversations into a more finite set of topics for participants and process hosts alike to more easily navigate, during and after a discussion.
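
To make that concrete, here is a minimal sketch of the pre-LLM approach: cluster free-text comments into topics with TF-IDF and k-means, then estimate sentiment with a lexicon-based scorer. The comments, cluster count, and tooling (scikit-learn and NLTK's VADER) are our illustrative choices, not any platform's implementation.

```python
# Sketch of classic NLP analysis of participant comments:
# cluster responses into topics, then estimate sentiment.
# Uses scikit-learn and NLTK's VADER; the data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from nltk.sentiment import SentimentIntensityAnalyzer
import nltk

nltk.download("vader_lexicon", quiet=True)

comments = [
    "The water fountains in the park are always broken.",
    "Please fix the fountains before summer.",
    "Bus service after 10pm would help night-shift workers.",
    "Late-night buses are badly needed on the east side.",
]

# Turn comments into TF-IDF vectors and group them into topics.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Estimate how people felt about each comment.
sia = SentimentIntensityAnalyzer()
for comment, topic in zip(comments, labels):
    score = sia.polarity_scores(comment)["compound"]
    print(f"topic {topic} | sentiment {score:+.2f} | {comment}")
```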

Topic clustering illustrates the steady performance improvements AI has made in recent years. While clustering has been possible for some time, improvements to the models have unlocked better, more specific descriptions of the conversational "buckets". The resulting theme descriptions are more interesting: consider the difference between "people discussed park infrastructure" and "the community is adamant that water fountains are working during summer heatwaves."
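
Those richer descriptions typically come from handing each cluster's comments to a generative model and asking for a specific, concrete theme. Below is a hedged sketch using the openai Python client; the prompt wording and model name are our assumptions, not any platform's documented approach.

```python
# Sketch: turning a cluster of comments into a specific theme
# description with an LLM. Assumes the `openai` Python client;
# the prompt and model choice are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

cluster_comments = [
    "The water fountains in the park are always broken.",
    "Please fix the fountains before summer.",
]

prompt = (
    "Summarize the following public comments as one specific, "
    "concrete theme sentence (not a generic category):\n\n"
    + "\n".join(f"- {c}" for c in cluster_comments)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```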

How does AI perform at such a task? In the same UK study, Consult's AI theme identification was compared against human reviewers doing the same work. The study found that the model "correctly identified all themes for three-fifths of responses and had generally good overall performance". Its score wasn't perfect, but it wasn't far off from the level of subjective disagreement between human reviewers, either.
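
Comparisons like this hinge on measuring agreement between the model and human coders (and between the coders themselves). As a simple illustration, the sketch below computes exact-match accuracy and chance-corrected agreement (Cohen's kappa) over two sets of theme labels; the labels are invented, and this is not the study's actual methodology.

```python
# Sketch: measuring agreement between AI-assigned and
# human-assigned theme labels, per response. Labels invented.
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_labels = ["transport", "parks", "parks", "housing", "transport"]
ai_labels    = ["transport", "parks", "housing", "housing", "transport"]

# Share of responses where the AI matched the human coder exactly.
print("exact match:", accuracy_score(human_labels, ai_labels))
# Agreement corrected for chance; the same metric can be run
# between two human coders to gauge how much they disagree.
print("cohen's kappa:", round(cohen_kappa_score(human_labels, ai_labels), 2))
```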

Screenshot from Konveio showing AI-powered topic extraction and sentiment analysis: https://www.konveio.com/features/analytics-reporting

Likely because these technical methods have been available for years, we found that digital participation platforms are currently more likely to use them than Large Language Model (or generative AI) features. This finding could easily shift as LLM-based features mature and developers have more time to test and integrate them.

These earlier AI features still require ethical consideration; accurate sentiment analysis is notoriously tricky, for example. But the rapid arrival of LLM-based approaches has introduced other concerns.

As you've probably experienced firsthand, generative AI can do magical things with large amounts of text: it can summarize, rewrite, translate, expand, and more. But it sometimes inserts or distorts things that weren't really in the data. This problem is referred to as 'hallucination', and fixing it is a very active area of research.
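
One simple guardrail, among many being researched, is to check a summary's claims back against the source text. The toy sketch below verifies that anything the summary presents as a direct quote actually appears in the original comments; the check itself is our illustration, not a production-grade fix.

```python
# Toy sketch of a hallucination check: verify that every phrase
# the summary presents as a quote actually appears in the source.
# Illustrative only; real grounding checks are more involved.
import re

source = (
    "Residents asked for working water fountains and more "
    "shade near the playground before the summer heatwaves."
)
summary = (
    'Participants want "working water fountains" and '
    '"a new swimming pool" before summer.'
)

for quote in re.findall(r'"([^"]+)"', summary):
    status = "OK" if quote.lower() in source.lower() else "NOT IN SOURCE"
    print(f"{status}: {quote}")
# Output flags "a new swimming pool" as unsupported by the source.
```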

And how does the public feel about their comments being reviewed by AI? In a different study, Nesta worked with the UK government's AI office to evaluate public opinion about the same Consult tool's use of AI to summarize participants' contributions. 

Generally, the UK public understood the rationale for using AI here and valued the efficiency gains that Consult provides, but quite astutely "wanted to know what the cost- and time-savings would be used for". Efficiency is not an end in itself; resources spared from one task could be re-invested in another aspect of engagement. The public was also concerned about the potential for AI model manipulation and about its environmental impact.

And they, like many others, want human oversight of the Consult tool. Their argument, that "more sensitive and local issues should require a higher degree of human checking to ensure that the Consult tool didn’t miss critical information," is sound.

Adjust the AI model to your program in real-time

Example platforms with this feature: Decidim, Participativo Brasil

Another way to improve an AI model's performance is to give people the ability to train it on the specific data coming out of your digital participation process. Decidim has developed this real-time feature for moderating spam on your platform. One can imagine how training a model on your specific process could unlock other benefits, such as reducing hallucination and adapting the model to context-specific speech, terminology, and behavior.
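
Decidim's internals aren't documented here, but the general pattern, updating a lightweight classifier as moderators label new platform data, can be sketched with scikit-learn's incremental-learning API. Everything below (data, features, model choice) is illustrative, not Decidim's actual implementation.

```python
# Sketch of adapting a spam classifier to one platform's data in
# (near) real time via incremental learning. A generic pattern,
# not Decidim's actual implementation.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")

# Initial batch of moderator-labeled comments (invented).
texts = ["Buy cheap followers now!!!", "I support the new bike lanes."]
labels = [1, 0]  # 1 = spam, 0 = legitimate
model.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

# Later: each new moderation decision updates the model immediately,
# so it adapts to this community's specific speech and spam patterns.
new_texts = ["Click here for FREE crypto", "Fountains break every summer."]
new_labels = [1, 0]
model.partial_fit(vectorizer.transform(new_texts), new_labels)

print(model.predict(vectorizer.transform(["FREE followers, click now"])))
```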

Brazil's national participation platform, built on Decidim, evaluates real-time participation data to flag imbalances in who's participating. Its program managers can then intervene to encourage greater balance. The Stanford Online Deliberation Platform nudges users who aren't participating, and helps organizers keep conversations on-topic and following the meeting agenda.
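
A simplified version of that imbalance check compares each group's share of contributions to its share of the population and flags large gaps. The numbers and threshold below are invented for illustration:

```python
# Sketch: flag groups under-represented among participants
# relative to the wider population. All figures and the
# threshold are invented for illustration.
population_share = {"under_30": 0.35, "30_to_60": 0.45, "over_60": 0.20}
participant_share = {"under_30": 0.10, "30_to_60": 0.55, "over_60": 0.35}

# Flag any group participating at less than half its population share.
THRESHOLD = 0.5

for group, pop in population_share.items():
    ratio = participant_share[group] / pop
    if ratio < THRESHOLD:
        print(f"Under-represented: {group} "
              f"(participating at {ratio:.0%} of population share)")
```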

Previous: Using AI before a participatory process starts
Next: Using AI during a participatory process, part 2