The challenges AI presents

In the context of digital participation platforms, the most fundamental challenge AI presents is to what degree we allow it to replace human involvement. People Powered member Brendan Halloran says, "AI is increasingly able to engage people, get their input, integrate it together, and allow us to understand and engage with that content, all with less actual interaction among people. What are the tradeoffs here in contexts of polarization, isolation, low trust, and so on?" Democracy is a process, and people working to make decisions together is a core part of that process. What do we lose by taking shortcuts in that journey?

This isn't the only challenge, of course. If you consider the opposite scenario of each of the value statements in People Powered's AI policy, you will have a good starter list of many of the concerns people have regarding AI:

  • Mission Alignment: The direction of AI could be or become fundamentally misaligned with your broader organizational goals.

  • Human-Centered: AI could dehumanize us, or worsen existing inequalities. In the context of participation, we don't yet know what effect automating participatory processes will have on participants' level of engagement. Would you be less likely to write up your thoughts on something if you felt that no person would read it?

  • Transparency: AI systems are famously opaque, and as they become ubiquitous, disclosure of their use could gradually erode.

  • Accountability: Agentic AI will accelerate taking action on behalf of users, making it harder to trace who is responsible for the outcomes.

  • Sustainability: The industry at large is consuming ever-larger amounts of energy, water, and other resources at a time when the environment can least afford it.

  • Equity & Inclusion: AI developers aren't particularly representative, and the limited availability and expense of advanced AI models reinforces inequality and exclusion. The training data that informs AI models is often rife with problematic and inaccurate biases. Mainstream models fail to include smaller languages or more broadly, less-digitized cultures.

  • Privacy and Security: AI data privacy breaches and attack vectors happen regularly, and are difficult to predict.

Additional fears include AI's potential to cause widespread job loss, its harm to critical thinking capacity, and its tendency toward hallucination, plagiarism, and sycophancy (unquestioningly supporting whatever a user says). Some people are also concerned about the existential threat AI could pose if its incentives no longer align with those of humanity.

While making it easier for more people to contribute to a digital participation platform is absolutely a positive development, the rising tide of AI slop is already polluting social media, and digital participation platforms may not be immune. The sheer volume of machine-generated content could drown out contributions from actual humans, diminishing their value and possibly even the political legitimacy of the process. Participants may be less eager to engage if they believe they're interacting with computers instead of other people.

Generative AI also exacerbates the proliferation of digital disinformation. AI-generated images, videos, and audio are already interfering with first responders in crisis situations. This issue is likely to worsen as production values improve and synthetic media becomes harder and harder to discern from reality.

If not purposefully designed and effectively anchored in human rights and democratic values, AI systems may subtly shift decision-making power away from communities, reducing opportunities for meaningful civic engagement and further marginalizing vulnerable voices.

There will also be "second-order effects": the effects that follow a technology once society has adapted to its availability. For example, the proliferation of high-quality video cameras in every smartphone eventually led to TikTok influencers. In the case of AI, high volumes of "AI slop" flooding social networks are already changing people's experiences on them. And second-order effects are much harder to predict than the initial promises made by advocates for a given technology.

AI's benefits and harms are nuanced, and rapidly changing. As just one example, consider the environmental impacts of AI. It's well known that training new models can be incredibly energy-intensive. There's also been significant media coverage about the environmental impact of each individual query we make to an LLM like ChatGPT. But some models can be run locally once trained, without taking up additional server resources per use. It's also hard to measure AI's net effect: does using AI to complete a task in a fraction of the time it used to require consume more or less energy than the old way of doing things, and how would we know?
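
To make the local-inference point concrete, here is a minimal sketch of running a small open-weight model on your own hardware using the Hugging Face transformers library. The specific model named is an illustrative choice, not a recommendation; any similarly sized open-weight model would work the same way.

    # Minimal sketch: local inference with an open-weight model, so each query
    # runs on your own hardware instead of a remote data center.
    # Assumes the `transformers` and `torch` packages are installed; the model
    # name below is an illustrative small open-weight model on Hugging Face.
    from transformers import pipeline

    # The weights are downloaded once and cached; later runs reuse the local copy.
    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-0.5B-Instruct",  # swap in any open-weight model
    )

    prompt = "Summarize the main themes in these community comments: ..."
    result = generator(prompt, max_new_tokens=200)
    print(result[0]["generated_text"])

This doesn't settle the net-energy question, but it illustrates why per-query impact depends heavily on how and where a model is run.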

Regardless of the per-query environmental impact, there is a clear reason for concern at the macro level: the big tech companies are drastically ramping up their energy usage to train and host AI services. They're discarding their 2030 climate goals and investing heavily in potentially risky energy sources, like nuclear, to meet demand.

As the tech companies race ahead to win market share, the ethical concerns surrounding AI are likely to evolve, but unlikely to go away completely. AI development is outpacing the capacity of many governments to set effective rules to regulate it.

Meanwhile, the concentration of AI development in relatively few countries and corporations risks reinforcing digital colonialism, in which tools and norms are exported without regard for local contexts, languages, or values. Failure to support underrepresented languages risks excluding entire communities from civic engagement.

Beyond language, several members of the People Powered community have found that mainstream AI models don't reflect their cultures, either. As Charlie Martial Ngounou of AfroLeadership put it, "In African countries, most processes are not really digitized to begin with. You don't really have the data to feed AI models…So we'll end up adopting models trained on another kind of data with no real connection to our own reality, because our data is missing." In participatory programs, low baseline digital literacy combined with AI agents participating on people's behalf could then result in a situation where the process has essentially been "hijacked, colonized once more."

Throughout this guide, we will spotlight the more ethical alternatives that people are working on, so you can make choices that align with your values. Here are some common concerns about today's AI, and emerging approaches to mitigate them:

[Table: common concerns about today's AI and emerging mitigation approaches]

The mitigation approaches here are not perfect solutions: many of the examples above lag years behind state-of-the-art commercial AI models, for example. They can also be less widely available and more difficult for novices to use.

Not all concerns about AI have a corresponding and sufficient mitigation approach. For example, AI is already replacing and reducing many people's jobs. While the concept of creative destruction predicts this phenomenon, and suggests that new types of jobs will emerge as a result, the years ahead could be a period of great economic suffering and instability for many people while society transitions.

Participatory programs introduce additional constraints and contextual challenges. On top of all of the above, AI used in participation may run into data scarcity: a community's data may not be sufficiently digitized and represented in training sets. AI may also fail to pick up on important contextual meanings, like the acronyms prevalent throughout the public and social sectors.
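
One lightweight way to illustrate the acronym problem, and a partial workaround, is to expand known abbreviations before text ever reaches a model. This is a sketch of our own, not a prescription from any particular platform, and the glossary entries are hypothetical examples:

    # Illustrative sketch: expand sector-specific acronyms before text reaches
    # an AI model, so the model isn't left guessing at local meanings.
    import re

    # Hypothetical glossary; a real program would maintain its own.
    GLOSSARY = {
        "PB": "participatory budgeting",
        "CSO": "civil society organization",
        "RFP": "request for proposals",
    }

    def expand_acronyms(text: str, glossary: dict[str, str]) -> str:
        """Replace each whole-word acronym with 'ACRONYM (expansion)'."""
        for acronym, expansion in glossary.items():
            pattern = rf"\b{re.escape(acronym)}\b"
            text = re.sub(pattern, f"{acronym} ({expansion})", text)
        return text

    comment = "Our CSO submitted a PB proposal before the RFP deadline."
    print(expand_acronyms(comment, GLOSSARY))
    # Our CSO (civil society organization) submitted a PB (participatory
    # budgeting) proposal before the RFP (request for proposals) deadline.

A glossary like this won't resolve deeper cultural context, but it shows how much local knowledge even a simple pre-processing step has to encode.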

The sheer speed at which the AI industry is moving presents another challenge. Many local governments are still in the process of "digital transformation", which includes an enormous amount of work to "upskill" the technical literacy of millions of civil servants. The speed and complexity of preparing data and AI systems, evolving responsible data practices, and building AI literacy all complicate adopting AI in these settings.

To ensure that AI systems serve all communities equitably, and don't reinforce existing disparities, AI accountability must be embedded in broader governance frameworks. This includes clear institutional mandates, regulatory oversight, and mechanisms for redress. UNDP, for example, supports countries in developing such ecosystems through its AI policy guidance and capacity-building programs.
