Guide to Digital Participation Platforms (2025) When to Use Them, How to Choose & Tips for Maximum Results

What do we mean by “AI”?

Artificial Intelligence is now an umbrella term that encompasses a range of distinct technical approaches. These systems perform tasks traditionally associated with human cognition, such as understanding language, recognizing patterns, and supporting decision-making. Simply put, we call something AI when it allows a computer to act as a human would.

Over time, technology we take for granted stops being considered "AI." Take, for example, speech processing. Decades of work in speech processing and human-computer interaction have brought computers to a point where they can decently understand how we talk (at least some of the time, and in widely spoken languages).

Our discussion of AI for participation will cover the full range of AI methods, from Natural Language Processing to Machine Learning. Much of what's currently referred to as "AI" relies on Large Language Models, a form of generative AI. We'll focus on the actual features each type of AI unlocks for participatory democracy rather than the underlying computer science.

AI Principles

Depending on whom you talk to, AI can be viewed as a panacea (cure-all) or anathema (scourge). One's opinion might even fluctuate based on the quality of one's most recent AI interaction. It's clear that this is a complicated topic, and it is only growing more so as AI developers race ahead, reshaping the world in ever-deeper ways. A wave of new organizations and initiatives has launched in recent years to address these questions.

For this reason, People Powered worked with its global community to co-develop an AI policy. The agreement acknowledges that AI is already ubiquitous in the technology platforms we rely on. Developing a policy provides a framework to ensure that the use of AI furthers the organization's mission while holding the organization accountable to its values, members, and community of beneficiaries.

Like other organizational policies on AI, the People Powered AI policy establishes core guiding principles that will remain steadfast even as the technology quickly changes. Your organization may have already conducted a similar process, or may choose to embark on one. By first clarifying the principles that matter to you and the communities you serve, you can prioritize them even when new AI releases seem tempting.

Some of the guiding principles in the People Powered AI policy include:

  • Mission Alignment: All decisions related to AI use and development will be evaluated on the potential to meaningfully advance our mission compared to their relative costs and risks.

  • Human-Centered: AI will only be used in ways that respect human dignity. Any AI tools we develop will be designed primarily to increase the equitable access to our resources.

  • Transparency: AI use should be reasonably documented and disclosed to our community. Any significant organizational decisions made or supported by AI must be disclosed. 

  • Accountability: Humans will remain ultimately responsible for all organizational decisions and actions, and will be accountable for the deployment and use of any AI outputs.

  • Sustainability: We will strive to use AI tools in ways that are as sustainable as possible, prioritizing more energy- and cost-efficient solutions and tools wherever we can.

  • Equity & Inclusion: In the development and use of AI tools we will prioritize inclusive and democratic approaches to training, testing, and algorithmic design.

  • Learning: We will approach new and existing AI tools with an experimental and innovative mindset, sharing our learnings, successes, and failures with our members.

  • Privacy and Security: We will endeavor to protect the data privacy of all those in our network, preventing personally identifying or proprietary data from being fed into public AI models.

In addition to organizational policies like People Powered’s, several global frameworks offer essential guidance for ethical and inclusive AI governance. The United Nations’ Global Digital Compact calls for universal connectivity, human rights-based digital cooperation, and safeguards for emerging technologies like AI. It emphasizes multistakeholder governance and the need to ensure that digital technologies — including AI — serve the public good and uphold democratic values. 

Complementing this, the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) provides the first global normative framework for AI ethics. It outlines principles such as proportionality, fairness, transparency, and sustainability, and calls for impact assessments, inclusive governance, and protection of data and privacy. These standards are particularly relevant for civic tech platforms and participatory democracy tools that rely on AI to engage diverse communities. 

The 2025 UNDP Human Development Report (HDR), titled A Matter of Choice: People and Possibilities in the Age of AI, offers a people-centered analytical framework for AI. It emphasizes that AI’s impact will be shaped not by what it can do, but by the choices societies make in its design, deployment, and governance. The report proposes three strategic pillars for AI-augmented human development:

  • Building a complementarity economy — where AI augments rather than replaces human capabilities. 

  • Driving innovation with intent — aligning AI development with socially valuable outcomes. 

  • Investing in capabilities that count — ensuring people have the skills and agency to thrive in an AI-enabled world.

UNDP’s own principles for AI echo and reinforce these global standards. They include commitments to human dignity, equity and inclusion, transparency, accountability, and sustainability. UNDP also emphasizes mission alignment, ensuring that AI tools support development goals and do not exacerbate inequalities or undermine civic trust. These principles are operationalized through internal guidance on risk assessment, ethical design, and responsible data practices.

Together, these frameworks provide a robust foundation for organizations deploying AI in participatory processes. By aligning with UNDP and UN-wide standards, civic tech actors can ensure that AI strengthens democratic engagement, protects rights, and contributes to inclusive and sustainable development.

Next: The Challenges AI Presents