Understanding the ratings

Evaluation criteria

The ratings are based on six criteria: cost, capacity requirements, features, accessibility, ethics and transparency, and track record and reliability. The score for each criterion was standardized to a 1-100 scale, and the six criteria were weighted equally (a sketch of this aggregation follows the criteria list below).

  1. Platform cost: The cost of the software. Platforms with lower software and configuration costs (if any) received more points.

  2. Capacity requirements: This criterion focuses on the internal capacity an organization needs to configure and use a platform for participatory processes. Five variables were evaluated:

    1. Tech expertise required for configuration. Platforms received a higher score if they don’t require an expert to configure.

    2. Tech expertise required for maintenance. Platforms also scored higher if they don’t require an expert for maintenance.

    3. Tech support. Platforms offering full-service support received a higher score. Support may include onboarding and help integrating the platform into existing processes (virtual or on-demand).

    4. Process planning guidance. This includes templates for specific types of participatory processes, as well as other back-end tools that let users design every step of a participatory activity or process to fit their needs.

    5. Hosting capacity. Platforms that provide flexible hosting options received a higher score.

  3. Features: Platforms received a point for each feature they offer, and an N/A where a feature is not offered or committee members could not determine whether it is (see the tallying sketch following the criteria list). Thirteen possible features were identified:

    1. Idea collection. This allows a person to submit a proposal.

    2. Survey.

    3. Proposal-drafting. This allows multiple people to co-create a proposal (e.g., drafting a policy in a shared space).

    4. Voting.

    5. Discussion forum.

    6. Commenting and sharing. This refers to social engagement features.

    7. Mapping (allowing projects and user contributions to be connected visually to a particular location).

    8. User timeline. Tools that show users where they are in the process (e.g., the idea collection stage or the voting stage).

    9. Notifications to participants. 

    10. User verification.

    11. Data visualization (e.g., heat maps, or visuals that indicate the strength of support for an idea).

    12. Quantitative data analysis.

    13. Sentiment analysis (categorizing the emotional tone of discussions).

    Note: This criterion is based on the type and quantity of features, i.e., whether or not a platform offers each of these features. It does not rate features on quality. In the future, we may explore how to evaluate feature quality.

  4. Accessibility: Platforms were given more points if they allow meaningful participation, including by those facing barriers. Seven variables were evaluated:

    1. Number of countries where the platform has been used. (This metric indicates whether a platform is adaptable to different contexts.)

    2. Functionality in multiple languages.

    3. Accessibility for people with disabilities. (For example, whether clients can customize the platform for people with visual and hearing disabilities.)

    4. Hybrid integration with in-person activities (platforms that better integrate with in-person activities received a higher score).

    5. Browser and technology requirements for users (platforms compatible with the most-used browsers or technology received a higher score). 

    6. Connectivity requirements (platforms suitable for communities with connectivity challenges received a higher score).

    7. Degree of mobile device responsiveness or compatibility (platforms that are fully functional on mobile devices received a higher score).

  5. Ethics and transparency: Platforms received more points if they publish rules governing data use, protection of personal information, and content moderation. Five variables were considered:

    1. Open source: Is the code published under an open source license? Is the source code easy to find and recently updated?

    2. Data policy: Data policy transparency (e.g., do platforms inform users how their data are used?) and ethical use (e.g., whether platforms sell user information to third parties).

    3. Data protection: Are collected data protected from leaks and outside use?

    4. Transparency of moderation: How transparent are the content moderation services? Do they include moderation against hate speech? 

    5. Raw data export: In theory, the ability to export raw data increases both transparency and autonomy.

  6. Track record and reliability: Platforms were given more points if they have been on the market longer, indicating maturity and reliability. They also received more points if they have a diverse client base. Three variables were considered:

    1. Length of time on the market. (A new platform that makes it into the ratings will progressively gain points over time.)

    2. Profile and breakdown of institutional users (for example, governments, schools, CSOs).

    3. Diversity of contributors. (This metric indicates the extent to which a platform relies on multiple actors to be sustained. For instance, if only one company contributed to a platform and that company failed, the platform would disappear.)
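
To make the features tally concrete, here is a minimal sketch in Python of how that criterion could be scored, assuming one point per offered feature and zero for an N/A. The feature keys and data layout are illustrative; the committee's actual scoring procedure is not published as code.

```python
# Illustrative sketch of the features tally (assumed logic; the
# committee's scoring procedure is not published as code). A platform
# earns one point per feature it offers; a feature marked "N/A" (not
# offered, or not determinable) earns nothing.

FEATURES = [
    "idea_collection", "survey", "proposal_drafting", "voting",
    "discussion_forum", "commenting_and_sharing", "mapping",
    "user_timeline", "notifications", "user_verification",
    "data_visualization", "quantitative_data_analysis",
    "sentiment_analysis",
]

def feature_score(platform: dict) -> int:
    """Count the features a platform offers; an N/A contributes no points."""
    return sum(1 for feature in FEATURES if platform.get(feature) is True)

# A platform offering 4 of the 13 features earns 4 raw points.
example = {
    "idea_collection": True, "survey": True, "voting": True,
    "mapping": True, "sentiment_analysis": "N/A",
}
print(feature_score(example))  # -> 4
```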
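
Similarly, here is a minimal sketch of the overall aggregation described at the top of this section, assuming min-max scaling onto the 1-100 range. The report specifies only that each criterion's score was standardized to 1-100 and that the six criteria carry equal weight; the scaling formula below is one plausible implementation, and the example scores are invented.

```python
# Hypothetical aggregation of the six criteria (assumed min-max scaling;
# the report states only a 1-100 scale and equal weights).

def standardize(raw: float, lowest: float, highest: float) -> float:
    """Map a raw criterion score onto the 1-100 scale via min-max scaling."""
    return 1 + 99 * (raw - lowest) / (highest - lowest)

def overall_rating(standardized_scores: dict) -> float:
    """Equal weighting of criteria reduces to a simple average."""
    return sum(standardized_scores.values()) / len(standardized_scores)

print(standardize(3, 0, 10))  # a raw score of 3 on a 0-10 range -> 30.7

# Example with six criterion scores already standardized to 1-100.
scores = {
    "platform_cost": 80, "capacity_requirements": 65, "features": 72,
    "accessibility": 90, "ethics_and_transparency": 55,
    "track_record_and_reliability": 40,
}
print(round(overall_rating(scores), 1))  # -> 67.0
```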


A coach teaches a resident in Rosario, Argentina, how to engage with the government online.