
Specification


Authors: BlockScience and SDF, July 2023

Introduction

A notebook containing an end-to-end example implementation for this document can be found in the BlockScience/scf-voting-mechanism GitHub repository (see Resources below).

General Definitions

The admissible user actions for the Trust Bonus Module in each round are the following (a short sketch of both actions is given after the list):

  • Assign Trust to another User:

    • Creates a new edge in the trust graph

  • Remove Trust from another User:

    • Requires first having assigned Trust to that User

    • Removes an edge from the trust graph
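Both actions can be sketched as plain edge insertions and removals on the adjacency mapping used by the example implementation below (each key is the trusting user, the value is the list of users they trust). The helper names here are illustrative and not part of the module's API:

# Minimal sketch of the two admissible actions on a trust graph,
# represented as {trusting_user: [trusted_users, ...]}.

def assign_trust(graph: dict[str, list[str]], truster: str, trustee: str) -> None:
    """Assign Trust: adds the edge truster -> trustee (no-op if it already exists)."""
    edges = graph.setdefault(truster, [])
    if trustee not in edges:
        edges.append(trustee)

def remove_trust(graph: dict[str, list[str]], truster: str, trustee: str) -> None:
    """Remove Trust: deletes the edge truster -> trustee; assumes it was assigned first."""
    graph[truster].remove(trustee)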

Type of PageRank

For the PoC we’ve selected a Scaled Canonical PageRank algorithm for the Trust Bonus function. This choice is justified by its familiarity (the algorithm is relatively well-known throughout the technical community) and its simplicity (it is straightforward to reason about, and several out-of-the-box implementations are available).

As of now, it uses a damping factor of 0.85 and uniform seeding across the trust graph. The results are min-max normalized, which means that the per-user trust scores will range between 0 and 1.
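For reference, this is a sketch of those choices in standard PageRank notation (restating the paragraph above, not an additional definition from the PoC), where N is the number of users in the trust graph, B_u is the set of users who trust u, and L(v) is the number of users that v trusts:

$$
PR(u) = \frac{1 - d}{N} + d \sum_{v \in B_u} \frac{PR(v)}{L(v)}, \qquad d = 0.85
$$

$$
\mathrm{TrustBonus}(u) = \frac{PR(u) - \min_{w} PR(w)}{\max_{w} PR(w) - \min_{w} PR(w)}
$$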

We expect this choice to be updated over time as alternatives are explored, such as an Aggregated Personalizing PageRank. It is also possible that new formulations will emerge as we build out the testing apparatus and refine the module against the desired properties and the influx of real data.
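For orientation only: networkx's pagerank already exposes a personalization hook, so a non-uniformly seeded variant could be sketched as below. This is a hypothetical illustration of the personalization mechanism, not the PoC's choice and not the full aggregated formulation; the function name and seed weights are made up for the example.

import networkx as nx

def personalized_trust_scores(raw_graph: dict[str, list[str]],
                              seed_weights: dict[str, float]) -> dict[str, float]:
    """Hypothetical sketch: biases teleportation towards selected users
    instead of seeding uniformly across the trust graph."""
    G = nx.from_dict_of_lists(raw_graph, create_using=nx.DiGraph)
    return nx.pagerank(G, alpha=0.85, personalization=seed_weights)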

Example Implementation in Python

# 0) Imports & type aliases

import networkx as nx

# Type aliases (user UUIDs are represented as strings here)
UserUUID = str
VotingPower = float

# 1) Definitions

# Key is the Trusting User; the value is the list of Users being Trusted
TrustGraph = dict[UserUUID, list[UserUUID]]

def compute_trust_score(raw_graph: TrustGraph) -> dict[UserUUID, float]:
    """
    Computes a Trust Score based on the Canonical PageRank.

    This is done by computing the PageRank on the whole Trust Graph
    with default arguments and scaling the results through MinMax.

    The resulting scores will be contained between 0.0 and 1.0
    """
    G = nx.from_dict_of_lists(raw_graph,
                              create_using=nx.DiGraph)

    pagerank_values = nx.pagerank(G,
                                  alpha=0.85,
                                  personalization=None,
                                  max_iter=100,
                                  tol=1e-6,
                                  nstart=None,
                                  weight=None,
                                  dangling=None)

    # MinMax scaling; assumes at least two distinct PageRank values,
    # otherwise max_value == min_value and the division is undefined.
    max_value = max(pagerank_values.values())
    min_value = min(pagerank_values.values())
    trust_score = {user: (value - min_value) / (max_value - min_value)
                   for (user, value) in pagerank_values.items()}
    return trust_score

# 2) Backend inputs

TRUST_GRAPH = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A']}
TRUST_BONUS_PER_USER = compute_trust_score(TRUST_GRAPH)

# 3) Implementing an Oracle

def trust_score(user_id: UserUUID, _2, _3) -> VotingPower:
    """
    Oracle for the Trust Bonus.

    Only the user id is used; the remaining positional arguments are
    placeholders kept to match the expected oracle signature.
    """
    return TRUST_BONUS_PER_USER[user_id]
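As a usage sketch against the toy TRUST_GRAPH above (the second and third oracle arguments are unused, so None placeholders are passed; the exact numbers depend on the PageRank computation and are not reference results):

# Example oracle calls
print(trust_score('A', None, None))  # min-max-scaled value in [0.0, 1.0]
print(trust_score('C', None, None))  # the user with the highest raw PageRank maps to 1.0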

Resources

  • BlockScience/scf-voting-mechanism GitHub repository: https://github.com/BlockScience/scf-voting-mechanism

  • Exploring Subjectivity in Algorithms (SourceCred, Medium): https://medium.com/sourcecred/exploring-subjectivity-in-algorithms-5d8bf1c91714