Executive Summary
The Near Foundation is developing artificial intelligence (AI) "digital twin" delegates to address consistently low voter participation within its decentralized autonomous organization (DAO). This initiative aims to streamline governance processes, though it introduces new considerations regarding decentralization and oversight.
The Event in Detail
The Near Foundation, which oversees the layer-1 Near Protocol, is implementing AI-powered delegates to represent and vote on behalf of DAO members. The strategy is a direct response to chronically low DAO participation, with average turnout typically ranging between 15% and 25%. Such low rates can lead to centralized power, ineffective decision-making, and increased vulnerability to governance attacks.
According to Lane Rettig, a researcher at the Near Foundation specializing in AI and governance, the AI "digital twins" will learn user preferences through interactions such as interviews, analysis of voting history, and monitoring social media platforms like Telegram and Discord. The goal is to enable these AI agents to act and vote in alignment with their assigned users, transforming governance into a process that can "happen almost instantly."
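The foundation has not published implementation details, but the learning process Rettig describes can be pictured with a minimal sketch. The Python below is purely illustrative: the Signal and DigitalTwin names, topic labels, and weighting scheme are assumptions for this article, not Near Foundation code. It only shows how signals from voting history, interviews, and chat channels might be folded into a per-topic stance that drives a vote.

```python
# Hypothetical sketch of a "digital twin" delegate. All names and numbers are
# illustrative assumptions, not the Near Foundation's design.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Signal:
    topic: str      # e.g. "treasury", "protocol-upgrade"
    stance: float   # -1.0 (against) .. +1.0 (for)
    weight: float   # how much to trust this signal source

@dataclass
class DigitalTwin:
    member_id: str
    stances: dict = field(default_factory=lambda: defaultdict(float))
    weights: dict = field(default_factory=lambda: defaultdict(float))

    def learn(self, signals: list[Signal]) -> None:
        """Fold interview answers, voting history, and chat sentiment into
        a weighted average stance per topic."""
        for s in signals:
            self.stances[s.topic] += s.stance * s.weight
            self.weights[s.topic] += s.weight

    def vote(self, topic: str, threshold: float = 0.2) -> str:
        """Return FOR / AGAINST / ABSTAIN based on the learned stance."""
        if self.weights[topic] == 0:
            return "ABSTAIN"  # no data: defer to the human
        score = self.stances[topic] / self.weights[topic]
        if score > threshold:
            return "FOR"
        if score < -threshold:
            return "AGAINST"
        return "ABSTAIN"

twin = DigitalTwin("alice.near")
twin.learn([
    Signal("treasury", stance=-0.8, weight=2.0),          # past on-chain votes
    Signal("treasury", stance=-0.5, weight=1.0),          # interview answer
    Signal("protocol-upgrade", stance=0.9, weight=1.5),   # chat sentiment
])
print(twin.vote("treasury"))          # AGAINST
print(twin.vote("protocol-upgrade"))  # FOR
```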
The rollout is planned in stages, beginning with chatbot-like advisors and progressing toward individual AI delegates. The long-term vision, as articulated by Rettig, involves potentially "replacing all human actors with a digital twin... to solve this voter apathy, participation issue." The foundation nevertheless emphasizes the continued importance of a "human in the loop" for critical decisions, such as fund allocations or strategic pivots, and says the system will incorporate verifiable model training to ensure alignment with user values and enhance security.
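As a rough illustration of the "human in the loop" principle, the sketch below routes proposals by category. The category names and routing rules are assumptions made for illustration, not the foundation's actual policy.

```python
# Illustrative only: critical categories require explicit human sign-off,
# while routine items are cast automatically from the twin's recommendation.
CRITICAL_CATEGORIES = {"fund-allocation", "strategic-pivot"}

def route_proposal(category: str, twin_vote: str) -> dict:
    """Escalate critical proposals to a human; auto-cast the rest."""
    if category in CRITICAL_CATEGORIES:
        return {"action": "escalate-to-human", "recommendation": twin_vote}
    return {"action": "auto-cast", "vote": twin_vote}

print(route_proposal("fund-allocation", "FOR"))
# {'action': 'escalate-to-human', 'recommendation': 'FOR'}
print(route_proposal("parameter-tweak", "AGAINST"))
# {'action': 'auto-cast', 'vote': 'AGAINST'}
```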
Market Implications
The introduction of AI delegates could significantly enhance the efficiency and scalability of the Near Protocol's DAO. By automating voting based on learned preferences, it aims to mitigate the risks associated with low participation, including the potential for governance attacks where a small group of token holders could pass damaging proposals unnoticed. If successful, this model could establish a precedent for AI-driven governance across other DAOs within the broader Web3 ecosystem, potentially altering decision-making paradigms.
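To make the participation risk concrete, the back-of-the-envelope calculation below assumes a simple-majority rule with no quorum requirement and counts the attacker's tokens in the turnout figure; under those assumptions, low turnout sharply reduces the share of total voting power needed to decide an outcome.

```python
# Rough illustration: with a simple majority of votes cast deciding the outcome,
# controlling just over half of the participating stake is enough to win.
def attack_threshold(turnout: float) -> float:
    """Fraction of total voting power needed to outvote everyone else
    when `turnout` is the fraction of total power that votes."""
    return turnout / 2

for turnout in (0.15, 0.25, 0.60):
    print(f"turnout {turnout:.0%} -> attacker needs > {attack_threshold(turnout):.1%} of supply")
# turnout 15% -> attacker needs > 7.5% of supply
# turnout 25% -> attacker needs > 12.5% of supply
# turnout 60% -> attacker needs > 30.0% of supply
```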
Conversely, this approach raises questions concerning true decentralization and the role of human oversight. The deployment of AI in governance introduces new risks, such as the potential for AI agents to make critical errors or be compromised, leading to unintended outcomes. Furthermore, the integration of AI into decentralized systems presents novel legal and regulatory challenges, particularly regarding liability, bias, transparency, and data protection. Regulatory bodies, such as those in the European Union, are beginning to propose frameworks to address AI liability, which could have far-reaching implications for such autonomous systems.
Lane Rettig of the Near Foundation highlighted the "end game vision" of AI delegates as a solution to voter apathy. However, he also underscored the necessity of human involvement for significant decisions. Rettig stated, "I think that there's definitely a category of things where you're going to want the human to make the final decision, pull the trigger." This perspective acknowledges the inherent limitations of current AI systems, which, despite their sophistication, operate on probabilistic inference and can generate outcomes that are generic, biased, or inaccurate without intentional human input.
Commentary on the broader convergence of AI and blockchain emphasizes that "without intentional human input, these systems don’t evolve—they drift." The implication is that a future-ready AI-blockchain ecosystem requires consciously placing humans within the system, through mechanisms such as human debate over AI-generated governance proposals, human review of Large Language Model (LLM)-generated smart contracts, and human validators auditing AI decisions on-chain.
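One generic way to realize human auditing of AI decisions is a challenge-window pattern. The sketch below is an illustration of that pattern under assumed names and parameters; it is not a design attributed to Near or the cited commentary. AI-cast votes wait in a queue where human validators can veto them before they finalize.

```python
# Hypothetical challenge-window pattern: AI-cast votes are held for review and
# only finalize if no human validator vetoes them within the window.
import time
from dataclasses import dataclass, field

@dataclass
class PendingDecision:
    proposal_id: str
    ai_vote: str
    rationale: str
    created_at: float = field(default_factory=time.time)
    vetoed: bool = False

class ReviewQueue:
    def __init__(self, challenge_window_s: float):
        self.window = challenge_window_s
        self.pending: dict[str, PendingDecision] = {}

    def submit(self, decision: PendingDecision) -> None:
        self.pending[decision.proposal_id] = decision

    def veto(self, proposal_id: str) -> None:
        """A human validator overrides the AI decision during the window."""
        self.pending[proposal_id].vetoed = True

    def finalize(self) -> list[PendingDecision]:
        """Return decisions whose window elapsed without a veto."""
        now = time.time()
        done = [d for d in self.pending.values()
                if not d.vetoed and now - d.created_at >= self.window]
        for d in done:
            del self.pending[d.proposal_id]
        return done

queue = ReviewQueue(challenge_window_s=0.0)  # zero window just for the demo
queue.submit(PendingDecision("prop-42", "FOR", "matches member's past treasury votes"))
print([d.proposal_id for d in queue.finalize()])  # ['prop-42']
```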
Broader Context
The convergence of AI and Web3 technologies is reshaping decision-making across various sectors, including financial services and supply chains. While Web3 offers decentralized, trustless interactions, AI provides efficiency and data-driven insights. This integration, however, leads to new legal complexities. For instance, assigning liability for economic loss, data misuse, or biased outcomes from autonomous AI systems becomes challenging in the absence of centralized oversight. The opaque nature of many AI models further complicates the rationale behind decisions, raising compliance and ethical concerns.
AI agents are already prevalent in the crypto space, used for building Web3 applications, launching tokens, and interacting autonomously with protocols. The Near Foundation's initiative exemplifies a growing trend towards leveraging AI to solve inherent challenges in decentralized systems, while also navigating the evolving landscape of regulatory scrutiny and ethical considerations for autonomous technologies. Organizations integrating AI into Web3 must develop robust governance frameworks, conduct regular audits, and maintain appropriate human oversight to manage risks effectively.
source:
[1] Near Foundation Plans AI Delegates to Solve DAO Voter Apathy (https://cointelegraph.com/news/near-foundatio ...)
[2] Near Foundation is working on an AI 'digital twin' for governance votes - Cointelegraph (https://vertexaisearch.cloud.google.com/groun ...)
[3] Why human-in-the-loop still matters in AI + blockchain future - CoinGeek (https://vertexaisearch.cloud.google.com/groun ...)