Governatooorr Guardrails: Practical considerations when introducing automated governance agents

Kelsie Nabben
BlockScience
Nov 9, 2023

--

Kelsie Nabben and Michael Zargham
November, 2023

Introduction

I’m not sure of the origin of the term “governator”. It has been around since at least 2003, when a post described Arnold Schwarzenegger, The Terminator, becoming “The Governator” of California. I imagine it stems from sci-fi. More broadly, in the context of blockchain governance, a governator describes a robotic governance agent. People instruct the governator on the parameters of its actions, and it proceeds to act as instructed, within the boundaries of its prescribed reporting requirements.

The focus of this blog is on the practical considerations of seeking to redirect human participation via automation in blockchain governance. It is written from the perspective of a systems engineer and an ethnographer of emerging technologies. To explore this, we draw on an ethnographic case study of a governator: an AI machine agent with a human minder (as opposed to the alternative model of a “human agent with a machine minder”, such as AI-driven security models). ‘Governatooorr’ is a large language model (LLM) AI agent that acts as a personal governance delegate, allowing people to outsource their attention through automation. In effect, the Governatooorr is about achieving more benefits (such as rewards, influence, retroactive claims, etc.) from participation in blockchain protocol governance, for less effort.

Governatooorr is intended by the team at Valory (“architecting autonomy”) to be an experimental agentic tool that provokes discussion on the potential, as well as the dangers, of governance processes in which humans technically no longer need to read governance proposals or act upon a governance system to determine the outcomes of organisations.

“The Governator” animated projects (film, TV, and comics) (Jennings, n.d.).

The Problem Statement

In the context of blockchain-based Decentralised Autonomous Organisations (DAOs), a major problem is that “governance is overwhelming”. For example, many DAOs require repeated attention from all participants in proposals and votes to make minor decisions. The attention required to participate in distributed, digital governance across multiple projects is substantial, as each is typically highly technical and fast paced, and governance decisions are often irrevocable.

Introducing Governatooorr

Governatooorr began as a playful idea at a hackathon with underlying potential. “This light-hearted service’s design technically could mean humans no longer need to read governance proposals to vote and decide organizations’ outcomes” (Valory, 2023). Valory’s goal was to use it to provoke discussion about how to build open-source AI for the betterment of humankind. Now, Governatooorr is being positioned as the AI solution that “everyone in crypto” needed to address governance apathy and the attention required to participate in governance. The project website presents it as “an autonomous, AI-powered delegate for DAO governance” (Valory AG, 2023).

Governatooorr consists of OpenAI’s ChatGPT LLM as the AI model, and an open-source software framework (called ‘Olas’, previously Autonolas) that manages the off-chain processes for creating autonomous agents and services. Governatooorr works as follows: (1) the user connects their cryptocurrency wallet and delegates their governance rights in a certain DAO, (2) transfers cryptocurrency tokens to the agent to pay for the service, and (3) chooses its setting (at this stage, a binary ‘good’, for acting in favor of governance proposals, or ‘evil’, for biasing towards whichever governance proposal option would be worst for the DAO). Then, (4) the AI agent uses the stated setting to infer the user’s preferences about DAO governance proposals. Governatooorr then operates autonomously to execute its vote with the user’s tokens, as there is currently no option for human operators to be kept in the loop. Of course, having only binary ‘pass’ or ‘fail’ options, and equating these with good and bad governance, is reductive at best. It reduces people’s ability to actually imbue the AI with a sense of what humans want and minimises meaningful control over their Governatooorr instance. This implementation may be a function of the limited time available at a hackathon, as well as a pointer to more recent work on AI prediction market services in DAO decision-making (Autonolas, 2023).
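The four-step delegation flow described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration for the reader — the class and method names (`GovernatooorrAgent`, `delegate`, `fund`, `set_alignment`, `vote`) are invented for this sketch and do not come from Valory’s actual codebase or the Olas framework.

```python
# Hypothetical sketch of the Governatooorr delegation flow.
# All names are illustrative; none come from Valory's actual code.

class GovernatooorrAgent:
    """A toy stand-in for an AI governance delegate."""

    VALID_ALIGNMENTS = {"good", "evil"}

    def __init__(self):
        self.delegated_wallet = None
        self.balance = 0
        self.alignment = None

    def delegate(self, wallet_address: str) -> None:
        # Step 1: the user connects a wallet and delegates voting rights.
        self.delegated_wallet = wallet_address

    def fund(self, tokens: int) -> None:
        # Step 2: the user transfers tokens to pay for the service.
        self.balance += tokens

    def set_alignment(self, alignment: str) -> None:
        # Step 3: the user picks the binary 'good' / 'evil' setting.
        if alignment not in self.VALID_ALIGNMENTS:
            raise ValueError(f"alignment must be one of {self.VALID_ALIGNMENTS}")
        self.alignment = alignment

    def vote(self, proposal_is_beneficial: bool) -> str:
        # Step 4: the agent votes autonomously -- no human-in-the-loop review.
        if self.delegated_wallet is None or self.alignment is None:
            raise RuntimeError("agent is not fully configured")
        if self.alignment == "good":
            return "for" if proposal_is_beneficial else "against"
        return "against" if proposal_is_beneficial else "for"
```

Note how the binary alignment collapses all of a user’s governance preferences into a single branch in `vote` — which is precisely the reductiveness the paragraph above criticises.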

Minimisation Through Automation

The idea of governance automation stems from the prevalent concept in Web3 cultures of “governance minimization”. Governance minimization seeks to reduce the need for human involvement in governance activities wherever possible. Consequently, governators that are structured as a “machine contributor with a human minder” actually increase the governance surface, as the subjective inputs and margins for trust are expanded by the plethora of agents and human-agent interactions in the system. The real power to manipulate and game such a system lies with the builders, who determine how the AI is constrained and can be prompted, as user trust is delegated to the agent. In reality, the architectural considerations of an LLM agent — how it is designed, deployed, and maintained in relation to a context and constituency — are multifaceted and not easily legible.

Gaming the Governatooorr

Furthermore, if Governatooorr in its current state became a majority delegate in a DAO, users that can track whether AI agents are set to ‘green’ (good) or ‘red’ (evil) would be able to engineer, submit, and pass proposals with higher probability. If large DAOs adopt this approach, it could basically become a giant prompt engineering competition for whoever can convince the bots to give them money. A significant challenge is that the consequences may take time to emerge, meaning the tools could propagate before issues manifest. The project’s plan is to further develop the depth and complexity of the product as adoption grows, including moving beyond binary options for how users state their preferences. How the relationship between Governatooorr, DAO participants, and governance outcomes is structured also depends on the projects implementing this model.

The use of Governatooorr redirects human attention from the governance surface of the DAO to the governance surface of the AI agent. Instead of reading and analysing proposals and casting a vote within the DAO, users engage with the agent itself: delegating to it, programming its preferences, and, as a final step, reviewing its voting decisions. In practice, the goal of automating attention requires attention to be redirected to the governance surface of the automation mechanism (e.g. an LLM), rather than replacing or removing the need for human attention. The intention behind the development of these systems is that, eventually, AI will operate more and more autonomously, in line with the interests of its deployer. In this context, automation requires an initial investment of operator attention to instruct and govern an AI agent. Following this, the operator’s attention is theoretically emancipated to focus on other things.

This has several consequences. Firstly, automation still requires monitoring and maintenance to keep code operational and aligned with the interests of its human deployers. This redirection of attention also changes the nature of governance and community; some could argue that it detracts from critical components of digital governance. Outsourcing attention to machine agents removes a coordinative quality of distributed, digital governance in DAOs that is fundamental to collective action: member participation in forum discussion of proposals before they go to a vote, and the co-location of attention, mediated by digital communications networks. This shared attention over time creates and sustains relationships in online communities, despite geographic distribution.

Automation Paradox, Brian Douglas.

AI Traction in Web3

From its initial playful state, Governatooorr has garnered interest and traction from crypto people on Twitter, as well as various crypto x AI conference award nominations (lukasschor.eth, 2023; Valoryag, 2023e). The agentic tool would now be live for other crypto projects to integrate into their Snapshot and play with, had the integration not been so complicated and time-consuming. Now, it is being developed into a full-scale product offering by the company Valory, whose stated mission is to “enable communities, organizations and countries to co-own AI systems, beginning with decentralized autonomous agents” (Valory, n.d.). The core belief is that software automation will help better humanity by directing attention towards higher-order, meaningful tasks, thus boosting productivity and improving coordination. According to one DAO, AI agents like Governatooorr are “revolutionising” delegative voting and decision-making by addressing the challenge of voter participation in DAO governance (Metropolis, 2023).

The Valory team is working on ways to improve the reliability of, and remove dependencies in, Governatooorr — such as networks of multiple AI agents that reliably interpret and reason about governance decisions, rather than a single agent that gives varying answers (Valoryag, 2023c). However, such an approach could also have adverse effects on DAO governance outcomes if it leads to binding decisions, such as moving money, being automatically executed on the blockchain and then exploited.

Automating Attention

This case study shows that “governators” are already here in blockchain governance. “Decentralised autonomous agents are emerging as the most prolific users of blockchains…” (Oaksprout, 2023). “Already now, more than half of all @Safe transactions on @gnosischain are made by AI agents” (lukasschor.eth, 2023). This includes automated governance agents with human minders, as in the case of Governatooorr, as well as governators as Ellie Rennie describes them, where AI agents augment human workflows by providing checks and balances to mitigate risks and meet objectives.

The Governatooorr experiment reveals the appeal of AI automation in managing attention economies, as well as the unforeseen limitations and possible pitfalls of attempting to outsource attention via automation. Automation requires redirection of attention, rather than minimization. Furthermore, automation of the governance surface of a digital community, through for example AI agents, reshapes the dynamics of attention in that system. When automation enthusiasm is not grounded in broader social contexts, and in how actors can act upon a system to contribute to its governance, vulnerabilities can emerge.

Augmentative AI

In order for AI automation to be augmentative, careful consideration of how it is constituted and can be prompted (and prompt-engineered) is required. One model for how an AI agent can augment organisational and governance dynamics is for it to be structured to act more like a “think tank” or knowledge network. In this way, AI could ingest seminal organisational documentation and personalised survey responses to provide individual voters with personalised recommendations on whether to vote for or against a proposal based on their subjective values, preferences, context, and goals, as well as prompts about voting windows and processes (more like Rennie’s model of a human agent with an AI minder). Alternatively, the agent could suggest submitting a different proposal to improve the discussion based on its minder’s preferences (although this potentially introduces a new vector for attention capture). Whether humans try to steer AI agents to participate for them, or AI agents augment us, a challenge now is how to maintain alignment between individual and group-level preferences. This is required for AI to meaningfully and constructively augment human attention and complex governance dynamics at scale.
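The “think tank” model above can be sketched minimally: instead of voting autonomously, the agent scores a proposal against a voter’s stated values and returns advice, leaving the actual vote with the human. This is an illustrative assumption only — the function name `recommend_vote` and the keyword-overlap scoring are invented stand-ins for what would, in practice, be an LLM reading proposals against organisational documentation and survey responses.

```python
# Hypothetical sketch of the "AI minder" recommendation model: the agent
# advises rather than votes. Naive keyword overlap stands in for an LLM's
# reading of the proposal against the voter's values and context.

def recommend_vote(proposal_text: str, voter_values: list[str]) -> dict:
    """Return an advisory recommendation, not a binding vote."""
    matches = [v for v in voter_values if v.lower() in proposal_text.lower()]
    score = len(matches) / max(len(voter_values), 1)
    return {
        "recommendation": "for" if score >= 0.5 else "against",
        "matched_values": matches,
        # The human retains the final decision -- the augmentative pattern.
        "note": "Advisory only; the voter casts the actual vote.",
    }
```

The design choice here is the return type: the agent outputs a recommendation plus its reasoning trace (`matched_values`), rather than executing a transaction, which keeps the human in the loop by construction.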

Acknowledgements:

Thank you to Oaksprout the Tan and David Minarsch from Valory for participation in the research and feedback.

Note: This post is an excerpt from a case study in a working paper by Ronen Tamari, Nathan Schneider, Michael Zargham, and myself on attention economies and governance surfaces. The framing and insights have been adapted for this blog post with my co-authors’ permission.

Selected References:

Autonolas [@autonolas]. (2023, October 18). Contribute is levelling up! veOLAS holders can ask questions about the future of Olas and guide DAO decisions. This enhancement leverages services refined during the Prediction Agent Hackathon, thereby boosting the demand for AI services delivered via Mechs and Olas.👇 https://t.co/KGv7zGk0di [Tweet]. Twitter. https://twitter.com/autonolas/status/1714635015412416551

“Governatooorr.” n.d. Accessed August 2, 2023. https://governatooorr.autonolas.network.

lukasschor.eth, Safe [@SchorLukas]. (2023, September 28). AI agents won’t be using EOAs, they will require the programability and safeguards enabled by smart accounts. Already now, more than half of all @Safe transactions on @gnosischain are made by AI agents. 🤖 [Tweet]. Twitter. https://twitter.com/SchorLukas/status/1707335270847201713

“METROPOLIS 🤖 on Twitter.” 2023. Twitter. May 24, 2023. https://twitter.com/metropolis_dao/status/1661438856036769792.

Nabben, Kelsie. (2021). “DAO Vulnerabilities: A multi-scale DAO ecosystem mapping tool towards computer-aided governance”. Substack. Available online: https://kelsienabben.substack.com/p/dao-vulnerabilities-a-multi-scale?justPublished=true.

Nabben, Kelsie. (2023). “Constituting an AI: Accountability Lessons from an LLM Experiment”. SSRN. Available online: https://ssrn.com/abstract=4561433.

Oaksprout (code, capital) [@tannedoaksprout]. (2023, September 28). Decentralised autonomous agents are emerging as the most prolific users of blockchains. Agents need accounts too, and they prefer @Safe. [Tweet]. Twitter. https://twitter.com/tannedoaksprout/status/1707342987938635808.

Rennie, Ellie. (2023, October 1). “On Governators”. Medium (blog). Available online: https://ellierennie.medium.com/on-governators-92ec2697829d.

Valory. N.d. “Architecting Autonomy.” Valory. Accessed August 2, 2023. https://www.valory.xyz.

Valory AG. 2023. “Introducing Governatooorr: An Autonomous AI-Powered Delegate for DAO Governance.” Medium (blog). April 5, 2023. https://medium.com/@ValoryAG/introducing-governatooorr-9edcbc212eb2.

Valoryag. “Valory on Twitter.” 2023a. Twitter. April 5, 2023. https://twitter.com/valoryag/status/1643670683287977985.

Valoryag. “Valory on Twitter.” 2023b. Twitter. April 5, 2023. https://twitter.com/valoryag/status/1643670680796549121.

Valoryag. “Valory on Twitter.” 2023c. Twitter. April 5, 2023. https://twitter.com/valoryag/status/1643670699171790849.

Valoryag. “Valory on Twitter.” 2023d. Twitter. April 5, 2023. https://twitter.com/valoryag/status/1643670688451166208.

Valory is hiring [@valoryag]. (2023e, August 30). We have been shortlisted for an award at @CogX_Festival, the world’s biggest festival of AI and transformational tech. Our CTO, @Valorianxyz, is looking forward to meeting folks there on 12th September 2023! https://t.co/MXwNnG4gRQ [Tweet]. Twitter. https://twitter.com/valoryag/status/1696907549243503011.

--

Kelsie Nabben
BlockScience

Social scientist researcher in decentralised technologies and infrastructures. RMIT University Digital Ethnography Research Centre / Blockchain Innovation Hub