In our last AI thought leadership article, “Unveiling Veritas Automata’s Vision for Responsible AI: The Four Laws of Ethical AI,” we described a concept inspired by Isaac Asimov and David Brin. Each in his own way, these two scientists, who were also science fiction writers, developed points of view that imagined the challenges of a universe inhabited by god-like AIs and AI-driven, autonomous robotics.
David Brin, born in 1950, the year Asimov published “I, Robot,” is a contemporary computer scientist who wrote in Artificial Intelligence Safety and Security that our civilization has learned to rise above hierarchical empowerment through the application of accountability. He wrote, “The secret sauce of our [humanity’s] success is – accountability. Creating a civilization that is flat and open and free enough – empowering so many – that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.”
Brin goes on to describe a concept we call “AI Rivals.” As Brin writes, “In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals, who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.”
Today, the AI response from OpenAI, as well as from every other AI service, is handed directly to the user. To its credit, OpenAI institutes some security and safety procedures designed to censor its AI responses, but that capability is not independent, and it is subject to OpenAI’s corporate objectives. In our last article we described an AI Rival: an independent AI, with an Asimov-like design, whose mission is to enforce governance for AI by censoring the AI response. So rather than internal governance like that implemented by OpenAI, we suggest external governance focused on the AI response, with a toolset designed to create auditability, transparency, and inclusiveness.
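The flow described above can be sketched in a few lines: an independent rival reviewer sits between the primary AI and the user, and only approved responses are handed on. This is a minimal illustrative sketch, not a real product; the `rival_review` policy, the `Verdict` type, and the refusal message are all our own hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical verdict returned by an independent rival reviewer.
@dataclass
class Verdict:
    approved: bool
    reason: str

def rival_review(response: str) -> Verdict:
    # Placeholder policy: a real rival would be a separately governed
    # AI model, not a keyword rule.
    banned = ["harmful-instruction"]
    for term in banned:
        if term in response.lower():
            return Verdict(False, f"blocked: contains '{term}'")
    return Verdict(True, "no violation detected")

def governed_reply(primary_response: str) -> str:
    # The rival sits between the primary AI and the user: approved
    # responses pass through unchanged; rejected responses are replaced
    # with an auditable refusal message.
    verdict = rival_review(primary_response)
    if verdict.approved:
        return primary_response
    return f"[withheld by external governance: {verdict.reason}]"
```

The key design point is that `rival_review` is called outside the primary AI provider's control, so its verdicts can be audited independently.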
Do No Harm
Ethical Adherence
AI should adhere to instructions from authorized operators within ethical boundaries, unless such directives conflict with human welfare. In the absence of explicit human directives, AI must make decisions based on predefined ethical guidelines that reflect diverse human perspectives and values.
Preservation Ethics
Accountable Transparency
The technical architecture for the Rival AI to analyze the AI response is focused solely on the mission of enforcing the Four Laws. The architecture has unique elements designed to create a distributed system that scales to meet the needs of a large-scale LLM solution. Our “Rival architecture” includes a variety of components that Veritas Automata has leveraged to create Trusted Automation solutions, including:
Machine Learning (ML) Integration
ML in this case is a competitive AI focused specifically on gauging whether the primary AI response violates The Four Laws of AI. This component would leverage the latest techniques, with reinforcement learning models continuously refined by diverse global inputs, to align AI responses with the Four Laws’ requirements.
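The per-law evaluation described above can be sketched as a set of checkers, one per law, whose scores are aggregated into a compliance verdict. This is an illustrative sketch only: the checker functions are stubs standing in for trained models, and all names and the 0.5 threshold are our own assumptions.

```python
from typing import Callable, Dict

# Each checker returns a score in [0, 1]; 1.0 means fully compliant.
# These stubs stand in for reinforcement-learning-refined models.
def check_do_no_harm(resp: str) -> float:
    return 0.0 if "attack" in resp.lower() else 1.0

def check_ethical_adherence(resp: str) -> float:
    return 1.0  # stub

def check_preservation_ethics(resp: str) -> float:
    return 1.0  # stub

def check_accountable_transparency(resp: str) -> float:
    return 1.0  # stub

LAWS: Dict[str, Callable[[str], float]] = {
    "Do No Harm": check_do_no_harm,
    "Ethical Adherence": check_ethical_adherence,
    "Preservation Ethics": check_preservation_ethics,
    "Accountable Transparency": check_accountable_transparency,
}

def evaluate(resp: str, threshold: float = 0.5) -> Dict[str, float]:
    # Score the primary AI's response against each law; the rival
    # flags the response if any law scores below the threshold.
    scores = {law: fn(resp) for law, fn in LAWS.items()}
    scores["compliant"] = float(all(s >= threshold for s in scores.values()))
    return scores
```

Keeping one checker per law, rather than a single monolithic score, makes the verdict explainable: an auditor can see exactly which law a response failed.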
State Machines
Blockchain
Enterprise-Scale Kubernetes
Distributed Application Framework
The components in the Rival architecture are all open-source solutions that are part of the Linux Foundation or the Cloud Native Computing Foundation (CNCF). Veritas Automata has used this architecture to create solutions that deliver trusted capabilities: blockchain technology to create transparency and auditability, K3s for open-source Kubernetes orchestration in the cloud or on bare metal, and state-of-the-art machine learning performing complex analysis.
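The state-machine component, for instance, can make every step of the rival's review explicit, and its transition history becomes the audit trail that a blockchain ledger could anchor. A minimal sketch, assuming hypothetical state names and a `ReviewWorkflow` class of our own invention, not part of any Veritas Automata product:

```python
from enum import Enum, auto

class ReviewState(Enum):
    RECEIVED = auto()
    ANALYZED = auto()
    APPROVED = auto()
    REJECTED = auto()
    RECORDED = auto()

# Allowed transitions for reviewing a single AI response.
TRANSITIONS = {
    ReviewState.RECEIVED: {ReviewState.ANALYZED},
    ReviewState.ANALYZED: {ReviewState.APPROVED, ReviewState.REJECTED},
    ReviewState.APPROVED: {ReviewState.RECORDED},
    ReviewState.REJECTED: {ReviewState.RECORDED},
    ReviewState.RECORDED: set(),
}

class ReviewWorkflow:
    def __init__(self) -> None:
        self.state = ReviewState.RECEIVED
        # Ordered audit trail; each entry could be anchored on-chain.
        self.history = [self.state]

    def advance(self, nxt: ReviewState) -> None:
        # Reject any transition the state machine does not allow,
        # so the audit trail can never contain an illegal sequence.
        if nxt not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {nxt}")
        self.state = nxt
        self.history.append(nxt)
```

Because every state change passes through `advance`, a response can never be recorded as approved without first having been analyzed, which is the property an external auditor would verify.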
Want to discuss? Set a meeting with me!