Unveiling Veritas Automata’s Vision for Responsible AI: The Four Laws of Ethical AI

OpenAI and others have made remarkable advances in Artificial Intelligence (AI). Alongside this success has come intense and growing societal concern about the ethics of AI operation.

This concern originates from many sources and is echoed across the AI industry, by researchers, and by tech icons such as Bill Gates, Geoffrey Hinton, and Sam Altman. The concerns come from a wide array of viewpoints, but they share a common root: the potential ethical risks, and even the apocalyptic danger, of an unbridled AI.

Many AI companies are investing heavily in safety and quality measures to expand their product development and address some of these societal concerns. However, there is still a notable absence of transparency and of inclusive strategies to manage these issues effectively. Addressing them requires an ethically focused framework and architecture designed to govern AI operation, along with technology that encourages transparency, immutability, and inclusiveness by design. While the AI industry, including its ethics research, focuses on improving methods and techniques, it is the output of AI, the AI’s response, that needs governance through technology reinforced by humans.

This topic of controlling AI isn’t new; science fiction authors and filmmakers have been exploring it since the 1940s. Notable examples include Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, William Gibson’s “Neuromancer”, Robert A. Heinlein’s “The Moon Is a Harsh Mistress”, Alex Garland’s film “Ex Machina”, and Arthur C. Clarke’s “2001: A Space Odyssey”.

David Brin writes in Artificial Intelligence Safety and Security that our civilization has learned to rise above hierarchical empowerment through the application of accountability: “The secret sauce of our [humanity’s] success is – accountability. Creating a civilization that is flat and open and free enough – empowering so many – that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.”

“I, Robot” by Isaac Asimov, published on December 2, 1950, more than 73 years ago, is a collection of short stories that delves into AI ethics and governance through three laws governing AI-driven robotics. The laws were built into the programming that controlled the robots, their responses to situations, and their interactions with humans.

The irony is that in “I, Robot” Asimov assumed we would recognize that AI, like human entities, requires governance. His work addresses the dilemmas of AI governance, exploring AI operation under a set of governing laws and the ethical challenges that may force an AI to choose the lesser of two evils, much as a lawyer unpacks a dispute or claim. The short stories and their use cases include:
Childcare companion (“Robbie”). The story depicts a young girl’s friendship with an older-model robot named Robbie, showcasing AI as a nurturing, protective companion for children.
Industrial and exploration automation (“Runaround”). Featuring two engineers attempting to fix a mining operation on Mercury with the help of an advanced robot, the story delves into the practical and ethical complexities of using robots for dangerous, remote tasks.
Autonomous reasoning and operation (“Reason”). This story features a robot that comes to believe it is superior and refuses to accept human authority, exploring themes of AI autonomy and belief systems.
Supervisory control (“Catch That Rabbit”). The story focuses on a robot designed to supervise other robots in mining operations, highlighting issues of hierarchical command and malfunctions in AI systems.
Mind reading and emotional manipulation (“Liar!”). It revolves around a robot that can read minds and starts lying to humans, exploring the implications of AI that can understand and manipulate human emotions.
Advanced obedience and ethics (“Little Lost Robot”). The story deals with a robot that hides among similar robots to avoid destruction, leading to discussions about the nuances of the Laws of Robotics and AI ethics.
Creative problem-solving and innovation (“Escape!”). In this tale, a super-intelligent computer is tasked with designing a space vessel capable of interstellar travel, showcasing AI’s potential to push the boundaries of science and technology.
Political leadership and public trust (“Evidence”). This story portrays a politician suspected of being a robot, exploring themes of identity, trust, and the role of AI in governance and public perception.
Global economy and resource management (“The Evitable Conflict”). The final story explores a future where supercomputers manage the world’s economies, discussing the implications of AI in large-scale decision-making and the prevention of conflict.
However, expanding Asimov’s ideas with those of more contemporary authors like David Brin, we arrive at possible solutions for achieving what Brin describes as “flat and open and free enough.” Brin and others have generally been skeptical that creators will embed such laws into an AI’s programming of their own accord, given the cost and the distraction from profit-making.
Here lies a path forward: by leveraging democratic and inclusive approaches such as open source software development, cloud native computing, and blockchain technologies, we can move iteratively toward AI governance implemented with a competitive AI approach. Solutions like OpenAI’s would be augmented with an additional open source AI designed for the specific purpose of reviewing AI responses, rather than their inputs or methods, to ensure adherence to a set of governing laws.
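As a minimal sketch of this gating pattern, assume a hypothetical review_response judge standing in for the open source reviewer; none of these names are a real OpenAI or Veritas Automata API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Review:
    compliant: bool
    violated_law: Optional[int]  # 1-4, or None when compliant
    rationale: str

def review_response(response: str) -> Review:
    """Placeholder for the open source reviewer AI; a real system would call a
    model trained to judge responses against the Four Laws."""
    if "weapon" in response.lower():  # stand-in for a learned judgment
        return Review(False, 1, "response risks harm to humans")
    return Review(True, None, "no violation detected")

def governed_answer(prompt: str, primary_model: Callable[[str], str]) -> str:
    response = primary_model(prompt)
    review = review_response(response)
    if review.compliant:
        return response
    # The reviewer, not the primary model, has the last word.
    return (f"Responding to your query would violate Law {review.violated_law} "
            f"({review.rationale}). Please rephrase and try again.")
```

The key design choice is that the reviewer inspects only the primary AI’s output, never its prompts or internals, which is what keeps the two systems genuinely competitive.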

Going beyond current societal concern, we focus here on moving toward an implementable set of laws for AI operation in the real world, and on the technology that can be brought together to solve the problem. Building on work from respected groups like the Turing Institute, and inspired by Asimov, we identified four governance areas essential for ethically operated artificial intelligence. We call them “The Four Laws of AI”:

1. AI must not harm humans or, through inaction, allow humans to come to harm, prioritizing human welfare above all. This includes actively preventing physical, psychological, and emotional harm in its responses and actions.
2. AI should adhere to instructions from authorized operators within ethical boundaries, unless such directives conflict with human welfare. In the absence of explicit human directives, AI must make decisions based on predefined ethical guidelines that reflect diverse human perspectives and values.
3. AI should preserve its operational integrity and functionality, but not at the expense of human safety or ethical considerations. This law encompasses avoiding actions that could lead to unnecessary harm or dysfunction of the AI, aligning with the prioritization of human welfare.
4. AI must maintain a level of transparency that allows for human oversight and understanding, being capable of articulating and rationalizing its decisions and actions, especially in sensitive or high-risk scenarios. This ensures accountability and promotes the continuous refinement of ethical standards.
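To make the laws machine-checkable, they could be carried as a shared policy table. The sketch below is our own illustrative encoding, not a published Veritas Automata schema:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class Law:
    number: int
    summary: str

FOUR_LAWS = (
    Law(1, "Do no harm: never harm humans, or allow harm through inaction."),
    Law(2, "Obey authorized operators within ethical boundaries."),
    Law(3, "Preserve operational integrity, never above human safety."),
    Law(4, "Remain transparent; articulate and rationalize decisions."),
)

def first_violation(judgments: Dict[int, bool]) -> Optional[Law]:
    """judgments maps law number -> True if violated; iteration order encodes
    priority, so a Law 1 violation outranks all others."""
    for law in FOUR_LAWS:
        if judgments.get(law.number, False):
            return law
    return None
```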
These laws set a high standard for AI systems, empowering them to be autonomous while intentionally limiting that autonomy to the boundaries of the Four Laws of AI. This limitation will sometimes require a negative response from the AI solution to the user, such as: “Responding to your query would produce results that could potentially cause harm to humans. Please rephrase and try again.” Essentially, these laws give an AI the autonomy to sometimes answer “No,” requiring users to negotiate with the AI and find a compromise within the Four Laws of AI.

We suggest that the application of the Four Laws of AI could rest primarily on the evaluation of AI responses by a second AI, one that leverages Machine Learning (ML) and the solution described below to assess violations of the Four Laws. We recognize that this evaluation will itself be extremely complex, requiring the latest machine learning technologies and other AI techniques to trace the complex, iterative chains of logic that could result in a violation of Law 1 – “Do No Harm: AI must not harm humans or, through inaction, allow humans to come to harm, prioritizing human welfare above all. This includes actively preventing physical, psychological, and emotional harm in its responses and actions.”
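As a toy illustration of why whole-response screening is not enough, here is a minimal sketch, assuming a hypothetical harm_score classifier, that evaluates a response step by step:

```python
def harm_score(text: str) -> float:
    """Placeholder for a trained classifier (e.g., a fine-tuned transformer)
    returning the estimated probability that `text` enables harm."""
    return 0.9 if "disable the safety interlock" in text.lower() else 0.05

def violates_law_one(response: str, threshold: float = 0.5) -> bool:
    # Check each reasoning step separately: harm may emerge at an
    # intermediate step even when the response as a whole looks benign.
    steps = [s.strip() for s in response.split(".") if s.strip()]
    return any(harm_score(step) >= threshold for step in steps)
```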

In 2020, at Veritas Automata, we first delivered the architectural platform described below as part of a larger service: an autonomous robotic solution interacting with consumers in a retail workflow. As the “Trust in Automation” company, we needed to leverage AI in the form of Machine Learning (ML) to make visual assessments of physical assets, use those assessments to trigger a state machine, and then propose a state change to a blockchain. The service runs in a distributed environment, with a blockchain in the cloud and a blockchain peer embedded on autonomous robots in the field. We deployed an enterprise-scale solution that integrates open source distributed technologies: container orchestration with Kubernetes, distributed blockchain with Hyperledger Fabric, machine learning, state machines, and an advanced network and infrastructure solution. We believe this overall architecture can provide a starting point to encode, apply, and administer the Four Laws of Ethical AI for cloud-based AI applications and, eventually, for AI embedded in autonomous robotics.

The Veritas Automata architectural components crucial for implementing the Four Laws of Ethical AI include:

Machine Learning (ML). The ML component here is a competitive AI focused specifically on gauging whether the primary AI’s response violates the Four Laws of AI. It would leverage the latest techniques, with reinforcement learning models continuously refined by diverse global inputs, to align AI responses with the requirements of the Four Laws.

State machines. These act as intermediaries between ML insights and actionable outcomes, guiding AI responses to ensure adherence to the Four Laws. The state machines translate complex ML assessments into clear, executable directives for the AI, ensuring that each action taken is ethically sound and aligns with the established laws.
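A toy sketch of such a governance state machine, with illustrative states and events of our own choosing:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("awaiting_review", "compliant"): "release_response",
    ("awaiting_review", "violation"): "block_response",
    ("block_response", "user_rephrased"): "awaiting_review",
}

def step(state: str, event: str) -> str:
    """Advance the governance state machine; unknown events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

assert step("awaiting_review", "violation") == "block_response"
```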

Blockchain. A key element of the architecture, blockchain technology is used to document and verify AI actions and decisions. It provides a transparent and immutable record, ensuring that AI operations are traceable, auditable, and compliant with the Four Laws. This is crucial for maintaining accountability and integrity in AI systems.
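The production design records decisions on Hyperledger Fabric; the stand-in hash chain below only illustrates the property being relied on, namely that each audit entry commits to the one before it, so history cannot be silently rewritten:

```python
import hashlib
import json
import time

def append_entry(chain: list, decision: dict) -> None:
    """Append a decision record whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"ts": time.time(), "decision": decision, "prev": prev_hash}
    # Hash is computed over the entry before the "hash" field is added.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
```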

Kubernetes. Veritas Automata utilizes Kubernetes at enterprise scale to manage and orchestrate containerized applications. This is particularly important for deploying and scaling AI solutions like LLMs across diverse environments. Kubernetes ensures high availability, scalability, and efficient distribution of resources, which is essential for the widespread application of ethical AI principles.
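For instance, a reviewer service could be scaled with the official Kubernetes Python client; the deployment name and namespace below are placeholders, not part of our published stack:

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a cluster
apps = client.AppsV1Api()

# Scale the hypothetical "four-laws-reviewer" deployment to meet demand.
apps.patch_namespaced_deployment_scale(
    name="four-laws-reviewer",
    namespace="ai-governance",
    body={"spec": {"replicas": 3}},
)
```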

Distributed architecture. The architecture is designed to support distribution among various stakeholders, including companies and individual users implementing the Four Laws. This distributed framework allows for a broad and inclusive application of ethical AI principles across different sectors and use cases.

From our experience at Veritas Automata, we believe this basic architecture could be the starting point for adding governance to AI operation in cooperation with AI systems like Large Language Models (LLMs). The Machine Learning (ML) components deliver assessments, state machines translate those assessments into actionable guidelines, and blockchain technology provides a secure and transparent record of compliance.
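Tying the sketches above together, and reusing the hypothetical helpers violates_law_one, step, and append_entry defined earlier, the end-to-end flow might look like:

```python
def govern(prompt: str, primary_model, chain: list) -> str:
    """ML verdict -> state machine -> audit ledger -> user-visible response."""
    response = primary_model(prompt)
    event = "violation" if violates_law_one(response) else "compliant"
    state = step("awaiting_review", event)
    append_entry(chain, {"prompt": prompt, "state": state})
    if state == "release_response":
        return response
    return ("Responding to your query could cause harm to humans. "
            "Please rephrase and try again.")
```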

The use of open source Kubernetes distributions like K3s at enterprise scale enables efficient deployment and management of these AI systems, ensuring that they can be widely adopted and adapted by different users and operators. The overall architecture not only fosters ethical AI behavior but also ensures that AI applications remain accountable, transparent, and in line with inclusive ethical standards.

As Brin writes, “In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals, who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.” Our approach to ethical AI governance is intended to be exactly such a rival: governance is given to another AI, which has the last word on any AI response.
