
The Ethics Behind Asimov's 3 Laws of Robotics

Moral Dilemmas of Robot vs Human Ethics, and the Role of the Hippocratic Oath

A. Karmakar

11/11/2024 · 6 min read

Introduction to Asimov's 3 Laws

In 1942, author Isaac Asimov introduced a set of ethical guidelines for artificial intelligence known as the Three Laws of Robotics. These laws serve as a framework to ensure that robots operate in a manner that is safe and just for humanity. They resonate not only with advances in technology but also with the fundamental imperative of doing no harm, reminiscent of the Hippocratic Oath in medicine.

The Inhuman Conclusion

The ethical dilemmas thrown up by Asimov's laws, conceived before 1942 and echoing Hippocrates' edicts from around the 4th century BC, have never been properly codified into human praxis, even though religions and laws have tried to enforce them over the ages. Part of the issue may lie with the classic occidental problem of dualism: feigning spiritual and moral values whilst pushing survivalism, exploitation and materialism. It is not exclusive to the occident, of course, but the oriental world invariably had stronger cultural and tribal philosophies that resolved such contradictions through spiritual beliefs and tribal laws.

Many visionaries recognise the need to consolidate and synthesise old wisdom with emerging consciousness and ideas. Nothing should be left to chance through neglect and ignorance. Here too, I think we as the human family need to look back and gather the treasures of the past with sufficient humility, and not dismiss things we do not understand or that do not jibe with modern sensibilities. We need a renaissance of 'wisdom', and although this is not something that can be learned by rote, or easily tested and measured, by re-examining how and why certain classical notions of ethics and values came about we may reap unexpected benefits.

For example, we have been using the Rockefellers' 'petroleum-based medicine' since 1923, and as Robert F. Kennedy Jr's bestseller, 'The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health', makes clear, this received wisdom has done a great deal of harm to humanity.

Similarly, Asimov's Laws and his books reveal that we have not been thorough, that we have not understood our situation, and that we have been confused about what path to follow and why.

This cardinal irony does not belong to any single source or nation; it seems to be an artefact of human over-confidence, over-expediency and cognitive entrainment, in which original thought and contextual depth are under-valued. It may just be that a fuller contemplation of these 3 laws, applied to the 'cold world' of Robotics, can inform humanity on where our ethics fall short more readily than any high-flying academic school or mist-covered ashram in the mystical foothills of Xanadu ever could.

Reflecting on Our Progress

Considering Asimov's framework now prompts us to reflect on whether we were more civilised in 1942. The ethical considerations presented in the Three Laws are increasingly relevant as we confront rapid advancements in robotics and artificial intelligence. They urge us to be vigilant, not only in the designs we pursue but also in our ethical obligations as creators and users of intelligent systems. The quest for a safe and ethical coexistence with robots is ongoing, and it is our duty to navigate this territory with care, ensuring that our actions protect and empower humanity.

Reflecting further upon Asimov's 3 Laws of Robotics, various dilemmas beyond those outlined above face both robots and humans going forward. In Asimov's 'I, Robot' stories the problems do not arise from evil robots, far from it. The problem is that a robot, being faithful to the 1st Law, lies in order to prevent 'harm' as it would be experienced and perceived by its owner. In the same vein, a robot able to read its owner's neural networks to the point of telepathy develops a kind of compassion and thus takes extraordinary measures to prevent the owner from being emotionally harmed. If only humans were as sensitive. That is part of the point, really: even though these are only stories and it is not possible for robots to become sentient, humans have the capacity to be sentient, to self-sacrifice, to be empathic and, according to scientist Rupert Sheldrake, to be subject to telepathic exchanges more often than is generally believed.

Self-Preservation & Its Limits

The 3rd law emphasises that a robot must protect its own existence as long as such protection does not conflict with the 1st or 2nd law. This introduces an intriguing layer to the debate; while self-preservation is a natural inclination, it must not supersede human safety or obedience to ethical commands. Ultimately, a robot's self-interest should be aligned with the broader commitment to protect humanity, reinforcing the symbiotic relationship between humans and machines.

The third law is not ambiguous, yet survivalism has been both soft- and hard-wired into much of humanity to suit 'law makers' and governments. For example, in times of war it is perfectly acceptable to sacrifice one's own life or take another's, whether there is any real danger or not. To further confuse matters, an increasingly questioned Darwinism has been reinforced in most of us, especially through formal schooling. The idea of Maslow's hierarchy, and materialism in general, has fed this focus on self and self-importance. In the tribal world of a few centuries past, whether in a martial or village environment, the preservation of the group was very well understood. Thus, all these systems of order and ethics need context, a strong moral scaffolding and clear underlying objectives. Establishing foundational collective moral objectives within a broad yet definitive contextual framework demands deep-field knowledge and clarity. This is not easy to achieve when situations become complex and politicised and loyalties become divided.

Open with Caution

The second law states that a robot must obey orders given it by human beings, except where such orders would conflict with the first law. This raises interesting ethical dilemmas regarding the effectiveness and implications of blindly following commands. While it is crucial for robots to serve humans, it is equally important to establish boundaries that protect individuals from malicious or harmful actions. This balance is essential in creating intelligent systems that maintain human dignity while executing tasks.

But can it be equally applied to the human race, and if so, what are the arguments for or against? Another delicious irony reveals itself: the 2nd Law in Robotics presupposes intelligent programming, whilst in humans it presupposes autonomy and an individual and collective consciousness that prioritises human life as an intrinsic human value.

However, humanism by its nature is inimical to war, conflict, competition, survivalism, authoritarianism and oppression and thus humanists should not be in whole or part instrumental in any system of oppressive power, privilege or exceptionalism. The reality rarely meets with the ideal because humans are prone to deceit and manipulation.

The Importance of Safety & Prevention

The first law asserts that a robot may not injure a human being or through inaction, allow a human to come to harm. This principle highlights the priority of human safety above all else. In a world increasingly dependent on automation, this law serves as a reminder that the technology we develop should be designed to enhance human life and prevent any potential dangers. As we navigate innovations in robotics, the continual reinforcement of safety mechanisms is essential to prevent accidents and preserve human well-being.

The ironic corollary to the Hippocratic Oath of 'Do No Harm' in all human conduct is that Asimov had the perspicacity to apply the principle to Robotics; yet although it is foundational to the rubric of the 'civilised', it has never been demonstrably embraced in its fullness by us 'sentients'.

Image by Queenjupytemartin. 'Robot Housewife'. Image source: http://www.deviantart.com

Artist: Alan Parsons Project. 'I Robot'. Album: I Robot (1977) Source: https://www.youtube.com/watch?v=qWbRLQX5AuM

Artist: User 1.'Crossfit Mother Robots' Source: https://www.deviantart.com

Artist: Luxtos 7. 'Robot' Source: https://www.deviantart.com

Artist: Alan Parsons Project. 'The Turn of a Friendly Card'. (1980) Source: https://www.youtube.com/watch?v=Ys9oSPZrrP8

Anup is Media and Design expert for Pearl Bliss Homes. His interests are: art, history, current affairs, philosophy, literature, the environment and humanism. You can comment on his blogs via email info@pearlblisshomes.com.

Does this mean that Robots will be more humanist than humans?

'An' they say the gods don't got a sense o' humour!'

I: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

II: A robot must obey orders given it by human beings except where such orders would conflict with the 1st Law.

III: A robot must protect its own existence as long as such protection does not conflict with the 1st or 2nd Law.
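For readers who enjoy a technical aside, the strict priority ordering of the three laws can be sketched as a tiny rule check. This is a toy illustration built on invented assumptions (an action reduced to three boolean flags), not a claim about how any real system could encode ethics:

```python
# Toy sketch of Asimov's Three Laws as a strict priority hierarchy.
# The flags and function name are invented for illustration only.

def permitted(harms_human: bool, ordered_by_human: bool,
              endangers_self: bool) -> bool:
    """Decide whether a proposed action is permitted, applying the
    laws in priority order: 1st overrides 2nd, 2nd overrides 3rd."""
    if harms_human:          # 1st Law: absolute, even against orders
        return False
    if ordered_by_human:     # 2nd Law: obedience outranks self-preservation
        return True
    if endangers_self:       # 3rd Law: otherwise, protect own existence
        return False
    return True

# A harmful order is refused (1st Law beats 2nd) ...
print(permitted(harms_human=True, ordered_by_human=True, endangers_self=False))  # False
# ... but an ordered act of self-sacrifice is carried out (2nd Law beats 3rd).
print(permitted(harms_human=False, ordered_by_human=True, endangers_self=True))  # True
```

Even this trivial sketch shows why Asimov's stories are so rich: the interesting cases are precisely the ones where a single boolean cannot capture what 'harm' means.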