I, Robot
  Katharine Harding
  Department of Neurology, University Hospital of Wales, Cardiff, UK
  Correspondence to Dr Katharine Harding, Institute for Psychological Medicine and Clinical Neurosciences, Cardiff University, University Hospital of Wales, Cardiff CF14 4XW, UK; katharineharding{at}

Cardiff book club’s latest foray into classic science fiction was I, Robot, by Isaac Asimov.1 Originally published as short stories in magazines in the 1940s, these were later collected into one book, connected by an interlinking narrative in which a writer interviews Dr Susan Calvin, a robopsychologist, about her career. The book combines the futuristic nature of robotics with a definite feel of its time: everything is made of metal, humans are quite violent towards robots, and having a woman scientist as the main character is mildly shocking.

I, Robot famously includes the Three Laws of Robotics: ‘1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’ Many of the stories explore how the laws affect robot behaviour, and how their consequences can be unexpected. Anyone who has tried programming a computer will know how its literal interpretation of commands may generate unintended results. Other science fiction writers have explored the idea of producing ethical behaviour by building rules into artificial intelligence; Terry Pratchett, for example, described a convincing and moving scenario in which such laws go wrong.2

Our members discussed whether Asimov’s laws of robotics were in fact ethical. We felt that the First Law prevented robots from having autonomy and was excessively paternalistic. It also raised interesting questions about free will: the robots cannot have free will because of the laws, but this prompts the question of whether humans really have free will. Are we programmed by our genes, or by other factors? Our recent reading of The Selfish Gene3 added pertinence to this question: how much free will do we really have?

We also noted that the First Law was perhaps impossible for a robot to keep. Humans are inherently prone to taking risks and harming themselves, and so, regardless of robot action, a human may well end up coming to harm. Later in the book, the robots have emotion built in, both to modulate the starkness of the laws and to improve their ability to behave and respond appropriately. This made us recall Descartes’ Error and its thesis that emotion is essential for cognitive function and decision-making.4

Another interesting discussion was around what exactly a robot is. Those in I, Robot start off almost as pets; as they become more complex they become tools, but they eventually end up in charge, although humans are unaware of their control. We felt that a key characteristic of a robot is an ability to make some decisions in order to carry out a task. For example, a robot vacuum cleaner today can scan any room, even one it has never encountered before, and use that information to decide how to move around it. It is this capacity that distinguishes robots from the powerful yet ultimately passive computers to which we all have ready access nowadays.

What is the practical clinical relevance of I, Robot? First, we discussed its exploration of autonomy, paternalism and ethics, all central to the practice of medicine; and second, as robots increasingly take over the work of doctors, we reflected on those aspects of medicine that a robot cannot perform. A robot can analyse big data to guide overall treatment, but it cannot discuss with a family, comfort a distressed patient or provide a nuanced judgement. Medicine is not just about pills or operations; it depends on an essential relationship between patient and doctor. Human doctors retain a vital role in an increasingly technological world.

I, Robot succeeds, as great science fiction should, by throwing light on significant questions about what it is to be human.


  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
