Isaac Asimov, one of my favorite scientists and science fiction writers, crafted the elegant Three Laws of Robotics.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, a fourth law was added.
0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
This was known as the Zeroth Law. Much of Asimov's work is poised around the idea of free will and the nature of action and inaction. As my beloved, Citizen Une, is even now embarking on her own journey of deeper discovery, I find myself pondering more encompassing issues.
One of these issues is moral relativism. Much of modern Western thought revolves around a non-judgmental philosophical view. I will not begin to argue the right or wrong of this; I am ill equipped. But I will posit: can we conceive of a set of simple laws by which humanity can be guided?
Are there three, four, or five laws of humanity?