My reaction, posted on their site:
The question of introducing ethics in robotics has recently been debated in South Korea.
But do not confuse the actual question being debated today with Asimov's old science-fiction Three Laws of Robotics.
The ethical question is twofold, and the first part, namely ethical laws to be imposed on intelligent, fully aware robots in their relations with humans, is not on the agenda, as long as robots are programmed to act and have no choices beyond those already programmed into them.

The second side of the question, the one already at stake, is the treatment of robots by their users, and the ethics of programming itself. How robots react to their users and to other humans is still fully the programmer's responsibility. What is ethical programming? Do machines have a right not to be misused, just as they have an obligation to perform what they are programmed to do? That is the real question, and it is neither fiction nor faraway future: the problem already needs to be examined. And when a robot harms people, does the robot bear any part of the responsibility, particularly when it was built to harm people (military robots)?