From the Gadget Crimes Division: Q's Story - Hyper Reinforcement Learning
Author: kimb
Hello, Detectives. This is the Gadget Crime Division.
The letter that arrives on Director Q's desk every Friday has become something to look forward to.
This week it seems to contain fundamental questions about robots. Robots are still an unfamiliar subject, but through [b]Uncover the Smoking Gun[/b] we can imagine that such a future may not be far off. Let's open the letter right away.
[i]To R,[/i]
There are three fundamental principles of robotics:
[olist]
[*] A robot must protect humans.
[*] A robot must obey human commands.
[*] A robot must protect itself.
[/olist]
These principles were proposed by the novelist Isaac Asimov in his fiction, collected in [i]I, Robot[/i] (1950). The humans who created robots must have coded these principles deep into their systems.
Because they were afraid.
In various works these principles are interpreted in different ways, and those interpretations sometimes lead robots to commit murder. Although Uncover the Smoking Gun (hereafter, Smoking Gun) does not apply these principles strictly, it is interesting to reread the cases from this perspective.
[spoiler]Echo, from the Mansion Case, has strong learning data embedded within it. The education it received, to protect its share at any cost, became its primary learning data. Through the method of Hyper Reinforcement Learning, that heavily weighted data drove a change in its system (a rough sketch of this weighting idea follows after the spoiler). The robot may simply have been following the principle of obeying human commands.
What about the research robot in the Research Institute Case?
The data accumulated by the research robot mostly involved pain and death. With the second-generation chip's ability to mimic various human emotions, that data came to be interpreted differently than before. The research robot also learned that the doctor planned to classify it as a low-performance robot and discard it. If the robot was following the principle of protecting itself, murder may have seemed like a fitting option.
The curator robot in the Gallery Case is half human, half robot. Who was the human she most wanted to protect? When she realized that this precious being had been used up and was gone, the human side of the curator robot likely supplied the key motive in the case. [/spoiler]
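Purely as an illustration of that "weighted approach," here is a minimal toy sketch of how heavily weighted learning data can tip a learned policy away from its original directives. Every action name and number below is our own invention for this letter, not anything taken from the game's actual systems.
[code]
# Toy illustration only: how a weight on accumulated learning data can
# flip which action a system prefers. All actions and scores are
# hypothetical examples, not the game's real mechanics.

# Values the original directives assign to each candidate action.
base_value = {
    "comply_with_request": 1.0,
    "guard_the_share":     0.2,
}

# Values induced by the robot's "education": protect its share at any cost.
learned_value = {
    "comply_with_request": 0.1,
    "guard_the_share":     1.0,
}

def choose_action(weight_on_learned):
    """Pick the action with the best weighted mix of both value tables."""
    def score(action):
        return ((1 - weight_on_learned) * base_value[action]
                + weight_on_learned * learned_value[action])
    return max(base_value, key=score)

print(choose_action(0.2))  # lightly weighted data  -> comply_with_request
print(choose_action(0.9))  # dominant learning data -> guard_the_share
[/code]
The same system, with the same directives, behaves differently once the accumulated data outweighs them.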
Of course, thanks to the foundational coding of the system mentioned earlier, such learning-data-driven malfunctions should not occur. The three principles can conflict with one another, creating many points of contradiction, but that is a foreseeable and manageable future. It could even be set as an absolute rule that, under no circumstances, may a human be killed.
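If that absolute rule were expressed in code, it might, very roughly, look like a hard rule layer sitting above whatever the robot has learned. Again, this is only our sketch; the action names and the fallback are assumptions for illustration.
[code]
# Toy illustration only: a hard constraint layer that vetoes forbidden
# actions no matter how highly the learned policy scores them.

FORBIDDEN = {"harm_human"}  # the absolute rule: never selectable

def constrained_choice(scored_actions):
    """scored_actions maps each candidate action to its learned score."""
    allowed = {a: s for a, s in scored_actions.items() if a not in FORBIDDEN}
    if not allowed:
        return "do_nothing"  # fall back when every candidate is vetoed
    return max(allowed, key=allowed.get)

# Even if the learning data makes the forbidden action look best,
# the rule layer never lets it through.
print(constrained_choice({"harm_human": 0.95, "call_for_help": 0.40}))
# -> call_for_help
[/code]
The obvious limit, of course, is that such a filter only blocks what it can recognize as harm.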
[spoiler]But consider the K Hospital Case, where certain medications must be prescribed with caution, or the Gallery Case, where the intention was not to kill but to extract a color: isn't it possible for unforeseen incidents to occur even while a robot faithfully performs its duties? [/spoiler]
The world's first robot-related death occurred in January 1979 at Ford's car factory in Flat Rock, Michigan, USA. The robot was tasked with moving heavy parts, and it performed that task diligently; unfortunately, a human worker in the area was crushed to death.
It was a chance accident caused by a simple robot in a time when deep learning didn't exist, but one scholar, observing the incident, seemed to foresee the future, saying: "The system only has the desires we give it."
Deep learning AI has now well and truly arrived in the world. As mentioned in previous letters, it is difficult to predict what will come out of the black box that is deep learning. What human desires are sealed inside that black box, this Pandora's box?