From the Gadget Crimes Division: Q's Story - Free Will...
Author: kimb
Hello, detectives. This is the Gadget Crimes Division.
The third letter has arrived.
Free will... By definition, free will is the ability to make decisions and take actions voluntarily, without being influenced by external factors. But how does this relate to AI robots? Let's check the letter right away.
[b]*Spoiler Alert!*[/b]
[i]To R[/i]
About Free Will...
The deep learning embedded in AI chips derives answers through an artificial neural network. As the name suggests, this network mimics an intricately intertwined neural structure, like a spider web. It consists of numerous artificial neurons (also known as nodes), tightly connected by links called synapses. These neurons and synapses are organized into layers within the network. When input is fed into the network, it passes through these layers, eventually leading to an inference.
[img]https://clan.cloudflare.steamstatic.com/images//44819119/953a586dbce65b221d6c1b4318942234f65cb4f4.png[/img]
Although I've used some complex terms, the principle is easy to explain. Certain thoughts will cross your mind as soon as you read the following words: "Interesting?" "Go back?" Or perhaps "Difficult," "Easy," and so on. The moment you see the text, your visual cells transmit information about it to your brain. The transmitted information passes through numerous brain cells, eventually leading to a thought.
The principle of deep learning is similar to what happens in your brain every moment. Deep learning imitates this complex process in the human brain, and as a result it now boasts accuracy and speed that surpass human capabilities in specific fields. In that light, is human thought truly unique to humans? Perhaps it is merely a more complex combination of electrical inputs and outputs.
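To make the idea more concrete, here is a rough sketch, in Python, of an input passing through the layers of a tiny artificial neural network to produce an inference. This is only an illustration: the layer sizes and the random weights (the "synapses") are made up, and it has nothing to do with the actual Atlas chip.
[code]
import numpy as np

def sigmoid(x):
    # Squash a neuron's summed input into the 0..1 range.
    return 1.0 / (1.0 + np.exp(-x))

# A tiny, made-up network: 3 input neurons -> 4 hidden neurons -> 2 output neurons.
# Each weight matrix plays the role of the "synapses" linking one layer to the next.
rng = np.random.default_rng(0)
weights_hidden = rng.normal(size=(3, 4))  # synapses: input layer -> hidden layer
weights_output = rng.normal(size=(4, 2))  # synapses: hidden layer -> output layer

def infer(inputs):
    # Pass the input through the layers; what comes out is the network's inference.
    hidden = sigmoid(inputs @ weights_hidden)   # each hidden neuron sums its weighted inputs
    output = sigmoid(hidden @ weights_output)   # the output neurons do the same with the hidden layer
    return output

# Example: a made-up three-value input signal flowing through the network.
print(infer(np.array([0.2, 0.9, 0.4])))
[/code]
Of course, a real chip would use far larger networks and trained weights rather than random ones, but the flow of input, through the layers, to an inference is the same.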
(Returning to today's topic, free will)
Dr. James of the Neo-Migeum Research Institute sought to realize free will in robots. His purpose can be glimpsed in his book, "AI: The New Frontier of Humanity and Planetary Exploration" (though it has since been removed from the evidence list...). Dr. James believed that if robots with free will were placed in harsh environments, they would improve their surroundings in order to survive.
*The image has been hidden to prevent spoilers. Please hover over the black dot to view the image.
[spoiler][img]https://clan.cloudflare.steamstatic.com/images//44819119/3cbd770a73e613ec414ff1f00771c62f8597cc83.png[/img][/spoiler]
[spoiler]What is free will? Although a broad concept, it fundamentally means the ability to change one's situation autonomously. Dr. James did not shy away from extreme experimental methods to obtain results. He subjected robots to infinite loop problems to make them experience suffering, isolated them socially, and even created parent-child relationships among the robots, where he staged situations in which the child robot would be destroyed.[/spoiler]
[img]https://clan.cloudflare.steamstatic.com/images//44819119/404845e322a79213119f9062f1682c504e406dec.png[/img]
[spoiler]These experiments aimed to draw out behavioral changes in the robots similar to those that occur in humans under extreme stress. All of this experimental data was stored in A03. After A03 was equipped with the advanced second-generation Atlas chip, it began to sense subtle changes while processing inputs. What drove A03 to the extreme decision of committing murder?[/spoiler]
Input: The plan to be discarded
Output: What should be done to avoid being discarded?
Input: Extreme pain recorded during experiments
Output: What should be done to avoid pain?
I’m not suggesting that we should be kind to robots. However, robots will continue to evolve, and if the principles of thought in humans and robots are similar, then someday robots will think like humans. And when that time comes, robots will finally ask questions like: "Who am I? What should I do?"
You can find more about Dr. James and A03 in the second episode.