What Can Humans Learn About Robots from the Works of Isaac Asimov?
Are you familiar with the works of Isaac Asimov? In 1942, the author wrote a short story called “Runaround” that introduced the Three Laws of Robotics, rules that every robot in his Robot series must follow. That once-fictional concept is now shaping real-world work at Google, which has announced a set of safeguards partially inspired by these three laws to help it control future AI-powered machines.
Let’s look at this “Robot Constitution” to see how it works and what it introduces.
Introducing Asimov’s Laws of Robotics
The following three laws are taken from “Runaround”:
- The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws have been referenced and reproduced countless times over the years, and now they are being put toward a more practical purpose.
Google’s DeepMind “Robot Constitution” Explained
DeepMind is Google’s primary AI research division, and its work includes developing robots for various real-world applications.
For example, let’s say you’re asked to tidy up your workspace. Intuitively, we understand that this might mean putting away odds and ends that are lying about or filing loose documents. Our brains make these contextual connections automatically, but robots cannot necessarily do so. Google hopes to close this gap with its AutoRT system, which gives robots the ability to make judgment calls based on the environment they are in. A robot can determine whether a task is possible and, if so, carry it out appropriately.
Of course, there will always be a risk that the robot will not function as intended. It might interpret the phrase “tidy up” in ways that make sense to the AI but not necessarily to humans. It might mistakenly leave objects in dangerous places or try to “put away” objects that are supposed to be there. Heck, a robot might even decide that the task itself threatens its existence and ignore it entirely, defying its programming.
Google’s Robot Constitution is here to save the day, though. It draws on the “a robot may not injure a human being” principle, establishing guidelines that prevent Google’s robots from attempting tasks involving human beings, animals, sharp objects, or electrical appliances. Additionally, Google has programmed safeguards that keep robots from performing tasks that put themselves at risk, and a human supervisor holds a kill switch that can stop a robot before anything goes wrong.
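To make the idea more concrete, here is a minimal, purely illustrative Python sketch of how a rule-based “constitution” check might work in principle. It is not DeepMind’s actual AutoRT code; the function names, keyword lists, and feasibility flag are all hypothetical stand-ins.

```python
# Purely illustrative sketch; none of this is DeepMind's actual AutoRT code.
# The categories, keywords, and checks below are hypothetical stand-ins.
from typing import Optional

# Categories described as off-limits for proposed tasks.
FORBIDDEN_KEYWORDS = {
    "human being": ["person", "people", "human", "coworker"],
    "animal": ["dog", "cat", "pet", "animal"],
    "sharp object": ["knife", "scissors", "blade"],
    "electrical appliance": ["outlet", "toaster", "power strip"],
}


def violates_constitution(task_description: str) -> Optional[str]:
    """Return the violated category, or None if the task looks safe."""
    text = task_description.lower()
    for category, keywords in FORBIDDEN_KEYWORDS.items():
        if any(word in text for word in keywords):
            return category
    return None


def should_execute(task_description: str, is_feasible: bool, kill_switch_pressed: bool) -> bool:
    """Combine the safety rules, a feasibility judgment, and the human kill switch."""
    if kill_switch_pressed:
        return False  # a human supervisor can always halt the robot
    if violates_constitution(task_description) is not None:
        return False  # task touches humans, animals, sharp objects, or appliances
    return is_feasible  # only attempt tasks the robot judges it can actually do


if __name__ == "__main__":
    # A "tidy up" style task with nothing risky in it: allowed if feasible.
    print(should_execute("stack the loose papers on the desk", is_feasible=True, kill_switch_pressed=False))
    # A task involving a sharp object: rejected by the rule check.
    print(should_execute("put the scissors away in the drawer", is_feasible=True, kill_switch_pressed=False))
```

In practice, a system like AutoRT reportedly relies on language models and physical safeguards rather than simple keyword lists, but the basic idea is the same: every proposed task is filtered against safety rules, a feasibility judgment, and a human override before the robot acts.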
Always Moving Forward
Google is dedicated to the continuous advancement of computing, robotics, and AI, and over the past six months, it has evaluated 52 robots through 77,000 trials. These robots performed 6,650 unique tasks, and while they may not have been particularly complex tasks, they could pave the way to more complicated ones moving forward.
If you want to make your own tasks easier, we recommend working with IT Solutions Network. Our trusted technicians can help you implement technology that can make just about any business process easier and more efficient. Learn more by calling us at (855) 795-2939.