![artificial intelligence](https://fringescience.ca/wp-content/uploads/2022/05/artificial-intelligence-3382510_1920-1-1024x683.jpg)
In the Terminator films, a fictional, military-grade artificial intelligence known as SkyNet is activated by the US government to assist in global defense. What its creators didn’t know, however, was that SkyNet would eventually decide that humans were a threat to its existence, and would take proactive measures to eliminate said threat.
From there, the films delve into a hypothetical dystopian future in which homicidal robots use their newfound intelligence to hunt humans down like animals.
As entertaining as these films are, artificial intelligence of this degree looks less like science fiction and more like a scientific prospect with each passing year, as AI researchers inch closer and closer to what those in the business refer to as “the singularity”, or, more formally, Artificial General Intelligence (AGI).
AGI, as opposed to today’s AI, is a hypothetical form of AI capable of accomplishing any intellectual task that a human being can. Though we do have AI systems that function much faster and more accurately than any human, their abilities are typically limited to a single task. For instance, AI systems have been developed to understand speech and speech patterns (Amazon’s Alexa, Microsoft’s Cortana, Google’s Assistant, Apple’s Siri), or to analyse facial structures in facial recognition software. They can even recognise and diagnose diseases faster than a human, or spot possible dangers on the road, as in Tesla’s automated driving AI. In all these cases, however, the program was developed solely to complete the one task it was assigned; true AGI, hypothetically, would be capable of accomplishing any and all tasks that we can.
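To make that distinction concrete, here is a minimal sketch of what a “narrow” model looks like in practice. It uses Python and scikit-learn’s bundled handwritten-digits dataset; the model and numbers are purely illustrative, not drawn from any of the systems named above:

```python
# A toy "narrow AI": a model trained to do exactly one thing.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 images of handwritten digits, 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# A small neural network, trained only to map 64 pixel values to ten labels.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"Digit accuracy: {model.score(X_test, y_test):.2f}")

# That is the entirety of its "intelligence". Hand this same model a
# sentence of speech, a photo of a face, or a chest X-ray, and it has
# nothing to offer: its knowledge begins and ends with digit images.
```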
You may have realised by now that no single human can do everything either, but we do have the capacity to learn, and that is another feature a true AGI would need. If we can make a machine that learns any task it is assigned, and then accomplishes that task more efficiently than any human, we will have reached the aforementioned singularity.
A crucial component of that learning ability, however, would be the capacity for an AGI to take what it has learned in one category and apply that knowledge to a different one. As humans, we do this all the time. Take Mr. Miyagi’s “wax on, wax off” approach to teaching karate in the famous film The Karate Kid. Mr. Miyagi teaches Daniel how to fight by having him wax his cars for hours on end, day after day. Though the two activities seemed frustratingly unrelated to Daniel, the skill he built in one was eventually applied to the other. Modern AI systems, for the most part, cannot do this.
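The closest thing today’s machine learning has is “transfer learning”, where a network trained on one task is reused as the starting point for a closely related one. The sketch below (using PyTorch and a standard ImageNet-pretrained network; the bird-identification task is a made-up example) shows how limited that transfer is compared to the wax-on, wax-off variety:

```python
# Transfer learning: reuse features learned on one task for a related task.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Start from a network pretrained on ImageNet (general object recognition).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything the network already learned about edges, textures, shapes...
for param in backbone.parameters():
    param.requires_grad = False

# ...and swap in a new final layer for a new but *related* task,
# say, telling five species of birds apart.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new layer gets trained; the reused knowledge transfers only
# because both tasks are image recognition. Nothing learned here would
# carry over to speech, chess, or blocking a punch.
optimizer = optim.Adam(backbone.fc.parameters(), lr=1e-3)
```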
Humans also need far fewer learning experiences to achieve an understanding of a particular subject. Modern AI systems require enormous amounts of training data to match what a human can do, and, as stated before, the resulting knowledge applies to only one task.
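You can see this data hunger even in the toy digits example from earlier, by measuring accuracy as the model is given progressively more training examples. Again, this is only a sketch, using scikit-learn’s learning_curve helper:

```python
# How accuracy grows with the number of training examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

digits = load_digits()
sizes, _, test_scores = learning_curve(
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    digits.data, digits.target,
    train_sizes=[0.05, 0.2, 0.5, 1.0], cv=3,
)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:4d} examples -> {score:.2f} accuracy")

# The model needs hundreds of labelled images just to tell ten digits apart;
# a person shown a handful of examples would manage the same almost at once.
```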
Now, I’m sure you find all this very interesting, but what you really want to know is how close we are to the singularity, don’t you? In his book Architects of Intelligence, Martin Ford interviewed 23 of the world’s top AI researchers and asked them this tantalising question. Of the 23, only 18 answered, and of those a mere two went on record. Ray Kurzweil, director of engineering at Google, said there is a 50 per cent chance that AGI will be developed by 2029, while Rodney Brooks, a roboticist and co-founder of iRobot, said 2200. Ford attributed the wide spread in estimates to a “rough correlation between how aggressive or optimistic you are and how young you are.”
Ford goes on to explain that, “Once you’ve been working on it for decades and decades, perhaps you do tend to become a bit more pessimistic.”
So, it would seem that the prospect of AGI may be out of reach for even the youngest of our generations, but maybe this is a good thing. If there’s anything that the Terminator films have taught us, it’s to never trust a bucket of bolts that can think for itself.