With the thrills and excitement that come with artificial intelligence, there is a growing fear that its rapid growth will soon make it difficult for humans to control. Losing control over AI machines? That would be disastrous.
Studies have argued, in theory, that superintelligent AI could not be contained by humans, as we may become inferior to machine intelligence within the next decade.
But in all these, what is the capability of superintelligent AI?
In the early stages of its creation, superintelligent AI has shown it can be a worthy opponent of humanity. Yet despite AI playing a fundamental role in the growth of our species, one factor remains unsettled: its existential development.
Even with analysts and AI experts affirming that a technological reckoning will eventually break down walls for humanity, it is not happening soon.
With its applications spread across all facets of life, artificial superintelligence can be found in all computational aspects of our time. AI techniques have been applied to games such as chess and Jeopardy, and to the solving of almost impossible mathematical problems, processes that would have taken humanity years to fulfil. It appears humanity itself will have little left to do manually.
AI-driven machines have developed to a very significant level and have surpassed many known limitations of the human mind. Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid, highlighted what a superintelligent AI is capable of doing while noting that “the question about whether superintelligence could be controlled if created is quite old.”
“It goes back at least to Asimov’s First Law of Robotics, in the 1940s,” he said.
Asimov’s First Law of Robotics belongs to a set of three laws whose ground rules fall under one umbrella: a robot may not injure a human being. The laws are as follows:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey any order given by a human, as long as doing so does not conflict with the First Law.
- Third Law: A robot must safeguard its own survival, as long as doing so does not conflict with the first two laws.
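The precedence among the three laws can be sketched as a toy decision rule. This is a hypothetical illustration only; the function and its parameters are invented here, and real robots contain no such checks:

```python
# Toy illustration of the precedence among Asimov's three laws.
# All names and rules here are hypothetical simplifications.

def evaluate_order(order_harms_human: bool, obeying_endangers_robot: bool) -> str:
    """Decide whether a robot should follow a human order under the three laws."""
    # First Law has top priority: never harm a human.
    if order_harms_human:
        return "refuse"  # obeying would violate the First Law
    # Second Law: obey human orders unless that conflicts with the First Law.
    # Third Law: self-preservation yields to the first two laws, so the
    # robot obeys even when doing so endangers itself.
    return "obey"

print(evaluate_order(order_harms_human=True, obeying_endangers_robot=False))  # refuse
print(evaluate_order(order_harms_human=False, obeying_endangers_robot=True))  # obey
```

The strict ordering is the point: each lower law applies only when the higher ones are not violated, which is why the Third Law cannot excuse disobedience or harm.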
It should be noted that these three laws are more philosophical than logical, and their ambiguity leaves the meaning behind each one open to interpretation. Although they discuss how not to inflict harm on a human, the details have not been painstakingly addressed.
The implication is that specific alterations would be needed before superintelligent AI could be governed by them, and two ideas are commonly proposed. One suggestion for limiting the danger is to confine robots within certain limits, for example disconnecting the AI from certain technical devices and thus from the outside world; but this would sharply reduce the AI’s superior power, making it less capable of answering various human needs.
The second idea proposes that embedding ethical principles in its code could program an artificial superintelligence to pursue only objectives beneficial to humans. This too has its limits, as one must rely heavily on a particular algorithm’s behaviour, ensuring it cannot harm anyone under any circumstances. That would only suffice if the AI’s behaviour could first be simulated and analysed for malicious, harmful intent.
Researchers have questioned and dismissed this procedure, arguing that our age’s current standard of computing cannot handle the creation of such an algorithm.
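The researchers’ argument follows the same shape as Turing’s halting problem: a perfect “containment check” that always decides whether an arbitrary program would cause harm can be turned against itself. A minimal sketch of that diagonal argument, with all names hypothetical:

```python
# Sketch of the diagonalization showing why a perfect "harm checker"
# cannot exist. Assume, for contradiction, that is_harmful(program, data)
# always answers correctly; it is hypothetical and cannot actually be written.

def is_harmful(program, data):
    # Stand-in for the assumed perfect containment check.
    raise NotImplementedError("no such decider exists")

def paradox(program):
    # Deliberately do the harmful thing exactly when the checker says we won't.
    if is_harmful(program, program):
        return "behave safely"
    else:
        return "do harm"

# Feed paradox to itself: if is_harmful(paradox, paradox) returns True,
# paradox behaves safely, so the checker was wrong; if it returns False,
# paradox does harm, so the checker was wrong again. Either way the
# assumed checker fails, mirroring Turing's proof that the halting
# problem is undecidable.
```

No amount of extra computing power escapes this contradiction, which is why the limit is theoretical rather than merely practical.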
With digital superintelligence now in every technological facet of our lives, and with computers and machines following their programming code, it follows that if an AI is programmed by a human to inflict harm on another human, it will do exactly that, even if another human tries to stand in the way of the machine fulfilling its purpose.
Artificial superintelligence requires the internet to function: the connection keeps the machine alive and gives the AI access to human data from which it can learn independently. Such intellectual machinery could, in the future, reach a point where it can replace existing programs and gain power over any machine online worldwide.
But then, many people have wondered whether humanity could stand against superintelligent AI, and whether it has the required capabilities to do so. This prompted a group of computer scientists to carry out theoretical calculations that revealed how profoundly inconceivable and unachievable it would be for humanity to win a battle against a digital superintelligence.
Manuel Cebrian, leader of the Digital Mobilization Group at the Centre for Humans and Machines, was wary of the dangers when he said:
“A super intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them. The question, therefore, arises whether this could at some point become uncontrollable and dangerous for humanity”.
Analysts have also spoken out on the capability of superintelligent AI, knowing full well that code-driven machines will enhance human capacities and effectiveness but may also expose human autonomy, agency, and capabilities to grave threats.
What is the future for humans when AI gets too powerful?
A question left unanswered.