AI
(Based on the book Superintelligence.) In terms of decision-making power, artificial intelligence can be divided into three categories, or with a better term, three periods.
The first period, which we are currently in, is called AI: the period in which we treat the problems we see around us as optimization problems and solve them with mathematical methods. The computer's strength compared to us humans is its speed of data processing, and we try to use that strength to develop machine learning algorithms.
Make no mistake: every machine learning algorithm is a kind of mathematical function, and nothing strange happens inside it! Only huge amounts of calculation are done by computers. In this period, we are already able to beat humans (ourselves!) at many problems using these algorithms.
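To see that "learning" really is just optimizing a mathematical function with lots of calculation, here is a minimal, purely illustrative sketch: fitting a line to points by gradient descent on mean squared error. All names, data, and hyperparameters are hypothetical choices for the example, not taken from any particular library.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        # Step downhill: this repeated arithmetic is all the "learning" there is
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Points lying exactly on the line y = 2x + 1
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

After enough steps, `w` and `b` converge toward 2 and 1. Nothing mysterious happens; the computer simply repeats the same arithmetic millions of times faster than we could.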
The second period is general artificial intelligence. Various tests have been defined for it (such as the Turing test), and we will reach the point of general intelligence when we successfully pass them; interestingly, we are at least two decades away from that point. There are many problems ahead of us, but what worries us is not this period. It is the third period.
In the third period, superintelligence, a machine might harm us, whether on purpose or by mistake! For example, we might ask a superintelligent machine to end world hunger, and in one scenario the machine ends hunger by killing the hungry humans!
If the thought of such a scenario seems crazy to you, remember that John von Neumann and Bertrand Russell supported an American nuclear attack on the Soviet Union (to prevent it from acquiring nuclear weapons)! Or think of the American atomic bombings of Hiroshima and Nagasaki. In such cases, human judgment did not serve the benefit of "all" humanity, so the same can happen with a machine too…
If you are a fan of sci-fi movies (like me), you surely remember HAL 9000, the shipboard computer in the movie 2001: A Space Odyssey. The idea of general artificial intelligence inspired Stanley Kubrick and Arthur C. Clarke's design of HAL 9000.

Artificial intelligence can endanger us mainly in two areas:
The first case: when artificial intelligence is used to carry out malicious missions, such as autonomous weapons programmed to kill humans. Such weapons cannot be deactivated simply by pressing a button, because the countries and companies that manufacture them do not want them to be easy to disable.
Or, to understand it better, imagine a machine that uses its data to generate fake news. (If you doubt the impact of fake news on society, listen to Stringcast's Post-Truth podcast to see how fake news helped make Donald Trump president!)
The second case: artificial intelligence is assigned a useful task, but in carrying it out, it does something destructive. The hunger example above is an instance of this. Can we and machines reach a common understanding?! The warnings of Hawking, Musk, and Gates mostly concern these areas.
These risks raise the question of whether we will be able to control artificial intelligence. We still do not have a clear answer. To get closer to one, we must first understand the obstacles on our path to the singularity.

Obstacles in the way of general artificial intelligence
♦ The first obstacle: a comprehensive learning algorithm
In his book The Master Algorithm, Pedro Domingos describes the belief that on the way to the singularity we must first discover an algorithm that can learn without human supervision. He gives the figure below and explains that this super-algorithm should be able to connect all branches of artificial intelligence.
♦ The second obstacle: data processing power
A smart machine must be able to process terabytes of data. With current hardware, processing this amount of data could take years (or even centuries), because current learning algorithms are not as efficient as you might think.
On the other hand, we have not yet discovered the comprehensive algorithm, and there is no reason to expect it to be especially efficient either. So our current tools are not enough to process this amount of data. Statistical projections suggest that by 2030 we will have the hardware needed for such processing.
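A back-of-envelope calculation shows why algorithmic efficiency, not just raw speed, is the bottleneck at terabyte scale. The numbers below are illustrative assumptions, not measurements of any real system.

```python
# Hypothetical figures: ~1 terabyte of 1-byte items, processed at an
# assumed rate of one billion operations per second.
n = 10**12
ops_per_sec = 10**9

# A linear-time algorithm touches each item once: comfortably fast.
linear_seconds = n / ops_per_sec  # 1000 seconds, under 20 minutes

# A quadratic-time algorithm compares every pair of items: hopeless.
quadratic_seconds = n**2 / ops_per_sec
quadratic_years = quadratic_seconds / (3600 * 24 * 365)  # tens of millions of years
```

The same hardware that finishes an O(n) pass in minutes would need geological timescales for an O(n²) pass, which is why an inefficient learning algorithm cannot simply be rescued by waiting for faster chips.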
But what kind of hardware will this be? A quantum processor? Can we produce such hardware in small dimensions? We do not have clear answers to these questions.
♦ The third obstacle: the issue of consciousness
When you play Call of Duty, you are the one controlling the soldier; you control that soldier through your computer. But what controls us? This is a question philosophers have asked many times: what is the origin of human consciousness? What force controls us? Are we really free?
Can we produce a conscious machine? A machine that can think and make decisions completely independently of humans, and that does not need humans to control it?
Conclusion
Our current visions of the future of artificial intelligence may not be clear or optimistic, but we must keep in mind that we are decades, perhaps centuries, away from reaching that point.
We still don’t have clear answers to big questions like consciousness, and we have a long way to go.

