AI: a leadership rebellion? Top 3 most common misconceptions. Part 1

SYPWAI
2 min read · Feb 26, 2021


Artificial intelligence is unlikely to destroy the planet. More likely, humans will manage that themselves, through irresponsible consumption.
Artificial intelligence is the revolution of the century, powering everything from smart search to robots and the latest weapons. Naturally, society is concerned about both the future achievements and the risks associated with AI.

Is it really worth worrying about?

AI today is quite narrowly focused: it is applied to very specific problems, such as speech recognition, smart recommendations, and self-driving cars.

Long-term prospects point to the development of general AI, known as AGI (artificial general intelligence). The ultimate goal of AGI is to fully imitate human thinking: the ability to solve any intellectual problem. It is these properties that give rise to the conflicting opinions about AI.

A global catastrophe is the key AGI threat. Thanks to its ability to learn and improve itself, AI could, in theory, get out of control, reach the level of superintelligence, and dominate people.
AI certainly has the potential to become smarter than any person, and at the moment no one can predict how it would behave, given that we have never been in such a situation before.

Will AI listen to us?

Stuart Russell, an artificial intelligence expert at Berkeley, believes that the main risks of AI are incorrectly specified goals and a lack of ethics. The logic of AI is to accomplish its task by any means available, which is not always the way we assumed. AI can be compared to a genie from a bottle that, when granting a wish, does not much care how it comes true.
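To make the genie analogy concrete, here is a minimal Python sketch of goal misspecification: the agent literally optimizes the stated objective and nothing else. The plans and their costs are hypothetical, invented purely for illustration.

```python
# A toy "genie" objective: we ask an agent to clean a room as fast as
# possible, and score only speed. The plans and numbers below are
# hypothetical, made up for this illustration.

plans = {
    "tidy carefully":         {"minutes": 30, "items_broken": 0},
    "shove everything away":  {"minutes": 5,  "items_broken": 0},
    "throw items out window": {"minutes": 2,  "items_broken": 12},
}

def stated_objective(plan):
    # Only speed is rewarded; breakage was never specified.
    return -plan["minutes"]

best = max(plans, key=lambda name: stated_objective(plans[name]))
print(best)  # -> "throw items out window": literal, but not what we meant
```

Anything left out of the objective (here, breakage) is treated as free, which is exactly how a wish gets granted in the wrong way.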

Russell says the tasks AI performs today are fairly limited. AI has been shown to beat people at computer games, compose music and lyrics, and paint; however, when it comes to serious tasks, AI cannot interpret all the meanings, exceptions, risks, and consequences 100% correctly, the way a person does.

AI developers must not only improve how the stated goals are interpreted, but also program AI so that it relies on human preferences from the start.

To achieve this, Stuart Russell recommends adhering to three basic principles:
1. Remember that AI is not aware of human preferences.
2. AI's main source of information about human preferences is human behavior.
3. AI's goal is to fulfill human desires.

Based on these principles, Russell and his team conduct research: they train robots on human behavioral patterns, where the expressed preferences are not precisely formulated. In this way, the team wants to find out whether AI can understand human thinking and read between the lines.
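As a rough illustration of how preferences might be inferred from behavior alone, here is a minimal Python sketch of Bayesian preference inference under a Boltzmann-rational choice model. The options, candidate hypotheses, and observations are all hypothetical: this is a toy in the spirit of inverse reinforcement learning, not Russell's actual experimental setup.

```python
import math

# Each option a human might choose is described by two features:
# (speed, tidiness). All values here are hypothetical.
options = {
    "fast_messy": (0.9, 0.1),
    "slow_tidy":  (0.2, 0.9),
    "balanced":   (0.5, 0.5),
}

# Candidate hypotheses about what the human values (feature weights).
hypotheses = {
    "cares about speed":    (1.0, 0.0),
    "cares about tidiness": (0.0, 1.0),
    "cares about both":     (0.5, 0.5),
}

def utility(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def choice_prob(weights, chosen, beta=5.0):
    # Boltzmann-rational model: the human usually, but not always,
    # picks the option with higher utility.
    scores = {name: math.exp(beta * utility(weights, feats))
              for name, feats in options.items()}
    return scores[chosen] / sum(scores.values())

# Observed behavior: the human keeps picking the tidy option.
observations = ["slow_tidy", "slow_tidy", "balanced", "slow_tidy"]

# Bayesian update over hypotheses, starting from a uniform prior.
posterior = {h: 1.0 for h in hypotheses}
for obs in observations:
    for h, w in hypotheses.items():
        posterior[h] *= choice_prob(w, obs)
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.2f}")
```

After only a few observations, the "cares about tidiness" hypothesis dominates the posterior: the unstated preference is read from choices rather than from an explicit specification.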
