Today’s computers can be compared to dogs or very young children. Like them, these systems learn by observation and improve by trial and error. However, neither process happens without human involvement: people have to teach, give hints and correct mistakes. One of the basic challenges facing scientists working on artificial intelligence (AI) is finding a way to develop these systems into increasingly autonomous solutions.

The robot and its guardian

Machines learning under supervision receive huge amounts of data in the form of images, text or sound. Whole teams of specialists are responsible for providing this data. The information feeds the algorithms that tell computers what to look for, and it takes an enormous amount of it before a system can recognize what it sees and hears. The teacher and guardian still has to be human. Is there any chance of changing this?

A reward for results

One of the methods for bringing machines closer to greater autonomy is reinforcement learning. The approach, pioneered by Professor Richard Sutton of the University of Alberta in Canada, is based on rewarding a system for successful outcomes. The principle is very simple: a system that is assigned a task will keep pushing toward completing it, by trial and error, until it succeeds. Does this sound familiar? That’s right, Professor Sutton’s research rests on the same principle as the famous rat experiments, which showed that clever rodents can even learn to drive toy cars, just to get a tasty reward (as demonstrated in an experiment conducted at the University of Richmond in 2019).
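The reward-driven, trial-and-error loop itself is simple enough to sketch in a few lines. The snippet below is a minimal illustration of tabular Q-learning, one standard reinforcement-learning recipe; the toy corridor environment, the single reward at the goal and all parameter values are invented for the example and have nothing to do with Professor Sutton’s actual experiments.

```python
import random

# A toy, hypothetical environment: the agent starts at position 0 in a short
# corridor and earns a reward only when it reaches the goal at position 4.
GOAL = 4
ACTIONS = (-1, +1)            # step left or step right

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: the learned estimate of future reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):              # many attempts, improving by trial and error
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward the reward plus the discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the preferred action in every state is "move right", toward the prize.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```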

Predicting and playing chess (with oneself)

Another term related to machines developing on their own is “predictive learning”. Under this approach, machines are expected to recognize certain patterns by themselves so that they can predict outcomes and decide how to act.
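As a loose illustration of that idea (everything below, from the noisy sine-wave “observations” to the window length and the linear model, is invented for the example), a predictor can be trained to guess the next value of a signal from the few values that precede it:

```python
import numpy as np

# Toy data: a noisy sine wave standing in for any stream of observations.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)

# Build (pattern, next value) pairs: the last 10 observations predict the 11th.
window = 10
X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]

# Fit a simple linear predictor (least squares) on the first 400 examples...
w, *_ = np.linalg.lstsq(X[:400], y[:400], rcond=None)

# ...and check how well it anticipates the unseen remainder of the signal.
predictions = X[400:] @ w
print("mean absolute error on unseen data:", np.abs(predictions - y[400:]).mean())
```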

Researchers also use self-play to push computers to learn faster and more effectively: the machine plays against itself, gradually accumulating experience and, with it, skill.
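Here is a toy version of self-play, with every detail (the game of Nim, the pile size, the reinforcement amount) chosen purely for illustration: a single policy plays both sides of the game, and the moves made by whichever side wins are reinforced, so the program improves using nothing but its own games.

```python
import random
from collections import defaultdict

# Toy game: Nim with a single pile of sticks; whoever takes the last stick wins.
PILE, MOVES = 12, (1, 2, 3)
prefs = defaultdict(lambda: {m: 1.0 for m in MOVES})   # pile size -> move weights

def choose(pile):
    legal = [m for m in MOVES if m <= pile]
    weights = [prefs[pile][m] for m in legal]
    return random.choices(legal, weights=weights)[0]

for game in range(20000):
    pile, player = PILE, 0
    history = {0: [], 1: []}
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player                      # the side that took the last stick
    for state, move in history[winner]:      # reinforce the winner's choices
        prefs[state][move] += 0.1

# With enough self-play the preferred moves tend toward the classic strategy:
# leave your opponent a multiple of four sticks whenever you can.
print({p: max(prefs[p], key=prefs[p].get) for p in range(1, PILE + 1)})
```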

I look, I learn – I evolve

Specialists place a lot of faith in machines learning by observing and drawing conclusions from a small set of basic data. Under this approach, a computer that receives only part of the information (e.g. a fragment of a video) should be able to predict what happens next. However, it is not that simple. According to Turing Award winner Dr. Yann LeCun, for a machine to learn to draw conclusions from such seeds of information, it must first build up a certain body of data itself. The scientist gives the example of a system which, after watching millions of videos on YouTube, would be able to “distill a certain representation of the world from them”, which would then become the basis for its decisions. This is not as obvious as it sounds. Before the computer can build a proper picture of reality, it must first become aware of the existence of animate and inanimate objects, which behave completely differently. “Inanimate objects have predictable trajectories, animate ones do not”, the scientist pointed out in an interview with The New York Times in April.

The creator and critic – the ideal arrangement

In 2017, the Google Brain AI laboratory showed off its work on Generative Adversarial Networks (GANs). A GAN is supposed to generate completely new content based on previously collected information. The method consists of pitting two algorithms against each other: a creator and an evaluator. In practice, one bot is tasked with creating new content on the basis of the knowledge it has gained from the “real world”, while the other criticizes these creations, scoring their imperfections.

This confrontational method was meant to help the system create ever more realistic images, sounds and other completely original creations. Thanks to the cooperation between the “creator” and the “critic”, the former’s works became far more realistic. According to Google’s experts, this process should eventually allow machines to learn without human intervention.
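To make the creator/critic loop concrete, here is a deliberately tiny sketch in which every detail is invented: the “real world” is just numbers centred on 4.0, the creator only learns an offset, and the critic is a simple logistic scorer. Real GANs put neural networks on both sides, but the adversarial structure is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real world" data the creator has to imitate: numbers drawn around 4.0.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

# Creator: shifts random noise by a learnable offset b (it starts far from the data).
# Critic: a logistic scorer sigmoid(w*x + c) that should rate real data high, fakes low.
b = -3.0
w, c = 0.0, 0.0
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
lr, batch = 0.05, 64

for step in range(3000):
    # --- critic update: push scores of real samples up and of generated ones down ---
    x_real = real_samples(batch)
    x_fake = rng.standard_normal(batch) + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake).mean()
    c -= lr * (-(1 - d_real) + d_fake).mean()

    # --- creator update: shift b so the critic scores its output higher ---
    x_fake = rng.standard_normal(batch) + b
    d_fake = sigmoid(w * x_fake + c)
    b -= lr * (-(1 - d_fake) * w).mean()   # gradient of -log D(fake) w.r.t. b

print(f"creator's offset after training: {b:.2f} (the real data is centred on 4.0)")
```

The critic keeps finding faults in the creator’s output, and the creator keeps adjusting until the critic can no longer reliably tell the two apart.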

Let them go, let them learn!

Sergey Levine is an assistant professor at the University of California, Berkeley and head of its Robotic AI & Learning Lab. The idea behind his department is to create algorithms general enough to be placed in robots performing real activities in the real world. In Levine’s vision, machines are to study their environment and build up knowledge about it on that basis. This is supposed to lead to a situation in which a robot can imagine something that might happen and then try to do it on its own. In the final stage of development, the robots would be connected into a network so that they can share the knowledge they acquire and thus learn from each other.

Computer science meets biology

In an article published in Neural Networks in July 2019, a team of scientists from the University of Southampton described a general architecture that enables artificial intelligence to create original strategies for learning and adapting to changing scenarios. The solution, called “adaptive perception”, draws on both computer science and biology. The method minimizes the limitations of neural networks in reinforcement learning.

The researchers concluded that it is possible to avoid using a neural network when it is not necessary. Instead of training the network with an algorithm that will not work in every scenario, the AI was allowed to decide for itself how to learn and when to use the neural network.

The system was put through a maze test, which showed that the robot could search the corridors and change its behavior when it recognized a familiar environment. As a result, the system reached its goal even when starting from random positions.
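The sketch below is only a loose, generic illustration of that behaviour, not the architecture described in the Neural Networks paper: the maze, the memory scheme and the step limit are all invented. The agent searches at random in unfamiliar territory and falls back on what it remembers as soon as it recognizes a cell from which it has already solved the maze.

```python
import random

# Tiny grid maze: 'S' marks a start cell, 'G' the goal, '#' are walls.
MAZE = ["#######",
        "#S..#G#",
        "#.#.#.#",
        "#.#...#",
        "#######"]
GOAL = (1, 5)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]
memory = {}                                  # cell -> move that once led to the goal

def run(start):
    pos, path = start, []
    for _ in range(2000):                    # give up after 2000 steps
        if pos == GOAL:
            for cell, move in path:          # remember what worked on this run
                memory[cell] = move
            return len(path)
        # Familiar territory: reuse knowledge. Unfamiliar: explore at random.
        move = memory[pos] if pos in memory else random.choice(MOVES)
        nxt = (pos[0] + move[0], pos[1] + move[1])
        if MAZE[nxt[0]][nxt[1]] != "#":      # walls block the move
            path.append((pos, move))
            pos = nxt
    return None

open_cells = [(r, c) for r, row in enumerate(MAZE)
              for c, ch in enumerate(row) if ch != "#"]
for trial in range(5):                       # random start positions, as in the test
    start = random.choice(open_cells)
    print(f"trial {trial}: started at {start}, reached the goal in {run(start)} steps")
```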

When will robots start learning on their own? According to Dr. LeCun, building a machine as intelligent as a human being is only a matter of time.

