Are robots truly conscious? A self-learning robot that breaks through narrow AI has arrived.

www.tmtpost.com/3744739.html

Titanium Media Note: this article comes from AI_era, Science Robotics, and columbia.edu; editors: Jin, Daming, Zhang Qian. Titanium Media is authorized to reprint it.

"The conscious robot" appeared before the Spring Festival.

For decades, self-aware robots have been one of science fiction's favorite subjects, and now something that once existed only in science fiction is getting closer and closer to reality.

Engineering researchers at Columbia University have created a robot that gets to know itself "from scratch": it has no prior knowledge of physics, geometry, or kinematics, and initially does not know whether it is a spider, a snake, or an arm, or what it looks like.

After about 35 hours of initial learning, the robot had built a self-simulation. It then used this self-simulator to reason about and adapt to different situations, handle new tasks, and even detect damage to its own body, rebuild its self-model, and carry on with the task.

This achievement was recently published in Science Robotics.

A 100% success rate: like a person picking up a glass of water with closed eyes

Although both humans and animals can adjust their behavior by imagining themselves, most robots still learn either from simulators and models provided by humans or through time-consuming trial and error. Robots have not yet learned to simulate themselves the way humans do.

One of the authors, Hod Lipson, professor of mechanical engineering at Columbia University and director of the Creative Machines Lab, together with his Ph.D. students, had a four-degree-of-freedom articulated robotic arm learn a model of itself. The process is as follows:

Initially, the robot moves randomly, collecting about 1,000 trajectories, each containing 100 points. Deep learning is then used to build a self-model from this data.

However, the first model created is very inaccurate: the robot does not know what it is or how its joints are connected. After less than 35 hours of training, though, the self-model has become highly consistent with the robot's real physical behavior.
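As a rough illustration of this step, a forward self-model of this kind could be fit to the random trajectories roughly as follows. This is a minimal sketch, not the authors' code: the input and output dimensions, network sizes, and data layout are all assumptions.

```python
import torch
import torch.nn as nn

# Assumed layout: 4 joint angles plus 4 commanded joint changes in, predicted
# end-effector position (x, y, z) out. The paper's actual architecture differs.
class SelfModel(nn.Module):
    def __init__(self, state_dim=4, action_dim=4, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def train_self_model(states, actions, observed, epochs=200):
    """Fit the self-model to randomly collected motor-babbling data (tensors)."""
    model = SelfModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(model(states, actions), observed)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

In this sketch, the 1,000 trajectories of 100 points each would simply be flattened into (state, action, outcome) triples before training.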

The model was then used to perform a "pick and place" task in a closed-loop system, in which the robot recalibrates its position at each step along the trajectory. With closed-loop control, the robot was able to grab objects at specified locations on the ground and place them into a designated container with a success rate of 100%.

Even in open loop, where the robot performs the task based entirely on its internal self-model and receives no external feedback, it completed the pick-and-place task with a success rate of 44%.
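To make the closed-loop versus open-loop distinction concrete, here is a hedged sketch of the two control modes. The random-shooting planner and the `read_sensor` / `execute` callables are illustrative stand-ins, not functions from the paper; the self-model is assumed to map a state and an action to the predicted next state.

```python
import numpy as np

def plan_action(self_model, state, target, n_candidates=64):
    # Illustrative planner: sample candidate actions and keep the one whose
    # predicted outcome lands closest to the target (assumes action dim == state dim).
    candidates = np.random.uniform(-0.1, 0.1, size=(n_candidates, state.shape[0]))
    predicted = np.array([self_model(state, a) for a in candidates])
    return candidates[np.argmin(np.linalg.norm(predicted - target, axis=1))]

def closed_loop_rollout(self_model, read_sensor, execute, target, steps=20):
    """Closed loop: re-read the arm's true state from its position sensors each step."""
    for _ in range(steps):
        state = read_sensor()                      # external feedback
        execute(plan_action(self_model, state, target))

def open_loop_rollout(self_model, start_state, execute, target, steps=20):
    """Open loop: trust only the internal self-model; no external feedback."""
    state = start_state
    for _ in range(steps):
        action = plan_action(self_model, state, target)
        execute(action)
        state = self_model(state, action)          # the model's prediction stands in for sensing
    return state
```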

This may seem simple, but this robotic arm is different from an arm on an assembly line: the latter follows a fixed program, while the former learns entirely on its own.

"This task is like picking up a glass of water with closed eyes, even if it is difficult for humans to complete," said Kwiatkowski, Ph.D., a computer science student at Lipson.

Detecting damage to itself and re-simulating itself

The robot's strength also lies in its ability to detect its own damage.

The researchers used 3D printing to create a deformed part (red in the figure below) to simulate body damage, and the robot was able to detect the change and retrain its self-model. The new model could still perform the pick-and-place task with only a small loss in performance.
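A hedged sketch of that detect-and-retrain idea might look like the following, where the self-model's prediction error on recent executions is compared against a threshold. The threshold value and the `collect_recent_data` / `retrain` callables are illustrative assumptions, not details from the paper.

```python
import numpy as np

def model_error(self_model, states, actions, observed):
    """Average gap between what the self-model predicts and what actually happened."""
    predicted = np.array([self_model(s, a) for s, a in zip(states, actions)])
    return float(np.mean(np.linalg.norm(predicted - observed, axis=1)))

def monitor_and_adapt(self_model, collect_recent_data, retrain, threshold=0.05):
    """If recent experience disagrees with the self-model (e.g. after damage), rebuild it."""
    states, actions, observed = collect_recent_data()
    if model_error(self_model, states, actions, observed) > threshold:
        # The body no longer matches the model: retrain the self-model on the new data.
        self_model = retrain(states, actions, observed)
    return self_model
```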

In addition, the self-modeling robot can be used for other tasks, such as writing text with a marker pen. Perhaps in the future it could even write its own Spring Festival couplets?

Until now, Hod Lipson says, robots have operated using simulations explicitly provided by humans. "But if we want robots to become independent and to adapt quickly to unforeseen situations, then they must learn to simulate themselves."

Extracting a self-model to accomplish multiple different tasks without additional experiments

Self-modeling is not a new technique. Many robotic systems use end-to-end training to learn a task without any model at all. However, skills learned this way usually do not transfer: the robot can only perform the specific task it was trained on.

Therefore, how to make such model-free learning generalize, that is, how to achieve general-purpose end-to-end learning, has become a problem waiting to be solved.

Since the same robot body is used to accomplish many different tasks, the researchers asked: why not abstract a "self-model" of the robot and build on it, letting the robot learn a variety of new tasks while continually adjusting the original self-model along the way? Wouldn't that make continuous self-supervised learning possible?

So they let the robot (or, to be exact, the robotic arm) move around at random on its own, much like a baby waving its hands and feet, and used the resulting data set to train a specially designed neural network, producing a primitive self-model.

Next, using the self-model, the robot begins to perform different tasks (step 3 above), namely "pick-and-place" and "handwriting." These are two completely different tasks, both in terms of the arm's trajectory and the load it carries.

The authors explain that closed-loop control lets the robot recalibrate its actual position at each step along the trajectory using feedback from its position sensors. In contrast, open-loop control relies entirely on the internal self-model, without any external feedback.

As the figure above shows, after switching from "pick-and-place" to "writing," the robot detected inconsistencies when its shape was suddenly changed (step 4 above). The original self-model was then updated with new data (step 5). Once the self-model was updated, the robot quickly adapted and resumed performing its tasks.

The authors stress in particular that the proposed method allows the robot to perform two different tasks automatically without additional physical experiments. In a sense, this is a first step toward making model-free learning extensible.

Breaking through narrow AI: an important step toward self-aware machines

Professor Lipson, who is also a member of Columbia University's Data Science Institute, is best known for his widely viewed 2007 TED talk, in which he also showed off self-aware robots.

Lipson points out that the ability to imagine itself is the key to letting a robot break through the limits of so-called "narrow AI" and acquire more general capabilities.

"Robots will gradually come to recognize themselves, perhaps much as a newborn does in its crib," he said. "We suspect this advantage may also have been the evolutionary origin of human self-awareness. Although our robot's version of this ability is still crude compared with a human's, we believe it is paving the way for the birth of self-aware machines."

Lipson believes that robots and artificial intelligence can offer us a new window onto the age-old mystery of consciousness.

"philosophers, psychologists, and cognitive scientists have been thinking about natural consciousness for thousands of years, but not much. We're still using subjective terms like 'reality canvas' to cover up the reality that we don't understand enough about the problem, but now the development of robotics, Force us to translate these vague concepts into concrete algorithms and mechanisms. "

Professor Lipson

However, Professor Lipson and Kwiatkowski also acknowledge the ethical issues that may arise. They warn: "Self-awareness will lead to systems that are more flexible and adaptive, but it also means they are more likely to get out of control. This is indeed a powerful technology, and we should treat it with caution."

For more exciting content, follow Titanium Media on WeChat (ID: taimeiti) or download the Titanium Media App.
