Robotics has become an essential part of our society, supporting industry wherever it is needed under the umbrella of AI, which allows machines and robots to adapt to our needs. Now, however, a new landscape is taking shape with the arrival of robots governed by physical intelligence.
We are talking about a further step towards a challenge that visionaries have pursued for decades: humans and robots communicating and interacting with the environment with little or no difference between them.
Examples of this new generation of robotics are already beginning to emerge, such as the soft robots developed by researchers at North Carolina State University and the University of Pennsylvania (USA) that can navigate complex environments, such as mazes, autonomously without the help of humans or software.
Made from liquid crystal elastomers, the robots take the form of a twisted ribbon, resembling translucent rotini (a type of Italian pasta). The work was recently published in the journal Proceedings of the National Academy of Sciences (PNAS).
Jie Yin, lead author of the study and professor of mechanical and aerospace engineering at NC State, explains that these are machines whose design and materials “allow them to perform in a variety of situations, as opposed to computational intelligence”.
The mechanism works as follows: when the ribbon is placed on a surface of at least 55 degrees Celsius – well above room temperature – the part touching the surface contracts, while the part exposed to the air does not, inducing a rolling motion. In other words, the hotter the surface, the faster the ribbon rolls.
Yin recalls that earlier experiments used smooth-sided rods, which had the disadvantage that, on encountering an object, they simply spun in place. This soft robot, by contrast, being shaped as a twisted ribbon, can negotiate such obstacles without human or computer assistance.
Specifically, the robot can avoid obstacles in two ways. If one end of the ribbon encounters an object, the ribbon turns slightly to steer around it; if the central part encounters an object, it ‘snaps’, rapidly releasing stored energy, and reorients itself.
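The relationship described above (no motion below the actuation temperature, faster rolling as the surface gets hotter) can be captured in a toy model. This sketch is purely illustrative and not from the paper: the 55 °C threshold comes from the article, while the linear speed law and the `GAIN` constant are hypothetical.

```python
# Toy model of thermally driven rolling (illustrative only; not from the paper).
# Assumption: above the ~55 °C actuation threshold, rolling speed grows with the
# excess surface temperature, since a hotter surface means a larger contraction
# mismatch between the ribbon's two sides.

ACTUATION_THRESHOLD_C = 55.0  # temperature at which contraction begins (per the article)
GAIN = 0.1                    # hypothetical speed gained per degree above threshold

def rolling_speed(surface_temp_c: float) -> float:
    """Return a notional rolling speed: zero below threshold, linear above it."""
    excess = surface_temp_c - ACTUATION_THRESHOLD_C
    return max(0.0, excess * GAIN)

for t in (25.0, 55.0, 70.0, 90.0):
    print(f"{t:5.1f} C -> speed {rolling_speed(t):.2f} (arbitrary units)")
```

At room temperature the model predicts no motion at all, matching the article's point that the robot is powered entirely by environmental heat.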
Like robot hoovers
The scientist compares these mechanisms to the robot hoovers used in the home, at least in how they move, with the difference that this robot draws its energy from the environment and operates without any computer programming.
The various experiments carried out by the developers show that the robots can move through a variety of maze-like environments, and also through desert terrain, where they can climb up and down sandy slopes.
Apart from being “interesting and fun to watch”, says researcher Yao Zhao, this opens up new ideas about how soft robots can be designed to harvest thermal energy from the natural environment and autonomously negotiate complex, unstructured settings such as difficult roads and deserts.
The challenge of autonomous robots
Experts now face the challenge of making autonomous robots capable of interacting with humans and the environment through intellectual abilities typically inherent in biological organisms. In other words, we are talking about a new era in Artificial Intelligence as well.
This is called Physical Artificial Intelligence: the aim is to give realism to robotic developments, such as autonomous robots that can interact freely with the environment and communicate naturally with humans, by integrating the best of biological and artificial intelligence.
Putting this concept into practice is not new either. Many studies have been carried out, with varying degrees of success. One example is Imperial College London, which has already tried to lay the foundations for this trend, including a paradigm shift in the training of new specialists.
The team led by Professor Mirko Kovac advocates teaching the subject as a combined discipline within academic programmes dedicated to Artificial Intelligence, materials science, mechanical engineering, computer science, biology and chemistry.
Teaching a robot
How do you teach a robot to perform a specific task, and how do you get it to think through a problem as a human would? These are the questions the team led by Professor Hirokazu Takahashi at the University of Tokyo is working on, developing a system to ‘teach’ robots this step forward in intelligence.
The basis of this study is a culture of neurons produced from living cells and integrated with a computer. Stimulated electrically, the culture generates signals that enable a robot to escape from a maze after ‘learning’ to recognise the environment and the goal to be attained.
The study, published by AIP Publishing, explains that these nerve cells, or neurons, were grown from living cells and acted as a physical reservoir from which the computer could construct coherent signals.
These are considered homeostatic signals: they tell the robot that its internal environment is within a certain range, and they act as a baseline as it moves freely through the maze.
Every time the robot veered or looked in the wrong direction, the neurons in the cell culture were disturbed by an electrical impulse. Throughout the tests, the robot continuously received homeostatic signals interrupted by these disturbance signals until it successfully solved the maze task.
The conclusion for the researchers is that intelligent task-solving skills can be produced by using physical reservoir computers to extract chaotic neural signals and to deliver homeostatic or disturbance signals.
In doing so, the computer creates a reservoir that knows how to solve the task.
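The core idea of reservoir computing, as described above, is that a fixed, untrained dynamical system expands an input signal into a rich internal state, and only a simple linear readout is trained on top. A minimal sketch of that principle, with a random recurrent network standing in for the living neuron culture (the study's actual reservoir), and with synthetic “steady” and “disturbance” signals replacing the real electrical stimuli:

```python
# Minimal echo-state-style reservoir sketch (illustrative only: the study uses
# living neurons as the reservoir; here a fixed random recurrent matrix stands in).
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir size (hypothetical)

# Fixed, untrained reservoir weights, scaled to spectral radius < 1 for stability.
W = rng.standard_normal((N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
w_in = rng.standard_normal(N)

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return its final state."""
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
    return x

# Synthetic stand-ins: "homeostatic" inputs are steady; "disturbance" inputs are
# noisy bursts, loosely mirroring the baseline vs. disturbance signals in the study.
steady = [run_reservoir(0.5 + 0.01 * rng.standard_normal(50)) for _ in range(20)]
bursts = [run_reservoir(0.5 + 2.00 * rng.standard_normal(50)) for _ in range(20)]

# Only the linear readout is trained (least squares) to tell the two classes apart.
X = np.hstack([np.array(steady + bursts), np.ones((40, 1))])
y = np.array([0.0] * 20 + [1.0] * 20)
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)

preds = (X @ w_out > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())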
As an example, Professor Takahashi points to the brain of a primary school child, which cannot solve the mathematics problems in a university entrance exam: the ability to solve tasks is determined by the richness of the repertoire of spatio-temporal patterns that can be generated.
Vision, hearing, touch
Antonio Torralba, who heads the MIT School of Artificial Intelligence and Decision Making, explains in an interview with the newspaper El País that they are working on developing systems that “learn to perceive the world by integrating all these senses (vision, hearing and touch) and that are capable of learning to discover objects and their properties, without the need for a person to provide knowledge about them”.
In this sense, he points out the importance of touch, in addition to the well-known importance of vision, since it is “what really allows us to enter into direct communication with the world around us”.
He considers that incorporating this sense into machines poses no greater problem than vision, given that “touch forms an image on the skin, a pressure map; and the moment you have an input image, it doesn’t matter whether it is in black and white, colour or tactile”.
Therefore, he stresses in the interview with El País, “the fact that it is an image captured by the eye or by the skin makes no difference from a computational point of view, or how the signal will be dealt with later on”.
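Torralba's point can be made concrete: once touch is represented as a 2-D pressure map, exactly the same code processes it as processes a camera image. The sketch below is a hypothetical illustration with synthetic data, applying one edge-detecting convolution unchanged to both modalities; none of it comes from MIT's actual systems.

```python
# Illustrative sketch: the same 2-D operation applies to a camera image and a
# tactile pressure map alike, once both are arrays. Synthetic data throughout.
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution; identical code for any 2-D input."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A standard vertical-edge detector (Sobel kernel).
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

grayscale = np.zeros((8, 8)); grayscale[:, 4:] = 1.0  # camera image: bright right half
pressure = np.zeros((8, 8)); pressure[:, 4:] = 0.8    # skin reading: pressed right half

# The same edge detector finds the boundary in both "senses".
print("edge in image:   ", np.abs(convolve2d(grayscale, sobel_x)).max() > 0)
print("edge in pressure:", np.abs(convolve2d(pressure, sobel_x)).max() > 0)
```

Whether the array came from the eye or the skin, the downstream processing is indifferent, which is exactly the computational equivalence Torralba describes.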
While the fields of vision and hearing have long been under development, the field of touch, explored only very recently, presents a number of challenges.
Achieving these goals will be a big step towards a new generation of robots that can interact with us and accompany us more effectively.