How robots are mastering the art of dexterity
Credit: Sam Chivers
Humans have long been the masters of dexterity, but as technology advances, robots are learning to grasp ever faster, and they could soon catch up.
“Grasping is the critical grand challenge right now,”
says Ken Goldberg, an engineer at the University of California, Berkeley.
“You can build a robotic system for one specific task: picking up a car part, for example,”
says Juxi Leitner, a robotics researcher at the Australian Centre for Robotic Vision (ACRV), based at the Queensland University of Technology in Brisbane.
“You know exactly where the part is going to be and where the arm needs to be,”
he says, because the robot has picked up the same thing from the same place
“the last million times”.
Any robot hoping to interact physically with the outside world faces an inherent uncertainty in how objects will react to touch.
“We can predict the motion of an asteroid a million miles away far better than we can predict the motion of a simple object being pushed across the table,”
Goldberg says.
Some researchers are tackling that uncertainty with better software; others are improving the hardware, with grippers ranging from pincer-like appendages to human-like hands. And roboticists are also gearing up to tackle the next challenge: manipulating objects once they are gripped in the hand.
Commercial entities, particularly those involved in the movement of varied goods, are following developments closely.
“There’s a big demand. Industry really wants to address this because of how fast e-commerce is growing. With interest greater now than ever, it’s an opportunity for the research to really be put into practice.”
The Amazon Robotics Challenge asks teams of researchers to design and build a robot that can sort the items for a customer’s order from containers and place them together in boxes.
The items are varied, ranging from bottles and bowls to soft toys and sponges, and are initially jumbled together, which makes it a difficult task in terms of both object identification and mechanical grasping.
For each object the robot encounters, the researchers specify which effector it should try first. If that doesn’t work, the robot switches tools.
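As a minimal illustration, the Python sketch below captures that try-then-switch logic. The effector names and the attempt_grasp() stub are hypothetical stand-ins, not the team's actual code.

```python
import random

def attempt_grasp(item, effector):
    """Stand-in for a real grasp attempt; here success is simulated at random."""
    return random.random() > 0.5

def pick_item(item, effector_preference):
    """Try each effector in the researcher-specified order for this item."""
    for effector in effector_preference[item]:
        if attempt_grasp(item, effector):
            return effector          # grasp succeeded with this tool
    return None                      # every tool failed; skip the item for now

# Researchers pre-specify which tool each object should be tried with first.
effector_preference = {
    "sponge":   ["suction_cup", "parallel_gripper"],
    "soft_toy": ["parallel_gripper", "suction_cup"],
}
print(pick_item("sponge", effector_preference))
```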
The main input for the robot is an RGB-D camera, a technology that is popular among roboticists and that can assess both colour and depth.
The camera looks down from the effector into the boxes below. From this vantage point, Leitner explains, the team’s robot, Cartman, labels each pixel according to the object it belongs to, using a form of deep learning known as semantic segmentation.
Once a cluster of pixels representing the desired object is found, the camera’s depth-sensing capability helps the robot to work out how to grab the item.
“In simple terms, we attach to the bit that pokes out the most,”
says Leitner.
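That heuristic reduces to a few lines of array arithmetic: among the pixels labelled as the target object, pick the one closest to the downward-facing camera. The NumPy sketch below illustrates the principle on tiny synthetic arrays; it is an illustration only, not Cartman's actual code.

```python
import numpy as np

def highest_point(labels, depth, target_id):
    """Return (row, col) of the target object's pixel closest to the camera."""
    mask = labels == target_id                    # pixels of the target object
    masked_depth = np.where(mask, depth, np.inf)  # ignore every other pixel
    return np.unravel_index(np.argmin(masked_depth), depth.shape)

# Toy 4x4 scene: object 2 occupies the lower-left region of the box.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 2, 2, 2],
                   [0, 2, 2, 2]])
depth = np.array([[0.9, 0.9, 0.6, 0.6],    # smaller depth = pokes out more
                  [0.9, 0.9, 0.6, 0.7],
                  [0.9, 0.5, 0.4, 0.5],
                  [0.9, 0.5, 0.5, 0.5]])

print(highest_point(labels, depth, target_id=2))  # -> (2, 2)
```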
“Software has been the bottleneck for ages, but it’s becoming more advanced thanks to deep learning,”
says Pieter Abbeel, a deep-learning specialist at the University of California, Berkeley.
These developments have, he says, opened up “whole new avenues of robotics applications”.
“With just a few hundred demonstrations done in this particular way, you can train a deep neural network to acquire a skill, and I don’t mean acquire a specific motion that it’s going to repeatedly execute, but acquire the ability to adapt the motion to whatever it’s seeing in its camera feed,”
says Abbeel.
The software lets an industrial robot pick objects from a pile with a success rate of more than 90%, even if it hasn’t seen those objects before. It can also decide for itself whether to use a parallel-jaw gripper or suction tool for a particular object.
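One plausible way to implement such tool selection, assuming a learned grasp-quality model per effector, is to score every candidate grasp with each tool and execute the highest-scoring combination. In the sketch below, quality_model() is a random stand-in for a trained network, so this shows only the selection logic, not the actual system.

```python
import random

def quality_model(effector, grasp):
    """Random stand-in for a trained network predicting grasp success."""
    return random.random()

def choose_grasp(candidate_grasps):
    """Pick the (effector, grasp) pair with the highest predicted success."""
    options = [(quality_model(e, g), e, g)
               for e in ("parallel_jaw", "suction")
               for g in candidate_grasps]
    score, effector, grasp = max(options)
    return effector, grasp, score

effector, grasp, score = choose_grasp(["grasp_A", "grasp_B", "grasp_C"])
print(f"use {effector} on {grasp} (predicted success {score:.2f})")
```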
“When you pick something like a pen up off a table, the first thing you touch is the table,”
says Oliver Brock, a roboticist at the Technical University of Berlin.
We do not think about where we need to place our fingers. The softness of human hands allows for something called compliant contact: the fingers mould against the surface of the object.
“Because you get a lot of surface contact, you can much more intuitively reach out and grab. With soft fingers, we change the paradigm of grasping,”
says Daniela Rus, a roboticist at the Massachusetts Institute of Technology in Cambridge, Massachusetts.
The fingers are controlled by the movement of pressurized air, which allows them to curl and straighten as required.
“The world is designed for anthropomorphic hands,”
says Brock. But there’s also a romantic element to the anthropomorphic design of his robotic hand. “It’s embarrassing. People, even roboticists, are more fascinated by things that look human,”
he says.
Such soft grippers are already being trialled in a factory setting, handling delicate produce without damaging it.
Another commercial gripper, made by RightHand Robotics, has claws with three flexible fingers, arranged around a central suction cup that can be extended to draw in objects.
The design takes inspiration from birds of prey, says the company’s co-founder Lael Odhner. In these birds, most of the forearm musculature is attached to a single group of tendons that reaches to the tip of the claw, he says.
Similarly, all the power of the motors in Odhner’s robotic claws is put into a single closing motion. This simple action improves reliability, a crucial consideration for grippers intended for commercial use, at the expense of the ability to perform delicate motions. The extendable suction cup makes up for this shortfall.
“It replaces potentially dozens of fine actuators that you would otherwise have to place in the hand,”
says Odhner.
“It’s an extraordinarily powerful concept, but people are just beginning to figure it out,”
says Rus.
“People agree that it is important,” says Brock; it’s just very difficult to do. He is pursuing two approaches to giving his soft robotic hands a sense of touch.
The more-mature method involves embedding tubes of liquid metal in a silicone sheet wrapped around the finger.
His team can then monitor applied forces using the electrical resistance along the tubes. “It’s measuring strain all over the finger and inferring from that, through machine learning, what actually happens to the finger.” The team is currently assessing how many of these strain sensors are required on each finger to measure various forces.
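As a rough sketch of that inference step, one could treat the per-channel resistance readings as features and fit a regressor that maps them to contact force. The data below is entirely synthetic and the model choice is an assumption; the real system would be trained on measurements from the sensorized finger itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sensors = 8                                   # strain channels per finger

# Synthetic training set: resistance readings rise roughly with applied force.
forces = rng.uniform(0.0, 5.0, size=500)        # ground-truth force, newtons
readings = forces[:, None] * rng.uniform(0.5, 1.5, (500, n_sensors)) \
           + rng.normal(0, 0.1, (500, n_sensors))

# Learn the mapping from strain readings to contact force.
model = RandomForestRegressor(n_estimators=50).fit(readings, forces)

new_reading = readings[0:1]
print(f"estimated contact force: {model.predict(new_reading)[0]:.2f} N")
```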
In a proof-of-concept test, a microphone was placed inside the air chamber of a soft finger. The sound that was recorded enabled researchers to identify which part of the finger was touching something, the force of the touch and the material of the object.
The ability to tuck the microphone deep inside the finger, and therefore avoid reducing its compliance, sidesteps a key issue with sensorizing soft fingers. Brock says that the details of this work will be published soon, and that he plans to work with acoustics specialists to improve his design.
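To illustrate how such acoustic sensing might work in software, the sketch below fabricates "tap" sounds for different finger regions, extracts magnitude-spectrum features, and trains a classifier to recover the contact location. Everything here, from the resonant frequencies to the classifier choice, is an assumption for illustration, not the published setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
FS, N = 8000, 1024                      # sample rate (Hz) and window length

def synthetic_tap(region):
    """Fake tap sound: each finger region resonates at a different frequency."""
    freq = {"tip": 900, "middle": 500, "base": 200}[region]
    t = np.arange(N) / FS
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(N)

def spectral_features(signal):
    """Low-frequency magnitude spectrum of the recorded sound."""
    return np.abs(np.fft.rfft(signal))[:128]

# Build a labelled training set of taps on each region of the finger.
regions = ["tip", "middle", "base"]
X = [spectral_features(synthetic_tap(r)) for r in regions for _ in range(30)]
y = [r for r in regions for _ in range(30)]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([spectral_features(synthetic_tap("tip"))]))  # -> ['tip']
```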
Even if robots could achieve human levels of dexterity, Rus points out, “there are lots of things that we cannot pick up with a human hand.” But as robots become increasingly adept at handling variability, more tasks currently performed by humans will become automatable.
“These systems are not super robust yet. If you were to take that system and put it into an Amazon warehouse, I’m not sure how long it would actually work for.”
Story source: Nature