When a robot performs a task such as reaching into a tight space or picking up a fragile object, it must know exactly where its arm is. Researchers at the Carnegie Mellon University (CMU) Robotics Institute in the United States say that attaching a camera to a robotic arm lets the robot quickly build a 3D model of the surrounding environment and track where its arm is at any moment.
If the camera is not accurate enough and the robotic arm is moving, real-time mapping is difficult to achieve. But the CMU team found that by combining the camera with the robotic arm, they could use the joint angles to determine the camera's pose, thus improving the accuracy of the map. Matthew Klingensmith, a Ph.D. student in robotics, says this is critical for tasks such as probing confined spaces.
The researchers will present their findings at the IEEE International Conference on Robotics and Automation, to be held on May 17th in Stockholm, Sweden. Siddhartha Srinivasa, associate professor of robotics, and Michael Kaess, assistant research professor, also participated in the study.
Srinivasa says it is now practical to put a camera or other sensor on a robot arm, because sensors have become smaller and more efficient. This is important, he explained, because a robot's head is usually "a pole with a camera installed on it," so robots cannot perceive their working environment the way a human can.
But simply installing an "eye" on the robotic arm is not enough if the robot cannot see its own hand, because it then cannot sense the hand's position relative to objects in the environment. This is a common problem for mobile robots that perform tasks in unknown environments. A common solution is simultaneous localization and mapping, abbreviated SLAM, in which the robot combines data from cameras, lidar, and wheel odometry to build a 3D map of a new environment while computing its own position within that map.
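The localization half of SLAM can be illustrated with a toy example: fuse a noisy motion prediction (e.g. from wheel odometry) with a noisy sensor measurement (e.g. a camera match against the map), weighting each by its uncertainty. This is a minimal one-dimensional sketch for illustration only, not the CMU system or any particular SLAM algorithm; all names and numbers here are made up.

```python
# Illustrative 1D fusion step: combine two noisy estimates of the robot's
# position, trusting the less uncertain one more (a scalar Kalman-style update).

def fuse(prediction, pred_var, measurement, meas_var):
    """Return a fused position estimate and its reduced variance."""
    gain = pred_var / (pred_var + meas_var)            # weight toward the better source
    position = prediction + gain * (measurement - prediction)
    variance = (1.0 - gain) * pred_var                 # fusion always shrinks uncertainty
    return position, variance

# Wheel odometry predicts 2.0 m (variance 0.5); the camera's map match
# says 2.4 m (variance 0.1). The fused estimate lands near the camera's value.
pos, var = fuse(2.0, 0.5, 2.4, 0.1)
```

Because the camera measurement here is five times less uncertain than the odometry prediction, the fused position sits much closer to 2.4 m, and the resulting variance is smaller than either input's.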
"There are several algorithms that can aggregate these resources and build 3D space, but they have very demanding requirements for sensor accuracy and computation," Srinivasa said.
These algorithms usually assume that nothing is known about the sensor's pose, as when the camera is handheld, Klingensmith said. But when the camera is mounted on a robotic arm, the arm's geometry constrains how the camera can move.
Klingensmith said: "Automatically tracking the joint angles allows the system to build a high-quality map of the environment, even when the camera is moving very fast or when some of the sensor data is missing or inaccurate."
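The idea of recovering the camera's pose from joint angles can be sketched with forward kinematics: chaining each joint's rotation and each link's length gives the position and heading of a camera mounted at the end of the arm. The two-link planar arm below is a simplified illustration under assumed link lengths and angles, not the authors' actual kinematic model.

```python
import math

def camera_pose(joint_angles, link_lengths):
    """Planar forward kinematics: return (x, y, heading) of a camera
    mounted at the tip of the last link."""
    x = y = theta = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                    # each joint rotates the rest of the chain
        x += length * math.cos(theta)     # step along the rotated link
        y += length * math.sin(theta)
    return x, y, theta

# Hypothetical two-link arm with 0.3 m links, joints at 30 and 45 degrees.
x, y, heading = camera_pose([math.radians(30), math.radians(45)], [0.3, 0.3])
```

Because the joint angles are measured directly by the arm's encoders, this pose comes essentially for free, which is why mounting the camera on the arm removes so much of the guesswork a handheld-camera SLAM system must do.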
The researchers demonstrated the approach on an articulated robot that performs real-time localization and mapping with a depth camera mounted on a lightweight robotic arm. When building a 3D model of a bookshelf, its reconstructions were comparable to or better than those produced by other mapping techniques.
"There is still a lot of work to be done to improve this approach, but we firmly believe it has great potential to improve robot manipulation," says Srinivasa. Toyota, the U.S. Office of Naval Research, and the National Science Foundation supported this research.