SUMMARY

I joined the Spring 2021 Robotics Colab (12 weeks) at Circuit Launch. The team was composed of three participants and a mentor. We used Reachy, an open-source humanoid robot by Pollen Robotics, and were interested in solving the pick-and-place problem with it using ROS1.

KEY POINTS

To achieve the pick-and-place task, we used the following tools: ROS1, MoveIt, Gazebo, RViz, AprilTag, an external camera, and the URDF provided by Pollen Robotics.

The code can be found here. Overall, this was a great experience: I got to work on a physical robot and become familiar with ROS. In the near future, I would like to continue working on grasping, but using state-of-the-art object detection.

[Image: coordinate frames of the robot, the cube, and the camera]

APRILTAG

To pick up an object, we first need to find its location in 3D space relative to the robot; more specifically, we start from the camera's frame. There are many approaches to this, including state-of-the-art deep learning methods, but we decided to keep it simple at first by using an AprilTag. An AprilTag looks similar to a 2D barcode, but detecting one yields its full 6-DOF pose (x, y, z, roll, pitch, yaw).

By placing an AprilTag on the cube, we can get the cube's location relative to the camera frame. But we needed a way to translate that into the robot's frame, so we placed a second AprilTag on the robot's chest.
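
To make the two-tag idea concrete, here is a small sketch of the frame math with placeholder numbers: if the camera sees both tags, the cube's pose in the chest (robot) frame is the inverse of the camera-to-chest transform composed with the camera-to-cube transform.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder tag poses as seen from the camera (identity rotations for brevity).
T_cam_chest = make_T(np.eye(3), [0.0, 0.2, 0.8])  # chest tag in the camera frame
T_cam_cube = make_T(np.eye(3), [0.1, 0.0, 0.6])   # cube tag in the camera frame

# Chain the transforms: invert camera->chest, then apply camera->cube.
T_chest_cube = np.linalg.inv(T_cam_chest) @ T_cam_cube
print(T_chest_cube[:3, 3])  # cube position expressed in the chest (robot) frame
```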

Although Reachy already has a camera built in, the previous Colab team had made hardware modifications from the neck up, so we decided to use an external camera.

The image above shows the frames of the robot, the cube, and the camera. As you can see, the cube's location can now be expressed in terms of the robot's frame.

For this, we used the apriltag_ros package as well as the tf package. The `robot_state_publisher` published all the static transform frames, which could be displayed in RViz.

With this setup, as long as the external camera was calibrated properly and its calibration data was published to the apriltag_ros node, we could move the camera anywhere and still get the cube's location relative to the robot.
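
Under the hood, tf does this chaining for us. Here is a minimal sketch of the lookup, assuming the tag frames are named `chest_tag` and `cube_tag` (the actual frame names depend on the apriltag_ros configuration):

```python
#!/usr/bin/env python
import rospy
import tf2_ros

rospy.init_node("cube_in_robot_frame")

# The buffer fills up with everything published on /tf and /tf_static.
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    try:
        # Pose of the cube tag expressed in the chest tag's frame.
        t = tf_buffer.lookup_transform("chest_tag", "cube_tag",
                                       rospy.Time(0), rospy.Duration(1.0))
        p = t.transform.translation
        rospy.loginfo("cube in robot frame: x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        rospy.logwarn("waiting for tag detections...")
    rate.sleep()
```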

[Image: TF tree generated with the tf package]

TF TREE

The image above was generated using the tf package. A static transform was published from the pedestal of the robot to the chest to create a new frame for the AprilTag.

As you can see, the tree was extended to include the camera frame, the AprilTag on the chest, and the AprilTag on the cube.
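
For reference, a static transform like the chest-tag frame can be broadcast from a small node. This is only a sketch; the frame names and offsets are placeholders (ours were measured by hand), and the same thing can be done with tf's `static_transform_publisher` from a launch file.

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("chest_tag_frame")

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "pedestal"    # assumed name of the robot's root link
t.child_frame_id = "chest_tag"    # hypothetical name for the chest tag frame
t.transform.translation.x = 0.05  # placeholder offsets in meters
t.transform.translation.z = 0.30
t.transform.rotation.w = 1.0      # identity rotation

# Static transforms are latched on /tf_static, so sending once is enough.
broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```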

Z OFFSET COMPENSATION

We noticed that when we sent a goal position to Reachy, the result was never "perfect". This was due to several factors:

  • The hardware components are not rigid. Because the parts are 3D printed, there is a lot of play, which reduces accuracy.

  • The pose of the object published by the AprilTag detector is not perfect.

  • The position of the AprilTag on Reachy's chest was not measured accurately.


To resolve this issue, we decided to generate a 3D map of the error. More concretely, we attached the cube with the AprilTag to Reachy's hand. By taking the difference between the "real" position of the AprilTag (the position reported by the camera) and the end-effector position Reachy computed internally, we could measure the error. We did this for 27 waypoints (a 3 × 3 × 3 grid). A script collected the data at each point, and once the map was created, we could interpolate within that space to compensate for the error.
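
Here is a sketch of the compensation step, assuming the error map is stored as a dense 3 × 3 × 3 grid; the grid span and the zero-filled data are placeholders for the values the script actually collected.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid coordinates of the 27 waypoints (meters; placeholder span).
xs = ys = zs = np.linspace(-0.1, 0.1, 3)

# error[i, j, k] = measured position - commanded position at each waypoint.
# The last axis holds (dx, dy, dz); zeros here stand in for the real data.
error = np.zeros((3, 3, 3, 3))

interp = RegularGridInterpolator((xs, ys, zs), error)

def compensate(goal):
    """Shift a goal position by the interpolated error at that point."""
    correction = interp(np.asarray(goal).reshape(1, 3))[0]
    return np.asarray(goal) - correction

print(compensate([0.0, 0.05, -0.02]))
```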

This task was also difficult because Reachy's motors did not have enough torque to hold the arm steady at certain locations.

PRODUCT

For the demo, we created a state machine that transitioned between states accordingly and caught errors and edge cases. The video below shows Reachy grabbing the cube and placing it at "home". Here, the home position was programmed manually, but it could be specified with an AprilTag as well. Moreover, the approach could be extended to handle more objects.
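
As a sketch, such a control flow could be written with SMACH, a common ROS state machine library. The states and transitions below are simplified stand-ins, not the project's actual code:

```python
import smach

class LocateCube(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["found", "not_found"])
    def execute(self, userdata):
        # Look up the cube tag via tf; report failure if it is not visible.
        return "found"

class Grasp(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["grasped", "failed"])
    def execute(self, userdata):
        # Send the compensated goal to the arm and close the gripper.
        return "grasped"

class PlaceHome(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=["done"])
    def execute(self, userdata):
        # Move to the home position and release the cube.
        return "done"

sm = smach.StateMachine(outcomes=["succeeded", "aborted"])
with sm:
    smach.StateMachine.add("LOCATE", LocateCube(),
                           transitions={"found": "GRASP", "not_found": "aborted"})
    smach.StateMachine.add("GRASP", Grasp(),
                           transitions={"grasped": "PLACE", "failed": "LOCATE"})
    smach.StateMachine.add("PLACE", PlaceHome(),
                           transitions={"done": "succeeded"})

sm.execute()
```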

[Video: Reachy grabbing the cube and placing it at home]