Turtlebot3 + OpenManipulator-X Pick & Place with state machine

I’m currently working on an assignment for which I want to have a Turtlebot3 Waffle Pi + OpenManipulator-X perform AGV and pick and place tasks. The idea is similar to what you see in this video from Robotis: TurtleBot3 45 TurtleBot3 with OpenManipulator - YouTube

The goal is to let the Turtlebot3 move to position X, grab an object (using AR tags for accuracy and consistency), move to position Y, drop the object (again using AR tags) and then move to position Z. The trigger for performing each task will be a signal coming from a server. I want to program these triggers with Python in a state machine.

From the research that I’ve done so far, SMACH seems like a good package to use (in combination with the other relevant and required packages). The reason is that SMACH lets you write states that use ROS packages, and it’s written in Python.
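
To illustrate the kind of structure I have in mind, here is a minimal SMACH sketch of a trigger-driven flow. Note that `read_server_value` is a hypothetical stub standing in for the actual server query, and all state, outcome and trigger names are my own:

```python
import rospy
import smach


def read_server_value():
    # Hypothetical stub: replace with the actual query against the server.
    return 'go_to_tag_0'


class WaitForTrigger(smach.State):
    """Block until the server reports the value we are waiting for."""
    def __init__(self, expected):
        smach.State.__init__(self, outcomes=['triggered'])
        self.expected = expected

    def execute(self, userdata):
        while not rospy.is_shutdown() and read_server_value() != self.expected:
            rospy.sleep(1.0)
        return 'triggered'


class MoveToTag(smach.State):
    """Placeholder state: navigation and fine-positioning would go here."""
    def __init__(self, tag_id):
        smach.State.__init__(self, outcomes=['arrived'])
        self.tag_id = tag_id

    def execute(self, userdata):
        rospy.loginfo('moving to AR tag %d', self.tag_id)
        return 'arrived'


if __name__ == '__main__':
    rospy.init_node('pick_place_sm')
    sm = smach.StateMachine(outcomes=['done'])
    with sm:
        smach.StateMachine.add('WAIT_AT_START', WaitForTrigger('go_to_tag_0'),
                               transitions={'triggered': 'MOVE_TO_TAG_0'})
        smach.StateMachine.add('MOVE_TO_TAG_0', MoveToTag(0),
                               transitions={'arrived': 'done'})
    sm.execute()
```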

Is there anybody who knows a step-by-step tutorial to achieve the result from the Robotis video? There once was a tutorial for Pick and Place on the e-Manual website, but it has been removed. Since this video is so similar to what I want to achieve, a step-by-step tutorial would be a great basis for my project, on which I can expand with the required custom code.

Thanks in advance!

As we were updating our source code for TurtleBot3 and OpenMANIPULATOR-X, there have been some modifications to our contents.
The main reason we abandoned SMACH is that it has not been maintained for a few years and there were bugs.
However, we recently created the same feature under the Manipulation section, where the camera searches for the AR markers and the robot delivers items to designated locations.
Please see TurtleBot3 Home Service Challenge for more details.
You can also try the simulation example.
Please note that the example is written based on ROS 1 Kinetic.
Thank you.

Hi Will, thanks for your fast response!

The Home Service Challenge indeed looks like the behaviour I’m looking for. Thank you for this link, I completely missed this tutorial on your website.

I understand that you abandoned SMACH. Luckily I’m running ROS1 Kinetic on Ubuntu 16.04, so I’ll give the Home Service Challenge tutorial a try.

Thanks again for your quick support :slight_smile:

Oh, this challenge is new. I had stopped using the Waffle and manipulator since all the files moved and I got a bit lost, I guess. I am trying to write my own keyboard controller, as it is on my to-do list, but sadly I have two of these arms and the files conflict between the one on the Waffle and the one on my desktop.
I am sure they will fix it when they see how much it is needed. :upside_down_face:

Yes, this challenge was designed for a new competition that was going to be held in early 2020, but Covid-19 ruined our event :frowning:
The good news is that we still plan to hold this event this year.
As of today, the OpenMANIPULATOR-X and TurtleBot3_manipulation libraries are not quite compatible, but we keep working on making our code more coherent.
Thanks for your interest! :smiley:


Alright, I’ve performed all of the steps of the home service challenge tutorial, which means that I can now run the demo remote launch file with my own map (created using the SLAM tutorial steps). Now, there are multiple things that I would like to change, but after digging and searching through the code, I’m a bit lost on how to achieve what I want.

With the stock home service challenge, the Turtlebot will search for the first AR tag (using coordinates and the camera, I guess?), navigate to this object, grab it and move it from this location to another one (which is also found with coordinates and the camera, I guess?), release the object and then move back to the starting position. These steps are then repeated for the other three objects.

What I want to achieve is the following:

  • Have the Turtlebot wait at the starting position
  • If my Python script reads a certain value from a server, the Turtlebot should move to AR tag 0 (where an object is located), fine-tune its position (I believe this is already implemented in the home service challenge) and give some feedback that the Python script can read/receive
  • Once the position has been fine-tuned, the Turtlebot should wait at the location and give some sort of feedback, so that my Python script can update a value on the server
  • When, during the waiting, my Python script reads another specific value from the server, the Turtlebot should actually grab the object and move to AR tag 1 (where the object has to be released). Again, as soon as the Turtlebot starts moving to the next position, some feedback should be given so that the Python script can update a value on the server.
  • Once the Turtlebot has arrived at AR tag 1, it should position itself in front of it (just like with AR tag 0) and release the object. After releasing the object, the Turtlebot should move to the starting position. Here again, as soon as the Turtlebot starts moving to the starting position, I’d like some feedback that the Python script can read/use.
  • Once arrived at the starting position, I want the Turtlebot to wait again for a signal from my Python script to repeat the steps above. Furthermore, once the Turtlebot arrives at the starting position, I want again a signal that the Python script can read (a rough sketch of the status channel I have in mind follows this list).
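
To make the feedback points above concrete, something like the following robot-side status channel is what I picture; the `/mission_status` topic name and the event strings are my own invention, not something from the home service challenge:

```python
import rospy
from std_msgs.msg import String

# Robot-side status channel (names are placeholders): publish an event string
# at every transition so the automation script can mirror it to the server.
rospy.init_node('mission_reporter')
status_pub = rospy.Publisher('/mission_status', String, queue_size=10, latch=True)

def report(event):
    status_pub.publish(String(data=event))

# For example:
#   report('POSITIONED_AT_TAG_0')  # after fine-tuning finishes
#   report('MOVING_TO_TAG_1')      # when the robot starts driving
```

The automation script would then subscribe with `rospy.Subscriber('/mission_status', String, callback)` and write the received value to the database; latching the publisher means a subscriber that connects late still receives the most recent event.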

Note that with my assignment, I’m only interested in moving one object from point A (AR tag 0) to point B (AR tag 1). With this plan in mind, there are several things I need to do. However, as I mentioned before, I’m a bit lost on how to solve it. So here’s what I need to change in the home service challenge and the issues that I face:

  • All actions should be separated, instead of one mission that performs all the tasks consecutively.
    I think I can do this by simply splitting the scenario.yaml file into separate scenario.yaml files, but I’m not sure if this is correct and (if so) how I can then activate the separate steps (as listed above). My Python script that reads the server values will need to publish the ROS commands, but they will have to be separate commands for each step.
  • I need to update the position of the AR tags in the room.yaml file, but the coordinate system doesn’t make any sense to me, so I have no clue how to adapt the values to my own map.
  • I need to change the different positions of the OpenMANIPULATOR-X, which I can find in the config.yaml file, but I’m not sure how these positions are linked to the ROS commands that I publish. How does this work?
  • As mentioned in the steps above, I will need to receive feedback from ROS when a certain task starts or has been completed. How do I do this, so that my Python script can read this feedback? (See the navigation sketch after this list.)
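
For the last bullet, my current understanding is that navigation goals go through the standard move_base action interface, so a sketch like this (assuming the challenge drives the robot via move_base in the map frame; the goal coordinates are placeholders) would let my script block until a goal finishes and then update the server:

```python
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('nav_trigger')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0   # placeholder coordinates
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()          # blocks until move_base reports a result
if client.get_state() == GoalStatus.SUCCEEDED:
    pass  # update the server value here
```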

These are the main problems I have. If someone could help me get started, that would be great! And if Python is not the way to go for the automation (it pretty much just reads server values and, based on these, commands the Turtlebot to perform certain tasks), then please let me know what would work instead.

The setup that I’m using consists of:

  • Turtlebot3 Waffle Pi with the Raspberry Pi 3B+ running Raspbian with ROS 1 Kinetic
  • Remote PC is Ubuntu 16.04 on a VM that also has ROS 1 Kinetic installed
  • All the packages required for Turtlebot3, OpenMANIPULATOR-X and the Home Service Challenge have been installed
  • The server that I’m communicating with uses a MySQL database, so whatever automation script I will use (currently Python) will need to support communication with this database (a connection sketch follows this list).
  • The Python script is running on the Remote PC
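
For reference, the database side of the automation script currently looks roughly like this; I’m using PyMySQL here, and the host, credentials, table and column names are examples rather than my real schema:

```python
import time
import pymysql  # assumption: PyMySQL; mysql-connector-python works similarly

def read_trigger(name):
    # Host, credentials, table and column names are examples only.
    conn = pymysql.connect(host='192.168.0.10', user='robot',
                           password='secret', db='missions')
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT value FROM triggers WHERE name = %s", (name,))
            row = cur.fetchone()
            return row[0] if row else None
    finally:
        conn.close()

# Poll until the server tells the robot to start the first step.
while read_trigger('step') != 'GO_TO_TAG_0':
    time.sleep(1.0)
```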

I hope I’ve given enough information for somebody to guide me. Every bit of help is appreciated, as I am a bit lost :slight_smile:

Another update: I’ve already created the custom positions of the OpenMANIPULATOR-X. Furthermore, I am able to set these positions by publishing a message through my Python automation script.
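
The publishing part looks roughly like the snippet below; note that the topic name, message type and pose label here are placeholders rather than the actual names from the challenge code:

```python
import rospy
from std_msgs.msg import String

rospy.init_node('arm_commander')
# Placeholder topic and message type: substitute whatever the custom
# positions from config.yaml are actually wired to.
pose_pub = rospy.Publisher('/arm_pose_command', String, queue_size=1)
rospy.sleep(0.5)  # give the publisher time to register with the master
pose_pub.publish(String(data='grab_pose'))
```

So that part is done, three more steps to go: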

  • All actions should be separated, instead of one mission that performs all the tasks consecutively.
    I think I can do this by simply splitting the scenario.yaml file into separate scenario.yaml files, but I’m not sure if this is correct and (if so) how I can then activate the separate steps (as listed above). My Python script that reads the server values will need to publish the ROS commands, but they will have to be separate commands for each step.
  • I need to update the position of the AR tags in the room.yaml file, but the coordinate system doesn’t make any sense to me, so I have no clue how to adapt the values to my own map.
  • I will need some feedback from ROS when the Turtlebot is done positioning itself at either AR tag 0, AR tag 1 or the starting position. This feedback has to be interpreted by the Python automation script. How can I achieve this? (See the sketch after this list.)
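
For that last point, my current idea is a small monitor node like the one below. This is only a sketch: I’m assuming the challenge uses ar_track_alvar for the AR tags, and the marker ID, distance threshold and status topic are made up:

```python
import rospy
from std_msgs.msg import String
from ar_track_alvar_msgs.msg import AlvarMarkers

rospy.init_node('positioning_monitor')
status_pub = rospy.Publisher('/mission_status', String, queue_size=1)

def marker_cb(msg):
    # Made-up completion criterion: tag 0 is visible and within 30 cm.
    for marker in msg.markers:
        if marker.id == 0 and marker.pose.pose.position.z < 0.3:
            status_pub.publish(String(data='POSITIONED_AT_TAG_0'))

rospy.Subscriber('/ar_pose_marker', AlvarMarkers, marker_cb)
rospy.spin()
```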

If anyone can give me some tips on how to solve/tackle (any of) these three steps, that would be great :slightly_smiling_face:

Hi,
Just to make sure the information is shared with other developers: see this issue thread for executing a separate task in the example.