Hello! My name is Rohan, and I am a first-year computer science major at Johns Hopkins. In late October I participated in the 2024 LeRobot Hackathon using the Koch v1.1 arm, a 6-servo robotic arm built with ROBOTIS Dynamixel servos. The aim of the hackathon was to train a machine learning model to improve the accuracy of the Koch v1.1 arm. I am grateful to ROBOTIS for sponsoring me and providing the parts to build the Koch v1.1 arm, including the OpenRB-150 board, the 3D-printed connectors, and the Dynamixels.
In this article I want to share my experience at the LeRobot Hackathon.
I received the parts from ROBOTIS a few days before the hackathon and spent Friday night assembling the Koch v1.1 arm. The hackathon provided Python code for training and control; Saturday morning found me trying to use this code to control the arm. However, I ran into an issue with the Linux environment and was unable to get it working, so to save time I decided to control the arm from Arduino instead.
The Koch v1.1 arm is designed so that each servo is daisy-chained to the servos in front of and behind it. This reduces the number of direct connections to the OpenRB-150 board, which would otherwise cap the number of servos that can be used. However, it requires a way of telling apart the servos on a chain; Dynamixel hardware does this by assigning each servo a unique ID. I spent Saturday afternoon figuring out how to set these IDs, which I did from Arduino. After that, I used one of the sample programs provided by the Dynamixel library to test random motion in the arm.
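I did this step with the Dynamixel Arduino examples, but the same scan-and-reassign logic can also be done from a PC. Here is a minimal sketch using ROBOTIS's Python Dynamixel SDK; the serial port, baud rate, and the IDs chosen are assumptions, and address 7 is the ID field in the control table of Protocol 2.0 servos like the ones in the Koch arm:

```python
from dynamixel_sdk import PortHandler, PacketHandler, COMM_SUCCESS

PORT = "/dev/ttyUSB0"   # assumption: whatever serial device the chain is on
BAUD = 57600            # factory default baud rate for these servos
ADDR_ID = 7             # control-table address of the ID field (Protocol 2.0)

port = PortHandler(PORT)
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(BAUD)

# Scan the chain: every servo that answers a ping reports under its current ID.
found = [i for i in range(253) if packet.ping(port, i)[1] == COMM_SUCCESS]
print("Servos found on the chain:", found)

# Reassign one servo's ID. Do this with only that servo attached, since two
# servos sharing an ID would both answer and corrupt each other's replies.
OLD_ID, NEW_ID = 1, 6   # assumption: pick IDs that fit your own wiring order
result, error = packet.write1ByteTxRx(port, OLD_ID, ADDR_ID, NEW_ID)
if result == COMM_SUCCESS and error == 0:
    print(f"Servo {OLD_ID} is now ID {NEW_ID}")

port.closePort()
```

The ID is stored in the servo's EEPROM, so each servo only needs this done once.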
Around 3:00 pm I wrote two Arduino programs. One collected data on the servo positions over a period of time. The other took this data and attempted to execute the movements described by the servo positions; for each movement, it recorded the actual servo positions, which were slightly off. Using these two programs, I spent the rest of the evening creating the dataset for the machine learning model. At the start of each datapoint, the arm would be retracted. I then ran the first program and manually moved the arm so that it picked up an object, moved it to a different position, and retracted again. I would then feed that data into the second program, which would attempt to repeat the motion and record any inconsistencies. This produced a dataset with the first program's outputs as the intended movements and the second program's outputs as the actual movements.
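For concreteness, here is roughly how the two logs turn into training examples once they are off the arm. This is a sketch in Python with made-up file names and an assumed format of one line per timestep containing six comma-separated servo positions:

```python
import numpy as np

N_SERVOS = 6

def load_log(path):
    """Load one log: one line per timestep, six comma-separated servo positions."""
    return np.loadtxt(path, delimiter=",").reshape(-1, N_SERVOS)

def make_pairs(intended_path, actual_path, chunk_len=20):
    """Cut one recording into fixed-length chunks of (actual, intended) pairs.
    The actual trajectory (what the replay produced) is the model's input and
    the intended trajectory (the commands that were replayed) is the target."""
    intended = load_log(intended_path)
    actual = load_log(actual_path)
    length = min(len(intended), len(actual))   # the two logs may differ slightly
    pairs = []
    for start in range(0, length - chunk_len + 1, chunk_len):
        pairs.append((actual[start:start + chunk_len],
                      intended[start:start + chunk_len]))
    return pairs

# Hypothetical usage, one pick-and-place demonstration per pair of files:
# dataset = make_pairs("ep01_intended.csv", "ep01_actual.csv")
```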
Sunday morning and afternoon I implemented the Action Chunking with Transformers (ACT) algorithm using Python and PyTorch. ACT uses transformer models, which form the basis of large language models such as ChatGPT. Transformer models are useful for mapping between two sequences where each item in the sequence depends on the other items. For example, the outputs of the first and second Arduino programs are sequences of servo positions. When the second Arduino program tries to copy the output of the first program, any errors in the servo positions compound over time. Using ACT, we can learn a mapping from the second program's outputs (actual movements) to the first program's outputs (intended movements); feeding a desired trajectory through this mapping yields commands that are pre-compensated for the errors replay introduces, which improves the accuracy of the Koch v1.1 arm.
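My implementation followed the ACT recipe, but as a rough illustration of the shape of the model, here is a heavily simplified sketch: a plain transformer encoder (no CVAE and no camera observations, both of which full ACT has) that maps a chunk of actual servo positions to the corresponding intended positions. The layer sizes and chunk length here are arbitrary choices for the sketch:

```python
import torch
import torch.nn as nn

class TrajectoryMapper(nn.Module):
    """Maps a chunk of actual servo positions to the intended positions that
    produced them. A simplified stand-in for ACT that keeps the action-chunk
    idea but drops the CVAE and image encoder."""

    def __init__(self, n_servos=6, chunk_len=20, d_model=64, nhead=4, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(n_servos, d_model)
        self.pos = nn.Parameter(torch.zeros(1, chunk_len, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_servos)

    def forward(self, x):                       # x: (batch, chunk_len, n_servos)
        h = self.embed(x) + self.pos[:, : x.size(1)]
        return self.head(self.encoder(h))       # (batch, chunk_len, n_servos)


def train_step(model, optimizer, actual, intended):
    """One gradient step on a batch of (actual, intended) chunks."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(actual), intended)  # ACT uses an L1 reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()
```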
Sunday evening I trained ACT on Saturday's dataset. I tested it by running the first Arduino program while manually guiding the arm, feeding the recorded trajectory into the ACT model, and then feeding the model's output into the second Arduino program for execution.
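The test loop amounted to something like the sketch below, reusing the TrajectoryMapper from above; the file names and checkpoint path are placeholders:

```python
import numpy as np
import torch

CHUNK_LEN = 20

# 1. A new demonstration recorded by the first Arduino program (intended positions).
demo = np.loadtxt("demo_intended.csv", delimiter=",").reshape(-1, 6)

# 2. Run each chunk through the trained model to get corrected commands.
model = TrajectoryMapper()
model.load_state_dict(torch.load("act_mapper.pt"))  # hypothetical checkpoint
model.eval()

corrected = []
with torch.no_grad():
    for start in range(0, len(demo) - CHUNK_LEN + 1, CHUNK_LEN):
        chunk = torch.tensor(demo[start:start + CHUNK_LEN], dtype=torch.float32)
        corrected.append(model(chunk.unsqueeze(0)).squeeze(0).numpy())
corrected = np.concatenate(corrected)

# 3. Save the corrected commands for the second Arduino program to replay.
np.savetxt("demo_corrected.csv", np.round(corrected).astype(int),
           delimiter=",", fmt="%d")
```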
I enjoyed participating in the hackathon and learned a lot about the Koch v1.1 arm and ACT.