If you’re ready to dive into the world of robotics with the Koch V1.1 Robotics Kit, this guide will walk you through the initial steps to get started, even if you’re an absolute beginner.
This guide does assume some basic familiarity with using the command line. Wherever possible, I’ll include links any time I mention something that falls outside the scope of this tutorial. Additionally, the example commands provided are for an Ubuntu Linux installation and may differ for other Linux distributions.
The first step in this process (after acquiring your kit) is to assemble it. The following video tutorial will walk you through the assembly process from beginning to end. If this is your first time assembling a DIY kit like this, I recommend preparing all your parts ahead of time, organizing them at your workspace, and watching the video all the way through at least once before starting your own assembly.
Once you’ve completed the assembly and double-checked to ensure that your actuators are in the correct orientation, we can move on to setting up the LeRobot machine learning framework that powers the AI features of the Koch kit. For those who might not be familiar, LeRobot is a comprehensive machine learning toolkit created by Hugging Face. It incorporates models, datasets, and tools for real-world robotics, designed to lower the barrier to entry so that everyone can contribute and benefit from shared datasets and pretrained AI models.
Currently, the only supported installation method for LeRobot is to build it from the source code available on the Hugging Face GitHub repository. The simplest way to do this is to open your preferred terminal and run the following command to clone the repository to your local machine:
git clone https://github.com/huggingface/lerobot.git
Next, you’ll need to move into the newly cloned directory:
cd lerobot
Now, you can begin setting up your Python development environment. Creating a dedicated environment isolates the complete configuration of your LeRobot installation from the rest of your system. This makes it much easier to maintain a consistent development and testing environment, ensuring that your configuration settings stay correct and that the required packages remain installed and functional.
The first step in this process is to ensure that you have the latest version of Python 3 installed on your system, as well as the Miniconda toolkit for managing your Python environments.
To install Python 3:
sudo apt update
sudo apt upgrade
sudo apt install python3
To install the Miniconda toolkit:
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
Once Python and Miniconda are installed, you can create a Python 3.10 virtual environment named lerobot with the following command:
conda create -y -n lerobot python=3.10
After creating this Python virtual environment, activate it using the following command:
conda activate lerobot
You’ll need to activate the environment now in order to continue with the installation, and again any time you open a new terminal and want to work with your LeRobot installation.
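If you want to confirm that the activation worked, you can ask Python itself where it is running from. This is a general Python check, not anything LeRobot-specific:

```python
import sys

# When the lerobot conda environment is active, this path should point inside
# ~/miniconda3/envs/lerobot rather than the system Python installation.
print(sys.prefix)
```

If the printed path still points at your system Python, re-run conda activate lerobot before continuing.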
Before you can actually start working with the toolkit, you’ll need to install some additional software required for LeRobot to function correctly:
First, you’ll need to install ffmpeg in your conda environment:
conda install ffmpeg -c conda-forge
This command will usually download the appropriate version of ffmpeg for your platform, but not all platform implementations of ffmpeg support all the features required by LeRobot. To ensure everything will work properly, you’ll need to confirm that your ffmpeg version supports the libsvtav1 encoder by running the following command:
ffmpeg -codecs | grep libsvtav
After running that command, the output will look something like this:
ffmpeg version n7.1 Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 14.2.1 (GCC) 20250207
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-frei0r --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libdav1d --enable-libdrm --enable-libdvdnav --enable-libdvdread --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgsm --enable-libharfbuzz --enable-libiec61883 --enable-libjack --enable-libjxl --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libplacebo --enable-libpulse --enable-librav1e --enable-librsvg --enable-librubberband --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-nvdec --enable-nvenc --enable-opencl --enable-opengl --enable-shared --enable-vapoursynth --enable-version3 --enable-vulkan
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.100 / 61. 19.100
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
DEV.L. av1 Alliance for Open Media AV1 (decoders: libdav1d libaom-av1 av1 av1_cuvid av1_qsv) (encoders: libaom-av1 librav1e libsvtav1 av1_nvenc av1_qsv av1_amf av1_vaapi)
You’ll need to look for the flag --enable-libsvtav1 in the line starting with “configuration:”. If that flag is present in the output, you’re all set to move on to the next step. However, if libsvtav1 is not supported on your platform, you have a couple of options:
You can target a specific ffmpeg version if needed, with ffmpeg 7.1.1 generally including all required functionality on most platforms:
conda install ffmpeg=7.1.1 -c conda-forge
Alternatively, more advanced users can compile ffmpeg from source and set the --enable-libsvtav1 flag themselves during configuration. This tutorial won’t cover that path in detail, as most users won’t need it.
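Either way, if you’d rather script the check than read the banner by eye, a small helper along these lines works. This is my own sketch, not part of LeRobot, and it assumes your ffmpeg supports the -buildconf flag (modern builds do):

```python
import shutil
import subprocess

def has_libsvtav1() -> bool:
    """Return True if the ffmpeg on PATH was built with --enable-libsvtav1."""
    if shutil.which("ffmpeg") is None:
        return False  # ffmpeg is not installed or not on PATH
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-buildconf"],
        capture_output=True, text=True,
    )
    # Check both streams, since ffmpeg writes its banner and some
    # diagnostics to stderr rather than stdout.
    return "--enable-libsvtav1" in result.stdout + result.stderr

print(has_libsvtav1())
```

If this prints False, install or rebuild ffmpeg using one of the options above before continuing.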
Finally, to install the LeRobot package, run the following command from inside the LeRobot folder to install it along with the DYNAMIXEL SDK to your Conda environment:
pip install -e ".[dynamixel]"
Congratulations! You’ve set up your environment and installed the essential components to start working with the Koch V1.1 Leader-Follower Robotics Kit. Now it’s time to move on to configuring the arms so we can get to using them.
The first step in this process is to find the serial ports associated with each of our Koch arms using the included lerobot.find_port script:
python -m lerobot.find_port
The output should look something like this:
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem5', '/dev/tty.usbmodem3']
[...Disconnect corresponding leader or follower arm and press Enter...]
After disconnecting the cable for the arm you’re identifying (in this case, /dev/tty.usbmodem5), the script will detect which bus has been disconnected and will print the corresponding serial port for your leader or follower arm in the terminal output. You will need this port location for the next step, where you will configure the IDs for each of the DYNAMIXEL servos.
The port of this MotorsBus is /dev/tty.usbmodem5
Reconnect the USB cable.
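Under the hood, the identification trick is simple: snapshot the list of available serial ports, unplug one arm, and diff the two lists. A toy illustration of the idea, with hard-coded example values standing in for the live port enumeration the real script performs:

```python
# Ports enumerated before and after unplugging one arm (example values).
before = {"/dev/tty.usbmodem5", "/dev/tty.usbmodem3"}
after = {"/dev/tty.usbmodem3"}

# The port that disappeared belongs to the arm you just unplugged.
removed = before - after
print(removed)  # {'/dev/tty.usbmodem5'}
```

This is also why the script asks you to unplug only one arm at a time: with both unplugged, the diff could not tell leader from follower.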
Each DYNAMIXEL servo is identified by a unique ID on the bus. Brand-new motors usually come with a default ID of 1, but for proper communication between the motors and the controller, we need to assign a unique ID to each connected DYNAMIXEL servo. We also need to configure the communication speed (baudrate) used to control the DYNAMIXELs over the serial port. If you are repurposing motors from another robot, you will still need to complete this step, as the IDs and baudrate may not match the settings needed for the Koch arms.
To accomplish this, we must connect to each motor individually to configure its settings without interference. These parameters are written to the non-volatile memory of each motor, so you’ll only need to perform this setup once.
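For reference, the six joints of each Koch arm end up with one ID apiece. The gripper’s ID of 6 matches the setup script’s own output; the remaining assignments shown here are my assumption based on LeRobot’s joint-naming convention, so treat them as illustrative:

```python
# Assumed joint-name-to-ID mapping for a Koch arm (gripper = 6 matches the
# setup script's output; the other IDs are illustrative assumptions).
motor_ids = {
    "shoulder_pan": 1,
    "shoulder_lift": 2,
    "elbow_flex": 3,
    "wrist_flex": 4,
    "wrist_roll": 5,
    "gripper": 6,
}
print(motor_ids["gripper"])  # 6
```

Because every factory-fresh motor answers to the same default ID, connecting them one at a time is what lets the script assign these IDs without bus collisions.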
To begin, disconnect all cables from each DYNAMIXEL actuator and run the following terminal command:
python -m lerobot.setup_motors \
--robot.type=koch_follower \
--robot.port=/dev/tty.usbmodem5 # Replace this with the port found for the follower arm in the previous step
After running the command, you should see the following prompt in the terminal:
Connect the controller board to the 'gripper' motor only and press enter.
Plug in the gripper motor and press Enter to apply the settings. Ensure this is the only motor connected to the board and that it is not daisy-chained to any other motors. When you press Enter, the script will automatically set the ID and baudrate for the motor. You should see the following confirmation:
'gripper' motor ID set to 6
The script will then prompt you with the next instruction:
Connect the controller board to the 'wrist_roll' motor only and press enter.
Repeat this process for each motor on the follower arm. After completing the follower arm setup, proceed to configure the leader arm using the following command:
python -m lerobot.setup_motors \
--robot.type=koch_leader \
--robot.port=/dev/tty.usbmodem3 # Replace this with the port found for the leader arm in the previous step
Always be sure to double-check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board, or you may accidentally leave a different actuator connected and misconfigure settings in a way that causes issues down the line. When you are done, the script will simply exit, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor to the controller board.
The next step is to calibrate your robot to ensure that the leader and follower arms have matching physical positions, so they can correctly execute one another’s movements.
Run the following command to calibrate the follower arm:
python -m lerobot.calibrate \
--robot.type=koch_follower \
--robot.port=/dev/tty.usbmodem5 \
--robot.id=my_awesome_follower_arm
Use the port you found for your follower arm, and give the robot a unique name via --robot.id to help you keep track of it.
To complete the calibration process, move the robot so that each joint is positioned at the middle of its range, then press Enter. Next, move all the robot’s joints through their full range of motion. The following video from Hugging Face provides a visual example of how to complete the calibration process:
Once you’ve completed the calibration for both the leader and follower arms, you will have fully set up the Koch v1.1 teleoperation kit and will be ready to begin teleoperation.
To test this, run the following command:
python -m lerobot.teleoperate \
--robot.type=koch_follower \
--robot.port=/dev/tty.usbmodem5 \
--robot.id=my_awesome_follower_arm \
--teleop.type=koch_leader \
--teleop.port=/dev/tty.usbmodem3 \
--teleop.id=my_awesome_leader_arm
As before, substitute the serial ports you found for your own follower and leader arms.
Keep in mind that the ID associated with a robot is used to store the calibration file. It is important to use the same ID whenever working with the same physical robot setup to avoid configuration or calibration issues.
Running this command should allow you to control the position of the follower arm by directly manipulating the leader arm.
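Conceptually, you can think of the robot id as the key under which each arm’s calibration is filed. Here is a hypothetical sketch of that lookup; the real directory layout inside ~/.cache/huggingface/lerobot may differ, so the path and filename below are assumptions for illustration only:

```python
from pathlib import Path

def calibration_file(robot_id: str) -> Path:
    """Hypothetical path for a robot's calibration file, keyed by its id."""
    root = Path("~/.cache/huggingface/lerobot/calibration").expanduser()
    return root / f"{robot_id}.json"

print(calibration_file("my_awesome_follower_arm").name)  # my_awesome_follower_arm.json
```

Whatever the exact layout, the takeaway is the same: a new id means a fresh calibration, which is why reusing the same id for the same physical arm matters.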
Once you’re comfortable with teleoperation, you can record your first dataset to begin training your AI models.
LeRobot uses Hugging Face Hub features to upload datasets. If you haven’t used the Hub before, make sure you can log in via the CLI on your computer using a write-access token. This token can be generated from your Hugging Face account settings on their website. Add your generated token to the CLI by running:
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
It is also recommended to store your Hugging Face repository username in a variable for convenience:
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER # This should print your Hugging Face username
Now you can record and upload your first dataset:
python -m lerobot.record \
--robot.type=koch_follower \
--robot.port=/dev/tty.usbmodem5 \
--robot.id=my_awesome_follower_arm \
--teleop.type=koch_leader \
--teleop.port=/dev/tty.usbmodem3 \
--teleop.id=my_awesome_leader_arm \
--display_data=true \
--dataset.repo_id=${HF_USER}/record-test \
--dataset.num_episodes=2 \
--dataset.single_task="Task name here"
Locally, your recorded dataset will be stored in the folder: ~/.cache/huggingface/lerobot/{repo-id}. At the end of data recording, your dataset will automatically be uploaded to your Hugging Face page. You can view it by running:
echo https://huggingface.co/datasets/${HF_USER}/record-test
Your dataset will automatically be tagged with LeRobot so the community can easily find it. You can also add custom tags to better categorize your recordings for yourself and others who might be interested.
Another useful feature provided by the LeRobot framework is the replay function, which allows you to replay any dataset episode you’ve recorded or episodes from other datasets. This helps you test the repeatability of your robot’s actions and assess transferability across robots of the same model.
You can replay the first episode you recorded on your robot with the following command:
python -m lerobot.replay \
--robot.type=koch_follower \
--robot.port=/dev/tty.usbmodem5 \
--robot.id=my_awesome_follower_arm \
--dataset.repo_id=${HF_USER}/record-test \
--dataset.episode=0 # choose the episode you want to replay
Your robot should replicate the dataset you just recorded! These simple recording and playback features are the foundation of LeRobot, but the framework is capable of much more. In this final section, I’ll show you how to train and test your own machine learning policy. After this, you’ll have all the tools you need to embark on your AI and robotics journey.
LeRobot includes a Python script called lerobot/scripts/train.py for training. A few arguments are required to run the training script correctly, as shown below:
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/record-test \
--policy.type=act \
--output_dir=outputs/train/record-test \
--job_name=act_koch_test \
--policy.device=cuda \
--wandb.enable=true
Here’s a quick breakdown of the command:
- --dataset.repo_id=${HF_USER}/record-test: Specifies the dataset to train on.
- --policy.type=act: Loads pre-configured machine learning settings from configuration_act.py. The policy automatically adapts to your robot’s motor states, actions, and available cameras or other hardware saved in your dataset.
- --policy.device=cuda: Specifies the GPU type to be used for training. Use cuda for Nvidia GPUs or mps for Apple Silicon.
- --wandb.enable=true: Enables Weights & Biases for visualizing training progress (optional, but requires a logged-in wandb account).
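The “automatically adapts” point above is worth unpacking: the policy’s input and output sizes are derived from the features recorded in your dataset rather than hard-coded. A simplified illustration of the idea, using made-up names rather than LeRobot’s actual API:

```python
# Feature shapes as they might be recorded in a Koch dataset: six joint
# positions observed per step, and six target joint positions per action.
dataset_features = {
    "observation.state": (6,),
    "action": (6,),
}

# A policy configured "from the dataset" sizes its layers from these shapes,
# so the same training command works for robots with different joint counts.
input_dim = dataset_features["observation.state"][0]
output_dim = dataset_features["action"][0]
print(input_dim, output_dim)  # 6 6
```

This is why you did not have to tell the training script anything about the Koch arm’s geometry: everything it needs was captured at recording time.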
Checkpoints created during training will be saved in outputs/train/act_koch_test/checkpoints. You can resume from the last checkpoint in the event training is interrupted using the following command:
python lerobot/scripts/train.py \
--config_path=outputs/train/act_koch_test/checkpoints/last/pretrained_model/train_config.json \
--resume=true
Training will likely take several hours, so feel free to take a break here!
Once training is complete, you can upload the latest checkpoint to Hugging Face with:
huggingface-cli upload ${HF_USER}/act_koch_test \
outputs/train/act_koch_test/checkpoints/last/pretrained_model
You can also upload intermediate checkpoints by specifying the checkpoint number with the following command, where CKPT is the number of the checkpoint you want to upload.
CKPT=010000
huggingface-cli upload ${HF_USER}/act_koch_test${CKPT} \
outputs/train/act_koch_test/checkpoints/${CKPT}/pretrained_model
To evaluate your trained policy, use the lerobot.record script again, this time passing the desired policy checkpoint as input. For example, to record 10 evaluation episodes:
python -m lerobot.record \
--robot.type=koch_follower \
--robot.port=/dev/ttyACM1 \
--robot.id=my_awesome_follower_arm \
--teleop.type=koch_leader \
--teleop.port=/dev/ttyACM0 \
--teleop.id=my_awesome_leader_arm \
--display_data=false \
--dataset.num_episodes=10 \
--dataset.repo_id=${HF_USER}/eval_act_koch_test \
--dataset.single_task="Put lego brick into the transparent box" \
--policy.path=${HF_USER}/act_koch_test
This is almost the same command as the one used to record your training dataset, with two key differences:
- The --policy.path argument specifies the path to your trained policy checkpoint. You can use a local path or a model uploaded to the Hugging Face Hub (e.g., ${HF_USER}/act_koch_test).
- The dataset name starts with eval to indicate that you are running inference (e.g., ${HF_USER}/eval_act_koch_test).
With these final steps, you now have the complete workflow to train, evaluate, and deploy machine learning policies on your Koch V1.1 robot!
Congratulations on completing the Koch V1.1 Robotics Kit tutorial! You have now assembled your robot, set up the LeRobot framework, recorded and replayed datasets, and trained your own machine learning policies. These are powerful tools that open the door to further experimentation, learning, and development in the fields of AI and robotics.
Whether you are building more complex tasks, contributing datasets to the Hugging Face community, or developing new policies for novel robotic behaviors, you now have a strong foundation to build on. Continue exploring, collaborating, and pushing the boundaries of what your robot can achieve.
Thank you for following this tutorial, and best of luck on your robotics journey!