TL;DR: We’re building a chat interface that helps you describe your robot and environment in plain language to fine-tune and host VLA-style models for your specific setup. Would you find that useful?
We’re a small group from the Technical University of Denmark (DTU) working on a chat-based interface for fine-tuning robotic models, specifically VLAs (Vision-Language-Action models) and LBMs (Large Behavior Models), using uploaded context such as robot descriptions, environmental scans, and task videos.
Our vision is to make it possible for anyone to:
- Describe their robot and its environment in natural language,
- Upload relevant context (CAD models, camera scans, or demonstrations),
- Run and fine-tune pretrained models on those contexts,
- And store these personalized configurations for their own robots,

so that robots can be implemented and adapted quickly, without deep knowledge of control theory or programming.
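To make that workflow a bit more concrete, here is a purely hypothetical sketch (in Python, since tools like LeRobot are Python-based) of the kind of setup description we imagine a user providing. All field names, file names, and the model identifier are invented for illustration; this is not a real API.

```python
from pathlib import Path

# Purely illustrative: what a user-provided setup description might
# contain on such a platform. All names below are invented examples.
setup = {
    # Plain-language description of the robot and its environment
    "description": (
        "An SO-101 arm mounted on a 60x60 cm desk with a single "
        "overhead RGB camera; task: sort colored blocks into two bins."
    ),
    # Uploaded context: CAD model, scene scan, and task demonstrations
    "context": {
        "cad_model": Path("so101_arm.step"),
        "scene_scan": Path("desk_scan.ply"),
        "demos": [Path("sort_blocks_01.mp4"), Path("sort_blocks_02.mp4")],
    },
    # Pretrained VLA/LBM checkpoint to fine-tune on this context
    "base_model": "example-pretrained-vla",
}
```

The idea is that the chat interface would assemble something like this for you from a conversation, rather than you writing it by hand.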
Right now, we’re exploring how people with home or lab robot arms (e.g., SO-101, LeRobot setups, GR00T integrations, or custom arms) would like to interact with such a platform, and whether this kind of tool would actually help you configure and adapt your robots faster.
We’d love to hear:
- What kind of robot arms or setups you’re using,
- What your biggest pain points are when setting up or teaching tasks,
- And whether a conversational interface like this would fit into your workflow.
If you’re interested, we’d be happy to chat, share early concepts, or collaborate on testing when we have our first prototype.
Thanks for your time and insights!