Robotics Roundup: Mar 11, 2024

robotics-roundup


The Robotics Roundup is a weekly news post covering some of the most exciting developments in robotics over the past week.

In today’s edition we have:

  1. Oh good, the humanoid robots are running even faster now
  2. Method rapidly verifies that a robot will avoid collisions
  3. GITAI Successfully Demonstrates Robotics Construction Capabilities for Lunar Communications Towers
  4. Meta’s open source AI and a tour of Boston Dynamics | Eye on America
  5. Large language models can do jaw-dropping things. But nobody knows exactly why.

Oh good, the humanoid robots are running even faster now

Hangzhou-based startup Unitree Robotics claims that its bipedal robot, the H1 V3.0, is the world’s fastest full-sized humanoid, reaching 7.38 mph versus 5.59 mph for Boston Dynamics’ Atlas. The H1 V3.0 stands 71 inches tall, weighs around 100 pounds, and combines a 3D LiDAR sensor with a depth camera for 360-degree visual information. It can transport crates, jump, climb stairs, and execute choreographed dance routines. The robot currently lacks articulated hands, though Unitree plans to develop and integrate them into future versions. With an estimated cost between $90,000 and $150,000, the H1 could be more affordable than competitors, making it attractive to researchers and companies. For all its speed, however, the H1 still lacks the agility of Atlas and the delicate handling capabilities of Tesla’s Optimus prototype.


Method rapidly verifies that a robot will avoid collisions


MIT researchers have developed a rapid safety-check technique that certifies a robot’s trajectory is collision-free. The method uses sum-of-squares programming to generate a moving hyperplane that keeps the robot on one side and obstacles on the other at every instant; it can distinguish between trajectories that differ by mere millimeters and certify an entire path as safe in a few seconds. This technique could benefit robots that must avoid collisions in cluttered spaces or move quickly. The resulting mathematical proof can be checked using relatively simple math. While highly accurate, the method requires a precise model of the robot and its environment to perform well. Further research aims to speed up the process, for example via specialized optimization solvers.
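To make the separating-hyperplane idea concrete, here is a minimal illustrative sketch, not the MIT algorithm itself: the actual method uses sum-of-squares programming to certify separation for all times along a continuous trajectory, whereas this toy version only checks, at sampled times, that a line exists separating a 2D point robot from a convex polygonal obstacle. All names (`separating_line`, `certify_samples`, the square obstacle) are invented for illustration.

```python
# Toy sketch of the separating-hyperplane idea (NOT the MIT method):
# a point robot follows a 2D trajectory past a convex polygonal obstacle.
# At each sampled time we look for a line a·x = b with the robot strictly
# on one side and every obstacle vertex on the other. The real technique
# certifies this for ALL t via sum-of-squares programming, not samples.

def edge_halfplanes(poly):
    """Yield (a, b) per edge so that a·x <= b holds for every point inside
    the counter-clockwise convex polygon `poly` (a is the outward normal)."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        a = (y2 - y1, -(x2 - x1))          # outward normal for a CCW polygon
        yield a, a[0] * x1 + a[1] * y1     # b = a · (edge start point)

def separating_line(point, poly):
    """Return (a, b) with a·point > b >= a·v for all vertices v of `poly`,
    i.e. a line separating the point from the polygon, or None if inside."""
    px, py = point
    for a, b in edge_halfplanes(poly):
        if a[0] * px + a[1] * py > b:      # point lies strictly outside edge
            return a, b
    return None                            # no separator: point is in polygon

def certify_samples(traj, poly, times):
    """Check that a separating line exists at every sampled time."""
    return all(separating_line(traj(t), poly) is not None for t in times)

# Unit-square obstacle (CCW); robot sweeps left to right at height y = 2.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
safe_traj = lambda t: (-1.0 + 2.0 * t, 2.0)
print(certify_samples(safe_traj, square, [i / 10 for i in range(11)]))  # True
```

Note the gap this sketch leaves open: sample-based checking can miss a collision between samples, which is exactly what the certificate-based approach in the article is designed to rule out.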


GITAI Successfully Demonstrates Robotics Construction Capabilities for Lunar Communications Towers


GITAI USA Inc., a leading space robotics startup, announced a significant milestone in lunar infrastructure construction technology in collaboration with KDDI Corporation. The company demonstrated its robotics technology by building a 5-meter-high communication tower in a simulated lunar environment using GITAI’s Lunar Rover and Inchworm robots, equipped with “grapple end-effectors.” The robots autonomously built the tower, attached a communication antenna and power cables, and later disassembled the structure, simulating maintenance operations. GITAI’s robots operate under Earth gravity (1G), enabling extensive ground-based testing before space deployment. The demonstration validated the robots’ capabilities for scalable lunar development, with potential applications in construction, inspection, and maintenance services for lunar infrastructure. GITAI is also working to raise its Technology Readiness Level (TRL) with an autonomous dual robotic arm system (S2) tech demo onboard the International Space Station (ISS).


Meta’s open source AI and a tour of Boston Dynamics | Eye on America

“Eye on America,” hosted by Michelle Miller, features stories from California, where Meta is sharing its artificial intelligence research with the global community, and from Massachusetts, where an in-depth demonstration of Boston Dynamics’ robots is showcased.


Large language models can do jaw-dropping things. But nobody knows exactly why.


Researchers Yuri Burda and Harri Edwards at OpenAI documented a phenomenon called “grokking,” in which language models suddenly learn a task after prolonged training well past the point of apparent overfitting. This has piqued interest in the research community and deepened the mystery surrounding the inner workings of deep learning. The complexity of large models like GPT-4 and Gemini challenges existing statistical explanations and underscores the fundamental puzzle of generalization in machine learning. Despite rapid advances in the field, achieved largely through trial and error, a comprehensive understanding of why these models work remains elusive. The phenomenon of double descent, in which test performance worsens and then improves again as models grow, has only added to this mystery. Some observations suggest that large language models, built on transformers, behave in ways reminiscent of Markov chains, hinting at hidden mathematical patterns in language. However, understanding these complexities remains a significant challenge. Ongoing debates aim to unravel the underlying principles of AI models, both to enhance their capabilities and to manage potential future risks.