The Robotics Roundup is a weekly newspost going over some of the most exciting developments in robotics over the past week.
In today’s edition we have:
- Shape-Shifting Robot Swarms Self-Assemble, Adapt to the Unfamiliar
- Ask Hackaday: What's the Deal with Humanoid Robots?
- Medics and Machines: Developing Robotic Technologies to Provide Assured Care in the Field
- Purdue researchers enable robots to see in pitch darkness
- Robotic grippers offer unprecedented combo of strength and delicacy
Shape-Shifting Robot Swarms Self-Assemble, Adapt to the Unfamiliar
Researchers at the University of Chicago have developed a new robotic platform called Granulobot. The platform consists of motorized units, each embedded with a Wi-Fi microcontroller and sensors, that use magnets to engage with neighboring units. Inspired by the behavior of granular materials, a Granulobot collective can shift from liquid-like to rigid, much like wet sand: it can deform its overall shape while individual units recombine or move independently. The system blurs the boundaries between soft, modular, and swarm robotics, acting as both a programmable material and a soft robot.
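To make the liquid-to-rigid idea more concrete, here is a minimal sketch (purely illustrative Python, not the actual Granulobot control code) that models a collective as a set of magnetically coupled units whose joints are either free-rotating, giving fluid-like behavior, or locked, giving a jammed, rigid body. The class names, the jamming threshold, and the ring layout are assumptions made for the example.

```python
# Illustrative model only: units couple magnetically to neighbors and either
# let the joint rotate freely (fluid-like) or lock it (rigid), loosely
# mirroring how a granular collective jams into a solid.
from dataclasses import dataclass, field

@dataclass
class Unit:
    uid: int
    locked: bool = False                 # True = joint locked (rigid coupling)
    neighbors: set = field(default_factory=set)

class Collective:
    def __init__(self, n_units):
        self.units = [Unit(i) for i in range(n_units)]

    def couple(self, a, b):
        """Magnetically engage two units (undirected link)."""
        self.units[a].neighbors.add(b)
        self.units[b].neighbors.add(a)

    def set_phase(self, rigid_fraction):
        """Lock a fraction of the units' joints; the rest rotate freely."""
        cutoff = int(rigid_fraction * len(self.units))
        for i, u in enumerate(self.units):
            u.locked = i < cutoff

    def behaves_like_solid(self):
        """Crude jamming criterion: most joints locked => rigid body."""
        locked = sum(u.locked for u in self.units)
        return locked / len(self.units) > 0.8

# Usage: a 6-unit ring that flows, then jams into a rigid shape.
swarm = Collective(6)
for i in range(6):
    swarm.couple(i, (i + 1) % 6)
swarm.set_phase(rigid_fraction=0.2)
print(swarm.behaves_like_solid())   # False: liquid-like
swarm.set_phase(rigid_fraction=1.0)
print(swarm.behaves_like_solid())   # True: rigid, like wet sand
```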
Ask Hackaday: What's the Deal with Humanoid Robots?
Humanoid robots, with their resemblance to humans, have always fascinated people. However, task-specific robots, designed for function rather than form, routinely outperform them: industrial robots are more efficient at assembly-line work, while drones excel at surveying land or delivering packages. Humanoid designs come with compromises, such as the difficulty of replicating human abilities like walking and grasping, and they tend to be expensive to design and manufacture, less durable, and prone to malfunctions. While humanoid robots may appeal to our emotions and offer familiarity, the ideal robot may not necessarily look human.
Medics and Machines: Developing Robotic Technologies to Provide Assured Care in the Field
Researchers at the Johns Hopkins Applied Physics Laboratory (APL) are exploring how artificial intelligence (AI), augmented reality (AR), and robotics can support collaboration between medics, virtual assistants, and autonomous robots in combat and humanitarian crisis situations. The team aims to develop AI-based virtual assistants that provide medical advice to medics in the field, while AR offers real-time visualization of patient conditions and robots take on supporting tasks. The research focuses on adaptive human-robot teaming, combining AI and machine learning to emulate human skill acquisition: humans provide the structure of a task, and robots learn how to accomplish selected subtasks. The goal is to improve outcomes in field care and potentially save lives.
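As a rough illustration of the "humans provide the structure, robots learn the subtasks" idea, the sketch below (hypothetical Python, not APL's system; all step names and the learned_policy helper are invented for the example) represents a medic-defined task as an ordered list of subtasks, some executed by the medic and some delegated to policies the robot has learned.

```python
# Hypothetical sketch: human-defined task structure, robot-learned subtasks.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Subtask:
    name: str
    executor: str                        # "medic" or "robot"
    run: Callable[[], None]

def learned_policy(skill: str) -> Callable[[], None]:
    """Stand-in for a policy the robot acquired from demonstrations."""
    return lambda: print(f"robot performs learned skill: {skill}")

# The medic (human) defines the *structure* of the casualty-care task;
# the robot only fills in subtasks it has learned to do.
triage_task: List[Subtask] = [
    Subtask("assess airway", "medic", lambda: print("medic assesses airway")),
    Subtask("fetch supplies", "robot", learned_policy("fetch supplies")),
    Subtask("apply tourniquet", "medic", lambda: print("medic applies tourniquet")),
    Subtask("monitor vitals", "robot", learned_policy("monitor vitals")),
]

for step in triage_task:
    step.run()
```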
Purdue researchers enable robots to see in pitch darkness
Researchers at Purdue University have developed a new vision method called HADAR (heat-assisted detection and ranging) that enables robots to see in pitch darkness. HADAR combines thermal physics, infrared imaging, and machine learning to achieve fully passive, physics-aware machine perception. Traditional thermal sensing provides limited detail, while other vision systems such as LiDAR, radar, and cameras have drawbacks like signal interference, risk to eyes, or poor performance in low-light conditions. HADAR overcomes these limitations by accurately recovering the temperature, emissivity, and texture of objects in a scene, a decomposition the team calls TeX. In tests, HADAR TeX vision picked up fine textures and details even in low-light conditions. Further improvements in hardware size and data-collection speed are needed for practical use in self-driving cars and robots.
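The underlying idea can be illustrated with the standard thermal-radiation model this kind of decomposition builds on: an infrared sensor measures a mix of radiation emitted by an object and ambient radiation it reflects, so temperature and emissivity have to be untangled from the signal. The sketch below is a simplified, hypothetical illustration of that inversion, not the HADAR algorithm; the band choices, the known-ambient assumption, and the brute-force grid search are all ours.

```python
# Simplified illustration (not the HADAR algorithm): an infrared sensor sees
#   S(lam) = e * B(lam, T) + (1 - e) * R(lam)
# i.e. emission from the object plus reflected ambient radiation, so the
# temperature T and emissivity e must be disentangled from the measurement.
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23     # Planck, speed of light, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * T)) - 1)

bands = [8e-6, 12e-6]                               # two long-wave IR bands (assumed)
ambient = [planck(lam, 300.0) for lam in bands]     # reflected term from 300 K surroundings

# Synthetic "measurement" of an object at 310 K with emissivity 0.6.
true_T, true_e = 310.0, 0.6
measured = [true_e * planck(lam, true_T) + (1 - true_e) * amb
            for lam, amb in zip(bands, ambient)]

# Brute-force inversion: pick the (T, e) pair that best reproduces both bands.
def residual(T, e):
    return sum((e * planck(lam, T) + (1 - e) * amb - m) ** 2
               for lam, amb, m in zip(bands, ambient, measured))

best = min(((T, e / 100) for T in range(250, 401) for e in range(1, 100)),
           key=lambda te: residual(*te))
print(best)   # recovers roughly (310, 0.6)
```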
Robotic grippers offer unprecedented combo of strength and delicacy
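As a quick sanity check on the figures quoted above (our own back-of-the-envelope arithmetic; only the 6.4 kg payload and the ratios come from the report):

```python
# Back-of-the-envelope check of the numbers quoted above (not from the paper).
payload_kg = 6.4          # heaviest object reported lifted
ratio = 16_000            # reported payload-to-weight ratio
prev_factor = 2.5         # "2.5 times higher than the previous record"

gripper_mass_g = payload_kg / ratio * 1000
previous_record = ratio / prev_factor

print(f"implied gripper mass: ~{gripper_mass_g:.1f} g")          # ~0.4 g
print(f"implied previous record ratio: ~{previous_record:.0f}")  # ~6400
```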
Researchers at North Carolina State University have developed a robotic gripping device that is capable of handling various objects, from a drop of water to a 6.4-kilogram weight. The gripper is made using the art of kirigami, which involves cutting and folding sheets of material to form three-dimensional shapes. The design achieves a balance between strength, precision, and gentleness, with a payload-to-weight ratio of about 16,000, which is 2.5 times higher than the previous record. The gripper’s attractive characteristics are primarily driven by its structural design, making it possible to fabricate the grippers out of biodegradable materials. The researchers also integrated the gripping device with a myoelectric prosthetic hand, enhancing its functionality for tasks that are difficult to perform using existing prosthetic devices. The gripper design has potential applications in robotic prosthetics, food processing, pharmaceutical manufacturing, and electronics manufacturing.