
Empowering robots with human-like perception to navigate unwieldy terrain


Credit: General Robotics Lab

The wealth of information our senses provide to help our brains navigate the world around us is remarkable. Touch, smell, hearing, and a strong sense of balance are crucial to making it through what seem to us like easy outings, such as a relaxing hike on a weekend morning.

An innate understanding of the canopy overhead helps us figure out where the path leads. The sharp snap of branches or the soft cushion of moss informs us about the stability of our footing. The thunder of a tree falling or branches dancing in strong winds lets us know of potential dangers nearby.

Robots, in contrast, have long relied solely on visual information from sensors such as cameras or LiDAR to move through the world. Outside of Hollywood, multisensory navigation has long remained challenging for machines. The forest, with its beautiful chaos of dense undergrowth, fallen logs and ever-changing terrain, is a maze of uncertainty for traditional robots.

Now, researchers from Duke University have developed a novel framework named WildFusion that fuses vision, vibration and touch to enable robots to “sense” complex outdoor environments much like humans do. The work is available on the arXiv preprint server and was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA 2025), which will be held May 19–23, 2025, in Atlanta, Georgia.






“WildFusion opens a new chapter in robotic navigation and 3D mapping,” said Boyuan Chen, the Dickinson Family Assistant Professor of Mechanical Engineering and Materials Science, Electrical and Computer Engineering, and Computer Science at Duke University. “It helps robots to operate more confidently in unstructured, unpredictable environments like forests, disaster zones and off-road terrain.”

“Typical robots rely heavily on vision or LiDAR alone, which often falter without clear paths or predictable landmarks,” added Yanbaihui Liu, the lead student author and a second-year Ph.D. student in Chen’s lab.

“Even advanced 3D mapping methods struggle to reconstruct a continuous map when sensor data is sparse, noisy or incomplete, which is a frequent problem in unstructured outdoor environments. That’s exactly the challenge WildFusion was designed to solve.”

WildFusion, built on a quadruped robot, integrates multiple sensing modalities, including an RGB camera, LiDAR, inertial sensors, and, notably, contact microphones and tactile sensors. As in traditional approaches, the camera and the LiDAR capture the environment’s geometry, color, distance and other visual details. What makes WildFusion special is its use of acoustic vibrations and touch.







WildFusion uses a combination of sight, touch, sound and balance to help four-legged robots better navigate difficult terrain like dense forests. Credit: Boyuan Chen, Duke University

As the robot walks, contact microphones record the unique vibrations generated by each step, capturing subtle differences, such as the crunch of dry leaves versus the soft squish of mud.

Meanwhile, the tactile sensors measure how much force is applied to each foot, helping the robot sense stability or slipperiness in real time. These senses are complemented by an inertial sensor that collects acceleration data to assess how much the robot is wobbling, pitching or rolling as it traverses uneven ground.
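To make the idea concrete, here is a minimal, hypothetical sketch (in Python with NumPy, not the authors' code) of how one footstep's contact-microphone audio, foot-force readings and accelerometer samples might be summarized into a fixed-length descriptor before any learned encoding. All function names, window lengths and units are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): summarizing one footstep's raw signals
# into a fixed-length feature vector. All names and sizes are assumptions.
import numpy as np

def audio_features(mic_signal: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Coarse spectral energy per frequency band for one step's contact-mic clip."""
    spectrum = np.abs(np.fft.rfft(mic_signal)) ** 2          # power spectrum
    bands = np.array_split(spectrum, n_bands)                 # coarse frequency bands
    return np.log1p(np.array([b.sum() for b in bands]))       # log band energies

def force_features(foot_forces: np.ndarray) -> np.ndarray:
    """Simple statistics of the force profile measured at one foot during a step."""
    return np.array([foot_forces.max(), foot_forces.mean(), foot_forces.std()])

def imu_features(accel: np.ndarray) -> np.ndarray:
    """How much the body wobbles: per-axis variance of acceleration over the step."""
    return accel.var(axis=0)  # accel has shape (T, 3): x, y, z samples

# Example with synthetic data for a single step
rng = np.random.default_rng(0)
step_descriptor = np.concatenate([
    audio_features(rng.normal(size=8000)),      # short contact-mic clip
    force_features(rng.uniform(0, 50, 200)),    # assumed force samples in newtons
    imu_features(rng.normal(size=(200, 3))),    # accelerometer samples
])
print(step_descriptor.shape)  # one fixed-length descriptor per footstep
```

In the actual system these hand-crafted statistics would be replaced by the learned, per-modality encoders described next.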

Each type of sensory data is then processed through specialized encoders and fused into a single, rich representation. At the heart of WildFusion is a deep learning model based on the idea of implicit neural representations.

Unlike traditional methods that treat the environment as a collection of discrete points, this approach models complex surfaces and features continuously, allowing the robot to make smarter, more intuitive decisions about where to step, even when its vision is blocked or ambiguous.
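The sketch below illustrates that idea in PyTorch under assumed dimensions and layer choices; it is not the paper's architecture, but it shows the general pattern: per-modality encoders are fused into one feature vector, and a small decoder maps any continuous 3D query point, together with that vector, to occupancy and traversability scores.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): an implicit
# neural representation conditioned on a fused multimodal feature vector. The
# network maps any continuous 3D query point to occupancy / traversability
# scores, rather than storing the scene as discrete points.
import torch
import torch.nn as nn

class ImplicitSceneModel(nn.Module):
    def __init__(self, fused_dim: int = 256, hidden: int = 128):
        super().__init__()
        # Placeholder per-modality encoders mapping raw features to a shared space.
        self.vision_enc = nn.Linear(512, fused_dim)    # e.g. camera/LiDAR features
        self.vibration_enc = nn.Linear(22, fused_dim)  # e.g. mic/force/IMU descriptor
        # Implicit decoder: (3D point, fused feature) -> occupancy, traversability.
        self.decoder = nn.Sequential(
            nn.Linear(3 + fused_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [occupancy logit, traversability logit]
        )

    def forward(self, points, vision_feat, vibration_feat):
        # Fuse modalities into a single representation (a simple sum here; the
        # real system could use attention or learned weighting).
        fused = self.vision_enc(vision_feat) + self.vibration_enc(vibration_feat)
        fused = fused.unsqueeze(1).expand(-1, points.shape[1], -1)  # one copy per query
        return self.decoder(torch.cat([points, fused], dim=-1))

# Query the continuous model at arbitrary 3D locations in the robot's frame.
model = ImplicitSceneModel()
pts = torch.rand(1, 1024, 3)                       # 1024 query points
out = model(pts, torch.rand(1, 512), torch.rand(1, 22))
print(out.shape)                                    # torch.Size([1, 1024, 2])
```

Because the decoder is a continuous function of the query coordinates, it can be evaluated anywhere, including regions where sensor returns are sparse or occluded, which is what lets the robot "fill in the blanks."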

“Think of it like solving a puzzle where some pieces are missing, yet you’re able to intuitively imagine the complete picture,” explained Chen. “WildFusion’s multimodal approach lets the robot ‘fill in the blanks’ when sensor data is sparse or noisy, much like what humans do.”

WildFusion was tested at the Eno River State Park in North Carolina near Duke’s campus, successfully helping a robot navigate dense forests, grasslands and gravel paths.

“Watching the robot confidently navigate terrain was incredibly rewarding,” Liu shared. “These real-world tests proved WildFusion’s remarkable ability to accurately predict traversability, significantly improving the robot’s decision-making on safe paths through challenging terrain.”







WildFusion helps robots identify safe paths through challenging terrain, such as tall foliage that might otherwise look unnavigable. Credit: Boyuan Chen, Duke University

Looking ahead, the team plans to expand the system by incorporating additional sensors, such as thermal or humidity detectors, to further enhance a robot’s ability to understand and adapt to complex environments.

With its flexible modular design, WildFusion provides vast potential applications beyond forest trails, including disaster response across unpredictable terrains, inspection of remote infrastructure and autonomous exploration.

“One of the key challenges for robotics today is developing systems that not only perform well in the lab but that reliably function in real-world settings,” said Chen. “That means robots that can adapt, make decisions and keep moving even when the world gets messy.”

More information:
Yanbaihui Liu et al, WildFusion: Multimodal Implicit 3D Reconstructions in the Wild, arXiv (2024). DOI: 10.48550/arXiv.2409.19904

Project Website: generalroboticslab.com/WildFusion

General Robotics Lab Website: generalroboticslab.com

Journal information:
arXiv

Provided by
Duke University

Citation:
Empowering robots with human-like perception to navigate unwieldy terrain (2025, May 19)
retrieved 19 May 2025
from https://techxplore.com/news/2025-05-empowering-robots-human-perception-unwieldy.html




