Delivery robots made by companies such as Starship Technologies and Kiwibot autonomously make their way along city streets and through neighborhoods.
Under the hood, these robots, like most mobile robots in use today, rely on a variety of sensors and software algorithms to navigate these environments.
Lidar sensors—which send out pulses of light to help calculate the distances of objects—have become a mainstay, enabling these robots to conduct simultaneous localization and mapping, otherwise known as SLAM.
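As a rough illustration of the time-of-flight principle behind lidar ranging (a simplified sketch; real sensors also deal with beam geometry, noise, and multiple returns), the core distance calculation is just half the round-trip travel time multiplied by the speed of light:

    # Minimal sketch of lidar time-of-flight ranging (illustrative only).
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def range_from_echo(round_trip_time_s: float) -> float:
        """Distance to a reflecting surface from a pulse's round-trip time.

        The pulse travels out and back, so the one-way distance is half
        the total path length.
        """
        return SPEED_OF_LIGHT * round_trip_time_s / 2.0

    # An echo arriving about 66.7 nanoseconds after emission corresponds
    # to a surface roughly 10 meters away.
    print(f"{range_from_echo(66.7e-9):.2f} m")  # 10.00 m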
However, these components are resource-intensive and require large amounts of memory for accurate mapping, limiting a robot’s ability to operate over long distances, explains Northeastern University doctoral student Zihao Dong.
“After a certain time, you might be accruing over 10 or 20 gigabytes of memory on your cache,” he says. “That can be a huge computational overhead for you to handle.”
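To see how a raw point-cloud map can balloon to that size, consider a back-of-the-envelope estimate; the sensor figures below are illustrative assumptions, not measurements from the paper:

    # Back-of-the-envelope estimate of raw point-cloud growth
    # (hypothetical sensor figures, not the authors' measurements).
    points_per_scan = 128 * 1024   # e.g., 128 channels x 1024 points per revolution
    scans_per_second = 10          # a typical spinning-lidar rate
    bytes_per_point = 16           # x, y, z, intensity stored as 4-byte floats

    bytes_per_hour = points_per_scan * scans_per_second * bytes_per_point * 3600
    print(f"{bytes_per_hour / 1e9:.1f} GB of raw points per hour")  # ~75.5 GB

Even a pipeline that keeps only a modest fraction of those points after filtering can accrue the tens of gigabytes Dong describes over a long deployment.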
It’s up to roboticists like Dong to help address these bottlenecks, delving deep into the algorithms that enable these robots to operate the way they do.
In newly published research, Dong, under the supervision of Michael Everett, a Northeastern professor of electrical & computer engineering, has developed a new 3D mapping approach that, in some cases, is 57% less resource-intensive than leading methods. The work is published on the arXiv preprint server.
Dong’s algorithm, Deep Feature Assisted Lidar Inertial Odometry and Mapping (DFLIOM), builds on another called Direct LiDAR Inertial Odometry and Mapping (DLIOM), which uses inertial measurement units and lidar data for 3D mapping.
DFLIOM uses the same sensor suite but introduces a new method of scanning environments that not only requires less data but can, in some instances, also reduce inaccuracies, Everett says.
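As a loose illustration of how lidar-inertial pipelines in general fuse the two sensor streams (a simplified sketch under strong assumptions, not the published DFLIOM code), the inertial measurement unit can dead-reckon the robot's motion between lidar scans, while scan matching corrects the accumulated drift:

    import numpy as np

    def propagate_pose(position, velocity, accel_body, dt,
                       gravity=np.array([0.0, 0.0, -9.81])):
        """Dead-reckon position and velocity from IMU acceleration
        between lidar scans.

        Simplifying assumption: the body frame is aligned with the world
        frame. A real lidar-inertial pipeline also integrates gyroscope
        rates into an orientation estimate and corrects the accumulated
        drift each time a lidar scan is matched against the map.
        """
        accel_world = accel_body + gravity    # remove gravity from the specific force
        velocity = velocity + accel_world * dt
        position = position + velocity * dt
        return position, velocity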
The research helps challenge the notion that more data equals better outcomes, Everett explains.
“There’s a big push from the people developing sensors to say, ‘We now have a sensor that can give you 10 times as many points as before,’” he says. “It’s a way they market the sensors to be more useful.
“Actually, from the algorithm side, sometimes we get worried, because now you have more data to process, and just having more data is not necessarily a good thing if the algorithm can’t keep up,” he says.
With this work, Dong and Everett tackle that challenge by asking: “How can we write algorithms that extract only the important pieces?”
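In outline, that kind of selective extraction might look like the sketch below; the per-point scoring model, the retained fraction, and the function names are illustrative stand-ins, not the implementation released by the authors:

    import numpy as np

    def select_relevant_points(scan: np.ndarray, score_fn,
                               keep_fraction: float = 0.3) -> np.ndarray:
        """Keep only the points a learned model deems useful for
        scan registration.

        scan:          (N, 3) array of lidar points from one sweep.
        score_fn:      stand-in for a learned per-point relevance model;
                       it returns an (N,) array of scores.
        keep_fraction: fraction of points to retain (hypothetical value).
        """
        scores = score_fn(scan)
        k = max(1, int(len(scan) * keep_fraction))
        top_idx = np.argsort(scores)[-k:]  # indices of the k highest scores
        return scan[top_idx]

Downstream scan-to-map matching then runs on the reduced cloud, which is where the memory and compute savings come from.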
The researchers tested the algorithm using Northeastern’s Agile X Scout Mini mobile robot outfitted with an autonomy kit that featured an Ouster lidar, a battery pack, and an Intel NUC mini PC. The robot created 3D maps of various exterior parts of Northeastern’s campus, including Centennial Common, Egan Crossing, and Shillman Hall.
More information:
Zihao Dong et al, LiDAR Inertial Odometry And Mapping Using Learned Registration-Relevant Features, arXiv (2024). DOI: 10.48550/arXiv.2410.02961
GitHub: github.com/neu-autonomy/Featur … M?tab=readme-ov-file
This story is republished courtesy of Northeastern Global News news.northeastern.edu.