
Scientists are changing the number of experiments run by employing a coordinated team of AI-powered robots


NSLS-II computational scientist Phillip Maffettone simulated an experimental setup to test AI-driven robotic automation. Credit: Kevin Coughlin/Brookhaven National Laboratory

To build the experimental stations of the future, scientists at the National Synchrotron Light Source II (NSLS-II), a U.S. Department of Energy (DOE) Office of Science user facility at DOE’s Brookhaven National Laboratory, are learning from some of the challenges that face them today. As light source technologies and capabilities continue to advance, researchers must navigate increasingly complex workflows and swiftly evolving experimental demands.

To meet these challenges, a team of NSLS-II scientists is training a team of AI-driven collaborative robots. These agile, adaptable systems are being developed to quickly shift between tasks, adjust to different experimental setups, and respond autonomously to real-time data.

By learning tasks rather than following preprogrammed steps, much like a human researcher, these robots are helping scientists realize a future where such systems can be deployed on demand. That flexibility would empower researchers to explore new possibilities and fully harness the facility's cutting-edge capabilities to investigate everything from battery technologies to quantum materials.

The team has successfully demonstrated this technology by rapidly deploying a prototype of one of these robotic systems to run an autonomous experiment overnight. The setup included different-sized samples that were randomly placed in the experimental environment without any preprogrammed knowledge of their location.

The simulated experiment proceeded for eight hours without errors, showcasing the potential for user-friendly, AI-driven robotic integration in scientific research. Their results were recently published in Digital Discovery.

“We’re envisioning a new path forward,” said Phillip Maffettone, a computational scientist in NSLS-II’s Data Science and Systems Integration (DSSI) division and lead author of the study. “This approach isn’t just about speeding up current experiments; it’s a roadmap for the next generation of beamlines—modular, intelligent, and deeply integrated with AI. We’re designing a system that dynamically adapts to user needs.”

Building an automation foundation

NSLS-II currently operates 29 beamlines, with three more under construction and several others in development. The range, complexity, and volume of experiments conducted across these beamlines present a challenge: designing a system that can automate existing workflows while remaining flexible enough to adapt to new types of experiments and new beamlines as they come online.

The synchrotron community has already achieved considerable success in automating macromolecular X-ray crystallography (MX) experiments using robotics. MX beamlines can now perform automated and semi-automated experiments that routinely reach 99.96% reliability, which has increased the throughput of MX experiments. At NSLS-II alone, almost 13,000 samples were mounted at the Highly Automated Macromolecular Crystallography (AMX) beamline over the past four months.

The robotic systems used at these beamlines are very effective for MX samples, and the robots have inspired scientists to think about what a more modular system could look like as they developed ideas for new beamline designs.

Daniel Olds is the lead beamline scientist at the upcoming High Resolution Powder Diffraction (HRD) beamline at NSLS-II. The beamline’s design enables users to take fast, in situ measurements that reveal real-time material behaviors such as battery cycling, catalytic reactions, and phase transitions—an approach that demands an innovative, adaptable system tailored to custom sample environments.

“We’re tackling a challenge faced by many researchers: how do we get the most science out of a limited window of beam time?” Olds said. “With so many formats and such little time, managing these experiments becomes a high-stakes logistical sprint.”

To envision what future experiments could look like, Maffettone, Olds, and a team of scientists from DSSI studied current experiments that would benefit most from flexible automation. They focused on the Pair Distribution Function (PDF) beamline, where visiting scientists, particularly those studying battery materials, often arrive with hundreds of unique samples. These can range from powders in narrow capillaries to flat “coupons” and even full pouch cell batteries like those used in electric vehicles. Some must be measured while charging and discharging in real time.

Instead of working in a single geometry or setup, a “smart” robot would be able to quickly learn how to handle a wide variety of sample types that differ in shape, size, and weight, just as a human scientist would. This kind of adaptability would reduce downtime, enable continuous beamline operation, and free researchers to focus more on insights than logistics.

Take capillary samples, for example. These are typically mounted on T-shaped brackets that hold 10 to 30 capillaries each. Once loaded and aligned with the beam, the capillaries are scanned sequentially as the bracket moves vertically, allowing different regions of each sample to be measured and averaged for more reliable data.
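The scanning pattern described above can be sketched in a few lines. This is an illustrative stand-in, not the beamline's actual control code: the function names, sample IDs, and the toy detector are assumptions made for the example.

```python
from statistics import mean

def scan_bracket(capillary_ids, heights_mm, measure):
    """Scan each capillary on a bracket at several vertical positions
    and average the readings, mirroring the bracket workflow described
    above: the bracket moves vertically so different regions of each
    sample can be measured and averaged for more reliable data."""
    results = {}
    for cap in capillary_ids:
        readings = [measure(cap, z) for z in heights_mm]  # one reading per height
        results[cap] = mean(readings)                     # average the regions
    return results

# Toy detector standing in for the real measurement:
fake_measure = lambda cap, z: cap * 10 + z
print(scan_bracket([1, 2], [0.0, 1.0, 2.0], fake_measure))
```

With a five-to-ten-minute scan per bracket, any manual step between brackets becomes the bottleneck, which is exactly the gap an automated sample changer would close.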

Scans are fast, with each bracket taking just five to 10 minutes, leaving users little time between sample changes. Currently, switching from a capillary containing battery material to an actual operando battery setup also requires stopping the experiment, opening the protective hutch, and manually swapping samples. An automated system could streamline these processes, but only if it’s intuitive and flexible.

For energy research in particular, this shift could be transformative. Progress in energy storage depends on the ability to screen new materials and quickly test them under real-world conditions with limited scheduled time at the beamline. Adaptive robotics at NSLS-II would dramatically accelerate that process, helping researchers develop the next generation of high-performance batteries for applications ranging from earbuds to electric vehicles.

This is only one example of the many types of experiments, across several different fields, that this kind of system aims to accelerate. As Maffettone explained, "The dream is to have smart robots that users can request on a per-beam-time basis. These applications are designed to be quickly deployed, removed, and redeployed based on the needs of the experiment while also being able to integrate AI-agent-driven automation techniques. Because of this, the robots we use would need to be light and portable, have a modular build, and plug into an accessible software infrastructure."

Lending a helping articulated arm

To test the kind of hardware that this automation system would use, the team put together a prototype robot designed to help out at the PDF beamline. The Universal Robot UR3e model was used as a base for this first run. To grasp samples, they employed the two-fingered Robotiq Hand-E gripper.

This model has the grip strength and grasp ratio that users would typically require, and it can be quickly installed onto the UR3e. To "see" its environment, a camera with advanced depth sensors was mounted above the gripper on a custom coupling mount created by the team.

They also needed to find the right software architecture to manage this team of robots and the various tasks that they would learn to perform. Luckily, NSLS-II already had a toolbox flexible enough for a project like this within Bluesky, an open-source experiment specification and orchestration engine.

Bluesky has been adopted by many beamlines, even outside of NSLS-II, making it simple to "plug in" hardware like these robots and integrate AI and machine learning systems that could be used to automate them. To orchestrate the robots themselves, they would need software that was just as adaptable.
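Bluesky's core idea is that an experiment is a generator of commands that a central engine executes against whatever hardware is plugged in. The toy engine below imitates that model in plain Python; it is a sketch of the design, not the real Bluesky API (the actual library uses `bluesky.RunEngine` and plans that yield `Msg` objects), and the command names and handlers here are invented for illustration.

```python
def pick_and_measure(robot, sample):
    """A 'plan': a generator of commands the engine executes one by one."""
    yield ("move", robot, sample)      # ask the robot to fetch the sample
    yield ("trigger", "detector")      # take a measurement
    yield ("return", robot, sample)    # put the sample back

def run_engine(plan, handlers):
    """Minimal engine: dispatch each command to a hardware-specific handler."""
    log = []
    for cmd, *args in plan:
        log.append(handlers[cmd](*args))
    return log

# Swapping hardware means swapping handlers; the plan never changes.
handlers = {
    "move": lambda robot, s: f"{robot} -> {s}",
    "trigger": lambda det: f"{det} fired",
    "return": lambda robot, s: f"{s} returned",
}
print(run_engine(pick_and_measure("UR3e", "capillary_7"), handlers))
```

This separation between plans and hardware is what makes it straightforward to "plug in" a new robot: only the handler layer changes.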

Many of the robots in use today rely on software developed and maintained solely by the vendor, which imposes several limitations. Robot Operating System 2 (ROS2), an open-source software development kit, provided an ideal solution. This vast library of software tools is supported by an active community that stays on the cutting edge of new developments in robotics.

By leveraging ROS2, many different compatible robots in a fast-growing ecosystem can be swapped for the UR3e in the future. It also provides tools to develop time-saving simulations.

“Developing applications for unique tools can take substantial effort and often require time at the beamline,” explained Maffettone. “With robots, we’ve been able to address this issue using ROS2. I can capture models of sample holding equipment and obstacles, load them into ROS, and then plug them into a simulated experimental environment. Developers can access these simulations and chart a robot’s motions to build the applications they need for an experiment before they ever see the robot—or arrive at the beamline.”
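In the spirit of the simulations Maffettone describes, a planned motion can be validated against captured obstacle models before any hardware moves. The geometric check below is a deliberately minimal 2D sketch of that idea, not ROS2 itself; the obstacle footprint and coordinates are invented for the example.

```python
def segment_hits_box(p0, p1, box, steps=100):
    """Check whether a straight-line move from p0 to p1 passes through
    an axis-aligned obstacle box ((xmin, ymin), (xmax, ymax)) by sampling
    points along the segment."""
    (xmin, ymin), (xmax, ymax) = box
    for i in range(steps + 1):
        t = i / steps
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False

obstacle = ((2.0, 2.0), (3.0, 3.0))                 # e.g. a sample-holder footprint
print(segment_hits_box((0, 0), (5, 5), obstacle))   # diagonal path crosses the box
print(segment_hits_box((0, 5), (5, 5), obstacle))   # path along the top clears it
```

A real planner would reason in 3D over the arm's joint space, but the principle is the same: reject motions that intersect known obstacle models before the robot ever executes them.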

With everything in place, it was time to see how this system operated in a real environment with actual samples. After a few successful simulations, the team started with a few capillary brackets at PDF. The brackets were placed arbitrarily on a tabletop at different positions and heights. Small, unique visual markers, similar to QR codes, were adhered to the brackets so that the robot's camera could detect them and feed the images to a server, where each bracket's position and orientation was determined in real time and mapped back to a sample database.
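The marker-to-database step amounts to a lookup: each detected marker ID carries a pose, and the database says which bracket the marker is glued to. The sketch below illustrates that mapping; the marker IDs, poses, and bracket names are all made up for the example.

```python
detections = {                      # what the camera server reports per marker ID
    17: {"xyz": (0.42, 0.10, 0.05), "yaw_deg": 90.0},
    23: {"xyz": (0.10, 0.33, 0.12), "yaw_deg": 0.0},
}
sample_db = {                       # which bracket each marker is adhered to
    17: "bracket_A (capillaries 1-10)",
    23: "bracket_B (capillaries 11-20)",
}

def locate_samples(detections, sample_db):
    """Map each detected marker to its bracket and real-time pose,
    ignoring markers that are not registered in the database."""
    return {sample_db[mid]: pose for mid, pose in detections.items()
            if mid in sample_db}

located = locate_samples(detections, sample_db)
print(located)
```

Because the poses come from live detections rather than hard-coded coordinates, brackets can sit anywhere in the environment and the robot still finds them.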

As the experiment begins, an intricate dance occurs between Bluesky and ROS2. Bluesky has the experiment mapped out and uses AI agents to give ROS2 a goal for the robot. As the robot loads samples, it reports any obstacles, errors, or failures back to Bluesky, which uses that information to decide what to do next. Where current systems rely on pre-planned motions and rigid sample coordinates, this closed-loop process keeps the experiment dynamic and adaptive.
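That feedback loop can be sketched as a simple goal-issue-and-respond cycle. The code below is an illustrative simplification of the Bluesky-ROS2 interplay described above, with an invented status protocol and a deterministic stand-in for the robot; the real system exchanges richer messages between the two frameworks.

```python
def run_closed_loop(samples, attempt_pick, max_retries=2):
    """Closed-loop sketch: issue a goal per sample, listen for the
    reported outcome, and decide what to do next (retry or skip)."""
    completed, skipped = [], []
    for sample in samples:
        for attempt in range(max_retries + 1):
            status = attempt_pick(sample)      # robot reports back to the orchestrator
            if status == "ok":
                completed.append(sample)
                break
            # obstacle or error reported: the loop decides to retry
        else:
            skipped.append(sample)             # give up after max_retries attempts
    return completed, skipped

# Deterministic stand-in for the robot: "s2" hits an obstacle once, then succeeds.
state = {"s2_failures": 1}
def fake_pick(sample):
    if sample == "s2" and state["s2_failures"] > 0:
        state["s2_failures"] -= 1
        return "obstacle"
    return "ok"

done, skipped = run_closed_loop(["s1", "s2", "s3"], fake_pick)
print(done, skipped)
```

The point of the loop is that a reported failure changes the plan at runtime instead of halting the experiment, which is what distinguishes this design from rigidly pre-planned motion sequences.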

In the experimental environment, the robot successfully performed 195 continuous sample manipulations overnight with no errors. The automated system chose samples, loaded them onto a receiving mount, took simulated measurements, returned each sample to where it was found, and chose the next sample based on the information it was gathering.

While there is still work to be done to scale this system up, the initial results already show promise toward the goal of semi-autonomous experiments that give researchers the freedom to conduct more efficient and innovative research.

"Users would often make jokes as they switched out samples about how nice it would be to have a robot that could do it instead," remarked Olds. "This work is pushing towards a place where that's a reality. I'm excited to see these robots become a routine part of beamline operations that users can rely on."

Towards a future where robots connect humans

The team is already looking at challenges that need to be met and ideas that need to be explored in order to reap the full potential of this project. The first big push would be to ensure that these robots can adapt to a variety of experimental conditions at several different kinds of beamlines.

This would require solutions that give robots the ability to swap out peripherals, like grippers, based on the sample type they’re working with. They are also exploring multi-agent-driven robotics for more complex experimental workflows and for robots that can better perceive their environment.

A system like this won't just accelerate experiments; it could also open the door to new types of multimodality—experiments that run the same samples at different beamlines. Users could maximize their beam time by measuring the same materials with different complementary techniques while these automated systems communicate with each other in real time about how best to perform the experiment.

“Robotics will become increasingly necessary in the future,” said Stuart Campbell, NSLS-II chief data scientist, deputy division director of DSSI, and co-author. “As we refine a common way to integrate these robots across the facility, we’re also thinking about how that could work across the entire network of DOE light source facilities.

“Projects like this are starting to lay the foundation for even larger cross-functional initiatives. One day, we may be able to leverage automation and robotics to enhance multimodal experiments not only across beamlines but at laboratories across the country.”

More information:
Chandima Fernando et al, Robotic integration for end-stations at scientific user facilities, Digital Discovery (2025). DOI: 10.1039/D5DD00036J

Provided by
Brookhaven National Laboratory

Citation:
Scientists are changing the number of experiments run by employing a coordinated team of AI-powered robots (2025, April 24)
retrieved 24 April 2025
from https://techxplore.com/news/2025-04-scientists-employing-team-ai-powered.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.



