The Nanobot Frontier: How Cornell’s Autonomous, Salt-Sized Robots Are Learning to Walk

Researchers at Cornell University have developed autonomous robots smaller than a grain of salt, powered by light and controlled by onboard circuits. Now, with the help of AI, these microscopic machines are learning to walk, paving the way for revolutionary applications in medicine, environmental sensing, and micro-manufacturing.
Written by Emma Rogers

ITHACA, N.Y. – In the quiet, controlled environment of a Cornell University laboratory, a new form of life is taking its first steps. Invisible to the naked eye and smaller than a single grain of salt, these microscopic machines are not biological. They are fully autonomous robots, complete with onboard electronic brains, actuators for legs, and photovoltaic cells that serve as a rudimentary metabolism, converting light into the electricity that powers their existence. This is not science fiction; it is the tangible result of a decade-long quest to shrink robotics down to the cellular scale.

For years, the field of microrobotics has been dominated by devices that were either tethered by wires for power and control or manipulated externally by magnetic fields, limiting their utility in complex, enclosed environments like the human body. The Cornell team, led by physicists Paul McEuen and Itai Cohen, has shattered that paradigm. By leveraging the same semiconductor manufacturing technology that builds billions of transistors on a computer chip, they have created a fleet of robots, each about 100 microns wide, that operate with unprecedented independence. “This is the first time we’ve been able to build, in a very standard, scalable way, an autonomous robot at this scale,” Professor Cohen explained in an interview with Wired.

A Leap in Miniaturization Bypasses Tethers and External Fields

The core innovation lies in the integration of complementary metal-oxide-semiconductor (CMOS) electronics—the bedrock of the modern digital world—directly onto the robot’s chassis. This allows each microbot to carry its own control system, a simple circuit that functions as a clock, coordinating the firing of its ‘legs’. As detailed in their foundational paper published in the journal Nature, this onboard brain enables the robot to execute pre-programmed commands without any external guidance. The robot’s body is essentially a sliver of silicon, meticulously etched and layered to house both the control circuit and the solar cells that power it.
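
To make the clock-as-controller idea concrete, here is a minimal sketch in Python of how such a circuit’s behavior might be modeled: a fixed-period square wave that alternates which set of legs is driven. The function name, leg labels, and timing values are illustrative assumptions, not details from the Cornell paper.

```python
# Minimal sketch (hypothetical): model the onboard clock as a square wave
# that flips which pair of legs receives current every few ticks.

def clock_gait(n_ticks: int, period: int = 4):
    """Yield (tick, active_legs) pairs for a simple two-phase gait."""
    for tick in range(n_ticks):
        phase = (tick // period) % 2          # flip every `period` ticks
        active = "front" if phase == 0 else "back"
        yield tick, active

for tick, legs in clock_gait(16):
    print(f"tick {tick:2d}: drive {legs} legs")
```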

This self-sufficiency is a monumental step forward. Previous micro-scale machines were often simple particles of iron oxide pulled through fluid by powerful, external magnets. While effective for simple tasks, this approach is akin to puppetry, with the device having no agency of its own. The Cornell robots, by contrast, carry their instructions internally. A laser beam, acting as a power source, illuminates the photovoltaic cells on the robot’s back, and the onboard circuit directs that energy to its actuators, initiating movement. This untethered freedom opens the door to navigating intricate, previously inaccessible environments, from the tangled pipe networks of a microfluidic chip to the delicate capillaries of living tissue.

Harnessing Light and Bubbles for Untethered Propulsion

The method of propulsion is as innovative as the electronics. The robot’s legs are not mechanical joints but tiny strips of platinum, just a few nanometers thick, layered over titanium or graphene. When the onboard circuit sends a small electrical current to a platinum leg, it triggers an electrochemical reaction with the surrounding water, splitting it into hydrogen and oxygen and nucleating minuscule bubbles. The formation and subsequent collapse of these bubbles generates thrust, pushing against the leg and propelling the robot forward. By alternating the current between its front and back legs, the robot can clumsily ‘swim’ or crawl.
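
A toy model helps convey why this kind of propulsion is hard to steer: each tick fires one leg, and the resulting thrust impulse varies randomly from bubble to bubble. Every number and name below is invented for illustration; the article gives no per-bubble figures.

```python
import random

# Toy model (illustrative only): stochastic thrust per bubble event mimics
# the chaotic bubble formation described above. Units are arbitrary.

def step_displacement(mean_thrust: float = 1.0, noise: float = 0.5) -> float:
    """Displacement from a single bubble event, clipped at zero."""
    return max(0.0, random.gauss(mean_thrust, noise))

position = 0.0
for tick in range(20):
    leg = "front" if tick % 2 == 0 else "back"  # naive alternating signal
    position += step_displacement()
    print(f"tick {tick:2d}: fired {leg} leg, position = {position:.2f}")
```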

This bubble-based actuation is remarkably efficient at this scale, but it is also difficult to control with precision. The chaotic nature of bubble formation makes smooth, directed movement a significant challenge. Early versions of the robots were programmed with simple, alternating signals, resulting in locomotion that was functional but erratic. The researchers knew that to unlock the robots’ true potential, especially for delicate tasks like targeted drug delivery or microsurgery, they needed to teach them to walk with purpose and control. The solution, it turned out, would come not from better hardware, but from smarter software.

From Rudimentary Actuation to AI-Driven Locomotion

In a significant advancement announced in early 2023, the Cornell team revealed they had successfully used artificial intelligence to teach their microscopic robots how to walk. As reported by the Cornell Chronicle, the researchers developed a computer simulation of the robot and its bubble-propulsion physics. They then unleashed a reinforcement learning algorithm on the simulation, allowing the AI to experiment with countless different sequences of leg activations to discover which patterns, or ‘gaits’, produced the most effective and efficient movement. This AI-driven approach bypassed the painstaking process of human trial and error.
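
The Cornell Chronicle report does not name the specific algorithm, so the sketch below stands in with a simple random-mutation search over gait sequences against a toy simulator—a bare-bones proxy for the reinforcement-learning loop described above. The leg count, gait length, and scoring rule are all invented for illustration.

```python
import random

LEGS = 4          # hypothetical leg count; the article does not specify
GAIT_LEN = 8      # activations per gait cycle (assumed)

def simulate(gait):
    """Toy stand-in for the physics simulation: reward alternating legs."""
    return sum(1.0 for a, b in zip(gait, gait[1:]) if a != b)

best = [random.randrange(LEGS) for _ in range(GAIT_LEN)]
best_score = simulate(best)
for _ in range(2000):
    candidate = best[:]
    candidate[random.randrange(GAIT_LEN)] = random.randrange(LEGS)  # mutate one step
    score = simulate(candidate)
    if score > best_score:        # keep the better gait, as RL keeps higher-reward policies
        best, best_score = candidate, score

print("best gait found:", best, "score:", best_score)
```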

The AI discovered several gaits that were far more effective than those a human engineer might have programmed. By varying the timing and intensity of the bubble generation, the AI learned how to make the robot walk faster and more directly toward a target. “The AI was able to find gaits that were more clever and effective than we could have imagined,” said Michael Reynolds, a former student in Cohen’s lab. These optimized commands are then translated into a program and flashed onto the robot’s onboard CMOS circuit, giving the physical machine a new, AI-honed ability. According to New Scientist, this method allows the robots to achieve speeds of more than 10 micrometers per second, a significant velocity for a machine of its size.
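
For a sense of scale, the two figures reported in the article imply the robots cover roughly a tenth of their own body length every second:

```python
# Putting the reported figures in context (both numbers are from the article):
speed_um_per_s = 10.0    # reported top speed, micrometers per second
body_width_um = 100.0    # robot width, micrometers
print(speed_um_per_s / body_width_um, "body lengths per second")  # -> 0.1
```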

Charting a Course for In-Vivo Diagnostics and Micro-Manufacturing

The long-term vision for these autonomous microbots is transformative. Researchers envision swarms of these devices being injected into the bloodstream to patrol for cancer cells, deliver drugs directly to a tumor, or clear plaque from arteries. Their tiny size would allow them to perform diagnostics at a cellular level inside the living body, a concept known as ‘in vivo’ diagnostics. In the materials science and manufacturing sectors, they could be used to assemble microscopic structures, repair microelectronic circuits from the inside, or serve as mobile sensors in industrial chemical vats or sensitive environmental sites.

However, significant hurdles remain before these applications become reality. The current robots are powered by external lasers, which cannot penetrate deep into opaque materials like human tissue. Future versions will need more sophisticated onboard power storage or the ability to harvest energy from their local environment, such as chemical gradients or thermal energy. Furthermore, navigating the turbulent, crowded environment of the bloodstream is far more complex than traversing a petri dish. The robots will need advanced sensors and more sophisticated onboard intelligence to orient themselves and respond to their surroundings. Biocompatibility is another critical challenge; the materials must not trigger an immune response, and the bubble-propulsion system must be proven safe for use in living organisms.

The Path to Mass Production and Broader Intelligence

Despite these challenges, the project’s foundation in standard semiconductor fabrication is perhaps its most powerful asset. Because the robots are made on silicon wafers, they can be produced in parallel by the thousand, or even million, in a single batch. This scalability is crucial for creating the large swarms needed for many proposed applications and drastically reduces the cost per unit, moving microrobotics from a bespoke laboratory curiosity toward a mass-producible technology. The team is now working to integrate more complex sensors onto the robot’s chassis, which could detect temperature, chemicals, or specific biological markers.
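
A back-of-envelope estimate illustrates the point. Under assumed (not reported) numbers for wafer size and die spacing, a single standard wafer could hold on the order of a hundred thousand robots:

```python
import math

# Back-of-envelope estimate of robots per batch. The 100 mm wafer size and
# the 200 µm die pitch (robot plus spacing) are assumptions, not figures
# from the Cornell team.
wafer_diameter_mm = 100.0
die_pitch_mm = 0.2                      # 100 µm robot + assumed 100 µm margin
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
robots_per_wafer = int(wafer_area_mm2 // die_pitch_mm**2)
print(f"roughly {robots_per_wafer:,} robots per wafer")  # on the order of 10^5
```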

The integration of AI marks a pivotal moment, transforming the devices from simple automatons into adaptable agents capable of learning. The next phase of research will focus on giving the robots the ability to make decisions based on sensory input—to change direction if an obstacle is detected or to release a payload when a specific chemical signature is found. As the onboard circuits become more complex and the AI training becomes more sophisticated, these salt-sized explorers are poised to unlock a new era of technology, one that operates on a scale previously confined to the domain of biology itself.
