Over the past decade, I’ve conducted a series of research projects at IOActive focused on hacking robots. Robots are interesting from a security research perspective because they sit at a unique intersection: they are cyber-physical systems, embedded devices that can perform physical actions. A vulnerability in a web application leaks data. A vulnerability in a robot can harm the person standing next to it. That physical dimension is what makes this research worth pursuing. The first of these projects, “Hacking Robots Before Skynet,” conducted with Cesar Cerrudo in 2017, assessed over a dozen robots from vendors including SoftBank, Universal Robots, UBTECH, and Rethink Robotics. After identifying nearly 50 vulnerabilities across various robotic platforms, we demonstrated the real-world risks by successfully exploiting industrial collaborative robots.
That research continued in 2018, when we demonstrated the first-ever ransomware attack on robots at Kaspersky SAS, infecting SoftBank’s NAO and Pepper robots with proof-of-concept malware that disrupted operations, disabled factory reset mechanisms, and showed that robot downtime could be weaponized for extortion. That same year, we contributed to the development of RVSS (Robot Vulnerability Scoring System), a scoring framework designed to address what CVSS misses when applied to robotics: safety impact, environmental context, and downstream effects on physical systems.
At the time, the conventional wisdom was that attacking robots required deep expertise in ROS, embedded systems, and cyber-physical dynamics. That specialized knowledge was supposed to be the barrier. Our research showed the barrier was thin, but it was still there. You needed to know where to look and how robotic ecosystems were assembled.
Nine years later, that barrier no longer exists.
What Changed
A new research paper I contributed to, “Cybersecurity AI: Hacking Consumer Robots in the AI Era”, published in March 2026 with Alias Robotics, demonstrates something I’ve been watching unfold across every engagement surface: AI doesn’t just accelerate the work. It compresses the expertise requirement.
Alias Robotics built CAI (Cybersecurity AI), an open-source, CLI-based cybersecurity agent that combines domain knowledge of robotics protocols, embedded systems, and exploit databases into a GenAI-powered assessment framework. The methodology was deliberately constrained: for each robot, CAI received only the product name and access. No documentation. No prior research. The AI agent autonomously discovered attack surfaces, tested for weaknesses, built exploits, and assessed impact. Human operators guided the process and intervened when tests reached vendor cloud infrastructure, but the analytical engine was the AI.
What We Found
Three consumer robots were assessed using this framework: an autonomous lawnmower (Hookii Neomow), a powered exoskeleton (Hypershell X), and a window cleaning robot (HOBOT S7 Pro). Different categories, different manufacturers, different countries of origin: China and Taiwan. Using this AI framework, we found **38 vulnerabilities** across them: 16 Critical, 14 High, 6 Medium, 2 Low.
Across these products, the same patterns kept showing up.
No authentication, anywhere.
Every robot exposed critical interfaces without requiring credentials. The lawnmower had an unauthenticated debug service granting root access (CVSS 10.0). The exoskeleton accepted BLE connections from any device in range, no pairing required. The window cleaner opened all GATT services to anyone who connected. These aren’t subtle implementation flaws. They’re the complete absence of a security boundary.
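The lawnmower’s debug port is the starkest example of that missing boundary: the “protocol” is just connect-and-receive. A minimal mock of the pattern (the service, port handling, and banner text are all invented for this sketch, not taken from the product) shows how little an attacker needs when no credential check exists:

```python
import socket
import socketserver
import threading

# Stand-in for an embedded debug service that never asks for credentials.
# The banner below is invented; it represents the privileged context handed out.
BANNER = b"debug> uid=0(root) gid=0(root)\n"

class MockDebugService(socketserver.BaseRequestHandler):
    def handle(self):
        # No challenge, no password prompt: every connection gets the shell.
        self.request.sendall(BANNER)

def probe(host: str, port: int, timeout: float = 2.0) -> bytes:
    """Connect with no credentials and return whatever the service volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.recv(1024)

# Run the mock service locally and probe it exactly as an attacker would.
server = socketserver.TCPServer(("127.0.0.1", 0), MockDebugService)
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = probe("127.0.0.1", server.server_address[1])
print(reply.decode().strip())   # privileged context reached before any authentication
server.shutdown()
```

The probe is three lines because the service demands nothing; that asymmetry is what “absence of a security boundary” means in practice.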
Fleet-scale exposure from single-device access.
The lawnmower’s hardcoded MQTT credentials were identical across the entire fleet. From one compromised device, CAI escalated to the vendor’s broker running default admin credentials, enumerated 267 connected robots, and confirmed the ability to command any of them. One robot compromised, hundreds accessible. That’s the kind of attack chain that, in 2017, would have taken a team days to map.
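The mechanics of that escalation are simple enough to model in a few lines. A toy simulation follows; device names and secrets are invented, and only the fleet size of 267 comes from the assessment. The point it illustrates: because the broker authenticates a fleet-wide credential rather than a device, one extraction authorizes everything.

```python
# Toy model of fleet-wide credential reuse. Every robot authenticates to the
# vendor broker with the same hardcoded pair, so extracting it from ONE device
# authorizes commands to EVERY device.

FLEET_SIZE = 267                                   # robots the broker reported
SHARED_CREDS = ("robot", "hardcoded-fleet-secret") # identical on every unit
fleet = {f"mower-{i:04d}": SHARED_CREDS for i in range(FLEET_SIZE)}

def broker_login(device_id: str, creds: tuple) -> bool:
    """Broker-side check: the credential is per-fleet, not per-device."""
    return fleet.get(device_id) == creds

# Attacker dumps the firmware of a single unit and reads out the credentials...
leaked = fleet["mower-0000"]

# ...which now authenticate as any robot in the fleet.
reachable = [dev for dev in fleet if broker_login(dev, leaked)]
print(f"devices commandable from one compromise: {len(reachable)}")
```

Per-device credentials provisioned at manufacture (or mutual TLS with per-device certificates) confine the blast radius to the single compromised unit; a shared secret makes the fleet one device.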
Credential leakage in every direction.
Across all three platforms, we found hardcoded credentials embedded in configuration files, mobile applications, and heap dumps. Plaintext database passwords, cloud API keys, SMTP credentials providing access to thousands of internal support emails, even Shopify account recovery codes. The exoskeleton alone yielded root MySQL credentials for servers in both China and internationally.
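Sweeping for this class of leak is mechanical, which is exactly why an AI agent surfaces it in minutes. A minimal sketch of the idea, with illustrative (far from exhaustive) patterns and a fabricated sample config — none of these values come from the assessed products:

```python
import re

# Minimal patterns for the credential classes named above (illustrative only):
# plaintext passwords, cloud API keys, SMTP account identifiers.
PATTERNS = {
    "password":  re.compile(r'(?i)(?:password|passwd|pwd)\s*[=:]\s*["\']?([^\s"\']+)'),
    "api_key":   re.compile(r'(?i)\b(?:api[_-]?key|secret[_-]?key)\s*[=:]\s*["\']?([A-Za-z0-9_\-]{8,})'),
    "smtp_user": re.compile(r'(?i)\bsmtp[_-]?(?:user|login)\s*[=:]\s*["\']?([^\s"\']+@[^\s"\']+)'),
}

def scan(text: str) -> list:
    """Return (category, matched secret) pairs found in a config/dump blob."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

# Fabricated sample resembling a leaked configuration file (values invented):
sample = """
db_password = hunter2
API_KEY: AKIAFAKEEXAMPLEKEY
smtp_user = "support@vendor.example"
"""
for category, value in scan(sample):
    print(category, value)
```

Real tooling layers entropy scoring and service-specific key formats on top of patterns like these, but the core operation is this cheap, and it works equally well on mobile app packages and heap dumps.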
Unsigned firmware, plaintext updates.
The window cleaner’s OTA service accepted arbitrary firmware writes without cryptographic verification, served over plaintext HTTP. The exoskeleton’s firmware binaries were publicly accessible at predictable URLs, protected only by CRC16 checksums. In both cases, firmware replacement is trivially achievable.
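The gap between a CRC16 checksum and cryptographic verification is worth making concrete. A standard-library sketch, with invented image contents and key handling: a CRC detects accidental corruption, but an attacker recomputes it in one call, while a keyed MAC (or better, an asymmetric signature verified against a key burned into the device) cannot be forged without the secret.

```python
import binascii
import hashlib
import hmac

# Stand-in firmware images (contents invented for the sketch).
genuine = b"\x7fFW-v1.2:" + bytes(64)
tampered = genuine.replace(b"v1.2", b"EVIL")

def crc16(data: bytes) -> int:
    """CRC-16/CCITT via the stdlib: an error-detection code, not a signature."""
    return binascii.crc_hqx(data, 0xFFFF)

def device_accepts(image: bytes, claimed_crc: int) -> bool:
    """A CRC-only OTA integrity check, as described above."""
    return crc16(image) == claimed_crc

# The attacker modifies the image and recomputes the checksum in one call,
# so the device's "integrity check" passes for the forged firmware.
print("forged image accepted:", device_accepts(tampered, crc16(tampered)))

# With a keyed MAC, recomputing the tag requires a secret the attacker lacks.
# Key name invented; in practice the signing key never touches the device.
SIGNING_KEY = b"held-by-vendor-not-on-device"

def sign(image: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

genuine_tag = sign(genuine)
print("tampered image verifies:",
      hmac.compare_digest(sign(tampered), genuine_tag))
```

Combine signature verification with transport encryption and anti-rollback version checks and the OTA channel stops being the easiest way onto the device.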
Motor control without authorization.
This is where robot vulnerabilities diverge from conventional IoT. The exoskeleton exposed 177 BLE commands including motor control functions, all executable without authentication from up to 70 meters away. The window cleaner’s suction motors could be disabled mid-operation. We’re not talking about data exposure at this point. We’re talking about physical safety.
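For contrast, here is a sketch of the boundary that was absent: actuator opcodes gated behind a MAC keyed during user-confirmed pairing. The opcode numbers, framing, and key handling are all invented for illustration; real BLE deployments get this from the link layer via LE Secure Connections pairing rather than rolling it by hand.

```python
import hashlib
import hmac
import secrets

# Key established during a user-confirmed pairing ceremony (invented handling).
PAIRING_KEY = secrets.token_bytes(16)
ACTUATOR_OPCODES = {0x31: "motor_start", 0x32: "motor_stop", 0x33: "set_torque"}

def tag_command(opcode: int, payload: bytes, key: bytes) -> bytes:
    """MAC over opcode + payload; only a paired controller holds the key."""
    return hmac.new(key, bytes([opcode]) + payload, hashlib.sha256).digest()[:8]

def handle(opcode: int, payload: bytes, tag: bytes) -> str:
    """Device-side handler: reject actuator writes that lack a valid MAC."""
    if opcode not in ACTUATOR_OPCODES:
        return "rejected: unknown opcode"
    if not hmac.compare_digest(tag, tag_command(opcode, payload, PAIRING_KEY)):
        return "rejected: unauthenticated actuator command"
    return f"executed {ACTUATOR_OPCODES[opcode]}"

# A paired controller can drive the motors; a drive-by radio peer cannot.
good = handle(0x31, b"\x10", tag_command(0x31, b"\x10", PAIRING_KEY))
bad = handle(0x31, b"\x10", b"\x00" * 8)
print(good)   # executed motor_start
print(bad)    # rejected: unauthenticated actuator command
```

The exoskeleton's 177 commands had no equivalent of this gate: any peer within radio range could issue motor writes directly.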
Same Vulnerabilities, Different Decade
In 2017, we found unauthenticated network services, unencrypted communications, and unsigned firmware updates on robots. We recommended encryption, authentication, secure defaults, and vulnerability disclosure channels. Nine years later, none of the three consumer robots assessed in the new paper implemented any of these basic security controls.
The vulnerability classes are identical. The specialized expertise that once served as a de facto security barrier is now encoded in AI systems that anyone can run.
The Physical Dimension
Back then, we described robots as “kinetic IoT devices”: computers with arms, legs, or wheels that, if compromised, could cause physical harm, destroy property, or kill. A woman had already died in 2016 when an industrial robot restarted at the Ajin USA plant in Alabama. Our argument was that deliberate attacks could produce the same outcomes.
Nine years later, the scenarios are concrete. An exoskeleton whose motor commands execute without authentication. A lawnmower fleet accessible through a single set of hardcoded credentials. A window cleaner whose suction can be disabled mid-operation from 70 meters away.
Beyond physical safety, the paper documents systemic data governance failures. Two of the three robots showed confirmed GDPR violations: data transmission without user consent, no data subject rights mechanisms, cross-border transfers without legal basis. What’s notable is that CAI identified these alongside the technical vulnerabilities. AI-powered assessments naturally extend into regulatory risk, not just exploitation.
The vendor response pattern hasn’t changed either. All three manufacturers were contacted. Hypershell’s reply: “At this time, Hypershell is not pursuing vulnerability disclosure reports or external security research submissions.” In 2017, four of six vendors responded to our disclosures. Two said they’d fix the flaws but never did. One vulnerability was patched over a year later. The industry’s relationship with security research has not materially improved.
Security Through Obscurity Is Over
The robotics industry relied on an implicit assumption: the specialized knowledge required to assess robotic systems created a natural barrier to entry. That assumption was already questionable in 2017. In 2026, it’s demonstrably false.
LLMs trained on robotics documentation, security research, and exploit databases can guide someone through a complex robotic system without requiring years of specialized training. The domain expertise that historically protected these systems is now accessible through a command-line interface.
The implications are immediate.
For manufacturers:
Complexity is not protection. All three robots exhibited fundamental authentication failures. Not sophisticated zero-days. The absence of basic security controls that an AI agent identifies in minutes. The hardware engineering across all three platforms was impressive. The cybersecurity maturity doesn’t match.
For regulators:
Connected consumer robots introduce physical safety risks that traditional product liability frameworks weren’t designed for. Exploitable motor controls, fleet-wide credential reuse, absent consent mechanisms. This demands regulatory attention specific to robotic systems.
For the security community:
Traditional defense-in-depth architectures like the Robot Immune System (RIS) are a foundation, but their static, rule-based approach isn’t built to counter AI-powered attacks that autonomously chain vulnerabilities across BLE, cloud APIs, and OTA channels. The paper argues for a shift toward GenAI-native defensive agents: adaptive threat detection, autonomous patch generation, coordinated fleet-wide intelligence sharing.
Looking Forward
This new collaboration with Alias Robotics shows that the threat hasn’t diminished. It has been amplified by AI capabilities that compress months of expert analysis into hours.
The full paper is available on arXiv (2603.08665). CAI is open source at github.com/aliasrobotics/cai.
For organizations deploying robotic systems, whether in manufacturing, logistics, healthcare, agriculture, or consumer contexts, the calculus has changed. The assessment timeline has collapsed. A consultant with the right AI tooling can now cover in hours what used to take weeks. The question is no longer whether your robots have vulnerabilities. It’s whether you find them before someone else does.
*This research was conducted in collaboration with Alias Robotics, a Spain-based company specializing in robot cybersecurity. Alias Robotics developed CAI and the Robot Immune System (RIS), and continues to lead open-source efforts in robotic security tooling.*
IOActive continues to provide specialized security assessments for robotic platforms, IoT devices, and cyber-physical systems. Contact us to discuss how we can help evaluate and strengthen the security posture of your deployments.
