
ROS 2 Manta-Ray Robot: Deep, Quiet Underwater AI
Most underwater robots fail for boring reasons. Not because autonomy is “hard,” but because connectors leak, housings crack, and a single flaky sensor turns navigation into guesswork.
That’s why a pressure-compensated, oil-filled manta-ray biomimetic ROS 2 robot shared by maker bernardo caught my attention. The idea is simple and opinionated: remove the biggest depth limiter (air-filled electronics bays) and build an underwater platform that’s quiet, relatively low-cost, and open enough for small science teams and conservation groups.
This matters to the broader AI in Robotics & Automation conversation because underwater work has the same operational realities as factories and warehouses: harsh environments, limited maintenance windows, and the need to automate repetitive tasks safely. The difference is that underwater, “downtime” can mean a lost vehicle.
Why pressure-compensated design is the real depth enabler
A pressure-compensated, oil-filled robot is built around one key principle: stop fighting external pressure with thick walls and seals; equalize it instead. When there’s no trapped air, there’s far less differential pressure trying to crush a housing or push water through microscopic gaps.
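A back-of-envelope calculation makes the point concrete. The numbers below are illustrative assumptions (seawater density of 1025 kg/m³, 100 m depth, a 10 cm diameter end cap), not figures from the project:

```python
import math

# Why trapped air is the depth limiter: differential pressure on an
# air-filled housing's end cap at depth. All values are assumed.
rho, g, depth = 1025.0, 9.81, 100.0           # seawater, gravity, meters
p_diff = rho * g * depth                      # Pa across an air-filled wall
area = math.pi * 0.05 ** 2                    # 10 cm diameter end cap, m^2
force = p_diff * area

print(f"{p_diff / 1e6:.2f} MPa, {force:.0f} N on the end cap")
# → 1.01 MPa, 7897 N
# An oil-filled, compensated hull sees roughly zero differential at any depth.
```

Roughly a tonne of force on one small end cap at 100 m, versus essentially nothing for a compensated design.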
Air-free electronics changes the cost curve
Traditional hobby AUV/ROV designs often rely on:
- Rigid pressure housings (acrylic tubes, aluminum pods)
- Penetrators and connectors (frequent failure points)
- Conservative depth ratings because one seal failure is catastrophic
In contrast, oil-filled electronics with compliant bladders or compensators push you toward:
- Fewer critical seals
- Smaller pressure housings (or none for many components)
- Better odds of surviving “unknown depth” experiments
The post claims “everything is oil-filled and air-free,” including camera and electronics, with modified components for higher pressure tolerance. That’s the right direction if your goal is more depth without a huge bill of materials.
Quiet propulsion isn’t a nice-to-have
A manta-ray form factor also signals something else: acoustic signature matters. Thrusters are loud; fin propulsion can be quieter. If your mission includes marine life monitoring, habitat surveys, or work near sensitive infrastructure, quiet operation isn’t just “cool”—it directly affects data quality and access.
And there’s a very practical industrial parallel: quiet, predictable motion is also what you want in human-adjacent automation (think hospitals, labs, and inspection routes near people).
What ROS 2 adds to biomimetic underwater robots
ROS 2 isn’t the point of the robot, but it’s the reason this concept scales beyond a one-off build. ROS 2 turns a clever mechanical platform into a repeatable software product.
Underwater autonomy needs modularity (more than it needs fancy AI)
Underwater autonomy stacks tend to sprawl quickly: IMU, depth sensor, leak sensor, power monitoring, camera, logging, teleoperation, navigation, and failsafes. ROS 2’s value is that it lets you:
- Swap sensors without rewriting everything
- Separate control loops from UI and logging
- Record and replay missions for debugging
- Add autonomy incrementally (dead-reckoning → waypoint nav → full mission behaviors)
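The decoupling that makes this possible can be sketched without a full ROS 2 install. The toy bus below is a hypothetical stand-in for ROS 2 topics (a real build would use rclpy nodes); the point is that the control loop and logger only know a topic name, so swapping the sensor behind it touches nothing downstream:

```python
from collections import defaultdict
from typing import Callable

class TopicBus:
    """Minimal in-process stand-in for ROS 2 topics (illustrative only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, cb: Callable):
        self._subs[topic].append(cb)

    def publish(self, topic: str, msg):
        for cb in self._subs[topic]:
            cb(msg)

bus = TopicBus()
log = []

# Control loop and logger subscribe to the topic, not to a specific IMU.
bus.subscribe("imu/data", lambda m: log.append(("ctrl", m)))
bus.subscribe("imu/data", lambda m: log.append(("logger", m)))

# Replacing a BNO08x with an SPI IMU only changes the publishing node.
bus.publish("imu/data", {"yaw_deg": 12.5})
print(log)  # both subscribers received the same message
```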
In the source project, the UI uses NiceGUI connected to ROS 2, with a plan to support remote operation via secure networking when tethered. That’s a pattern I like because it’s realistic: tethered-first is how you de-risk autonomy.
Reliability beats elegance in field robotics
The maker’s next steps are refreshingly practical:
- Replace a dead IMU
- Move from I²C to SPI/UART for reliability
- Build dead-reckoning
- Add waypoint navigation in the GUI
- Add DVL later for better navigation
That list is basically the underwater version of “make it robust before you make it smart.” In industrial automation, the same rule applies: a model that’s 2% more accurate doesn’t help if your sensor bus locks up twice per shift.
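Dead-reckoning itself is simple enough to sketch. Here is a deliberately minimal planar model, assuming heading from the IMU, a speed estimate, and depth from the pressure sensor; drift grows unbounded without a DVL, which is exactly why it's the later upgrade:

```python
import math

def dead_reckon(pose, heading_deg, speed_mps, depth_m, dt_s):
    """Advance an (x, y, depth) estimate one step from heading and speed.

    Illustrative sketch: no currents, no attitude coupling, no noise.
    """
    x, y, _ = pose
    h = math.radians(heading_deg)
    return (x + speed_mps * math.cos(h) * dt_s,
            y + speed_mps * math.sin(h) * dt_s,
            depth_m)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                      # 10 s heading east at 0.5 m/s, 2 m deep
    pose = dead_reckon(pose, 0.0, 0.5, 2.0, 1.0)
print(pose)  # → (5.0, 0.0, 2.0)
```

Any heading error or unmodeled current integrates into position error forever, which is why dead-reckoning pairs with conservative geofencing rather than replacing better sensing.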
The IMU problem: why I²C keeps biting robots in the field
The reported failure—cheap BNO08x board, clock stretching issues on Raspberry Pi, instability even after slowing the bus and bit-banging—maps to a common field failure mode.
Here’s the blunt take: I²C is fine inside a neat consumer device. It’s a frequent liability on mobile robots.
What’s actually happening when I²C “sort of works”
In real robots, I²C problems usually come from a mix of:
- Longer wires than the bus was designed for
- EMI from motors and switching regulators
- Weak pull-ups or mismatched voltage levels
- Clock stretching behavior that hosts handle inconsistently
- Inexpensive breakout boards with marginal layout and level shifting
When a system “works after slowing the clock,” that’s often your early warning that the margin is gone. The device isn’t “bad”; the integration is.
What to do instead (ranked by impact)
If you’re building underwater robots—or any robot you can’t easily retrieve—this is the reliability stack I recommend:
- Prefer SPI for IMUs when available. It’s less ambiguous electrically and tends to behave better in noisy systems.
- Use UART if the sensor supports it and your throughput needs are modest.
- Physically isolate sensor wiring from motor power lines and switching regulators.
- Add power integrity: dedicated regulator for sensors, local decoupling, and clean grounding.
- Log raw sensor health in ROS 2 (timeouts, re-init attempts, bus errors) so failures aren’t mysterious.
And yes—sometimes the board really is bad. Cheap IMU breakouts can be inconsistent batch-to-batch. If you need repeatability, pay for known-good supply chains.
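The logging point deserves a sketch. Counters like these (names and thresholds are my own, not from the project) would feed a ROS 2 diagnostics topic so that a flaky bus fails loudly instead of mysteriously:

```python
import time
from dataclasses import dataclass, field

@dataclass
class BusHealth:
    """Track sensor-bus health so failures show up in logs, not guesswork.

    Illustrative sketch: in a real stack these counters would be
    published on a diagnostics topic at a low rate.
    """
    timeouts: int = 0
    bus_errors: int = 0
    reinits: int = 0
    last_ok: float = field(default_factory=time.monotonic)

    def record_read(self, ok: bool, error: str = ""):
        if ok:
            self.last_ok = time.monotonic()
        elif error == "timeout":
            self.timeouts += 1
        else:
            self.bus_errors += 1

    def needs_reinit(self, stale_after_s: float = 1.0) -> bool:
        return time.monotonic() - self.last_ok > stale_after_s

health = BusHealth()
health.record_read(ok=False, error="timeout")
health.record_read(ok=True)
print(health.timeouts, health.bus_errors)  # → 1 0
```

Trends matter more than single events here: one timeout per hour is noise, ten per minute is a failing connector.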
Where AI fits: making a manta-ray robot “smarter” without making it fragile
People hear “AI underwater robot” and think of end-to-end policies. I take a more conservative stance: the best AI for underwater automation in 2025 is targeted intelligence wrapped in strong engineering.
AI that pays for itself underwater
These are AI capabilities that reliably improve mission outcomes without adding too much fragility:
- Perception-assisted piloting: detect obstacles, nets/lines, or structures from camera feed; overlay alerts in the operator UI.
- Quality control for science data: automatically flag blurry imagery, biofouled lenses, or CTD drift.
- Adaptive control: learn fin/thrust parameters that compensate for payload changes, minor leaks, or salinity differences.
- Anomaly detection: detect “this vibration pattern isn’t normal” or “power draw is rising” early enough to abort safely.
Notice the theme: AI as a co-pilot and watchdog, not a single point of failure.
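The anomaly-detection item doesn't need a neural network to start paying off. A rolling z-score on power draw, sketched below with an illustrative window and threshold that would need field data to tune, already catches "power draw is rising" early:

```python
from collections import deque
from statistics import mean, stdev

class PowerWatchdog:
    """Flag abnormal power draw with a rolling z-score (illustrative tuning)."""
    def __init__(self, window: int = 20, z_limit: float = 3.0):
        self.samples = deque(maxlen=window)
        self.z_limit = z_limit

    def check(self, watts: float) -> bool:
        anomalous = False
        if len(self.samples) >= 5:               # need some history first
            mu, sd = mean(self.samples), stdev(self.samples)
            if sd > 0 and (watts - mu) / sd > self.z_limit:
                anomalous = True
        self.samples.append(watts)
        return anomalous

wd = PowerWatchdog()
readings = [50.0, 51.0, 49.5, 50.5, 50.0, 49.8, 50.2, 80.0]  # fouled prop?
flags = [wd.check(w) for w in readings]
print(flags[-1])  # → True
```

The watchdog's job is only to trigger a safe abort early; diagnosing *why* the draw rose happens back on the bench.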
A practical autonomy ladder for this platform
If you’re building on a ROS 2 biomimetic AUV/ROV like this manta-ray concept, a staged roadmap reduces risk:
- Tethered teleop + recording (validate mechanics, power, comms)
- Dead-reckoning (IMU + depth + velocity estimate)
- Waypoint navigation (with conservative geofencing and abort conditions)
- DVL integration (tighten velocity; reduce drift)
- Vision-based station-keeping (hold position relative to a reef/structure)
- Mission behaviors (survey patterns, transects, return-to-home)
Each step increases value while keeping the vehicle retrievable.
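The "conservative geofencing and abort conditions" step is worth making concrete. A sketch, with assumed limits (50 m horizontal radius from launch, 10 m depth ceiling) chosen purely for illustration:

```python
def within_geofence(x, y, depth, *, max_range_m=50.0, max_depth_m=10.0):
    """Conservative bounds for early waypoint missions (illustrative limits)."""
    return (x * x + y * y) ** 0.5 <= max_range_m and depth <= max_depth_m

def next_action(pose, waypoint):
    x, y, depth = pose
    if not within_geofence(x, y, depth):
        return "abort_surface"          # failsafe beats mission completion
    wx, wy = waypoint
    arrived = abs(wx - x) < 0.5 and abs(wy - y) < 0.5
    return "hold" if arrived else "transit"

print(next_action((10.0, 5.0, 3.0), (12.0, 5.0)))   # → transit
print(next_action((60.0, 0.0, 3.0), (12.0, 5.0)))   # → abort_surface
```

Evaluating the abort condition before the mission logic, every cycle, is the design choice that keeps the vehicle retrievable while the navigation stack is still immature.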
Underwater automation has clear parallels to logistics and industrial robotics
Underwater robotics can feel niche, but the engineering patterns translate directly to industrial automation.
“Harsh environment robotics” is one category, not two
Factories deal with dust, vibration, washdowns, temperature cycling, and EMI. Underwater adds pressure and corrosion. The core requirements are shared:
- High uptime and predictable maintenance
- Fault-tolerant sensing and networking
- Remote monitoring and update capability
- Modular software that can evolve
A manta-ray biomimetic design also hints at a broader automation insight: non-wheeled locomotion matters. Warehouses have mostly standardized on wheels. Underwater, you can’t. That pushes innovation in control, estimation, and behavior—exactly where AI and ROS 2 shine when used carefully.
What this enables in real deployments
A low-cost, quiet, open platform isn’t just for hobbyists. It’s a candidate for:
- Infrastructure inspection (harbor walls, intake structures, aquaculture pens)
- Environmental monitoring (repeatable transects with consistent sensor payloads)
- Search tasks in confined or sensitive waters where loud thrusters are a downside
- Training and workforce development for robotics teams that can’t justify expensive commercial AUVs
If you’re in industrial automation, the lesson is transferable: design for maintainability and sensor reliability first, then add autonomy. That’s how you get systems that run for months, not demos that run for minutes.
What I’d build next on this ROS 2 manta-ray platform
If I were advising the next iteration (and I’ve found this order works across many robot types), I’d push these upgrades before “more features”:
1) Add instrumentation like it’s a production robot
- Battery coulomb counting and voltage sag logging
- Leak detection and humidity/pressure trend logging (even in oil-filled systems)
- Motor current sensing per actuator
The best autonomy is knowing when to stop.
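Coulomb counting, for instance, is a small amount of code for a large safety payoff. A sketch, assuming a current-sense reading at a fixed rate; the capacity figure and sampling setup are illustrative:

```python
class CoulombCounter:
    """Integrate battery current to estimate charge used; track voltage sag.

    Illustrative sketch: a real build reads amps/volts from a
    current-sense ADC instead of taking them as arguments.
    """
    def __init__(self, capacity_ah: float):
        self.capacity_ah = capacity_ah
        self.used_ah = 0.0
        self.min_voltage = float("inf")

    def sample(self, amps: float, volts: float, dt_s: float):
        self.used_ah += amps * dt_s / 3600.0      # amp-seconds → amp-hours
        self.min_voltage = min(self.min_voltage, volts)  # worst sag seen

    def remaining_fraction(self) -> float:
        return max(0.0, 1.0 - self.used_ah / self.capacity_ah)

cc = CoulombCounter(capacity_ah=10.0)
for _ in range(3600):                 # one hour at 2 A, sampled at 1 Hz
    cc.sample(amps=2.0, volts=14.8, dt_s=1.0)
print(round(cc.remaining_fraction(), 3))  # → 0.8
```

Pair the remaining-charge estimate with a return-home energy budget and the vehicle can abort on its own terms rather than going dark.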
2) Make the comms model explicit
Support two modes cleanly:
- ROV mode: tethered, low-latency teleop, high-rate video
- AUV mode: intermittent comms, mission scripts, strong failsafes
Don’t mix assumptions. Most robots get this wrong and end up unreliable in both modes.
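One way to keep the assumptions separate is to make the failsafe policy explicit per mode. The thresholds below are hypothetical: a live tether should almost never go silent, while AUV mode tolerates scheduled gaps but still has a hard ceiling:

```python
import time

class CommsFailsafe:
    """Make the ROV/AUV comms split explicit instead of implicit.

    Illustrative thresholds: 2 s of silence is alarming on a tether,
    while an untethered mission may legitimately go 2 min between contacts.
    """
    LIMITS_S = {"rov": 2.0, "auv": 120.0}

    def __init__(self, mode: str):
        assert mode in self.LIMITS_S
        self.mode = mode
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def action(self) -> str:
        silence = time.monotonic() - self.last_heartbeat
        if silence > self.LIMITS_S[self.mode]:
            # Tethered: hold and wait for the operator. Untethered: surface.
            return "surface" if self.mode == "auv" else "hold_and_wait"
        return "continue"

fs = CommsFailsafe("rov")
print(fs.action())  # → continue
```

Two small policies that never share state beat one clever policy that tries to guess which mode it's in.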
3) Treat sensor selection as a system decision
A “good IMU” underwater isn’t just about specs. It’s about:
- Interface stability (SPI/UART)
- Temperature behavior in oil
- Calibration and repeatability
- Driver maturity in ROS 2
That’s also where community groups (like marine robotics subcommunities) become valuable: you learn what fails in the field, not on the bench.
Where this is heading in 2026
A pressure-compensated, ROS 2-based biomimetic robot that can run hours per mission is already the right shape of solution for underwater automation. Add a reliable navigation stack (dead-reckoning + DVL) and a small set of AI capabilities (watchdog + perception assist), and you’ve got a platform that can do real work without a research lab budget.
If you’re tracking the AI in Robotics & Automation space, pay attention to projects like this. They’re not flashy corporate prototypes. They’re the kind of practical designs that—once copied and improved—change who gets access to robotics.
The forward-looking question I keep coming back to: when underwater robots become as maintainable as warehouse robots, what stops ocean monitoring from becoming routine instead of rare?