
Lessons Learned from an AI Submarine

Posted by Q McCallum on 2021-06-07

I recently came across an article on the Royal Navy testing an AI-managed submarine. Though the idea certainly raises an eyebrow, I have high hopes.

If you see this submarine as a prelude to a Terminator-style, machine-war hellscape, consider three points I’ve noted from the article:

1 - This R&D serves a real purpose. Advances in technology have a way of making dangerous jobs safer. Consider remotely piloted aircraft (RPAs, also known as UAVs or drones) or remote-controlled bomb-disposal robots. Both allow a human operator to accomplish a task – “gather intelligence in a hostile area” or “neutralize an explosive” – while keeping that person well out of harm’s way.

Unmanned, AI-controlled submarines could play a similar role for sailor safety. Instead of sending a human-crewed vessel on a dangerous, multi-week mission in hostile territory, why not send a machine?

2 - It’s being taken very seriously. This article aligns with other reports of AI adoption in the military: because the stakes are so high – a recurring theme is, “one mistake and we can start World War III” – everyone involved knows that they have to be thorough in their work. They can’t just shrug off a system failure, nor can they pass the liability to someone else. That means real AI, real planning, and real testing.

3 - The people working on this understand how to build this kind of system. When it comes to technology, a common mistake I see is for someone to try to automate an entire process at once. I prefer to decompose that process into smaller pieces, so that I can apply the technology where it best fits. And I see that kind of thinking here:

[Ollie Thompson, who works for MarineAI, the company building the sub’s AI brain] has no doubts about the challenge he and his colleagues face: “We know a lot of people don’t have confidence in AI. So we work with elements we can test, we separate things into boxes.”

He divides the AI problem into components, and mission management is the toughest: that component attempts to simulate the presence of a trained captain in the little submarine’s programming.
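To make that “separate things into boxes” idea concrete, here is a minimal, hypothetical sketch of how an autonomous vehicle’s software might be decomposed into independently testable components, with mission management as just one box that arbitrates among the others. None of this reflects MarineAI’s actual architecture; every class and method name below is my own illustration.

```python
# Hypothetical sketch: decomposing an autonomous vehicle's "AI brain" into
# small, independently testable components. The names and interfaces are
# illustrative only, not MarineAI's design.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class SensorReading:
    depth_m: float          # current depth in meters
    obstacle_ahead: bool    # simplified perception output


class Component(Protocol):
    """Each 'box' exposes a narrow interface so it can be tested in isolation."""
    def decide(self, reading: SensorReading) -> str: ...


class CollisionAvoidance:
    """One box: react to obstacles, and nothing else."""
    def decide(self, reading: SensorReading) -> str:
        return "evade" if reading.obstacle_ahead else "hold_course"


class DepthKeeper:
    """Another box: keep the vehicle inside a safe depth band."""
    def __init__(self, max_depth_m: float = 300.0):
        self.max_depth_m = max_depth_m

    def decide(self, reading: SensorReading) -> str:
        return "ascend" if reading.depth_m > self.max_depth_m else "hold_depth"


class MissionManager:
    """The hardest box: arbitrate between components, standing in for a captain."""
    def __init__(self, components: list[Component]):
        self.components = components

    def decide(self, reading: SensorReading) -> list[str]:
        # Collect each component's recommendation; a real system would
        # prioritize and resolve conflicts between them here.
        return [c.decide(reading) for c in self.components]


if __name__ == "__main__":
    manager = MissionManager([CollisionAvoidance(), DepthKeeper(max_depth_m=300.0)])
    reading = SensorReading(depth_m=320.0, obstacle_ahead=True)
    print(manager.decide(reading))  # ['evade', 'ascend']
```

Because each component has a small, well-defined interface, each one can be tested on its own before anyone wires them together, which is exactly the “we work with elements we can test” mindset Thompson describes.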

Lessons learned

I’ve noted before that the commercial sector could borrow technology ideas from the military, and this submarine research is a prime example.

In particular, any company using AI for automation (as opposed to innovation) could learn from how militaries are looking into ML/AI. What I see in this submarine R&D is a thoughtful, measured approach that puts safety at the forefront and pushes the hype to the side.