Generative AI won’t make flying safer, but this AI will
November 13, 2024

Author
Mark Groden, PhD

Mark Groden is the founder and CEO of Skyryse, the developers of SkyOS, a Deterministic Expert AI-based universal operating system for flight.

Search summaries that suggest eating rocks. Buzz Aldrin bringing cats to the moon. Chatbot hallucinations. It’s easy to laugh at the absurdity of some of these stories, but the shortcomings of today’s non-deterministic artificial intelligence systems – like Generative AI – aren’t just limited to the digital space.

The past year’s high-profile accidents and safety investigations involving self-driving cars reveal the real-world dangers of deploying non-deterministic AI systems – and the threat they pose to the advancement of highly-automated Deterministic Expert AI systems that can save lives today.

Skyryse, the company I founded over seven years ago, is the developer of SkyOS, a Deterministic Expert AI-based universal operating system for flight. Our first aircraft with SkyOS, the Skyryse One, is the world’s first highly-automated fly-by-wire helicopter with a single control stick.

I’ve spent my entire life working to make an automated future a reality, and what I’ve found over my career is that the capability just isn’t there yet to safely adopt non-deterministic, user-replacing, fully autonomous systems at scale in consumer transportation.

Even Elon Musk and Tesla – the kings of selling the dream of an autonomous future – have started to use the term “supervised” when talking about their Full Self-Driving feature. Maybe that’s a desire to protect themselves from litigation. Maybe it’s acknowledging reality. I don’t know for sure, but my gut tells me I’m probably not the only leader in this space who believes this.

But that doesn’t mean there isn’t a way for us to develop and deploy technologies that can make travel safer for all of us. In the United States alone, there are more than 1,200 aviation accidents – and more than 400 deaths – every year in general aviation (aircraft smaller than commercial airliners).

We have a moral imperative to save those lives if the technology exists to save them. And it does. It’s just built using a different kind of AI – a Deterministic Expert AI system.

Generative AI is an example of a non-deterministic AI system. As you’d expect from the name, it generates new data from patterns learned by ingesting massive amounts of input data, with no rules about which data is good or bad. These systems might seem flexible and creative – especially when dealing with edge scenarios – but they’re massively unpredictable: the same set of inputs does not always produce the same output.

Deterministic Expert AI systems, on the other hand, operate with a defined set of rules and are built with data pulled from the minds of the best category experts. They lend themselves well to highly-automated systems because they’re all about predictability and consistency, generating the same output for the same input every single time. That’s really important in any situation where safety is paramount.
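
To make that distinction concrete, here is a minimal, purely illustrative Python sketch – not SkyOS code; the function names and the 60-knot threshold are invented for this example – contrasting a rule-based controller that always returns the same output for the same input with a sampled, generative-style policy that may not.

```python
import random

# Purely illustrative sketch (not SkyOS): the names and the 60-knot
# threshold are invented for this example.

def deterministic_controller(airspeed_kts: float) -> str:
    """Expert-written rule: the same input always yields the same output."""
    if airspeed_kts < 60:
        return "increase power"
    return "hold power"

def generative_style_controller(airspeed_kts: float) -> str:
    """Stand-in for a sampled model: repeated identical calls can disagree."""
    return random.choice(["increase power", "hold power", "reduce power"])

# Deterministic: five identical inputs, one identical answer.
print({deterministic_controller(55) for _ in range(5)})    # {'increase power'}

# Non-deterministic: five identical inputs, possibly several answers.
print({generative_style_controller(55) for _ in range(5)})
```

The point of the toy example is the set printed on the last line: a rule-based system collapses to a single, auditable answer, while a sampled policy can return a different one each run.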

The one limitation of Deterministic Expert AI systems is the unanticipated edge scenario. Writing a rule for any and all edge situations is massively difficult at scale. This is where some argue for Generative AI-based solutions to fill that gap.

That argument is dead wrong.

The unpredictable nature of current non-deterministic AI systems doesn’t lend itself to life-or-death decision-making – especially not in aviation, where a flexible, highly-trained, creative problem solver for edge scenarios is already sitting in the cockpit: the human pilot.

Pilots work hard to learn how to fly and to master the equipment they fly. The right solution today for making aviation safer isn’t trying to replace pilots – it’s harnessing the expertise of the world’s best pilots into highly-automated Deterministic Expert AI systems and putting human beings even more firmly in command. The result would be flying that is simpler, more intuitive, and less prone to error.

The greatest danger is that when these nascent and not-ready-for-prime-time technologies fail today – as we saw in San Francisco recently with self-driving cars – it harms trust in any solution based on AI and imperils any chance for a highly-automated future.

Given that, on stage this week at the National Business Aviation Association’s (NBAA) annual Business Aviation Convention & Exhibition (BACE), I defended our belief that until non-deterministic autonomous solutions are trustworthy enough to be deployed at scale, only Deterministic Expert AI-based solutions should be allowed in aviation with people onboard.

We need to protect the public’s ability to trust simple, safer, highly-automated Deterministic Expert AI-based solutions in transportation. By doing that, we can save more lives today – and build more trust in eventual AI-based solutions in the future.
