Opinions expressed by Entrepreneur contributors are their own.
We’ve all read about how self-driving cars will soon dominate our roads. But what about pilotless passenger airplanes?
The technology is there, but experts predict it may be a while before commercial airline passengers fly without a human on the flight deck. Here’s why.
How people feel about self-driving planes
Before we dive into the technology behind autonomous air travel, let’s examine people’s attitudes about flying pilotless.
Surprisingly, the number of people comfortable with boarding a self-flying plane has increased over the years, for two main reasons.
First, self-driving cars have become increasingly common, so people are getting used to the idea of trusting their lives to a machine.
Second, technology and compliance for automated plane systems have improved significantly. Admittedly, there were plane crashes in 2018 that may have been at least partly due to system design failures, but the response was to increase safety and testing even further.
The results have been encouraging, reducing the number of crashes and making planes safer. Leading companies like Wisk (in partnership with Boeing) are designing autonomous eVTOL (electric vertical takeoff and landing) aircraft that can be certified for use without an onboard pilot.
Despite people’s growing comfort with self-flying planes, we’re still many years away from seeing fully automated passenger aircraft. Autopilot systems still rely on real humans to intervene in an emergency or make tough decisions about weather, radio interference or system failures.
A plane that flies from start to finish without a human pilot has no such safeguard, so it must rely entirely on formally certified advanced programming to make all difficult decisions.
Still, commercial passenger aircraft are gaining new systems that push automation further. For example, several aircraft now have certified auto-land systems: a single button push causes the aircraft to initiate and complete a fully automated landing at the nearest suitable airport. Such systems have already been used in small single-pilot aircraft when the pilot became incapacitated.
How is AI different from automation?
Automation is about mimicking human action, but not necessarily human thinking. Automated systems are fully programmed in advance by humans and are incapable of making any decision not previously programmed in by a human. These automated systems sometimes rely on AI, but not necessarily.
Automation is already widely used in aircraft for flight path analysis, automatic flight scheduling, safety alerts, and standard autopilot systems. It is reliable and predictable because the designer knows precisely how the machine will respond to various events. In aviation, this property is called "determinism," and it must be formally demonstrated through certification.
By contrast, true AI can learn and produce a different output for the same input. In other words, you could type the same question twice into an AI language processing system and get two different answers as a result of that system learning from its prior decisions.
A true AI system in one plane might weigh various factors and reach one decision, while an identical system in another plane might reach a different decision from the same factors. So unlike basic automation, the goal of AI is to mimic the way humans think. This type of AI is used in aviation, but it is currently limited to non-safety-critical, ground-based systems.
That’s because safety standards like DO-178C require precise reliability and predictability in avionics software, i.e., determinism. The aircraft designer needs to know and predict how the aircraft will react to, for example, a particular weather event or another aircraft crossing its path. At the same time, the very definition of true AI is that an equal input need not result in an equal output, making it unpredictable and potentially non-deterministic.
There’s no room for error in highly safety-critical systems. This means non-deterministic systems are out of the question today for letting a plane make mid-flight safety-critical decisions entirely based on AI. Fortunately, international aviation committees are rapidly addressing AI, including SAE International, which is drafting a formal aviation AI standard.
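To make the determinism idea concrete, here is a minimal sketch of a fully pre-programmed decision rule of the kind certification evidence relies on. The function name, thresholds, and logic are illustrative inventions, not taken from any real avionics system or standard; the point is only that identical inputs always produce identical outputs.

```python
# A minimal sketch of "determinism" in pre-programmed avionics-style logic:
# the same inputs must always produce the same output. All names and
# thresholds below are hypothetical, for illustration only.

def collision_avoidance_command(own_altitude_ft: float,
                                intruder_altitude_ft: float,
                                closure_rate_kts: float) -> str:
    """Fully pre-programmed rule: every branch was written by a human."""
    separation = own_altitude_ft - intruder_altitude_ft
    if closure_rate_kts <= 0:       # traffic is diverging: no action needed
        return "MAINTAIN"
    if abs(separation) > 1000:      # adequate vertical separation remains
        return "MAINTAIN"
    return "CLIMB" if separation >= 0 else "DESCEND"

# Determinism: repeated calls with identical inputs give identical outputs.
inputs = (35_000.0, 34_600.0, 120.0)
assert collision_avoidance_command(*inputs) == collision_avoidance_command(*inputs)
print(collision_avoidance_command(*inputs))  # prints "CLIMB"
```

A learning-based system offers no such guarantee, which is exactly why standards like DO-178C cannot yet accommodate it for safety-critical decisions.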
What are the development and compliance challenges for fully autonomous aircraft?
When will AI be used in planes?
To extend the use of AI to fully autonomous aircraft, a few key capabilities still need to mature.
One is deterministic, automation-based systems (not AI) that can cross-check the output of an AI system. Such an automated monitor can ensure that the AI software only acts within a safe range defined by standards like DO-178C and standard pilot training.
Another critical development will be redundancies that do not depend on AI. For example, the deterministic monitoring system will need the option to force the plane into a basic safe mode if the AI makes unsafe decisions. This safe mode could place hard restrictions on the AI or even switch over to non-AI systems entirely, avoiding a situation in which a malfunctioning AI causes a crash.
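The monitor-plus-fallback arrangement described above can be sketched in a few lines. This follows the general run-time assurance pattern (sometimes called a "simplex" architecture): a simple, fully pre-programmed checker bounds the AI controller's commands and reverts to a deterministic fallback when they leave a safe envelope. Every name and limit here is hypothetical.

```python
# Illustrative sketch of a deterministic monitor cross-checking an AI
# controller, loosely following the run-time assurance ("simplex") pattern.
# All names, envelopes, and controllers are hypothetical.

SAFE_PITCH_DEG = (-15.0, 20.0)   # hypothetical certified pitch envelope
SAFE_BANK_DEG = (-30.0, 30.0)    # hypothetical certified bank envelope

def ai_controller(state: dict) -> dict:
    """Stand-in for a learned controller; its output is not trusted."""
    return {"pitch_deg": state["target_pitch"], "bank_deg": state["target_bank"]}

def fallback_controller(state: dict) -> dict:
    """Deterministic non-AI behavior: wings level, gentle pitch."""
    return {"pitch_deg": 2.0, "bank_deg": 0.0}

def within(value: float, bounds: tuple) -> bool:
    lo, hi = bounds
    return lo <= value <= hi

def monitored_command(state: dict) -> dict:
    """Deterministic cross-check: pass AI output through only if it stays
    inside the safe envelope; otherwise drop to the non-AI safe mode."""
    cmd = ai_controller(state)
    if within(cmd["pitch_deg"], SAFE_PITCH_DEG) and within(cmd["bank_deg"], SAFE_BANK_DEG):
        return cmd
    return fallback_controller(state)  # safe mode: AI is overridden

# A safe AI command passes through; an unsafe one triggers the fallback.
print(monitored_command({"target_pitch": 5.0, "target_bank": 10.0}))
print(monitored_command({"target_pitch": 40.0, "target_bank": 10.0}))
```

The key design point is that the monitor itself contains no AI, so it can be certified deterministically even though the controller it supervises cannot.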
Until we develop such monitoring and redundancy systems for commercial use, planes will continue to use automation and AI in a limited capacity.