Artificial intelligence (AI) is having a profound impact on modern society, and virtually every industry is now affected by it. Generative AI is demonstrating a remarkable ability to match human performance in many areas, and all indications are that AI technologies will improve rapidly and become indispensable parts of many future systems, operations, and tasks.
Society has already had a glimpse of the impact of AI on ground transportation. Autonomous automobiles incorporating AI have the potential to quickly surpass human performance in terms of safety. Elon Musk famously tweeted in 2015 that “...when self-driving cars become safer than human-driven cars, the public may outlaw the latter.” Data seems to support this prediction. Fatal accidents in ground transportation have risen in recent years due to issues such as distracted driving, so there appears to be a growing safety case for AI-enabled driver-assist technologies as well as fully autonomous automobiles. Perhaps one of the most influential factors in defining trust in AI will be the digital native effect. Digital natives, having grown up in an era where technology and AI are naturally integrated into daily life, are often more comfortable with and accepting of technological breakthroughs and innovations. As the proportion of digital natives expands, it is reasonable to anticipate that society will become more accepting of fully autonomous transportation technologies.
Nonetheless, when it comes to safety expectations, society distinctly differentiates between aviation and ground transportation. Commonly cited reasons include a perceived lack of personal control, heightened public awareness, and the potential for more catastrophic consequences in aviation. Yet, in recent years, traffic fatalities have exceeded 42,000 deaths per year in the United States alone. In contrast, the aviation industry is now in the safest era since powered flight emerged. This is especially true for air carriers, which have not had a major fatal accident in more than 14 years, and 2023 was one of the safest years on record for the industry. While aviation has become safer, ground transportation has not. This reflects societal expectations of aviation safety, which are much higher than those for ground transportation.
To fuel future expectations, AI promises to profoundly improve aviation safety. For example, AI-driven predictive forecasting may significantly improve aircraft maintenance, reducing costs and potentially prolonging the life of systems. With its unprecedented data processing and analytics power, AI has the potential to identify important reliability trends and artifacts that humans may not discover. AI is also expected to improve weather prediction and forecasting for flight operations. It could further benefit air traffic management, flight planning and optimization, crew assistance and training, and supply chain management, and it may even assist in the detection of security threats. These scenarios hold considerable promise to improve safety and profitability and to streamline aviation business operations.
"As the proportion of digital natives expands, it is reasonable to anticipate that society will become more accepting of fully autonomous transportation technologies"
However, a more challenging aspect of AI in aviation will involve using AI technologies to enable automated flight activities (i.e., “fly the aircraft”). What if a system incorporating AI can be trained on exponentially more experience and data than is possible for a human pilot? That AI system or model could then be deployed immediately across entire fleets and continuously improved as flight operations continue. It is easy to imagine a scenario similar to Elon Musk’s 2015 prediction, in which AI-infused aircraft systems outperform humans rather quickly.
This opens up a profound challenge for society. How can those AI-driven systems be trusted, especially when they are capable of continual improvement and self-training?
While this sounds like a good thing, since the beginning of aviation we have meticulously defined “trust” based on the repeatable, predictable (deterministic) nature of aircraft and their systems. When we apply an input to a deterministic system, we expect a predictable and repeatable output or performance. That allows us to have dependable testing and certification processes. That is legacy trust.
However, those trusted processes will fundamentally change when a system used for flight can continually retrain and improve its performance. While not inherently negative, a system reliant on AI may not offer the same fidelity of repeatability and predictability. The hope is that these AI-influenced systems will outperform humans and deliver positive outcomes. But how can artifacts that could adversely impact the aircraft's performance be detected? Without distinct expectations of repeatability and predictability, how can such systems gain our trust?
Arguably, public trust is more crucial in aviation than in any other sector due to its critical role in ensuring safety and global connectivity and its profound impact on people’s lives. Navigating the evolution of trust in AI-based aviation systems requires a carefully executed, collective effort from regulators, industry experts, policymakers, and the public. Building confidence will rely on implementing explainable AI systems that provide understandable insights into their decision-making processes. Rigorous testing, validation, and certification processes are crucial to establishing the reliability and safety needed to foster the trust required of aviation systems. Emphasizing human-AI partnerships rather than full automation, coupled with ongoing training and education on AI capabilities and limitations, is also key. Ultimately, industry stakeholders must uphold an unwavering commitment to balancing global ethics, adopting responsible practices, and prioritizing transparency, accountability, and effective communication.
Ultimately, in the vast expanse of AI possibilities, the final frontier rests in human hands, echoing the resolve to ensure success, reminiscent of the profound choices faced by astronauts in "2001: A Space Odyssey." The collective wisdom of our human agency will guide the trajectory of AI integration, making each decision a pivotal odyssey toward shaping the future of aviation.