The Regulatory Landscape of AI in Aerospace
Dive into the aerospace regulatory implications of AI and discover the EASA roadmap for trust in AI technology.
The aerospace industry is on the cusp of a transformative shift, driven by the emergence of Artificial Intelligence (AI).
EASA's "Artificial Intelligence Roadmap 2.0" and the proposed "EASA Concept Paper: guidance for Level 1 & 2 machine learning applications" are setting the stage for AI's role in safety and the environment.
As an aerospace specialist, I'm eagerly strapping in for this disruptive tech journey.
But with great innovation comes even greater responsibility, leading to some crucial questions.
Here's what's on my mind:
What are the regulatory implications in the aerospace industry?
How can we establish trustworthiness in AI technology?
What's the regulatory roadmap?
Let's dive in! 🤿
The regulatory implications in the aerospace industry
EASA's 'Artificial Intelligence Roadmap 2.0' and the 'EASA Concept Paper: guidance for Level 1 & 2 machine learning applications' shed light on these pressing questions, providing insights into what lies ahead in the aerospace AI landscape.
Any safety- or environment-related application within the domains of the EASA Basic Regulation (Regulation (EU) 2018/1139) is affected.
This includes:
Initial and continuing airworthiness
Maintenance
Air operations
ATM / ANS
Training
Aerodromes
Environmental protection
The criteria to establish trustworthiness in AI technology
To establish trustworthiness criteria for the use of AI technology, EASA has defined four building blocks:
Block #1 - AI trustworthiness analysis:
This block interfaces with the EC Ethical Guidelines and serves as the gateway to the other three building blocks.
The assessment considers:
Characterisation of the AI application
Safety
Security
Ethics
This block also addresses the classification of the AI application, which is based on the level of “human oversight and authority” retained by the end user:
Level 1 - human assistance
Level 2 - more automation
Level 3A - advanced automation with human supervision
Level 3B - full autonomy
Block #2 - AI assurance:
Addresses how the AI system learns and is trained, and how it produces understandable, explainable outputs.
Block #3 - Human factors for AI:
Introduces guidance to account for the specific human-factors requirements that arise when introducing AI.
Block #4 - AI safety risk mitigation:
Addresses residual risks associated with inherent AI uncertainty.
The application of proportionality
Two main criteria can be used to anticipate the proportionality of the Means of Compliance:
The AI level of the application: Level 1, 2, or 3.
The level of criticality of the AI system: EASA anticipates modulating assurance objectives according to the assurance level of the AI system. For initial or continuing airworthiness and air operations, for example, failure-condition classifications range from catastrophic to minor, depending on the safety margins remaining in case of AI application failure.
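As an illustration only, the two proportionality criteria could be combined along these lines. The AI levels and failure-condition severities come from the concept-paper framework; the scoring, thresholds, and rigour labels below are my own hypothetical sketch, not EASA's actual Means of Compliance:

```python
# Hypothetical sketch: combining the two proportionality criteria
# (AI level and criticality). The scoring and thresholds are invented
# for illustration and are NOT EASA's actual guidance.

AI_LEVELS = {
    "1": "human assistance",
    "2": "more automation",
    "3A": "advanced automation with human supervision",
    "3B": "full autonomy",
}

# Failure-condition severities, ordered from least to most severe.
SEVERITIES = ["minor", "major", "hazardous", "catastrophic"]


def anticipated_rigour(ai_level: str, severity: str) -> str:
    """Return a rough, illustrative indication of assurance rigour."""
    if ai_level not in AI_LEVELS or severity not in SEVERITIES:
        raise ValueError("unknown AI level or failure-condition severity")
    level_score = {"1": 0, "2": 1, "3A": 2, "3B": 3}[ai_level]
    severity_score = SEVERITIES.index(severity)
    total = level_score + severity_score
    if total <= 1:
        return "baseline assurance"
    if total <= 3:
        return "reinforced assurance"
    return "highest assurance"
```

For instance, a Level 1 assistant whose failure is at most minor would sit at the low end of the scale, while a fully autonomous (Level 3B) system with catastrophic failure conditions would attract the most demanding objectives.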
So what is in the regulation landscape?
EASA is adjusting and proposing timelines based on feedback from key players in the AI domain, including industry, research institutes, and other institutions.
The deployment of machine-learning solutions in civil aircraft certification projects has begun, with initial applications limited in scope.
The initial focus has been on commercial air transport and ATM/ANS, although the drone industry is pushing for full autonomy earlier than commercial air transport.
Most industry players currently aim for Level 1 AI applications (human assistance) by around 2025, with plans to progress to Level 2 AI (more automation) by approximately 2035 for commercial air transport.
Level 3A (advanced automation with human supervision) and 3B (full autonomy) are expected between 2035-2050.
Conclusions
The aerospace industry is entering a transformative era, with Artificial Intelligence (AI) at its forefront.
In this dynamic landscape, embracing AI is not merely an option but a necessity.
It offers unprecedented opportunities to enhance safety, streamline operations, and drive innovation in aerospace. However, with these remarkable advancements comes a critical need to establish trust in AI systems and technologies.
EASA is adapting to AI's evolving landscape.
Embracing AI, building trust, and staying aligned with regulators chart the path toward a prosperous AI-enabled future for aerospace.