Aerospace Human Factors and AI [part 2/6]
The Relationship Between Human-Centred AI (HCAI) and Aviation Human Factors
In my article “Making a Positive Impact with Human-Centred AI (HCAI) in Aerospace”, I introduced the key aspects of HCAI, among them "user-centric design" and "safety and reliability", both fundamental when considering Human Factors.
This is the second article in a six-part series on Human Factors (HF) and AI, focused particularly on the adoption of AI in business processes. The series covers:
Part 1 - Introduction to Aviation Human Factors (HF)
Part 2 - The Relationship Between Human-Centred AI (HCAI) and Aviation Human Factors
Part 3 - AI's positive Impact on Aviation Human Factors
Part 4 - Emerging Human Factors Challenges Due to AI Adoption in Aviation
Part 5 - Understanding EASA's Building Block #3: A Deep Dive into Regulatory Compliance
Part 6 - Future Directions and Integration Strategies for HF and AI in Aviation
Today’s focus is:
Governing HCAI: key aspects
The HCAI Governance Layers
The Relationship between HCAI and Aviation Human Factors
Conclusions
Let’s dive in! 🤿
Governing HCAI: key aspects
To understand the relationship between HCAI and HF, we need to start by talking about HCAI Governance.
Governance in AI is crucial for several reasons, particularly as AI technologies become more integrated into our daily lives and critical systems.
In particular, Governance in HCAI focuses on ensuring that AI technologies are developed and deployed in ways that prioritise human welfare, values, and ethical considerations.
This is achieved by considering the following key aspects:
Ethical Design: Establishing standards to prevent biases and ensure fairness, aligning AI with ethical norms.
Accountability: Defining who is responsible for AI decisions, especially when they impact human health and safety.
Privacy Protection: Ensuring robust data protection, complying with privacy laws and upholding individual rights.
Building Trust: Promoting transparency and reliability in AI systems, which is vital for user adoption.
Inclusivity: Making sure AI systems are accessible to all, preventing the marginalisation of any group.
Regulatory Compliance: Helping organisations navigate evolving AI regulations to avoid legal issues.
Societal Sensitivity: Tailoring AI to respect diverse cultural and societal values.
The HCAI Governance Layers
Let’s review the layers required to achieve robust HCAI Governance:
The Government
The Industry, providing oversight
The Organisation using the AI tool
The Organisation developing the AI tool
The governance elements of HCAI are typically distributed among these layers, each playing a specific role to ensure the effective and ethical use of AI. The responsibilities break down as follows:
1. Government 🏛️:
Setting Standards and Guidelines: Develop and enforce comprehensive regulations that guide both the development and use of AI, ensuring standards for fairness, accountability, and transparency are met.
Legal Compliance: Monitor and enforce compliance with laws and regulations specific to AI, including data protection, anti-discrimination laws, and ethical standards.
Public Safety and Welfare: Address broader societal risks and implications of AI such as employment impacts, surveillance, and ethical challenges to protect public interest.
2. Industry 🏢:
Best Practices and Norms: Establish industry-specific guidelines and norms for responsible AI use that complement governmental regulations.
Collaborative Frameworks: Facilitate collaboration among companies to share best practices, insights, and strategies for effective AI governance.
Oversight and External Audits: Establish independent entities to monitor AI compliance and conduct periodic audits of data practices.
Issuing Approvals: Implement rigorous approval processes, particularly for safety-critical industries implementing AI in processes with a direct safety impact.
Self-Regulation and Peer Reviews: Encourage self-regulation through peer reviews and shared accountability mechanisms within the industry.
3. Organisation that uses AI 🖥️:
Safety Culture and Leadership Commitment: Foster a culture that prioritises safety through strong leadership commitment; ensure leaders actively promote and uphold safety standards in AI deployment and operations, embedding safety as a core value across all AI-related activities.
Implementation and Integration: Ensure that AI systems are integrated into business processes ethically and in compliance with both industry and legal standards, taking into account Change Management principles.
Data Governance and Security: Manage the data used by AI systems to ensure quality, integrity, and protection of sensitive information, aligning with privacy laws.
Risk Management: Identify and mitigate risks associated with deploying AI, including operational, reputational, and ethical risks.
Accountability and Compliance: Maintain accountability for decisions made by AI systems and ensure alignment with regulatory and ethical standards.
Stakeholder Engagement: Engage with stakeholders such as employees, customers, and the public to gather feedback, address concerns, and enhance transparency in AI usage.
4. Organisation That Designs AI 🛠️:
Development and Provision: Design and develop AI systems according to ethical standards, ensuring the technology is robust, secure, and performs as intended.
Transparency and Explainability: Provide clear documentation and explanations of how AI systems work, detailing the logic and decision-making processes involved.
Performance Monitoring and Continuous Improvement: Regularly update and refine AI algorithms to maintain accuracy, address emerging issues, and integrate new ethical guidelines as they evolve.
Independent Audits and Research: Collaborate with external auditors and third-party researchers to ensure ongoing compliance and improvement based on cutting-edge research and ethical considerations.
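The four-layer breakdown above can be sketched as a simple data structure. This is purely a hypothetical illustration (not a standard or an existing tool): the layer and responsibility names are taken from this article, and the helper function simply highlights which responsibilities a given layer has not yet addressed.

```python
# Hypothetical sketch of the four HCAI governance layers and their
# responsibilities, as described in this article (illustrative only).
HCAI_GOVERNANCE_LAYERS = {
    "government": [
        "setting standards and guidelines",
        "legal compliance",
        "public safety and welfare",
    ],
    "industry": [
        "best practices and norms",
        "collaborative frameworks",
        "oversight and external audits",
        "issuing approvals",
        "self-regulation and peer reviews",
    ],
    "organisation using AI": [
        "safety culture and leadership commitment",
        "implementation and integration",
        "data governance and security",
        "risk management",
        "accountability and compliance",
        "stakeholder engagement",
    ],
    "organisation designing AI": [
        "development and provision",
        "transparency and explainability",
        "performance monitoring and continuous improvement",
        "independent audits and research",
    ],
}

def coverage_gaps(addressed):
    """Return, per layer, the responsibilities not yet addressed.

    `addressed` maps a layer name to the set of responsibilities an
    organisation has already tackled.
    """
    return {
        layer: set(expected) - addressed.get(layer, set())
        for layer, expected in HCAI_GOVERNANCE_LAYERS.items()
    }

# Example: an operator that has so far only tackled risk management.
gaps = coverage_gaps({"organisation using AI": {"risk management"}})
print(sorted(gaps["organisation using AI"]))
```

A checklist like this is no substitute for an actual governance framework, but it shows how the layered responsibilities could feed a simple self-assessment.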
The Relationship Between HCAI and Aviation Human Factors
The relationship between Human Factors (HF) and Human-Centred Artificial Intelligence (HCAI) is both foundational and synergistic. Both disciplines focus on optimising system performance and safety by ensuring that AI technologies are designed and implemented with a deep understanding of human needs, capabilities, and limitations.
Shared characteristics include user-centred design, a focus on safety and reliability, and a deep respect for human capabilities and limitations.
Conclusions
In essence, the relationship between Human Factors and Human-Centred Artificial Intelligence is built on a mutual commitment to enhancing human capabilities, safety, and well-being through the thoughtful integration of AI into human environments.
This must be achieved across the various layers of HCAI governance.
By bridging the gap between human needs and technological advancements, this partnership aims to create a future where AI systems support humans in achieving their goals more effectively, ethically, and sustainably.
The next article will focus on the positive impact of AI on Aviation Human Factors.
Stay tuned to continue exploring.
That’s all for today.
See you next week. 👋
References
Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
Stanford University. Human-Centered Artificial Intelligence (HAI) website. Retrieved April 2024, from https://hai.stanford.edu/
IBM Research. IBM Research Blog on Human-Centered AI. Retrieved April 2024, from https://research.ibm.com/blog/what-is-human-centered-ai
McKinsey & Company. Human-centered AI: the power of putting people first. Retrieved April 2024, from https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/human-centered-ai-the-power-of-putting-people-first
Holzinger Group. Human-Centered AI lab. Retrieved April 2024, from https://human-centered.ai/
Microsoft Research. (2024). Advancing Human-Centered AI. Retrieved April 2024, from https://www.microsoft.com/en-us/research/blog/advancing-human-centered-ai-updates-on-responsible-ai-research/
Disclaimer: The information provided in this newsletter and related resources is intended for informational and educational purposes only. It reflects both researched facts and my personal views. It does not constitute professional advice. Any actions taken based on the content of this newsletter are at the reader's discretion.