Tag: AI

January 2019
As a passenger nervously sits down in their window seat and glances outside at the ground crew finishing up, their neighbour leans over and whispers: ‘You know, flying is the safest mode of transport – much safer than driving your car.’

This fact has never been truer. Since the 1980s, aviation incidents have been in steady decline, and in 2018, for the first time, the global figure dipped below one hundred. This is despite a constant rise in air traffic: over 4 billion passengers and a record amount of cargo, corresponding to more than 35 million registered flights in our airspace this past year.

The aviation industry should be proud of its track record; however, this impressive scarcity of errors is creating new and unexpected problems. It is essential to train aviation professionals, from pilots to air traffic controllers, in failure and error management, but without examples and repeated exposure, how is one expected to learn? In response, aviation training organisations have developed methods ranging from flight simulators to air traffic real-time simulation exercises, which enable the practice of these failure scenarios. In certifying these professionals, we believe we have equipped them with the necessary tools and knowledge to handle the unlikely and unfortunate. However, one type of emerging technology may not yet have an answer to this lack of errors: Artificial Intelligence (AI) techniques.

The principle behind machine learning methods, the building blocks of AI, is that through exposure to vast amounts of data, the algorithm modifies itself according to set incentives in order to analyse a situation accurately. The training of an AI is therefore entirely dependent on the quality and quantity of the data it is presented with. Whilst current projects using these techniques are mainly focused on post-facto analysis, the real benefits of AI methods lie in prediction and optimisation. This is where the lack of errors and failures becomes an issue: how can an AI predict the outcome of events it has not been sufficiently trained for?
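
To make this principle concrete, here is a minimal, purely illustrative sketch in Python: a toy logistic model whose weights are shaped by nothing more than repeated exposure to data, with a loss function acting as the ‘incentive’. The data, features and task are all invented for illustration.

    import numpy as np

    # Toy illustration of the training principle: weights are adjusted by
    # repeated exposure to data, guided by an incentive (a loss function),
    # until the model maps inputs to accurate predictions.
    rng = np.random.default_rng(0)

    # Synthetic "situations": 2 input features, binary outcome
    # (e.g. on-time vs delayed) -- invented purely for this sketch.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    w = np.zeros(2)   # the weights the training process will shape
    b = 0.0
    lr = 0.1          # learning rate

    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model's current analysis
        grad_w = X.T @ (p - y) / len(y)         # how the loss moves with each weight
        grad_b = np.mean(p - y)
        w -= lr * grad_w                        # the "self-modification" step
        b -= lr * grad_b

    print("learned weights:", w)  # set by the data, not by a programmer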

In training humans, emotional stressors may enable more effective learning from failure situations. After all, who has not had an intense near-miss and thought: this is a mistake I’ll never make again! Whilst it is possible to increase the weighting given to some data, an AI exposed to only a few occurrences of a type of event will tend to over-fit – in other words, it will fail to recognise similar situations because they are not precisely the same. How, then, can this issue be addressed so that the benefits of AI techniques can be reaped? Two main solutions come to mind:

  • The sharing of training between similar organisations. For example, several airports of comparable size which all experience snow every few years could agree to train a single AI on all of their data, enabling it to deal appropriately with snowy days. One benefit here is that training an AI does not require the storage of input data within the program; instead, training modifies the relative importance (called weights) given to various inputs. As such, it is very difficult to reverse-engineer the trained AI to recover the initial data, which can address security concerns linked with data sharing (a toy sketch of this idea follows the list).
  • The production of simulated data which can be used to train the AI. A futuristic scenario might even see these simulated scenarios produced by another AI.
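
As a hypothetical illustration of the first option, the sketch below has three invented airports each train the same tiny model on their own data and then share only the resulting weights, which are averaged in the spirit of federated learning; no raw operational data changes hands. Airport names, features and labels are all made up.

    import numpy as np

    rng = np.random.default_rng(1)

    def train_local(X, y, epochs=300, lr=0.1):
        """Train a tiny logistic model locally; only its weights leave the site."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    # Each airport sees disruptive weather rarely, so its local data alone is thin.
    airports = {}
    for name in ["airport_a", "airport_b", "airport_c"]:  # hypothetical sites
        X = rng.normal(size=(200, 3))                # e.g. temperature, visibility, wind
        y = (X[:, 0] - X[:, 1] > 1.0).astype(float)  # rare "disrupted ops" label
        airports[name] = train_local(X, y)

    # The shared model is an average of the weights: no raw data was exchanged,
    # and the original observations cannot simply be read back out of these numbers.
    w_shared = np.mean([w for w, _ in airports.values()], axis=0)
    b_shared = np.mean([b for _, b in airports.values()])
    print("pooled weights:", w_shared, "bias:", round(b_shared, 3))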

Whilst the aviation industry can afford no room for error, it is important to recognise the benefits of failure in the training of both humans and AI systems. After all, we all learn from our mistakes.


June 2018
In a prior post I outlined some of the challenges facing the application of AI techniques, particularly those using Deep Learning methods, in safety-critical airport, ATM and aircraft operations. At the time I suggested that the current state of academic research in the area did not yet provide an answer to the challenges being faced; for anyone looking to implement solutions right now, that remains the case. But there are promising developments underway which suggest that, in time, we will have a greater ability to apply AI across all areas of aviation.

The activity currently taking the lead in trying to gain greater insight into the way in which AI systems make decisions is a DARPA initiative known as the Explainable AI (XAI) programme. It is a four-year funded programme seeking to develop a suite of tools for different machine learning techniques to enhance the explainability of the decisions they make as a function of the input information. The XAI programme is already generating a lot of interesting academic research in a range of areas that could lead to military and commercial applications.

Explanation through interpretability or justification

The core problem for safety-critical industries lies in explaining the outputs produced by an AI model. Explanation is currently being approached through the concepts of interpretability and justification. A model is considered interpretable if its workings can be understood by a person, either through introspection or through a produced explanation – for example, a simpler, more understandable model that performs a reasonable mapping of inputs to outputs. A justification of an AI model’s outputs seeks to explain why a particular output is a good one, but may or may not do so by explaining exactly how it was produced. Examples here include techniques such as ‘decomposition’ or those related to Principal Component Analysis (PCA), which describe the main features driving any particular decision. Different researchers are currently working on aspects of interpretability and justification for a range of AI models. Added to this are others working on post-hoc interpretability, as well as on new models that can explain their decision making in real time.
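
To illustrate the ‘simpler, more understandable model’ route, here is a small sketch on synthetic data with invented feature names: an opaque model is trained first, and a shallow decision tree is then fitted to its predictions as a surrogate whose input-to-output mapping a person can read. This is a generic surrogate-model technique, offered as an illustration rather than a method from the XAI programme itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    feature_names = ["visibility_km", "wind_kts", "traffic_density"]  # hypothetical

    X = rng.normal(size=(2000, 3))
    y = ((X[:, 0] < -0.5) | (X[:, 1] > 1.0)).astype(int)

    # The "black box" whose decisions we want explained.
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # The surrogate: a shallow tree fitted to the black box's *predictions*,
    # giving a human-readable approximation of its decision logic.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print(export_text(surrogate, feature_names=feature_names))
    print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())

The printed tree is the ‘explanation’; the fidelity score indicates how faithfully the simple model reproduces the complex one, which is exactly the trade-off interpretability research is grappling with.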

Post-hoc interpretability

Many of the current techniques for post-hoc interpretability are somewhat reminiscent of brain MRI scanning (e.g. the t-SNE maps shown in [1])! They focus on elements of the model that activate in the presence of certain features in the input, and attempt to infer how these elements combine to produce the outputs. This sort of approach lends itself to understanding why decisions were made, but being post-hoc in nature it does not necessarily help us understand how a particular model would behave when exposed to new information and inputs.
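
As a minimal sketch of this kind of activation-level inspection (in the spirit of the t-SNE maps in [1], though not a reproduction of that work), the snippet below trains a small network on synthetic data, recomputes its hidden-layer activations, and embeds them in two dimensions; clusters in the embedding hint at which situations the model has internally grouped together.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 4))            # synthetic inputs
    y = (X[:, 0] * X[:, 1] > 0).astype(int)  # synthetic labels

    mlp = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                        max_iter=2000, random_state=0).fit(X, y)

    # Post-hoc step: recompute the hidden activations (ReLU of the first layer)
    # for every input, i.e. look at which internal elements "light up".
    hidden = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

    # Embed the activations in 2-D; colouring the points by predicted class
    # (not shown) reveals the structure the model has learned.
    embedding = TSNE(n_components=2, random_state=0).fit_transform(hidden)
    print(embedding.shape)  # (500, 2)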

Real-time decision making

A separate strand of research is concerned with models that can explain their decision making in real time. In image classification applications in particular, there has been work on so-called secondary models, which learn to generate textual justifications for the classifications of the primary model. As an example, the primary model may be attempting to determine whether an airfield stand is occupied or not; the secondary model may then explain a decision that the stand is occupied with text such as “Stand occupied because an aircraft is parked, the boarding bridge is attached and cargo containers are being removed from the aircraft”.
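
The toy sketch below conveys the shape of this primary/secondary pairing. Both models are stubbed out with hand-written logic (in the research described, both are learned from data), and the feature names simply follow the stand-occupancy example above.

    def primary_model(image):
        """Stand-in for the primary classifier: a real model would detect
        these features from the image; here they are hard-coded."""
        return {"aircraft_parked": True, "bridge_attached": True,
                "cargo_activity": True}

    def secondary_model(features):
        """Stand-in for the learned justification model: maps the features
        behind a decision to a human-readable explanation."""
        reasons = [text for key, text in [
            ("aircraft_parked", "an aircraft is parked"),
            ("bridge_attached", "the boarding bridge is attached"),
            ("cargo_activity", "cargo containers are being removed"),
        ] if features.get(key)]
        verdict = "occupied" if features["aircraft_parked"] else "free"
        return f"Stand {verdict} because " + ", ".join(reasons) + "."

    print(secondary_model(primary_model(image=None)))  # no real image in this toy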

Pulling the strands together

So, what does all of this mean for the explainability of AI and our ability to deploy it in safety-critical applications? First and foremost, the research, whilst interesting and offering insights, is still very much at a preliminary stage. The XAI programme will inevitably kick-start broader work in the area of explainability, which is vital for future aviation uses, but the programme itself runs through to 2021 and will only lay the groundwork of what our industry needs. Secondly, given the diversity of research avenues and approaches, it will become important in the near future for aviation to feed its own needs into the academic research arena, to ensure that it answers the right questions for our industry. I wonder who will step forward to make sure this happens?


[1] T. Zahavy, N. Ben-Zrihem and S. Mannor, “Graying the black box: Understanding DQNs”, arXiv:1602.02658v3 [cs.LG], 17 Feb 2016.
