What humans take for granted has so far not been realized in technical systems: the ability to listen to other people's conversations and to derive actions from the understood content. The project AcListant (Active Listening Assistant, 2013–2015) implemented a prototype for understanding controller-pilot communication.
Marc Schulder
Research Associate in Computational Linguistics
My research interests include sign languages, natural language processing, and open science.
Publications
Abstract
The use of prior situational/contextual knowledge about a given task can significantly improve Automatic Speech Recognition (ASR) performance. This is typically done by adapting acoustic or language models when data is available, or through knowledge-based rescoring. The main adaptation techniques, however, are either domain-specific, making them inadequate for other tasks, or static and offline, and therefore unable to handle dynamic knowledge. To circumvent this problem, we propose a real-time system that dynamically integrates situational context into ASR.
Abstract
The situation awareness of today's automation relies on sensor information, databases, and the information entered by the operator through an appropriate user interface. Listening to people's conversations has not been addressed so far, although it would be an asset in many team working situations. This paper shows that integrating automatic speech recognition (ASR) into air traffic management applications is an emerging technology that is ready for use now.
Abstract
This paper presents an approach for incorporating situational context information into an on-line Automatic Speech Recognition (ASR) component of an Air Traffic Control (ATC) assistance system to improve recognition performance.
Abstract
Air traffic controllers (ATCOs) are a core element of the flight guidance process. Decision support systems with accurate output data could increase controllers' performance or reduce their workload. Today, identifying controllers' intent from radar data causes delays of up to 30 seconds. Intents could be predicted better and earlier if the controllers' spoken commands were taken into account. The project AcListant® therefore combines an arrival manager (AMAN) with an automatic speech recognizer (ASR). Spoken commands are automatically recognized and forwarded to the AMAN, which updates its plan (e.g., sequence and aircraft trajectories). The ATCO also receives direct feedback on ASR recognition performance via an (optional) visual interface.
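The recognize-then-update flow described in this abstract can be illustrated with a minimal sketch. All class and function names below are hypothetical and greatly simplified; they are not taken from the actual AcListant® implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    """A recognized controller command, e.g. 'DLH123 DESCEND 80'."""
    callsign: str  # aircraft identifier
    kind: str      # command type, e.g. DESCEND or REDUCE
    value: int     # flight level, speed, etc.

@dataclass
class ArrivalManager:
    """Toy AMAN: its 'plan' is simply the latest command per aircraft."""
    plan: dict = field(default_factory=dict)

    def update(self, cmd: Command) -> None:
        # A real AMAN would recompute sequences and trajectories here.
        self.plan[cmd.callsign] = (cmd.kind, cmd.value)

def parse_transcript(text: str) -> Command:
    """Parse a simplified ASR transcript of the form 'CALLSIGN KIND VALUE'."""
    callsign, kind, value = text.split()
    return Command(callsign, kind, int(value))

# Recognized speech is forwarded to the AMAN, which updates its plan.
aman = ArrivalManager()
aman.update(parse_transcript("DLH123 DESCEND 80"))
print(aman.plan["DLH123"])  # ('DESCEND', 80)
```

The point of the sketch is only the data flow: ASR output becomes a structured command, and the planner consumes it directly instead of waiting for the intent to become visible in radar data.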