6 Warning system design in civil aircraft
Jan M. Noyes, Alison F. Starr and Mandana L.N. Kazem
Defining warning systems
Within our society, warnings are commonplace: from warning colours in nature and the implicit warning proffered by the jagged edge of a knife, to packaging labels and the more insistent auditory warnings (e.g. fire alarms) requiring our immediate attention. Primarily a means of attracting attention, the warning often, and most beneficially, plays both an alerting and an informational role, providing information about the nature and criticality of the hazard.
In many safety critical applications hazards are dynamic and may present
themselves only under certain circumstances. Warning systems found in such
applications are, therefore, driven by a ‘monitoring function’ which triggers when
situations become critical, even life-threatening, and attention to the situation (and
possibly remedial actions) are required. In summary, current operational warning
systems have the following functions:
1. Monitoring: Assessing the situation with regard to deviations from predetermined
fixed limits or a threshold.
2. Alerting: Drawing the human operators’ attention to the hazardous or
potentially hazardous situation.
3. Informing: Providing information about the nature and criticality of the
problem in order to facilitate a reaction in the appropriate individual(s) who
is (are) assessing the situation.
4. Advising: Aiming to support human decision-making activities in
addressing the abnormal situation through the provision of electronic and/or
hardcopy documentation.
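The four functions above can be illustrated as a minimal sketch in Python; the parameter names, values and procedures are hypothetical, intended only to show the monitoring-alerting-informing-advising flow, not any real system's logic:

```python
from dataclasses import dataclass

# Hypothetical parameter reading with a fixed upper limit (names and
# values are illustrative, not taken from any certified system).
@dataclass
class Parameter:
    name: str
    value: float
    limit: float

def monitor(params):
    """Monitoring: flag parameters that have crossed their fixed limit."""
    return [p for p in params if p.value > p.limit]

def alert(exceedances):
    """Alerting: draw the operator's attention to each condition."""
    return [f"ALERT: {p.name}" for p in exceedances]

def inform(exceedances):
    """Informing: describe the nature of each problem."""
    return [f"{p.name} at {p.value} exceeds limit {p.limit}" for p in exceedances]

def advise(exceedances, procedures):
    """Advising: look up the documented remedial procedure, if any."""
    return {p.name: procedures.get(p.name, "refer to QRH") for p in exceedances}
```

In practice each stage is far richer than a list comprehension, but the division of labour between them is the one given in the definition above.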
Safety-critical industries continually strive to attain operational efficiency and
maximum safety, and warning systems play an important role in contributing to
these goals. The design of warning systems in the civil flight deck application
will be considered here from the perspective of the user, i.e. as reported by the
crew. This emanates from a research programme concerned with the development
of an advanced warning system in this application area. One aspect of this
programme included a questionnaire survey of civil flight deck crew from an
international commercial airline; the aim being to highlight the user requirements
of future warning systems. Some of the findings from this work are discussed
towards the end of the three sections on alerting, informing and advising in order
to bring the pilots’ perspective to the design of future warning systems. This is
done within the context of the functions of the warning system highlighted in the
definition given at the start of this chapter.
Monitoring
The monitoring function is primarily a technology-based activity as opposed to a
human one. The role of the monitoring function is to ‘spot’ the deviation of
parameters from normal operating thresholds. When these threshold conditions
are crossed, a response from the warning system is triggered. The crossing of that
threshold has then to be brought to the attention of the operator. On the flight
deck, this is usually achieved through auditory and/or visual alerts. The earliest
monitoring functions were carried out by operators watching displays of values
waiting for this information to move outside of a limit. The simplest mechanical
sensor is activated when a set threshold condition is met. The mechanisms by
which the monitoring is now undertaken will vary from application to application,
depending on aspects relating to the safety critical nature of the system, the
functions being monitored, complexity of the system, and level of technology
involved. However, as the focus of this chapter is on the human activities these
mechanisms will not be discussed further and the three functions ‘alerting’,
‘informing’, ‘advising’ will provide the framework for consideration in the rest of
this chapter.
Alerting
In a complex system and when the situation is particularly critical a large number
of auditory and visual alerts can be activated, as in the Three Mile Island incident
(Kemeny, 1979). In this particular case, over 40 auditory alarms were triggered
and around 200 windows and gauges began to flash in order to draw the operators’
attention to the impending problem (Smither, 1994). A number of difficulties can
occur at this stage. For example:
a. The human operator(s) may fail to be alerted to the particular problem due
to overload or distraction. This can sometimes occur even with the
existence of the ‘attention-grabbing properties’ of the alerting system. An
example of this occurred on the Eastern Airlines L-1011 flight in 1972. All
of the flight deck crew became fixated with a minor malfunction on the
flight deck, leaving no operator flying or monitoring the rest of the aircraft.
Alerts indicating the unintended descent of the aircraft and thus significant
fall in altitude were unsuccessful in regaining the attention of the crew and
alerting them to the hazardous situation developing. The result was that the
aircraft crashed into the Everglades swamps with disastrous results (Wiener,
1977).
b. The alerting signal may also be inaccessible to the operator if sensory
overload occurs. Sensory overload at this early stage is a growing problem
as the number of auditory and visual alerts on the flight deck continues to
increase. In their survey of alarm management in chemical and power
industries, Bransby and Jenkinson (1997) found that the total number of
alarms on older plants was generally less than the total number found on the
modern computer-based distributed control systems. Likewise on the civil
flight deck, the number of auditory and visual alerts has increased over the
decades. For example, during the jet era the number of alerts rose from 172
on the DC8 to 418 on the DC10, and from 188 on the Boeing 707 to 455 on
the Boeing 747 (Hawkins, 1987), and to 757 on the newer Boeing 747-400.
This increase has largely been seen as a result of enhanced aircraft system
functionality and therefore a more general increase in system complexity.
Paradoxically, this increase in the number of alerts intended to help crew
comprehend the ‘dangerous’ situation can lead to the reverse effect,
especially in situations where several alerts appear simultaneously and are
abstract, therefore requiring association with a meaning. A recent Federal Aviation Administration (FAA) report highlighted this by stating ‘the more
unique warnings there are, the more difficult it is for the flight crew to
remember what each one signifies’ (Abbott, Slotte and Stimson, 1996, p.
56). When crew are overloaded with auditory alerts and flashing visual
messages, it may actually hinder appropriate response and management of
the situation.
It is important in the design of alerting systems to ensure that the flight crews’
attention will be drawn to a problem situation at an early stage in its development.
Flight deck alerting systems all have at least two levels of alert: the caution, indicating that awareness of a problem and possible reaction is required, and the warning, indicating a more urgent need for possible action. Ideally, the alerting system should enable the pilot to follow transitions between new ‘critical’ developments in conjunction with other flight deck information, while maintaining awareness of the current state of play at all times.
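These two alert levels can be sketched as simple threshold bands; the limits in the test values are illustrative, not drawn from any certified system:

```python
def alert_level(value, caution_limit, warning_limit):
    """Classify a parameter against two thresholds; assumes the
    exceedance grows upward and warning_limit > caution_limit."""
    if value >= warning_limit:
        return "WARNING"   # more urgent: possible action needed
    if value >= caution_limit:
        return "CAUTION"   # awareness and possible reaction required
    return "NORMAL"
```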
Having a system that facilitates the anticipation of problems would provide the
crew with more time to consider the outcome of making various decisions. An
example of this can be seen in the EGPWS (Enhanced Ground Proximity Warning
System) found on some civil flight decks. In this system, dangerous areas of
terrain, as relating to aircraft position, are depicted on a display. Increasing risk is
depicted by a change in colour or colour saturation. Effectively this is an alert of
changing urgency, which should direct crew attention to problems at an early
stage (Wainwright, 2000).
Individuals amongst the flight crew surveyed, who flew aircraft with one of the
types of CRT-based warning systems, tended to agree that their aircraft’s alerting
system was effective in allowing them to anticipate a problem (Noyes and Starr,
2000). This is not surprising since their alerting system was designed with a low
level alert that triggered before the main caution or warning alert, thus allowing
problems to be anticipated. On this aircraft, the low level alerting element of the
system automatically displays the relevant system synoptics when parameters drift
out of tolerance, but before they have changed sufficiently to warrant a full
caution or warning level alert. The other salient feature evident from the survey was that crews of fleets with a third crewmember also tended to agree that current systems allow anticipation. These systems facilitate anticipation, but do not ‘anticipate’ themselves. In a three-person flight crew, part
of the Flight Engineer’s role is to monitor system activity and anticipate failures.
In a two-person crew however, this aspect of systems’ management has been
replaced by increased numbers of cautions and warnings. Once these are
triggered, operators must undertake prescribed set actions. A possible solution lies in developing systems which can absorb this anticipatory role.
A truly anticipatory system has yet to be introduced to the flight deck.
However, there are many design difficulties in producing an anticipatory system
to be implemented in such a complex and dynamic environment. Given this fact it
is prudent to remember that design should not seek to replace the decision-maker,
it must support the decision-maker (Cohen, 1993); indeed, in some instances the
system design may not be capable of effectively replacing the decision-maker.
Results from our survey work highlighted some of the difficulties associated with
the development of anticipatory facilities. For example, the following comments
were made by flight deck crew in response to a question about having a warning
system with an anticipatory facility:
‘Most serious problems on the aircraft are virtually instantaneous – instrumentation giving anticipation would be virtually useless except on non-critical systems.’
‘Workload could be increased to the detriment of flight safety.’
‘Much aircraft equipment is either functioning or malfunctioning and I think
it lowers workload considerably to avoid unnecessary instrumentation and
advise pilots only of malfunctions.’
It could therefore be argued that perhaps it is best to leave the crew to fulfil all
but the simplest anticipatory tasks. The crews are after all the only individuals
with the benefit of experiencing the situation in hand; they may have information
not available to the system and therefore arguably are the only decision-makers in
a position to make appropriate predictions. Our survey work also indicated that
flight deck crew with experience of having a flight engineer bemoaned the fact
that the role of this person was gradually being phased out. This is particularly
pertinent given the anticipatory function of the flight engineer. However, systems
are becoming increasingly complex. Interrelationships between aspects of different
systems and the context in which a problem occurs are important factors in what is
significant for operator attention and what is not. Thus, returning to Cohen’s idea
of required operator support, some assistance with the anticipatory task could, if
correctly implemented, result in the better handling of problem situations.
A further consideration relating to alerting is that not all warnings may be ‘true’
warnings, as all warning systems can give false and nuisance warnings. False
warnings might occur, for example, when a sensor fails and a warning is
‘incorrectly’ triggered. In contrast, nuisance warnings are by definition accurate,
but unnecessary at the time they occur, e.g. warnings about open doors when the
aircraft is on the ground with passengers boarding, or a GPWS (Ground Proximity
Warning System) warning that occurs at 35,000 feet activated by an aircraft
passing below. Nuisance warnings tend to take place because the system does not
understand the context. The category of nuisance warnings may also be extended
to include warnings that are correct and relevant in the current situation, but have
a low level of significance under certain circumstances. For example, in some
aircraft, the majority of warnings will be inhibited during take-off as the
consequences of the fault(s) they report are considered to be low in contrast to
their potential to interrupt the crew during what can be a difficult phase of flight.
It could be concluded from our survey work that false warnings on modern
flight decks do not present a major problem, although in the words of one
respondent ‘One false warning is “too often”.’ If false or nuisance warnings occur
too frequently, they can encourage crews to become complacent about warning
information to the extent that they might ignore real warnings. This was summed
up by two respondents as follows: ‘… nuisance warnings have the effect of degrading the effectiveness of genuine warnings’ and ‘a small number of “nuisance” warnings can quickly undermine the value of warnings’. Hence, there
is a need to minimise false and nuisance warnings at all times. This may not be
possible with existing systems, but their reduction needs to be a consideration in
the design of new systems.
Another related problem of increasing concern involves the sensors on the
aircraft that fail more often than the systems themselves. As already discussed,
sensors failing may trigger a false warning condition, and a warning system that
could differentiate and locate possible sensor failures would have operational
benefits. Systems with such capability would better inform the crew and thus help
prevent them from taking unnecessary remedial actions and ensure the
maintenance of the full operating capability of the aircraft.
There are a number of different system solutions that could be implemented and
developed to overcome these problems. More reliable sensors that fail less often
comprise one mechanism for reducing false and nuisance warnings. The use of
context such as phase of flight to suppress warnings in order not to interrupt a
critical phase of flight with information is a feature on the new ‘glass’ warning
systems. These aircraft suppress all but the most critical warnings from 80 knots
to rotation, since at this point of the flight it will almost always be safer to leave
the ground than attempt to stop since there may not be enough runway left to do
this. This type of contextual support could be used to provide better information
in the future. For example, sensor or warning logic could consider context, such as simple logic relating to weight on wheels and no engines running, in order to restrict an alert about the aircraft doors being open. However, for other conditions, several more complex pieces of data may be required, along with an ‘understanding of the goal’ of the warning.
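The contextual inhibitions described here can be sketched as simple predicates; the signal names are hypothetical stand-ins for the conditions the text mentions:

```python
def suppress_door_warning(weight_on_wheels: bool, engines_running: bool) -> bool:
    """On the ground with no engines running (e.g. passengers boarding),
    an open door is expected, so the warning would be a nuisance."""
    return weight_on_wheels and not engines_running

def takeoff_inhibit(airspeed_kt: float, rotated: bool) -> bool:
    """Inhibit all but the most critical warnings from 80 knots to
    rotation, as the 'glass' systems described above do."""
    return airspeed_kt >= 80 and not rotated
```

Real inhibition logic draws on many more signals and phases of flight, but the principle is the same: the warning is gated by a model of the current context.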
Informing
Once the alert has been given, the operator(s) must use the information provided
by the alerting system, their knowledge, experience, and training as well as other
information displayed to them to be able to understand the nature and seriousness
of the problem. However, a number of human operator failures may affect this
process. Having been successfully alerted to a problem, the operator(s) may
respond by acknowledging the visual and auditory alerts, but fail to take any
further action, i.e. the operator(s) demonstrate a lack of compliance. On the civil
flight deck, crew bombarded by several loud auditory warnings (bells, buzzers and
other alarms) often initially cancel the alarms before attending to the problem.
However, this action of cancellation is no guarantee that they will do anything
further in terms of remedial action. This problem of initial response followed by
no further action has been well documented in aviation and medical environments
(see Campbell-Brown and O’Donnell, 1997; Edworthy, 1994). There are many
reasons for this. The crew may be distracted by the need to complete other
activities, and once having switched off the alerts may fail to turn their attention to
the reasons why the alerts occurred in the first place. Edworthy and Adams
(1996) studied the topic of non-compliance to alarms and suggested that operators
carry out a cost-benefit analysis in order to evaluate the perceived costs and
benefits of compliance and non-compliance to alarm handling. Information from
the warning system (including urgency information) will be considered in this
evaluation. Therefore, there is a need for the warning system to depict accurately
the nature and criticality of the problem in order to provide accurate information
for the pilot to aid their decision-making. At present there is much room for
improvement in this respect, especially with regard to auditory warnings
(Edworthy and Adams, 1996). For example, auditory alarms often activate too
frequently and are disruptively and inappropriately loud (Stanton and Edworthy,
1999). They can also be relatively uninformative. To quote a respondent from our survey of civil flight deck crew: ‘a lot of our audio systems are so powerful they scare you half out of your skin without immediately drawing you directly to the reason for the warning’ (Eyre, Noyes, Starr and Frankish, 1993).
Individuals need to assess the nature and extent of the difficulty, and to locate
the primary cause in order to initiate remedial actions. They have to evaluate and
consider the short-term implications of the difficulty, its criticality/urgency, any
compromise to safety and immediate actions required, as well as the longer-term
consequences for the aircraft, its systems and the operation/flight being
undertaken. The consequences of any action taken, whether immediate or
planned, must also be included in the assessment. In the development of new
alerting ‘supportive’ systems, this is the type of information that could be of
significant use to the operator. The underlying system would need to facilitate the
provision of this type of information, which then has to be presented to the
operator.
The situation being monitored is often complex with many components,
influences and interactions, and there is a need to take into account a large number
of parameters in order to assess the situation. Optimally the alerting system
should assimilate relevant information from a number of sources or facilitate this
task. This is difficult to realise in design as it is not always possible to predict
which elements of the potential information set will be relevant to each other and
to the particular situation. However, approaches are available which enable the relationships between elements, systems and context to be represented, as we indicated in our work on using a model-based reasoning approach to the design of
flight deck warning systems. In the past, integration of context/situation
information into the design of alerting systems has not been developed to any
great extent. For example, in the avionics application, warnings have been known
to be given relating to the failure of de-icing equipment when the aircraft was
about to land in hot climes, where there would be no need to have de-icing
facilities available.
Multiple warning situations are known to be a problem for crew, since the
primary failure may be masked by other less consequential cascade or concurrent
failures that take the crew’s attention, and maybe hinder location of the primary
cause. Cascade failures are failures that occur as a result of the primary failure, e.g. failure of a generator (primary failure) causing the failure of those systems powered by the generator (secondary failures). However, secondary failures may
be displayed before the primary as the display of a warning in most systems is
related directly to the point at which the threshold associated with a warning is
crossed. To quote one crewmember ‘I find it very difficult in multi-warning
systems to analyse and prioritise actions’. A further problem relates to concurrent
failures. The problem-solving characteristics of human operators are such that we
tend to associate alerts occurring simultaneously (or within a short space of time)
as having the same cause when this may not be the case (Tversky and Kahneman,
1974). Concurrent failures may also cause conflict in terms of remedial actions;
i.e. one solution may resolve one problem but worsen the situation for another. It
can therefore be quite difficult for crew to handle warning information in these
types of situation.
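One way a system could help locate the primary cause of a cascade is to trace active failures through a model of which systems depend on which. The sketch below assumes a hypothetical dependency map; real aircraft system models are far larger and multi-levelled:

```python
# Hypothetical dependency map: each system -> the system it is powered by.
DEPENDS_ON = {
    "hydraulic pump 1": "generator 1",
    "fuel pump 2": "generator 1",
    "cabin lighting": "generator 1",
}

def primary_causes(active_failures):
    """Trace each active failure back through the dependency map;
    failures with no failed parent are the candidate primary causes."""
    failed = set(active_failures)
    roots = set()
    for failure in failed:
        cause = failure
        while DEPENDS_ON.get(cause) in failed:
            cause = DEPENDS_ON[cause]
        roots.add(cause)
    return roots
```

Such tracing collapses a mass of secondary warnings to the single generator failure that explains them, which is precisely the analysis that currently falls to the crew.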
Many current alerting systems present warnings/cautions in the order in which
the signal reaches the method of display, and this has implications for the handling
of warning information. With classic central warning panels, large cascade type
failures lead to distinctive patterns of lights; recognition of these patterns can
enable the crew to identify the primary cause hidden amidst the mass. With glass
multifunction alerting systems, alerts are listed by criticality, e.g. all red warnings
first followed by all the amber caution alerts. In general, within each of these
categories temporal ordering is still used; new alerts enter at the top of the
appropriate list (warning or caution list). This effectively creates a dynamic list and can result in the primary cause of a multiple alert situation becoming embedded within its associated category list and possibly ‘hidden’ from view.
The crew in our survey noted this: ‘… it would be helpful if the most urgent was
at the top of the list’. However, some of these systems do use a limited set of
rules to analyse the incoming warning information and identify a set of key
primary failures which can lead to cascade effects e.g. generator failure. These
systems will pull out primary failures and present them first.
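The ordering just described — identified primary failures pulled out first, then severity, then recency within each category — could be sketched as a single sort key; the alert records and names below are hypothetical:

```python
SEVERITY = {"warning": 0, "caution": 1}   # red warnings above amber cautions

def display_order(alerts, primaries=frozenset()):
    """Order alerts as the glass systems described above do: identified
    primary failures first, then by severity, newest first within each."""
    return sorted(
        alerts,
        key=lambda a: (a["name"] not in primaries,   # primaries sort first
                       SEVERITY[a["level"]],          # warnings before cautions
                       -a["time"]),                   # newest at the top
    )
```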
The issue of handling secondary failures was addressed within the survey. Just
under two-thirds of the flight deck crew (65%) surveyed felt that the alerting
systems on their current aircraft were deficient in providing consequential
secondary information. A closer analysis of this 65% indicated a clear divergence between flight crew of glass flight deck aircraft and crew of other aircraft fleets. Less than 5% of the former group believed their alerting systems to
be deficient in this respect, indicating that the vast majority was satisfied.
Conversely between 45% and 70% of the respondents from each of the other
aircraft fleet groups regarded the provision of such secondary information, on
their aircraft, to be sub-optimum. Therefore, future alerting system designs should
facilitate the provision of secondary information.
Advising
A further aspect of the alerting system involves the use of instructional
information to support human decision-making activities, and ensure remedial
actions are appropriate and successful. On current flight decks, supporting
documentation can be both screen-based and in hard-copy format, whereas on
classic aircraft, i.e. aircraft that have warnings based on fixed legends on lights,
this information is provided in a paper Quick Reference Handbook (QRH). The
way in which this information is handled will depend on the severity, complexity
and frequency of the situation that activated the alert(s), as well as operator
experience, skills and knowledge. However, it should be noted that designers do
not always view advisory documentation as part of the alerting system. In our
work with flight deck crew it was viewed as an integral part of the alerting
systems, although, in certification terms, it may not be viewed as an essential
component of the operating system.
All of the aircraft within the questionnaire survey had a QRH or equivalent
document, e.g. the Emergency Checklist on the DC-10. For each aircraft, this
document serves as the primary source of reference for the necessary remedial
actions to be taken in abnormal flying situations. The documentation is originally
designed by the airframe manufacturer and modified by the management of the
operating company to meet their operating procedures. It would seem that there
might be a trade-off between the level of completeness of the QRH information
(e.g. its quantity and detail) and the ease with which the document can be used,
i.e. the more information provided, the more difficult the document is to use in practice. Paper presentation of such information will inevitably lead to this
problem as the information provided must be complete and therefore by nature
will be difficult to present in a format that can be used quickly and effectively.
Glass display presentation, on the other hand, could potentially help the pilot to
locate the appropriate material quickly by tailoring the information presented to
the situation.
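Such tailoring could be sketched as a simple lookup from active alerts to their drills; the index and drill steps below are invented for illustration and do not reflect any real QRH:

```python
# Hypothetical QRH index: alert name -> remedial drill (steps are
# invented for illustration, not real aircraft procedures).
QRH_INDEX = {
    "GEN 1 FAIL": ["check generator switch", "monitor electrical load"],
    "HYD 1 LOW": ["check hydraulic quantity", "select standby pump"],
}

def tailored_drills(active_alerts):
    """Present only the drills matching the active alerts, rather than
    leaving the crew to page through the complete handbook."""
    return {a: QRH_INDEX[a] for a in active_alerts if a in QRH_INDEX}
```

The completeness of the underlying index is unchanged; only the presentation is filtered, which is the trade-off the paper format cannot make.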
Evolution of flight deck warning systems
This lack of assimilation is apparent throughout the evolution of flight deck
alerting systems (see, Starr, Noyes, Ovenden and Rankin, 1997, for a full review).
Briefly, the early warning systems were a series of lights positioned on the
appropriate systems’ panels, and so were located across the flight deck (Gorden-Johnson, 1991). At this stage of evolution, warning indications were predominantly visual, and crew had to scan the panels continually to check for the
appearance of a warning. This discrete set of annunciators was gradually replaced
by the ‘master/central warning and caution’ concept, which involved the addition
of a master light that indicated to crew that a warning had been activated. This
was further developed into a centralisation of warning lights on a single panel
within the crew’s forward visual field (Alder, 1991).
The next development beyond physically locating the alerts together would be
to ‘integrate’ the alerting information for presentation to the crew, as mentioned
earlier. Although modern flight deck displays are referred to as integrated, they
are not truly integrated since they consist of single elements of information
displayed together according to circumstances and the current phase of flight
(Pischkle, 1990). A fully integrated alerting system would be capable of
monitoring and interpreting data from aircraft systems and flight operational
conditions in order to provide crew with a high-level interpretation of the
malfunction in the event of failures and abnormal conditions.
A fully integrated warning system has yet to be realised to any great extent; even in the latest civil aircraft, traditional alerting systems are generally used which conform to a ‘stimulus’ (e.g. valve out of limits) followed by ‘response’ (e.g. warning light) concept. Also, monitoring to an identified risk point is traditional,
and in the past there has been a lack of sophisticated display and control
technology to achieve integration. This may be due to the inherent design
difficulties in predicting information requirements, briefly noted earlier, and
previous lack of technical ability to realise such a systems solution. However, the
advent and implementation of more sophisticated software and programming
techniques means that alerting systems with a greater capability to integrate
information from a variety of sources can be developed, and such solutions are
gradually becoming a more realistic proposition (Rouse, Geddes and Hammer,
1990). Care must be taken not to allow such systems to exceed their inherent
limitations (due in part to our limited ability to predict the information
requirements of unpredictable situations) or reduce data visibility. O’Leary
(2000) indicates that the very task of converting data to knowledge is vital to the
pilot in facilitating good pilot decision-making and therefore we must think
carefully before removing this role from the crew.
A further point of contention relates to the certification requirements of alerting
systems. Given the criticality of alerting information it may be that the certification
requirements prevent such systems becoming feasible or economically viable.
However, by functionally separating the primary alerting processes from the more
informational and supportive processes of future alerting systems it may be possible
to incorporate data integration into a ‘support system’ whilst leaving the more
critical ‘alert’ to follow the more easily certifiable ‘stimulus-response’ concept.
General discussion
During each of the alerting, informing and advising functions, operator-involved
failures can occur: human operators may fail to be alerted to the warning, may fail
to assess it adequately, may neglect to respond to the warning situation and/or may
not make sufficient use of information available. As already stated, they may take
immediate action, but fail to make follow-up actions that will lead to the restoration
of normal operations, a point well documented by Campbell-Brown and O’Donnell
(1997) in their work on alarms in distributed control systems. In the process
control industry, as well as aviation, there are many reasons for this, from the
design of the warning system per se to task considerations and the overall design
philosophies of the organisation, operating policies and procedures, extending to
(user) practices (Degani and Wiener, 1994; Edworthy and Adams, 1996).
Analyses of specific human responses to warnings and explanations of their
failures are complex and multi-faceted, and outside the remit of the current chapter.
Perhaps the very idea of having humans interact with warning systems is a
problematic one. In many situations, the main part of the operator’s job may be
uneventful to the point of boredom with long periods of monitoring required. This
state can change very quickly when an event triggers an alarm or number of
alarms. Hence, the monitoring phase is interrupted by rapid activity, the
occurrence of which cannot be easily predicted, and may result in information
overload as the monitoring role assumed by the human operator changes to
diagnostician. This latter role requires the u