
Incorporating software failure in risk analysis – Part 1: Software functional failure mode classification

Published on May 1, 2020 in Reliability Engineering & System Safety (journal impact factor: 4.039)
DOI: 10.1016/J.RESS.2020.106803
Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), Estimated H-index: 4
Ali Mosleh (UCLA: University of California, Los Angeles), Estimated H-index: 30
+ 1 author: Jeevith Hegde (NTNU: Norwegian University of Science and Technology), Estimated H-index: 3
Abstract
Advanced technological systems consist of a combination of hardware and software, and they are often operated or supervised by a human operator. Failures in software-intensive systems may be difficult to identify, analyze, and mitigate, owing to system complexity, system interactions, and cascading effects. Risk analysis of such systems is necessary to ensure safe operation. The traditional approach to risk analysis focuses on hardware failures and, to some extent, on human and organizational factors. Software failures are often overlooked, or it is assumed that the system's software does not fail. Research and industry efforts are directed toward software reliability and safety. However, the effect of software failures on the level of risk of advanced technological systems has so far received little attention. Most analytical methods focus on selected software failures and tend to be inconsistent with respect to the level of analysis. There is a need for risk analysis methods that are able to sufficiently take hardware, software, and human and organizational risk factors into account. Hence, this article presents a foundation that enables software failure to be included in the general framework of risk analysis. This article is the first of two articles addressing the challenges of analyzing software failures and including their potential risk contribution to a system or operation. Hence, the focus is on risks resulting from software failures, and not on software reliability, because risk and reliability are two different aspects of a system. Using a functional perspective on software, this article distinguishes between failure mode, failure cause, and failure effects. Accordingly, 29 failure modes are identified to form a taxonomy and are demonstrated in a case study. The taxonomy assists in identifying software failure modes, which provide input to the risk analysis of software-intensive systems, presented in a subsequent article (Part 2 of 2) [1].
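The taxonomy's core distinction can be pictured as a simple record in which failure mode, failure cause, and failure effect are kept as separate fields. The sketch below only illustrates that separation; the field names and the example mode labels are placeholders for illustration and do not reproduce the article's 29 published failure modes.

```python
# Minimal sketch (not from the paper): a record type that keeps failure mode,
# failure cause, and failure effect as separate pieces of information.
# The mode labels below are illustrative placeholders, not the published taxonomy.
from dataclasses import dataclass
from enum import Enum


class FailureMode(Enum):
    """Hypothetical functional failure modes (placeholders only)."""
    FUNCTION_NOT_PROVIDED = "function not provided when demanded"
    FUNCTION_PROVIDED_UNINTENDED = "function provided when not demanded"
    FUNCTION_OUTPUT_INCORRECT = "function output incorrect"
    FUNCTION_PROVIDED_TOO_LATE = "function provided too late"


@dataclass
class SoftwareFailureRecord:
    function: str          # the software function under analysis
    mode: FailureMode      # how the function fails (observable deviation)
    cause: str             # why it fails (e.g., specification or coding fault)
    effect: str            # consequence at system level, input to risk analysis


record = SoftwareFailureRecord(
    function="collision avoidance trajectory planning",
    mode=FailureMode.FUNCTION_PROVIDED_TOO_LATE,
    cause="unbounded loop in obstacle clustering under dense sensor input",
    effect="evasive maneuver commanded after minimum safe distance is violated",
)
print(record.mode.value)
```

Keeping the three fields separate mirrors the article's point that a risk analysis needs the observable functional deviation (the mode) as its interface, while causes and effects feed the causal and consequence sides of the analysis.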
References (46)
#1 Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), H-Index: 4
#2 Ali Mosleh (UCLA: University of California, Los Angeles), H-Index: 30
Last. Jeevith Hegde (NTNU: Norwegian University of Science and Technology), H-Index: 3
view all 4 authors...
3 Citations · Source
#1 Jeevith Hegde (NTNU: Norwegian University of Science and Technology), H-Index: 3
#2 Eirik Hexeberg Henriksen (NTNU: Norwegian University of Science and Technology), H-Index: 3
Last. Ingrid Schjølberg (NTNU: Norwegian University of Science and Technology), H-Index: 8
view all 4 authors...
Abstract This article presents the process used to develop safety envelopes and subsea traffic rules for autonomous remotely operated vehicles (AROVs) used in subsea inspection, maintenance, and repair (IMR) operations. Preventing loss of subsea assets and the AROV is the overall goal of the proposed safety envelopes and subsea traffic rules. Currently, no such envelopes and rules exist. The safety envelope for the AROV is constructed using an Octree method. The proposed subsea traffic rules are...
Source
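As a rough illustration of the octree idea mentioned in the entry above, the toy sketch below subdivides an axis-aligned cube around the vehicle and tests whether a query position falls in a cell flagged as belonging to the safety envelope. The cell sizes, depth, and membership rule are arbitrary assumptions for illustration, not the referenced paper's construction.

```python
# Toy octree sketch (assumptions only): recursive subdivision of a cube and a
# point-in-envelope query. Not the AROV safety envelope method from the paper.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]


@dataclass
class OctreeNode:
    center: Point
    half_size: float
    inside_envelope: bool = False          # leaf flag: cell is part of the envelope
    children: Optional[List["OctreeNode"]] = None

    def subdivide(self) -> None:
        """Split this cube into eight child octants."""
        h = self.half_size / 2.0
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]

    def contains(self, p: Point) -> bool:
        """Axis-aligned containment test for this cube."""
        return all(abs(p[i] - self.center[i]) <= self.half_size for i in range(3))

    def in_envelope(self, p: Point) -> bool:
        """True if the point lies in a leaf cell flagged as envelope."""
        if not self.contains(p):
            return False
        if self.children is None:
            return self.inside_envelope
        return any(child.in_envelope(p) for child in self.children)


# Toy usage: a 10 m cube around the vehicle, one octant flagged as the envelope.
root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=5.0)
root.subdivide()
root.children[0].inside_envelope = True
print(root.in_envelope(root.children[0].center))   # True
print(root.in_envelope((4.0, 4.0, 4.0)))           # False
```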
#1 Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), H-Index: 4
#2 Ingrid Bouwer Utne (NTNU: Norwegian University of Science and Technology), H-Index: 20
Last. Stein Haugen (NTNU: Norwegian University of Science and Technology), H-Index: 14
view all 3 authors...
Abstract Marine Autonomous Surface Ships (MASS) are tested in public waters. A requirement for MASS to be operated is that they should be at least as safe as conventional ships. Hence, this paper investigates to what extent the current ship risk models for ship-ship collision, ship-structure collision, and groundings are applicable to risk assessment of MASS. Nine criteria derived from a systems engineering approach are used to assess relevant ship risk models. These criteria aim at assessing relevant...
4 Citations · Source
#1 Børge Rokseth, H-Index: 2
#2 Ingrid Bouwer Utne, H-Index: 20
Last. Jan Erik Vinnem, H-Index: 22
view all 3 authors...
Technological innovations and new areas of application introduce new challenges related to safety and control of risk in the maritime industry. Dynamically positioned systems are increasingly used, contributing to a higher level of autonomy and complexity aboard maritime vessels. Currently, risk assessment and verification of dynamically positioned systems are focused on technical reliability, and the main effort is centered on design and demonstration of redundancy in order to protect against c...
12 Citations · Source
#1 Sergio Guarro, H-Index: 4
#2 Michael K. Yau, H-Index: 2
Last. Matt Knudson, H-Index: 4
view all 7 authors...
9 Citations · Source
#1 Jeevith Hegde (NTNU: Norwegian University of Science and Technology), H-Index: 3
#2 Ingrid Bouwer Utne (NTNU: Norwegian University of Science and Technology), H-Index: 20
Last. Ingrid Schjølberg (NTNU: Norwegian University of Science and Technology), H-Index: 8
view all 3 authors...
Abstract The objective of this article is to present a method for developing collision risk indicators applicable to autonomous remotely operated vehicles (AROVs), which are essential for promoting situation awareness in decision support systems. Three suitable risk-based collision indicators are suggested for AROVs, namely time to collision, mean time to collision, and mean impact energy. The proposed indicators are classified into different thresholds: low, intermediate, and high. An AROV flig...
4 Citations · Source
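The entry above names time to collision as one of the proposed indicators and bins indicators into low, intermediate, and high bands. The sketch below shows one plausible way to compute a time-to-collision value from relative position and velocity and to classify it; the range-rate formula and the numeric thresholds are assumptions for illustration, not the published definitions.

```python
# Hedged sketch: a simple time-to-collision estimate (range divided by closing
# speed along the line of sight) and an illustrative low/intermediate/high
# classification. Thresholds are invented for the example.
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def time_to_collision(rel_pos: Vec3, rel_vel: Vec3) -> float:
    """Time until range reaches zero on the current relative track.

    Returns math.inf if the objects are not closing on each other.
    """
    rng = math.dist(rel_pos, (0.0, 0.0, 0.0))
    closing_speed = -sum(p * v for p, v in zip(rel_pos, rel_vel)) / rng
    if closing_speed <= 0:
        return math.inf
    return rng / closing_speed


def classify_ttc(ttc_seconds: float) -> str:
    """Map a time-to-collision value onto illustrative risk bands."""
    if ttc_seconds < 10.0:
        return "high"
    if ttc_seconds < 60.0:
        return "intermediate"
    return "low"


# Example: obstacle 30 m ahead, AROV closing at 1.5 m/s.
ttc = time_to_collision(rel_pos=(30.0, 0.0, 0.0), rel_vel=(-1.5, 0.0, 0.0))
print(ttc, classify_ttc(ttc))   # 20.0 intermediate
```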
Sep 23, 2015 in SAFECOMP (International Conference on Computer Safety, Reliability, and Security)
#1 Asim Abdulkhaleq (University of Stuttgart), H-Index: 7
#2 Stefan Wagner (University of Stuttgart), H-Index: 43
Safety-critical systems are becoming increasingly complex and reliant on software. The increase in complexity and software renders ensuring the safety of such systems increasingly difficult. Formal verification approaches can be used to prove the correctness of software; however, even perfectly correct software could lead to an accident. The difficulty is in defining appropriate safety requirements. STPA (Systems-Theoretic Process Analysis) is a modern safety analysis approach which aims to i...
9 Citations · Source
#1 Asim Abdulkhaleq (University of Stuttgart), H-Index: 7
#2 Stefan Wagner (University of Stuttgart), H-Index: 43
Last. Nancy G. Leveson (MIT: Massachusetts Institute of Technology), H-Index: 49
view all 3 authors...
Abstract Formal verification and testing are complementary approaches which are used in the development process to verify the functional correctness of software. However, the correctness of software cannot ensure the safe operation of safety-critical software systems. The software must be verified against its safety requirements which are identified by safety analysis, to ensure that potential hazardous causes cannot occur. The complexity of software makes defining appropriate software safety re...
22 Citations · Source
#1 Prasanna Kumar N (ISRO: Indian Space Research Organisation)
#2 Smita Anirudha Gokhale (ISRO: Indian Space Research Organisation)
Last. K.M. Bharadwaj (ISRO: Indian Space Research Organisation), H-Index: 1
view all 6 authors...
1 Citation · Source
#1 Gee-Yong Park, H-Index: 1
#2 Dong Hoon Kim, H-Index: 1
Last. Dong Young Lee, H-Index: 1
view all 3 authors...
Abstract A method of software safety analysis for safety-related application software is described in this paper. The target software system is a software code installed at an Automatic Test and Interface Processor (ATIP) in a digital reactor protection system (DRPS). For the ATIP software safety analysis, first, an overall safety or hazard analysis is performed over the software architecture and modules, and then a detailed safety analysis based on the software FMEA (Failure Modes and Effe...
6 Citations · Source
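To make the software FMEA step mentioned above concrete, the sketch below models a single worksheet row with conventional severity, occurrence, and detection rankings and a risk priority number. The module name, effects, and 1-10 scales follow generic FMEA practice assumed for illustration, not the rankings used in the referenced analysis.

```python
# Illustrative software FMEA worksheet row with a conventional risk priority
# number (RPN). All names and scale values are assumptions for this example.
from dataclasses import dataclass


@dataclass
class SoftwareFmeaRow:
    module: str
    failure_mode: str
    local_effect: str
    system_effect: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (always detected) .. 10 (undetectable)

    @property
    def risk_priority_number(self) -> int:
        """Conventional FMEA ranking: higher RPN means higher analysis priority."""
        return self.severity * self.occurrence * self.detection


row = SoftwareFmeaRow(
    module="test-signal sequencing",
    failure_mode="sequence step skipped",
    local_effect="incomplete channel test",
    system_effect="undetected degradation of a protection channel",
    severity=8, occurrence=3, detection=6,
)
print(row.risk_priority_number)   # 144
```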
Cited By (2)
#1 Marilia Abilio Ramos (UCLA: University of California, Los Angeles)
#2 Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), H-Index: 4
Last. Ali Mosleh (UCLA: University of California, Los Angeles), H-Index: 30
view all 4 authors...
Abstract Autonomous systems operation will in the foreseeable future rely on the interaction between software, hardware, and humans. Efficient interaction and communication between these agents are crucial for safe operation. Conventional methods for hazard identification and safety assessment often focus on only one aspect of the system, e.g., human reliability, software failures, or equipment reliability. The method Human-System Interaction in Autonomy (H-SIA) was recently proposed, foc...
Source
#1 Marilia Abilio Ramos (NTNU: Norwegian University of Science and Technology), H-Index: 1
#2 Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), H-Index: 4
Last. Ali Mosleh (UCLA: University of California, Los Angeles), H-Index: 30
view all 4 authors...
Abstract Maritime Autonomous Surface Ships (MASS) are the subject of a diversity of projects, and some are in the testing phase. MASS will probably include operators working in a shore control center (SCC), whose responsibilities may vary from supervision to remote control, according to the Level of Autonomy (LoA) of the voyage. Moreover, MASS may operate with a dynamic LoA. The strong reliance on human-autonomous system collaboration and the dynamic LoA should be included in the analysis of MASS to ens...
2 Citations · Source