
Incorporating software failure in risk analysis – Part 2: Risk modeling process and case study

Published on Jun 1, 2020 in Reliability Engineering & System Safety
DOI: 10.1016/J.RESS.2020.106804
Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), Ali Mosleh (UCLA: University of California, Los Angeles), Jeevith Hegde (NTNU: Norwegian University of Science and Technology), + 1 author
Abstract
With the advent of autonomous cars, drones, and ships, the complexity of these systems is increasing, challenging risk analysis and risk mitigation, since incorporating software failures into traditional risk analysis is currently difficult. Current methods that attempt software risk analysis consider the interaction between hardware and software only superficially. These methods are often inconsistent regarding the level of analysis and often cover only selected software failures. This paper is a follow-up to Thieme et al. [1] and presents a process for analyzing functional software failures and their propagation, and for incorporating the results into traditional risk analysis methods such as fault trees and event trees. A functional view of software is taken, which allows software failure modes and their effects to be integrated into risk analysis and provides a common foundation for communication between risk analysts and domain experts. The proposed process can be applied during system development and operation to analyze the risk level and identify measures for system improvement. A case study focusing on a decision support system for an autonomous remotely operated vehicle working on a subsea oil and gas production system demonstrates the applicability of the proposed process.
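The fault-tree integration mentioned in the abstract can be illustrated with a small numerical sketch. The example below is not taken from the paper; the event names and probabilities are hypothetical, and it only shows how functional software failure modes might enter a conventional fault tree as basic events alongside hardware failures.

```python
# Minimal sketch: software failure modes entering a fault tree as basic events.
# All event names and probabilities are hypothetical illustrations, not values
# from Thieme et al.

def or_gate(*probs):
    """Probability that at least one of several independent events occurs."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Probability that all of several independent events occur."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical basic events (probability of failure per demand)
P_SW_WRONG_OUTPUT = 1e-3    # software functional failure: incorrect guidance value
P_SW_NO_OUTPUT    = 5e-4    # software functional failure: no output in time
P_SENSOR_FAIL     = 2e-3    # hardware: position sensor failure
P_THRUSTER_A_FAIL = 1e-2    # hardware: thruster A failure
P_THRUSTER_B_FAIL = 1e-2    # hardware: redundant thruster B failure

# Intermediate event: the decision support function fails
# (any software failure mode OR its sensor input failing)
p_dss_fails = or_gate(P_SW_WRONG_OUTPUT, P_SW_NO_OUTPUT, P_SENSOR_FAIL)

# Intermediate event: actuation fails (both redundant thrusters must fail)
p_actuation_fails = and_gate(P_THRUSTER_A_FAIL, P_THRUSTER_B_FAIL)

# Top event: loss of collision avoidance capability
p_top = or_gate(p_dss_fails, p_actuation_fails)

print(f"P(decision support fails) = {p_dss_fails:.2e}")
print(f"P(actuation fails)        = {p_actuation_fails:.2e}")
print(f"P(top event)              = {p_top:.2e}")
```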
References (22)
Jeevith Hegde, Eirik Hexeberg Henriksen, …, Ingrid Schjølberg (NTNU: Norwegian University of Science and Technology) · 4 authors
This article presents the process used to develop safety envelopes and subsea traffic rules for autonomous remotely operated vehicles (AROVs) used in subsea inspection, maintenance, and repair (IMR) operations. Preventing loss of subsea assets and the AROV is the overall goal of the proposed safety envelopes and subsea traffic rules. Currently, no such envelopes and rules exist. The safety envelope for the AROV is constructed using an Octree method. The proposed subsea traffic rules are...
Source
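The entry above names an Octree method for constructing the AROV safety envelope without giving details. The sketch below is a rough, hypothetical illustration of the general idea (recursive subdivision of a volume around obstacle points), not the construction used by Hegde et al.: leaf cells within a clearance distance of an obstacle are marked as part of the envelope, and points can then be tested against it.

```python
# Rough sketch of an octree-style safety envelope (hypothetical, not the
# construction used by Hegde et al.): recursively subdivide a volume and
# mark leaf cells lying within a clearance distance of any obstacle point.
from dataclasses import dataclass, field
import math

@dataclass
class Node:
    lo: tuple                   # (x, y, z) minimum corner of the cell
    hi: tuple                   # (x, y, z) maximum corner of the cell
    occupied: bool = False
    children: list = field(default_factory=list)

def _cell_near_obstacle(lo, hi, obstacles, clearance):
    """True if any obstacle point is within `clearance` of the cell box."""
    for ox, oy, oz in obstacles:
        dx = max(lo[0] - ox, 0.0, ox - hi[0])
        dy = max(lo[1] - oy, 0.0, oy - hi[1])
        dz = max(lo[2] - oz, 0.0, oz - hi[2])
        if math.sqrt(dx * dx + dy * dy + dz * dz) <= clearance:
            return True
    return False

def build(lo, hi, obstacles, clearance, depth=0, max_depth=4):
    node = Node(lo, hi)
    if not _cell_near_obstacle(lo, hi, obstacles, clearance):
        return node                      # free cell, no subdivision needed
    if depth >= max_depth:
        node.occupied = True             # leaf cell inside the envelope
        return node
    mid = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    for i in range(8):                   # eight octants
        bits = [(i >> k) & 1 for k in range(3)]
        clo = tuple(mid[k] if bits[k] else lo[k] for k in range(3))
        chi = tuple(hi[k] if bits[k] else mid[k] for k in range(3))
        node.children.append(build(clo, chi, obstacles, clearance,
                                   depth + 1, max_depth))
    return node

def inside_envelope(node, p):
    """True if point p lies in an occupied leaf cell of the octree."""
    if not all(node.lo[k] <= p[k] <= node.hi[k] for k in range(3)):
        return False
    if not node.children:
        return node.occupied
    return any(inside_envelope(c, p) for c in node.children)

# Hypothetical example: one obstacle near the centre of a 10 m cube
tree = build((0, 0, 0), (10, 10, 10), obstacles=[(5.0, 5.0, 5.0)], clearance=1.5)
print(inside_envelope(tree, (5.2, 5.1, 4.9)))   # True: inside the envelope
print(inside_envelope(tree, (0.5, 0.5, 0.5)))   # False: well clear
```

Free regions are never subdivided, which is what keeps this kind of representation compact compared with a uniform grid.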
Sergio Guarro, Michael K. Yau, …, Matt Knudson · 7 authors
9 Citations · Source
Jeevith Hegde, Ingrid Bouwer Utne, Ingrid Schjølberg (NTNU: Norwegian University of Science and Technology) · 3 authors
The objective of this article is to present a method for developing collision risk indicators applicable to autonomous remotely operated vehicles (AROVs), which are essential for promoting situation awareness in decision support systems. Three suitable risk-based collision indicators are suggested for AROVs, namely time to collision, mean time to collision, and mean impact energy. The proposed indicators are classified into different thresholds: low, intermediate, and high. An AROV flig...
4 Citations · Source
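The three indicators named in the entry above lend themselves to a short numerical illustration. The sketch below simplifies to a head-on, constant-closing-speed case; the thresholds and numbers are hypothetical and are not the ones proposed by Hegde et al.

```python
# Simplified sketch of collision risk indicators for an AROV.
# Head-on, constant-closing-speed case; thresholds and values are
# hypothetical, not those proposed by Hegde et al.
import math

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact if the closing speed stays constant."""
    if closing_speed_mps <= 0.0:
        return math.inf          # not closing in on the obstacle
    return distance_m / closing_speed_mps

def impact_energy(mass_kg, impact_speed_mps):
    """Kinetic energy (J) delivered at impact, E = 1/2 m v^2."""
    return 0.5 * mass_kg * impact_speed_mps ** 2

def classify(ttc_s, low_above=30.0, high_below=10.0):
    """Map a time-to-collision value onto hypothetical risk bands."""
    if ttc_s > low_above:
        return "low"
    if ttc_s < high_below:
        return "high"
    return "intermediate"

# Hypothetical scenario: AROV 8 m from a subsea structure, closing at 0.5 m/s
ttc = time_to_collision(distance_m=8.0, closing_speed_mps=0.5)
energy = impact_energy(mass_kg=120.0, impact_speed_mps=0.5)

print(f"time to collision: {ttc:.1f} s -> {classify(ttc)}")
print(f"impact energy:     {energy:.1f} J")
```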
Asim Abdulkhaleq (University of Stuttgart), Stefan Wagner (University of Stuttgart), Nancy G. Leveson (MIT: Massachusetts Institute of Technology) · 3 authors
Formal verification and testing are complementary approaches which are used in the development process to verify the functional correctness of software. However, the correctness of software cannot ensure the safe operation of safety-critical software systems. The software must be verified against its safety requirements which are identified by safety analysis, to ensure that potential hazardous causes cannot occur. The complexity of software makes defining appropriate software safety re...
22 Citations · Source
Christin Lindholm, Jesper Pedersen Notander, Martin Höst (Lund University) · 3 authors
Software failures in medical devices can lead to catastrophic situations. Therefore, it is crucial to handle software-related risks when developing medical devices, and there is a need for further analysis of how this type of risk management should be conducted. The objective of this paper is to collect and summarise experiences from conducting risk management with an organisation developing medical devices. Specific focus is put on the first steps of the risk management process, i.e. risk ident...
6 Citations · Source
Ali Mosleh (UMD: University of Maryland, College Park)
Probabilistic risk assessment (PRA) has been used in various technological fields to assist regulatory agencies, managerial decision makers, and systems designers in assessing and mitigating the risks inherent in these complex arrangements. Has PRA delivered on its promise? How do we gauge PRA performance? Are our expectations about the value of PRA realistic? Are there disparities between what we get and what we think we are getting from PRA and its various derivatives? Do current PRAs reflect the k...
23 Citations · Source
Chetan Mutha (OSU: Ohio State University), David C. Jensen (UA: University of Arkansas), …, Carol Smidts (OSU: Ohio State University) · 4 authors
11 Citations · Source
N. W. Ozarin
When complex software-controlled systems are subject to both software and hardware FMEA, conclusions are often incorrect in areas where software and hardware failures affect each other. These analysis errors occur because software specialists generally do not analyze hardware and hardware specialists generally do not analyze software, a situation that often leads them to use educated guesses when determining system-level effects in such crossovers. Sometimes a particular failure mode is assessed...
5 Citations · Source
Nancy G. Leveson (MIT: Massachusetts Institute of Technology), Cody Fleming (MIT: Massachusetts Institute of Technology), …, Chris Wilkinson (Honeywell) · 5 authors
7 Citations · Source
Irem Y. Tumer (OSU: Oregon State University), Carol Smidts (OSU: Ohio State University)
Software-driven hardware configurations account for the majority of modern safety-critical complex systems. The often costly failures of such systems can be attributed to software specific, hardware specific, or software/hardware interaction failures. The understanding of how failures propagate in such complex systems might provide critical information to designers, because, while a software component may not fail in terms of loss of function, a software operational state can cause an associated...
27 Citations · Source
Cited By (3)
Marilia Abilio Ramos (UCLA: University of California, Los Angeles), …, Ali Mosleh (UCLA: University of California, Los Angeles) · 4 authors
Autonomous systems operation will in the foreseeable future rely on the interaction between software, hardware, and humans. Efficient interaction and communication between these agents are crucial for safe operation. Conventional methods for hazard identification and safety assessment often focus on only one aspect of the system, e.g., human reliability, software failures, or equipment reliability. The method Human-System Interaction in Autonomy (H-SIA) was recently proposed, foc...
Source
Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), Ali Mosleh (UCLA: University of California, Los Angeles), Jeevith Hegde (NTNU: Norwegian University of Science and Technology) · 4 authors
2 Citations · Source
Marilia Abilio Ramos (NTNU: Norwegian University of Science and Technology), Christoph Alexander Thieme (NTNU: Norwegian University of Science and Technology), …, Ali Mosleh (UCLA: University of California, Los Angeles) · 4 authors
Maritime Autonomous Surface Ships (MASS) are the subject of a diversity of projects, and some are in the testing phase. MASS will probably include operators working in a shore control center (SCC), whose responsibilities may vary from supervision to remote control, according to the Level of Autonomy (LoA) of the voyage. Moreover, MASS may operate with a dynamic LoA. The strong reliance on Human-Autonomous System collaboration and the dynamic LoA should be included in the analysis of MASS to ens...
6 Citations · Source