Today’s systems tend to have increasing complexity and interconnections with other systems, making security a necessary property that must be not only provided but also structured and argued in a clear way. This is especially the case when humans are in the loop and there are direct physical interactions between humans and systems, as such a setting makes the system safety-critical. For instance, an industrial robot, i.e., a (semi-)autonomous system, working in cooperation with humans or simply nearby them, needs to be secure, i.e., there should be sufficient confidence that its security level is adequate and in line with the current state of the art of threats, attacks, and system vulnerabilities.

A systematic way to build an acceptable level of confidence in system security is to collect evidence and arguments about the adequacy of security measures in a security case. Providing a security case is a demanding task: it requires a significant amount of time, money, and human effort. An analogy can be made with a safety case, which must be built for certification purposes; the certification cost for a safety-critical system is estimated to be up to 75% of its development cost.

However, security solutions are much more dynamic than safety solutions with respect to continuous updates and further refinements. More importantly, a system cannot be stamped as “secure” once and for all. Security as a system property can be provided and guaranteed to a specified extent only with respect to the current state of the art, as new system vulnerabilities are constantly exposed and new attack techniques are continuously developed. Security is dynamic by nature and requires run-time updates and refinements. Since developing a security case from scratch after every update is not feasible, the challenge of handling updates in a smart way within a security case needs to be addressed.
The notion of a dynamic assurance case has already been proposed in the safety domain, e.g., through the introduction of a set of rules for updates and a set of monitors establishing a link between the system and a confidence structure within the safety case. We believe that such techniques can be adopted, further developed, and complemented with other relevant solutions to handle a dynamic security assurance case.
The goal of this project is to investigate ways to enable dynamic security assurance using a security case, allowing run-time adaptation of the security case whenever updates occur, e.g., because new threats and vulnerabilities have been detected. This will allow us to identify the parts of the case that are affected by a particular change and consequently re-examine them. We will develop an approach to trace dependencies between a change introduced in a security solution and the arguments presented in the corresponding security case. Furthermore, we aim to complement a security case with techniques that allow adaptation to changes at run-time and provide adequate confidence in the possible consequences of the adaptation.

The work will provide a basis for a general methodology for the mapping and construction of a dynamic security case, possibly applicable also to a security-aware safety case (i.e., a safety case in which safety-relevant security aspects are considered). Introducing security considerations into safety cases makes it necessary to handle the dynamic nature of security and the relevant updates. A further potential extension of the idea is to investigate whether the proposed mapping methodology is suitable for the analysis of safety-security trade-offs, i.e., identifying how an introduced security solution affects system safety and, through the derived dependencies, investigating possible ways to optimize these system properties based on given criteria, such as cost or resource consumption.
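To make the dependency-tracing idea concrete, the following is a minimal illustrative sketch (not the project’s actual method, and all class and claim names are hypothetical): a security case is modeled as a graph of claims, where leaf claims rest on evidence tied to concrete system elements, and a change to an element is propagated upward to find every claim whose argument must be re-examined.

```python
# Hypothetical sketch: change-impact analysis over a security case graph.
# A change to a system element (e.g., a firmware update or a newly
# disclosed vulnerability) invalidates the leaf claims backed by evidence
# about that element, and transitively every claim they support.

from collections import defaultdict

class SecurityCase:
    def __init__(self):
        self.parents = defaultdict(set)   # claim -> claims it supports
        self.evidence = defaultdict(set)  # system element -> leaf claims it backs

    def supports(self, child, parent):
        """Record that sub-claim `child` supports claim `parent`."""
        self.parents[child].add(parent)

    def backed_by(self, claim, element):
        """Record that `claim` rests on evidence about system `element`."""
        self.evidence[element].add(claim)

    def impacted_claims(self, changed_element):
        """Return all claims needing re-examination after `changed_element` changes."""
        affected = set(self.evidence.get(changed_element, ()))
        frontier = list(affected)
        while frontier:
            claim = frontier.pop()
            for parent in self.parents[claim]:
                if parent not in affected:
                    affected.add(parent)
                    frontier.append(parent)
        return affected

# Example: a top claim rests on two sub-claims; one sub-claim is backed
# by evidence about the robot's firmware.
case = SecurityCase()
case.supports("C1.1: comms are encrypted", "C1: robot is secure")
case.supports("C1.2: firmware is up to date", "C1: robot is secure")
case.backed_by("C1.2: firmware is up to date", "firmware")

print(case.impacted_claims("firmware"))
# A firmware change invalidates C1.2 and, transitively, the top claim C1.
```

The point of the sketch is only that, once such dependencies are recorded explicitly, re-examination after an update can be restricted to the affected subgraph instead of rebuilding the whole case.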
Towards Security Case Run-time Adaptation by System Decomposition into Services (Oct 2018) Elena Lisova, Aida Causevic 44th Annual Conference of the IEEE Industrial Electronics Society (IECON'18)
Incorporating Attacks Modeling into Safety Process (Sep 2018) Amer Surkovic, Dzana Hanic, Elena Lisova, Aida Causevic, Kristina Lundqvist, David Wenslandt, Carl Falk 6th International Workshop on Assurance Cases for Software-intensive Systems (ASSURE 2018)
Towards Attack Models in Autonomous Systems of Systems (May 2018) Amer Surkovic, Dzana Hanic, Elena Lisova, Aida Causevic, David Wenslandt, Carl Falk System of Systems Engineering Conference (SoSE 2018)