Contemporary scheduling of real-time tasks on multicore architectures is inherently unpredictable, and activities in one part of a system (subsystem) can have a negative impact on performance in unrelated parts of the system. A major source of such unpredictable negative impact is contention for shared physical memory. In essence, there are currently no mechanisms that allow a subsystem to protect itself from negative impact when other subsystems start stealing its memory bandwidth. For performance-critical real-time systems, overcoming this problem is paramount.
In this project we will investigate novel methods to preserve the performance of a subsystem in the face of changes in other subsystems, or even during larger changes of the software architecture. We will do so by treating memory bandwidth as a shared resource to be arbitrated by the operating system, thus guaranteeing each subsystem a certain amount of memory accesses. Our methods will not only preserve performance in the face of system changes; they will also allow performance to be fine-tuned and let systems engineers make tradeoffs between resource allocations to different parts of the system.
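To make the arbitration idea concrete, the following is a minimal sketch of a per-subsystem memory-bandwidth server, in the spirit of budget-based schemes such as MemGuard. The class name, parameters, and replenishment policy are illustrative assumptions, not the project's actual design:

```python
class BandwidthServer:
    """Illustrative memory-bandwidth server (assumed design, not the
    project's actual mechanism): each subsystem is granted a budget of
    memory accesses per replenishment period; once the budget is
    exhausted, the subsystem is throttled until the next period."""

    def __init__(self, budget_accesses, period_ms):
        self.budget = budget_accesses   # accesses allowed per period
        self.period = period_ms         # replenishment period (ms)
        self.remaining = budget_accesses
        self.throttled = False

    def charge(self, accesses):
        """Charge memory accesses observed since the last check against
        the budget; returns False once the subsystem must be throttled."""
        self.remaining -= accesses
        if self.remaining <= 0:
            self.throttled = True       # e.g., suspend the subsystem's cores
        return not self.throttled

    def replenish(self):
        """Invoked by the OS at every period boundary."""
        self.remaining = self.budget
        self.throttled = False
```

For example, a subsystem with a budget of 1000 accesses per 10 ms period keeps running after charging 600 accesses, is throttled after a further 500, and runs again after the next replenishment. The key design point is that throttling isolates the other subsystems from the overrunning one.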
Our techniques will make significant contributions toward solving two major industrial problems in performance-critical embedded systems: (1) migration of legacy single-core systems to multicore, and (2) reuse of tested and validated subsystems in new contexts.
While built on a conceptually simple idea, this project faces several scientific and industrial challenges. For instance, a key goal of the project is to build our techniques on existing hardware platforms, i.e., we cannot rely on any special hardware to trace and arbitrate memory accesses. Our hypothesis is that the performance counters available in modern CPUs will allow us to deduce the memory bandwidth consumed, by analyzing, e.g., the number of loads, stores, and cache misses. To keep the overhead of a software-based solution reasonable, we need to investigate methods that can find approximate values of the consumed bandwidth; tracing and arbitrating the exact number of memory cycles used is likely to incur unrealistic overhead. Hence, to allow the use of our techniques in real-time applications, the approximations we use need to have a bounded error that can be accountedted for in real-time analysis.
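The counter-based hypothesis above could be prototyped roughly as follows. The counter names (`llc_misses`, `writebacks`), the cache-line size, and the error model are assumptions for illustration; in practice the available events and their semantics differ per CPU:

```python
CACHE_LINE_BYTES = 64  # typical for x86/ARM; an assumption, not universal

def estimate_bandwidth(llc_misses, writebacks, interval_s):
    """Approximate consumed memory bandwidth from performance counters,
    assuming each last-level-cache miss and each writeback transfers one
    cache line to/from DRAM. Hardware prefetches and speculative traffic
    are not counted here, so the estimate is a lower bound; a real-time
    analysis would add a bounded error term on top of it.

    Returns bytes per second over the sampled interval."""
    bytes_moved = (llc_misses + writebacks) * CACHE_LINE_BYTES
    return bytes_moved / interval_s
```

For example, 1,000,000 LLC misses plus 250,000 writebacks observed over a 10 ms sampling window correspond to roughly 8 GB/s of estimated DRAM traffic. Because the estimate has a bounded gap to the true traffic, the gap can be folded into the safety margin of the real-time analysis, which is exactly the bounded-error property the project requires.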
This project brings together world-leading competence in resource arbitration using scheduling techniques (MDH), operating-system design and development (Enea), and development of performance-critical embedded systems (Ericsson). The team members each bring critical competence to tackle the above problem; a problem that none of these organizations is equipped to attack cost-efficiently on its own.
| First Name | Last Name | Title |
| --- | --- | --- |
| Moris | Behnam | Associate Professor, Head of Division |
- Compositional Analysis for the Multi-Resource Server. Rafia Inam, Moris Behnam, Thomas Nolte, Mikael Sjödin. 20th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'15), Sep 2015.
- Towards improved dynamic reallocation of real-time workloads for thermal management on many-cores. Rafia Inam, Matthias Becker, Moris Behnam, Thomas Nolte, Mikael Sjödin. IEEE Real-Time Systems Symposium, Work-in-Progress (WiP) session (RTSS'14), Dec 2014.
- Worst Case Delay Analysis of a DRAM Memory Request for COTS Multicore Architectures. Rafia Inam, Moris Behnam, Mikael Sjödin. Seventh Swedish Workshop on Multicore Computing (MCC'14), Nov 2014.
- Formalization and verification of mode changes in hierarchical scheduling. Hang Yin, Rafia Inam, Reinder J. Bril, Mikael Sjödin. 26th Nordic Workshop on Programming Theory (NWPT'14), Oct 2014.