PIPELINED PROCESSORS AND WORST CASE EXECUTION TIMES
N. Zhang
A. Burns
M. Nicholson
Department of Computer Science,
University of York, UK
ABSTRACT
The calculation of worst case execution time (WCET) is a fundamental requirement of almost all scheduling approaches for hard real-time systems. Due to their unpredictability, hardware enhancements such as cache and pipelining are often ignored in attempts to find the WCET of programs. This results in estimates that are excessively pessimistic. In this paper a simple instruction pipeline is modelled so that more accurate estimates are obtained. The model presented can be used with any schedulability analysis that allows sections of non-preemptable code to be included. Our results indicate that WCET over-estimates at basic block level can be reduced from over 20% to less than 2%, and that the over-estimates for typical structured real-time programs can be reduced by 17%-40%.
1. Introduction
In real-time systems, predictable temporal behaviour is an essential requirement. To be able to predict, and therefore guarantee, the timing behaviour of a real-time system, two issues need to be addressed. First, the worst case execution time of software should be known before run time. Secondly, the system's end-to-end response time must be predicted by taking into account scheduling and communication overheads, etc. In this paper we address the issue of estimating worst case execution time, but do so within a general scheduling approach.
Although almost all methods of guaranteeing timing behaviour require knowledge of worst case execution times, there has been little published work in this area (compared with, say, scheduling models). Notable exceptions are the work of Puschner within the context of the MARS project [17, 24], Park and Shaw [15, 14, 20, 16], Mok [12], Woodbury [25], Sarkar [18], and the language-specific reports, e.g. Pearl [4] and Real-Time Euclid [23, 21, 22, 5]. The general approach adopted in this work is to split up the computations of a task into basic blocks. A basic block has the property that if the first statement in the block is executed then all statements of the block are executed. There are then two independent issues:
- what is the worst case execution time of each basic block on a particular hardware platform;
- what is the longest (i.e. worst case) path through the basic blocks of the task (a small illustrative sketch follows this list).
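As a simple illustration of how the two issues combine, per-block WCETs can be summed along the longest path through the task's control-flow graph. The sketch below is not taken from the paper; the block numbering, cycle counts and successor table are hypothetical, and it assumes an acyclic graph (i.e. loops have already been bounded and accounted for).

#include <stdio.h>

#define NBLOCKS 5

/* Hypothetical worst case execution time (in cycles) of each basic
 * block, e.g. as produced by a per-block timing analysis. */
static const unsigned wcet[NBLOCKS] = { 12, 8, 15, 6, 4 };

/* succ[i][j] = 1 if control can flow from block i to block j.
 * The graph here models an if-then-else followed by a join. */
static const int succ[NBLOCKS][NBLOCKS] = {
    { 0, 1, 1, 0, 0 },   /* block 0 branches to block 1 or block 2 */
    { 0, 0, 0, 1, 0 },
    { 0, 0, 0, 1, 0 },
    { 0, 0, 0, 0, 1 },
    { 0, 0, 0, 0, 0 },   /* block 4 is the exit block */
};

/* Longest (worst case) path from block b to the exit of the task,
 * assuming the control-flow graph is acyclic. */
static unsigned longest_path(int b)
{
    unsigned worst_tail = 0;
    for (int j = 0; j < NBLOCKS; j++) {
        if (succ[b][j]) {
            unsigned t = longest_path(j);
            if (t > worst_tail)
                worst_tail = t;
        }
    }
    return wcet[b] + worst_tail;
}

int main(void)
{
    printf("task WCET estimate: %u cycles\n", longest_path(0));
    return 0;
}

The remainder of this paper is concerned with the first of the two issues: obtaining tight per-block values such as those in wcet[] above.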
When estimating the worst case execution time of each basic block, hardware enhancements such as cache and pipelining are usually ignored. The result is that execution time estimates are overly pessimistic. Although some work has been done on predictable caches [7, 8], useful models for pipelined processors have not been reported.
This paper attempts to rectify this omission by presenting such a model. Non-pessimistic analysis is possible if preemption is controlled. We do this in the context of a scheduling approach that already takes into account intervals of non-preemption. In the next section this general scheduling approach is outlined. Section 3 then presents the mathematical model, which, by utilising hardware implementation and software timing information in a table-driven fashion, produces accurate execution time estimates for assembler basic blocks. A simple processor pipeline (that used