The accelerated development of faster and cheaper electronic components confronts the software designer with new challenges. One of them is to predict the viability of current architectures and the performance of current operating systems on CPUs able to operate at instruction-per-second rates about one order of magnitude higher than those available today. For this task we need to understand not only the principles of such an operating system but also the detailed mechanics and the scenario of actions determined by the random occurrence of asynchronous events. We also want to understand how this scenario changes with varying CPU and I/O device speed.

One quickly realises that existing tools are only of limited help in pursuing this goal. For IBM/370 MVS systems there are several software monitors available, e.g. the System Activity Measurement Facility (MF/1 /4/), the Resource Measurement Facility (RMF /5/), traces like the Generalised Trace Facility (GTF /6/), and hardware monitors like the System Measurement Instrument (SMI), all of which address mainly the aspect of system tuning. Software monitors can efficiently observe queue lengths and resource utilisation percentages but will unavoidably distort the time scale by absorbing resources for their own execution. Hardware monitors do not distort the time scale but have only limited possibilities of observing the logic of operations; their best use is in counting occurrences of a limited number of well-specified events. Moreover, none of these tools will permit the simulation of processor speeds that differ from the real processor speed.
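The time-scale distortion introduced by a software monitor can be made concrete with a toy cycle-accounting model. This is purely illustrative (all names and numbers below are assumptions, not taken from the systems discussed above): the monitored run charges extra cycles to the same CPU the application uses, so the observed elapsed "time" is inflated by the monitor's own execution.

```python
class CPU:
    """Toy CPU that merely counts the cycles it has executed."""

    def __init__(self):
        self.cycles = 0

    def execute(self, n):
        self.cycles += n


def run(workload_steps, monitor_cost_per_sample, sample_every):
    """Run a workload of unit-cost steps; every `sample_every` steps a
    hypothetical software monitor samples queue lengths, consuming
    `monitor_cost_per_sample` cycles of the same CPU."""
    cpu = CPU()
    for step in range(workload_steps):
        cpu.execute(1)  # one unit of application work
        if step % sample_every == 0:
            # the monitor's own execution is charged to the shared CPU,
            # stretching the measured time scale
            cpu.execute(monitor_cost_per_sample)
    return cpu.cycles


unmonitored = run(1000, 0, 10)   # 1000 cycles of pure application work
monitored = run(1000, 5, 10)     # 100 samples at 5 cycles each are added
```

In this sketch the monitored run costs 1500 cycles against 1000 for the unmonitored one, a 50 % stretch of the time scale; a hardware monitor, observing from outside, would add nothing to the count but could only see the events it is wired to detect.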
Springer-Verlag Berlin and Heidelberg GmbH & Co. KG