Assignment 1
COMPSCI 340
Christian Graf - cgra056

1. Look at the list of processes, arrival times and burst times on page 145 of the text. Draw a Gantt chart (ASCII art is suitable) showing the order of execution for these processes if they were scheduled by a LIFO stack-based dispatcher. What would the average waiting time be in this case?

LIFO: last in, first out

a) without pre-emption:

process: | P1 | P4 |   P3   | P2 |
time:    0    8    13       22   26

stack at time 0 : P1
stack at time 8 : P2,P3,P4
stack at time 13: P2,P3
stack at time 22: P2
stack at time 26: -

waiting time:
P1: 0 - 0 = 0
P2: 22 - 1 = 21
P3: 13 - 2 = 11
P4: 8 - 3 = 5

Average waiting time: 37 / 4 = 9.25

b) with pre-emption:

process: |P1|P2|P3|  P4  |   P3   | P2 |   P1   |
time:    0  1  2  3      8        16   19       26

stack at time 0 : P1
stack at time 1 : P1,P2
stack at time 2 : P1,P2,P3
stack at time 3 : P1,P2,P3,P4
stack at time 8 : P1,P2,P3
stack at time 16: P1,P2
stack at time 19: P1
stack at time 26: -

waiting time (each process starts the moment it arrives, so the only waiting is between pre-emption and restart):
P1: pre-empted at 1, restarted at 19: 19 - 1 = 18
P2: pre-empted at 2, restarted at 16: 16 - 2 = 14
P3: pre-empted at 3, restarted at 8:  8 - 3 = 5
P4: never pre-empted: 0

Average waiting time: 37 / 4 = 9.25

If my calculations are right, there is no difference in average waiting time between dispatching the processes with and without pre-emption. Normally I would have expected a small improvement with pre-emption, so I am a little unsure whether my calculations are correct.

2. There is a comment in the checkForInterrupts method of the A1Process class – "This is not how it would work on a real OS." Describe why this method is necessary in the simulation and explain how a real operating system deals with the same situation.

Modern operating systems are interrupt driven: if no software processes are running and no hardware needs to be accessed or served, there are no interrupts and the OS does nothing except wait.
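The simulation cannot rely on real interrupts, so each simulated process has to poll for pending 'interrupts' itself; that is why a method like checkForInterrupts is needed. A minimal sketch of this polling idea (the class and member names below are my own, not the actual A1Process code):

```java
// Sketch of simulation-style interrupt polling: the "process" itself
// checks a shared flag between units of work, instead of being forced
// off the CPU by hardware. All names here are hypothetical.
public class PollingDemo {
    // volatile so the worker thread sees the flag set by another thread
    private static volatile boolean interruptPending;

    // Runs a worker that polls the flag between units of work and
    // returns how many units it completed before noticing the interrupt.
    static long runUntilInterrupted(long sleepMillis) throws InterruptedException {
        interruptPending = false;
        final long[] units = {0};
        Thread worker = new Thread(() -> {
            // checkForInterrupts-style poll: nothing stops us from
            // outside, we must look at the flag ourselves
            while (!interruptPending) {
                units[0]++; // one unit of simulated work
            }
        });
        worker.start();
        Thread.sleep(sleepMillis);
        interruptPending = true; // "raise" the interrupt in software
        worker.join();
        return units[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("units of work before interrupt: " + runUntilInterrupted(50));
    }
}
```

On real hardware the CPU is diverted to the service routine immediately; here the latency depends entirely on how often the flag is polled.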
When an event does occur, it is signalled to the CPU over the system bus, either by the hardware (as an interrupt) or by the software (a so-called system call). This causes an immediate stop of the current execution and a transfer through the interrupt vector, which contains the starting address (a pointer) of the service routine assigned to that interrupt. The routine gains control of the system and handles the interrupt. Before it does so, the address of the interrupted process must be saved so that execution can return there once the interrupt has been handled. Since the interrupt service routine may modify the processor state (i.e. the registers), that state must be saved as well so that the originally running task can be resumed. The process then carries on as if nothing had happened.

I think there are three major differences between normal interrupt handling in an OS and the simulation done with the Java classes. First, the simulation does not include any 'real' register manipulation or hardware-based interrupts, but that drawback lies in the nature of the approach. Second, the handling is not done with service routines, which would normally be separate code segments; that separation speeds up execution, because such routines are likely to be implemented in fast code, e.g. assembler. Third, 'our' simulation does not ensure that the process is really stopped, that the registers' state is saved, and so on. Worse, the interrupt routine could change registers and the whole environment in such a way that the "interrupted" process would get into trouble when resuming execution. In addition, the address of that process would normally be stored (this is not done in the implementation) to allow the interrupted process to continue later. But that seems unnecessary here, because the only interrupts come from the threads themselves, and the state of the 'interrupts' is checked continuously by the threads themselves.

3.
Download from the 340 website the source for the Scheduler from the textbook chapter 6. Compile and run the TestScheduler class. You may need to extend the buffer in your Command Prompt window in order to see enough output from this program. What is wrong with the output? Why does the program not run as the authors intended?

The program is supposed to run several threads simultaneously but with different priorities, so the output on the screen should reflect those priorities: if a thread X has priority k and thread Z has priority 2k, then Z's output should be twice that of X. That is not the case when the program runs. The reason is that Java programs run on a VM, which accepts priorities but does not implement them the way hardware-level scheduling would. The API merely says: "Threads with higher priority are executed in preference to threads with lower priority." It does not say that the effect of priorities is linear, so the VM runs thread Z more often than X, but not necessarily twice as often. This is explicitly pointed out in the Java Language Specification: "Every thread has a priority. When there is competition for processing resources, threads with higher priority are generally executed in preference to threads with lower priority. Such preference is not, however, a guarantee that the highest priority thread will always be running, and thread priorities cannot be used to reliably implement mutual exclusion."
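This behaviour is easy to demonstrate. The sketch below (my own demo, not the textbook's Scheduler) starts two busy-counting threads at MIN_PRIORITY and MAX_PRIORITY and prints how far each one counted; on most JVMs and OSes the ratio is nowhere near the 10:1 priority ratio, and on a multi-core machine the two counts are often almost equal because the threads never compete for a CPU at all:

```java
public class PriorityDemo {
    // Iteration counts for the two workers, indexed by worker id.
    static final long[] counts = new long[2];

    // Build a busy-counting worker with the given priority that runs
    // until the shared deadline passes.
    static Thread worker(int id, int priority, long endTime) {
        Thread t = new Thread(() -> {
            while (System.currentTimeMillis() < endTime) {
                counts[id]++;
            }
        });
        t.setPriority(priority);
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        long end = System.currentTimeMillis() + 200; // both run for ~200 ms
        Thread low  = worker(0, Thread.MIN_PRIORITY, end);
        Thread high = worker(1, Thread.MAX_PRIORITY, end);
        low.start();
        high.start();
        low.join();
        high.join();
        // The JLS guarantees no proportionality between priority and
        // CPU share, so these counts need not differ by a factor of 10.
        System.out.println("MIN_PRIORITY count: " + counts[0]);
        System.out.println("MAX_PRIORITY count: " + counts[1]);
    }
}
```

The exact numbers vary from run to run and from platform to platform, which is precisely the point: thread priorities are a hint to the scheduler, not a contract about relative execution rates.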