
Top 10 List of Week 08

1. CPU Scheduling

CPU scheduling allows one process to use the CPU while the execution of another process is on hold ( in the waiting state ) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

2. Types of CPU Scheduling

CPU scheduling decisions may take place under the following four circumstances:

  • When a process switches from the running state to the waiting state ( for example, on an I/O request, or when it invokes wait() for the termination of one of its child processes ).
  • When a process switches from the running state to the ready state ( for example, when an interrupt occurs ).
  • When a process switches from the waiting state to the ready state ( for example, on completion of I/O ).
  • When a process terminates.

When scheduling takes place only under the first and last of these circumstances, the scheduling scheme is non-preemptive; otherwise it is preemptive.

3. Preemptive vs Non-Preemptive

The two approaches differ along the following parameters:

  • Basic : In preemptive scheduling, the CPU is allocated to a process only for a limited time. In non-preemptive scheduling, once the CPU is allocated to a process, the process holds it until it completes its CPU burst or switches to the waiting state.
  • Interrupt : A preemptively scheduled process can be interrupted in mid-execution; a non-preemptively scheduled process cannot be interrupted until it terminates or switches to the waiting state.
  • Starvation : Under preemptive scheduling, a low-priority process may starve if high-priority processes keep arriving in the ready queue. Under non-preemptive scheduling, a later-arriving process with a short CPU burst may starve behind a running process with a long burst.
  • Overhead : Preemptive scheduling carries the overhead of scheduling ( context switching ) between processes; non-preemptive scheduling has no such overhead.
  • Flexibility : Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
  • Cost : Preemptive scheduling has an associated cost; non-preemptive scheduling does not.
  • CPU Utilization : CPU utilization is high under preemptive scheduling and lower under non-preemptive scheduling.
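
As a rough illustration of the difference, the sketch below ( a toy simulation with example burst times, all jobs assumed to arrive at time 0; not production code ) computes the average waiting time of the same three CPU bursts under non-preemptive FCFS and under preemptive Round Robin with a quantum of 2 ms.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Toy comparison of non-preemptive FCFS vs preemptive Round Robin (quantum = 2).
 *  Burst times are example values; all jobs are assumed to arrive at time 0. */
public class PreemptionDemo {

    // FCFS: each job waits for every job ahead of it to finish completely.
    static double fcfsAverageWait(int[] bursts) {
        int elapsed = 0, totalWait = 0;
        for (int burst : bursts) {
            totalWait += elapsed;        // waiting time = time spent before its single run
            elapsed += burst;
        }
        return (double) totalWait / bursts.length;
    }

    // Round Robin: each job runs for at most one quantum, then re-enters the ready queue.
    static double rrAverageWait(int[] bursts, int quantum) {
        int n = bursts.length;
        int[] remaining = bursts.clone();
        int[] completion = new int[n];
        Queue<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);

        int clock = 0;
        while (!ready.isEmpty()) {
            int i = ready.poll();
            int slice = Math.min(quantum, remaining[i]);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] > 0) ready.add(i);   // preempted: back of the queue
            else completion[i] = clock;           // finished
        }

        int totalWait = 0;
        for (int i = 0; i < n; i++) totalWait += completion[i] - bursts[i]; // wait = turnaround - burst
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        int[] bursts = {24, 3, 3};                // example CPU bursts (ms)
        System.out.printf("FCFS average wait:    %.2f ms%n", fcfsAverageWait(bursts));
        System.out.printf("RR(q=2) average wait: %.2f ms%n", rrAverageWait(bursts, 2));
    }
}
```

With these values the long first burst makes the two short jobs wait under FCFS, while Round Robin lets them finish early at the cost of extra context switches, which is exactly the overhead/utilization trade-off in the list above.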

4. Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following ( a simplified sketch follows the list ):

  • Switching context.
  • Switching to user mode.
  • Jumping to the proper location in the newly loaded program.
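
A minimal simulated sketch of those three steps, assuming a toy PCB class and console prints in place of the real context-switch, mode-switch, and jump operations ( which happen in kernel mode on real hardware ):

```java
/** Toy model of a dispatcher: not real kernel code, just the sequence of steps. */
public class DispatcherSketch {

    /** Stand-in for a process control block holding the saved CPU state. */
    static class PCB {
        final String name;
        long programCounter;                 // where the process should resume
        PCB(String name) { this.name = name; }
    }

    static PCB running;                      // process currently on the (pretend) CPU

    /** Give the CPU to the process chosen by the scheduler. */
    static void dispatch(PCB next) {
        if (running != null) {
            System.out.println("switching context: saving state of " + running.name);
        }
        System.out.println("switching context: restoring state of " + next.name);
        System.out.println("switching to user mode");
        System.out.println("jumping to " + next.name + " at PC=" + next.programCounter);
        running = next;
    }

    public static void main(String[] args) {
        PCB p1 = new PCB("P1"), p2 = new PCB("P2");
        dispatch(p1);   // first dispatch: nothing to save yet
        dispatch(p2);   // later dispatch: P1's context saved, P2's restored
    }
}
```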

5. Scheduling Criteria

There are several different criteria to consider when trying to select the “best” scheduling algorithm for a particular situation and environment, including the following ( a small worked example follows the list ):

  • CPU utilization : Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to 90% ( heavily loaded. )
  • Throughput : Number of processes completed per unit time. May range from 10 / second to 1 / hour depending on the specific processes.
  • Turnaround time : Time required for a particular process to complete, from submission time to completion. ( Wall clock time. )
  • Waiting time : How much time processes spend in the ready queue waiting their turn to get on the CPU.
  • Response time : The time taken in an interactive program from the issuance of a command to the start of a response to that command.
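
For a single process, the last three criteria are just differences of timestamps. The sketch below uses made-up arrival, first-run, burst, and completion times to show the usual computations: turnaround = completion - arrival, waiting = turnaround - burst, and response = first run - arrival.

```java
/** Tiny illustration of turnaround, waiting, and response time for one process,
 *  using made-up example times (all in milliseconds, measured from time 0). */
public class SchedulingCriteria {
    public static void main(String[] args) {
        int arrival = 0;      // process submitted
        int burst = 10;       // total CPU time it needs
        int firstRun = 4;     // first moment it actually gets the CPU
        int completion = 18;  // moment it finishes

        int turnaround = completion - arrival;   // 18: submission -> completion (wall clock)
        int waiting = turnaround - burst;        //  8: time spent sitting in the ready queue
        int response = firstRun - arrival;       //  4: delay before the first response begins

        System.out.printf("turnaround=%d waiting=%d response=%d%n",
                turnaround, waiting, response);
    }
}
```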

6. Scheduling Algorithms

The standard single-CPU algorithms are First-Come First-Served (FCFS), Shortest-Job-First (SJF, whose preemptive form is Shortest-Remaining-Time-First), priority scheduling, Round Robin (RR), multilevel queue scheduling, and multilevel feedback queue scheduling.
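
As one concrete example, the sketch below ( a toy with made-up burst times and all jobs assumed to arrive at time 0 ) runs jobs shortest-burst-first, which minimizes average waiting time under those assumptions.

```java
import java.util.Arrays;

/** Toy non-preemptive Shortest-Job-First: run jobs in increasing order of burst time.
 *  Burst times are example values and every job is assumed to arrive at time 0. */
public class SjfSketch {
    public static void main(String[] args) {
        int[] bursts = {6, 8, 7, 3};             // example CPU bursts (ms)
        int[] order = bursts.clone();
        Arrays.sort(order);                       // SJF = shortest burst goes first

        int elapsed = 0, totalWait = 0;
        for (int burst : order) {
            totalWait += elapsed;                 // each job waits for the shorter jobs ahead of it
            elapsed += burst;
        }
        System.out.printf("SJF average waiting time: %.2f ms%n",
                (double) totalWait / order.length);
    }
}
```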

7. Thread Scheduling

Execution of multiple threads on a single CPU in some order is called scheduling. The Java runtime environment supports a very simple, deterministic scheduling algorithm called fixed-priority scheduling. This algorithm schedules threads on the basis of their priority relative to other Runnable threads.
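
A minimal sketch of those priority hints using only the standard java.lang.Thread API; priorities are only hints to the underlying scheduler, so the exact interleaving and finish order can vary from run to run.

```java
/** Minimal demonstration of Java's fixed-priority thread scheduling hints. */
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) sum += i;   // CPU-bound busy work
            System.out.println(Thread.currentThread().getName()
                    + " (priority " + Thread.currentThread().getPriority()
                    + ") finished, sum=" + sum);
        };

        Thread low = new Thread(work, "low");
        Thread high = new Thread(work, "high");
        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10

        low.start();
        high.start();
        low.join();
        high.join();
    }
}
```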

8. Multiple-processor Scheduling

In multiple-processor scheduling, multiple CPUs are available and hence load sharing becomes possible. Load sharing revolves around balancing the load between multiple processors. Multiprocessor systems may be heterogeneous ( different kinds of CPUs ) or homogeneous ( all the same kind of CPU ).

  • Approaches to Multiple-Processor Scheduling : Asymmetric multiprocessing ( one master processor handles all scheduling and other system activities; the other processors run only user code ) and Symmetric multiprocessing ( each processor schedules its own jobs )
  • Processor Affinity : Soft affinity (system attempts to keep processes on the same processor but makes no guarantees) and Hard affinity (a process specifies that it is not to be moved between processors)
  • Load Balancing : Push migration ( a separate process that runs periodically and moves processes from heavily loaded processors onto less loaded ones; see the sketch after this list ) and Pull migration ( idle processors taking processes from the ready queues of other processors )
  • Multicore Processors : Coarse-grained (switches between threads only when one thread blocks, say on a memory read) and Fine-grained ( occurs on smaller regular intervals, say on the boundary of instruction cycles)
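
A toy sketch of push migration only, assuming simple per-processor ready queues rather than any real kernel data structure: a periodic balancer moves a task from the busiest queue to the idlest one.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;

/** Toy push-migration load balancer: each "processor" has its own ready queue,
 *  and a periodic balancer moves one task from the busiest queue to the idlest.
 *  This is a simplified model, not the API of any real operating system. */
public class PushMigrationSketch {

    static void balance(List<Deque<String>> readyQueues) {
        Deque<String> busiest = readyQueues.stream()
                .max(Comparator.comparingInt(Deque::size)).orElseThrow();
        Deque<String> idlest = readyQueues.stream()
                .min(Comparator.comparingInt(Deque::size)).orElseThrow();
        // Push migration: move work away from the overloaded processor.
        if (busiest.size() - idlest.size() > 1) {
            idlest.addLast(busiest.pollLast());
        }
    }

    public static void main(String[] args) {
        List<Deque<String>> queues = new ArrayList<>();
        queues.add(new ArrayDeque<>(List.of("T1", "T2", "T3", "T4"))); // overloaded CPU 0
        queues.add(new ArrayDeque<>());                                 // idle CPU 1

        balance(queues);   // in a real system this would run periodically
        System.out.println("CPU0: " + queues.get(0));
        System.out.println("CPU1: " + queues.get(1));
    }
}
```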

9. Real-Time CPU Scheduling

Real-time systems are those in which the time at which tasks complete is crucial to their performance.

  • Soft real-time systems have degraded performance if their timing needs cannot be met. Example: streaming video.
  • Hard real-time systems have total failure if their timing needs cannot be met. Examples: Assembly line robotics, automobile air-bag deployment.

10. Windows XP Scheduling

  • Windows XP uses a priority-based preemptive scheduling algorithm.
  • The dispatcher uses a 32-level priority scheme to determine the order of thread execution, divided into two classes: a variable class covering priorities 1 to 15 and a real-time class covering priorities 16 to 31 ( plus a thread at priority 0 that manages memory. )
  • There is also a special idle thread that is scheduled when no other threads are ready.
  • Windows XP identifies six priority classes and seven relative priorities within each class; the combination of class and relative priority determines a thread's numeric priority.
  • Threads are also given a base priority within their priority class. When a variable-class thread consumes its entire time quantum, its priority is lowered, but never below its base priority.
  • Processes in the foreground ( the active window ) have their scheduling quantum multiplied by 3, to give better response to interactive processes in the foreground.