Lecture – Tasks and Task Management
(Chapter 12)

Ryan Robucci

Table of Contents

References

Vocabulary

Task Scheduling

CPU Utilization

[Figure: task state diagram with states Enter, Ready, Run, Wait, and Exit]

(1) Run → Wait: I/O or delay request
(2) Run → Ready: interrupt or yield
(3) Wait → Ready: I/O or delay completed
(4) Run → Exit: process completed or is terminated
(5) Ready → Run: CPU available, task is next in line

Preemptive OS

A preemptive OS can interrupt a running process and "involuntarily" move it to the ready queue before completion, whereas a non-preemptive OS only provides mechanisms for a process to "voluntarily" yield the CPU or wait on a resource. In other words, in a non-preemptive OS the base cause for a process to leave the CPU can be found in its code, such as a yield or I/O call. The "involuntary" interruption in a preemptive OS leverages hardware support associated with an interrupt.
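The contrast can be sketched in code. Below is a minimal cooperative (non-preemptive) scheduler loop in C; the task names and table are illustrative, not from the book. Each task yields the CPU only by returning; nothing in this loop can take the CPU away mid-task.

```c
/* Minimal cooperative "run-to-completion" scheduler sketch.
   Each task runs until it returns, i.e., it yields voluntarily;
   a preemptive OS would instead use a timer interrupt to force
   a context switch. Names here are illustrative. */
#include <assert.h>

#define NUM_TASKS 2

static int count_a = 0, count_b = 0;

static void task_a(void) { count_a++; }  /* does a slice of work, then returns (yields) */
static void task_b(void) { count_b++; }

typedef void (*task_fn)(void);
static task_fn task_table[NUM_TASKS] = { task_a, task_b };

void run_scheduler(int rounds) {
    for (int r = 0; r < rounds; r++)
        for (int i = 0; i < NUM_TASKS; i++)
            task_table[i]();             /* a task leaves the CPU only by returning */
}
```

If `task_a` never returned, `task_b` would never run; that is exactly the failure mode preemption guards against.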

Scheduling Priority

Blocking (resource-based blocking)

Priority Inversion (indirect resource-based blocking through CPU priority blocking)

☆☆☆ The priority inversion happens here.

Illustration added below

[Figure: four snapshots of tasks C (low priority), A (high priority), B (medium priority), resource R, and the CPU]

1. C reserves resource R; the CPU is executing C.
2. A becomes ready and preempts C: the CPU is executing A, C is ready, and R is still reserved by C.
3. A requests R, which is reserved by C: A is blocked by C and waits for R; the CPU executes C again.
4. B becomes ready and preempts C: the CPU is executing B while A is still waiting for R (blocked by C).

At this point, B is holding the CPU while the higher-priority A cannot progress. Note that A and B do not depend on the same non-CPU resource.

Note

This course is more focused on basic understanding and awareness of problems and limitations with multi-tasking. You may study systematic identification of problems and mitigation in other courses.

More Vocab

Example Task Queues that could be implemented:

Scheduling of Tasks 12.4

Time-shared Systems

Rate monotonic scheduling (RMS)

Must adjust for blocking

Priority Scheduling

Real-time scheduling

Scheduling Algorithm Performance Metrics

Common to look at metrics such as average wait time or average process-completion throughput

Book Example:

5 processes queued at the same time

Process CPU Time Required
P1 10
P2 29
P3 3
P4 7
P5 12
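The metrics above can be checked with a short calculation. The sketch below computes average wait time for the five queued processes under FCFS (queue order) and, for comparison, under shortest-job-first; this is illustrative code, not the book's:

```c
/* Average wait time for processes queued at the same time:
   each process waits for the total CPU time of those run before it. */
#include <assert.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

double avg_wait(const int *burst, int n) {
    double total = 0.0, t = 0.0;
    for (int i = 0; i < n; i++) {
        total += t;        /* process i waits until time t */
        t += burst[i];     /* then runs to completion */
    }
    return total / n;
}
```

For the table above, FCFS order (P1..P5) gives waits 0, 10, 39, 42, 49 for an average of 28, while running shortest jobs first (P3, P4, P1, P5, P2) gives waits 0, 3, 10, 20, 32 for an average of 13.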

Inter-task Communication (Book 12.6)

Shared Variables

Shared Buffers

Ring or Circular Buffer

Depiction of Length-24 and Length-3 circular buffer

Example Length-3 Implementation

Initially empty array, with writePtr and readPtr at the same location

[ _ | _ | _ ]    readPtr = 0, writePtr = 0

Put: write 3

[ 3 | _ | _ ]    readPtr = 0, writePtr = 1

Put: write 5

[ 3 | 5 | _ ]    readPtr = 0, writePtr = 2

Get: read, returning 3

[ 3 | 5 | _ ]    readPtr = 1, writePtr = 2

Put: write 7, writePtr wraps around to beginning of allocated block

[ 3 | 5 | 7 ]    readPtr = 1, writePtr = 0

Put: write 11, writePtr becomes same as readPtr defining onset of Queue Full

[ 11 | 5 | 7 ]   readPtr = 1, writePtr = 1

Get: returns 5

[ 11 | 5 | 7 ]   readPtr = 2, writePtr = 1

Get: returns 7, readPtr wraps around to beginning of allocated block

[ 11 | 5 | 7 ]   readPtr = 0, writePtr = 1

Get: returns 11, readPtr becomes same as writePtr defining onset of Queue Empty

[ 11 | 5 | 7 ]   readPtr = 1, writePtr = 1
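A C sketch matching the walkthrough above. Since readPtr equals writePtr at both the onset of Queue Full and Queue Empty, a flag (one convention of several possible) distinguishes the two cases; a real buffer shared between tasks would also need atomic or locked access:

```c
/* Length-3 ring buffer.  "Full" is the state where a put makes
   writePtr equal readPtr; "empty" is the state where a get does.
   The `full` flag disambiguates, since the pointers are equal in both. */
#include <assert.h>
#include <stdbool.h>

#define BUF_LEN 3

typedef struct {
    int  data[BUF_LEN];
    int  readPtr, writePtr;
    bool full;              /* readPtr == writePtr && full  -> queue full  */
} RingBuf;                  /* readPtr == writePtr && !full -> queue empty */

void rb_init(RingBuf *b) { b->readPtr = b->writePtr = 0; b->full = false; }

bool rb_put(RingBuf *b, int v) {
    if (b->full) return false;                      /* reject when full */
    b->data[b->writePtr] = v;
    b->writePtr = (b->writePtr + 1) % BUF_LEN;      /* wrap at end of block */
    if (b->writePtr == b->readPtr) b->full = true;  /* onset of Queue Full */
    return true;
}

bool rb_get(RingBuf *b, int *v) {
    if (b->readPtr == b->writePtr && !b->full) return false;  /* empty */
    *v = b->data[b->readPtr];
    b->readPtr = (b->readPtr + 1) % BUF_LEN;        /* wrap at end of block */
    b->full = false;                                /* a get always clears Full */
    return true;
}
```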

Mailbox

Messaging

Message Buffering

one consideration is whether the link guarantees message order; another is the link's capacity

there are 3 options for buffering: zero capacity, bounded capacity, and unbounded capacity

note book figures, including Figure 12.23

Access to Shared Resources

Atomic Access Flags

  1. check and flag use of resource, entry section
  2. flag the release of the resource, exit section
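The two steps above can be sketched with a C11 atomic flag; hardware test-and-set or interrupt disabling are common embedded alternatives, and the names here are illustrative:

```c
/* Entry/exit sections built on an atomic flag.  The critical property
   is that the check and the set in try_acquire happen as one
   indivisible operation, so two tasks cannot both see "free". */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag resource_in_use = ATOMIC_FLAG_INIT;

/* entry section: check and flag use of the resource in one atomic step */
bool try_acquire(void) {
    return !atomic_flag_test_and_set(&resource_in_use);
}

/* exit section: flag the release of the resource */
void release(void) {
    atomic_flag_clear(&resource_in_use);
}
```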

Solutions to Critical Section Problem

...must satisfy:

Tokens and Token Passing

Disabling Interrupts for Critical Sections

Semaphores

Mutual Exclusion and Mutex

Process Synchronization

Aside from managing access to resources, semaphores can
also be used to synchronize processes
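A sketch of synchronization (rather than mutual exclusion) with a semaphore, using the POSIX API; task and variable names are illustrative. The semaphore starts at 0, so task B blocks until task A posts:

```c
/* Ordering two tasks with a semaphore: B's step must follow A's step,
   regardless of which thread the scheduler runs first. */
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t step_done;
static int sequence[2];
static int seq_idx = 0;

static void *task_A(void *arg) {
    (void)arg;
    sequence[seq_idx++] = 1;   /* A's step runs first ... */
    sem_post(&step_done);      /* ... then A signals completion */
    return NULL;
}

static void *task_B(void *arg) {
    (void)arg;
    sem_wait(&step_done);      /* blocks until A has posted */
    sequence[seq_idx++] = 2;
    return NULL;
}

void run_sync_demo(void) {
    pthread_t a, b;
    sem_init(&step_done, 0, 0);            /* initial count 0: B must wait */
    pthread_create(&b, NULL, task_B, NULL); /* start B first on purpose */
    pthread_create(&a, NULL, task_A, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&step_done);
}
```

Starting B before A shows the point: the semaphore, not thread start order, enforces the sequence.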

Extending Semaphores

Monitors

Starvation and Deadlocks

Deadlock (Don't Distribute)

Mutual Exclusion - once a process has been allocated a resource, it has exclusive access to it (that’s a given here)
No Preemption - resources cannot be preempted
Hold and Wait - some processes are holding one resource and waiting on another resource (forever if required)
Circular Wait - a set of processes exists in a state such that each process is waiting on a resource held by another in that set (of course)

Deadlock

Necessary conditions:

Handling Deadlocks

Deadlock Prevention -- Causes and Mitigation

Deadlock Avoidance

Deadlock Detection and Recovery

Wait-for Graph:

[Figure: wait-for graph]

T0 → RA (waiting for), RA → T1 (reserved by)
T1 → RB (waiting for), RB → T2 (reserved by)
T2 → RC (waiting for), RC → T0 (reserved by)

The cycle T0 → T1 → T2 → T0 indicates a deadlock.
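Deadlock detection on a wait-for graph reduces to cycle detection. A minimal sketch in C, assuming the resource nodes have been collapsed into direct task-to-task "waits for" edges (depth-first search with an on-stack marker):

```c
/* waits_for[i][j] == true means task i waits on a resource held by task j.
   A cycle in this graph means the tasks in the cycle are deadlocked. */
#include <assert.h>
#include <stdbool.h>

#define MAX_TASKS 8

static bool has_cycle_from(bool waits_for[MAX_TASKS][MAX_TASKS], int n,
                           int node, bool *visited, bool *on_stack) {
    visited[node] = on_stack[node] = true;
    for (int j = 0; j < n; j++) {
        if (!waits_for[node][j]) continue;
        if (on_stack[j]) return true;              /* back edge: cycle found */
        if (!visited[j] &&
            has_cycle_from(waits_for, n, j, visited, on_stack))
            return true;
    }
    on_stack[node] = false;                        /* done exploring this path */
    return false;
}

bool deadlocked(bool waits_for[MAX_TASKS][MAX_TASKS], int n) {
    bool visited[MAX_TASKS] = {false}, on_stack[MAX_TASKS] = {false};
    for (int i = 0; i < n; i++)
        if (!visited[i] &&
            has_cycle_from(waits_for, n, i, visited, on_stack))
            return true;
    return false;
}
```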

Recovery

Once a deadlock is detected, the processes can be collectively or selectively terminated, or have their resources preempted. This is not simple, since preempting resources or terminating processes can leave resources in dangerous states. Mechanisms for allowing safe preemption or repair should be considered. The assumption is that terminating a process only requires it to restart its work, resulting in wasted time (which can drive the selection of which process should be terminated so as to waste as little time as possible). One should consider allowing for recovery points (checkpoints) in long tasks that may be terminated, like autosave in a word processor. Recovery should not cause the deadlock to be repeated and should not starve processes.

Slide Fixes

2022-08: Extending Semaphores: have a ~~of~~ process ... ~~look~~ ++lock++ is gone