Multitasking systems solve a problem using several tasks: pieces of code that work together in an organized way.
Coordination requires:
exchanging information (sharing data between tasks)
synchronizing tasks (when there are dependencies among tasks, executing parts of tasks in the correct time order)
scheduling task execution (providing time for each task to run on the processor)
sharing resources (providing time for each task to access a shared resource)
Software that does these things is called an operating system.
An operating system that can meet specified time constraints for task execution is called a real-time operating system (RTOS)
Tasks
Tasks are like the individual jobs that must be done (in coordination) to complete a larger job.
We can partition the design based on things that we think can/should be done together, in a way that makes the problem easier to think about, or based on knowing the most efficient partitioning for execution.
Example tasks/design partitions for a digital thermometer with flashing temperature indicator:
Detect & signal button pressed
Read temperature & update flash rate
Update LCD
Flash LED
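As a rough sketch (the function names and scheduling loop here are illustrative, not from the text), each of these partitions could become a small task function that a simple cooperative loop calls in turn:

// Hypothetical task functions for the digital thermometer example.
void checkButtonTask(void)  { /* detect & signal button pressed       */ }
void readTempTask(void)     { /* read temperature & update flash rate */ }
void updateLCDTask(void)    { /* update LCD with the current reading  */ }
void flashLEDTask(void)     { /* toggle the LED at the flash rate     */ }

int main(void) {
    while (1) {             // simple cooperative "scheduler" loop
        checkButtonTask();
        readTempTask();
        updateLCDTask();
        flashLEDTask();
    }
}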
Resources
Tasks/processes require resources or access to resources
processor (for execution)
stack
memory
registers
program counter (PC)
I/O Ports
network connections
file descriptors
etc..
These are allocated to a process when it is executed by the operating system.
The contents of the PC and other pieces of data associated with the task determine its process state.
Processor is a shared resource
Each process requires some execution time to complete
The OS manages resources, including CPU time, in slices to create the effect of several tasks executing concurrently, seemingly performing their jobs at the same time.
Though in actuality they may not be running at the same instant in time unless you have a multi-core or hardware-multithreaded processor.
We'll call the time between the start and termination of a task its persistence.
Since several tasks time-share the CPU and other resources, execution time may not equal the persistence.
Example, Task 0 (runs during 10-20 us and 50-70 us):
execution time: (20-10) + (70-50) = 30 us
persistence (time): 70 - 10 = 60 us
Scheduling
The illusion of parallel execution is created by a scheduling process that moves tasks between states.
Concurrent execution is the execution of statements, e.g., among different tasks, where there is flexibility in the order of execution (whereas parallel execution requires additional hardware to actually run them simultaneously).
Options for Scheduling Strategies:
Multiprogramming: tasks run until finished or until they must wait for a resource, e.g., I/O
Real-time: tasks are scheduled and guaranteed to complete within specified strict timing constraints
Time-sharing: tasks are interrupted, or preempted, after specified time slices to allow time for other tasks to execute
Processor Context Switches Between Processes
A process enters the system queue, transitions between the running and ready-waiting states, and eventually completes.
Process State Diagram:
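As a rough sketch, the states such a diagram distinguishes (queued, ready-waiting, running, and so on, as described above) could be represented as an enumeration; the exact set and names vary by operating system and are illustrative here:

// Illustrative process states an OS might track for each task.
typedef enum {
    STATE_NEW,      // created, entering the system queue
    STATE_READY,    // ready-waiting: runnable, waiting for the CPU
    STATE_RUNNING,  // currently executing on the processor
    STATE_BLOCKED,  // waiting on a resource or event (e.g., I/O)
    STATE_DONE      // completed / terminated
} ProcessState;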
Preempting and Context Switching Overhead
Preempting/blocking requires saving the state of the process as it is running in the processor, called its context, including the PC and registers, so that the process can be stopped/blocked and the context restored later, letting the process resume just as it left off (similar to saving state when a function is called and restoring it when the function returns).
This saving of state of one process and loading of another is called context switching. It is the overhead of multitasking.
Overhead: the time spent saving and restoring contexts is time not spent doing any task's useful work.
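For a C-level feel for what saving and restoring a context means, the standard library's setjmp()/longjmp() capture and later restore an execution context within one program; a real OS does the equivalent in assembly, typically from a timer interrupt. A minimal, runnable sketch:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf saved_context;

int main(void) {
    // setjmp() saves the current execution context (enough register/PC/stack
    // information to resume here); it returns 0 on the initial call.
    if (setjmp(saved_context) == 0) {
        printf("context saved, doing other work...\n");
        longjmp(saved_context, 1);   // restore the saved context; setjmp "returns" 1
    }
    printf("resumed from the saved context\n");
    return 0;
}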
Threads
Think of a thread as an organizational concept that is the smallest set of (information about) resources required to run a program, including a copy of the CPU registers, stack, PC.
The OS manages several tasks as threads.
†James K. Peckol Figure 11.7
Ideally, each process should have its own private section of memory to work with, called its address space.
Along with hardware support (a memory protection unit, MPU), an OS may be able to enforce that processes do not access memory outside their own address space.
This feature is lacking on many simple microcontrollers.
Organizational concepts (may have one or both):
Multi-process execution: multiple distinct processes running in separate threads
Multi-threaded process: a process with several execution threads (likely sharing memory and managing resource use internally)
Intraprocess thread context switching is typically less expensive than interprocess context switching.
Reentrant Code & Thread Safe Code
We cannot assume that all code is safe to run alongside other code simultaneously, or even alongside itself.
Thread Safe: (safety with respect to other code)
A function with thread-safe code can run safely while other threads or processes run at the same time.
Re-entrant code: (safety with respect to the same code)
Re-entrant code behaves correctly under concurrent execution of the same code (e.g., multiple simultaneous calls to the same function).
Example: Thread safe but not reentrant
To allow multiple processes to safely time-share a resource, an OS typically provides check, lock, and free utility functions. These are used to make code thread safe.
These types of utility functions are called mutex (mutually exclusive) functions and are provided by the OS; we'll discuss mutex functions more later.
Example:
int AFunction() {
    // some function that checks and waits for
    // availability of a resource and locks/reserves
    // it so other processes won't access it
    // -> makes this thread safe
    wait_for_free_resource_and_then_lock_access();
    do_some_work();
    // free/unreserve the resource
    unlock_some_resource();
}
To look at reentrancy, let's look at simultaneous calls from a main thread and an ISR.
Consider the function's synopsis:
Wait and Lock: Wait for then Lock Resource
Use Resource
Unlock Resource
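With that synopsis, the hazard of re-entering from an ISR becomes clear: if the ISR interrupts the main thread while the lock is held, the ISR's own call waits for a lock that can never be released, because the main thread cannot resume until the ISR returns. A minimal sketch of the failure, with hypothetical names (this is not the text's code):

volatile int resource_locked = 0;

void AFunction(void) {
    while (resource_locked) { }  // if main holds the lock when the ISR fires,
                                 // this second call spins here forever (deadlock)
    resource_locked = 1;         // lock/reserve the resource
    /* use the shared resource */
    resource_locked = 0;         // unlock the resource
}

void someISR(void) {
    AFunction();                 // re-entering the same function from the ISR: unsafe
}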
Example: Reentrant but not thread safe
Example code with READ access of a file: it can be called over and over in the same process while the function is still running, but if another thread deletes the file, the file handle becomes invalid without any warning for the thread still using it.
For undisrupted access, a process needs a way to "lock" a resource so it can ensure that its assumptions are upheld.
int function() {
    char *filename = "/etc/config";
    FILE *config;
    if (file_exist(filename)) {
        config = fopen(filename, "r");
        // ...assume success and
        // ...read configuration from file
    } else {
        // ...use program defaults...
    }
}
• check if the file exists
• what if the file is deleted by another process at this point?
• Many OSs will prevent deletion while the file is open
• multiple calls can read the file at the same time
• (writing might be dangerous)
--code source: http://en.wikipedia.org/wiki/Thread_safety
Reentrant and Thread Safe Coding Practices
Problematic - multiple calls access the same variable/resource:
global variables
process variables
pass-by-reference parameters
shared resources
Safer:
local variables - only using local variables makes code reentrant by giving each call its own copy
For example, some string functions like strtok() (provided via #include <string.h>) keep internal static/global state and are not reentrant
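For illustration, POSIX (not standard C) provides strtok_r(), a reentrant variant of strtok() in which the caller supplies the saved-position pointer instead of the function keeping hidden static state. A small sketch:

#define _POSIX_C_SOURCE 200809L  // expose strtok_r() on POSIX systems
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[] = "read temp,update lcd,flash led";
    char *savept;                               // caller-owned state, no hidden static
    char *tok = strtok_r(line, ",", &savept);   // POSIX reentrant variant of strtok()
    while (tok != NULL) {
        printf("token: %s\n", tok);
        tok = strtok_r(NULL, ",", &savept);
    }
    return 0;
}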
Kernel
The "core" OS functions are as follows:
perform scheduling -> handled by the "scheduler"
dispatch of processes -> handled by the "dispatcher"
facilitate inter-process communication
A kernel is the smallest portion of the OS that provides these functions.
Functions of Operating System
Process or Task Management
process creation, deletion, suspension, resumption
Management of interprocess communication
Management of deadlocks (processes locked waiting for resources)
Memory Management
Tracking and control of tasks loaded in memory
Monitoring which parts of memory are used and by which process
Administering dynamic memory allocation if it is used
I/O System Management
Manage Access to I/O
Provide a framework for a consistent calling interface to I/O devices, utilizing device drivers conforming to some standard
File System Management
File creation, deletion, access
Other storage maintenance
System Protection
Restrict access to certain resources based on privilege
Networking - for distributed applications:
Facilitates remote scheduling of tasks
provides interprocess communications across a network
Command Interpretation
Access I/O devices through device drivers; interface with the user to accept and interpret commands and dispatch tasks
RTOS
An RTOS follows (rigid) time constraints. Its key defining trait is the predictability (repeatability) of the operation of the system, not speed.
hard real-time -> delays are known or bounded
soft real-time -> at least allows critical tasks to have priority over other tasks
Some key traits to look for when selecting an OS:
scheduling algorithms supported
device driver frameworks
inter-process communication methods and control
preempting (time-based)
separate process address space
memory protection
memory footprint, especially data (RAM) but also program size (ROM)
timing precision
debugging and tracing
Task Control Block
The OS must keep track of each task. For this, a structure called a task control block (TCB) or process control block can be used. TCBs are stored in a job queue implemented with pointers (an array or a linked list).
A prototypical TCB follows; it is generic so that it can support a variety of task functions.
struct TCB {
    void (*taskPtr)(void* taskDataPtr);  // task function (pointer), one arg.
    void* taskDataPtr;                   // pointer for data passing
    void* stackPtr;                      // individual task's stack
    unsigned short priority;             // priority info
    struct TCB* nextPtr;                 // for use in linked list
    struct TCB* prevPtr;                 // for use in linked list
};
Code Source: †James K. Peckol
Think about the aspect of being generic. A task can be just about anything that a computer can do. A generic template must be used to handle any task.
Each task is written as a function conforming to a generic interface (template):
void aTask(void* taskDataPtr) {
    // this task's code
}
Code Source: †James K. Peckol
Each task's data is stored in a customized container. The task must know the structure, but the OS only refers to it with a generic pointer.
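As a sketch of how this might look for one of the thermometer tasks (the data struct and names are hypothetical, not from the text): the task casts the generic pointer back to its own type, while the scheduler only ever calls through the TCB's function pointer.

// Hypothetical per-task data for a "flash LED" task; the task knows the
// layout, the OS just stores and passes a void*.
typedef struct {
    unsigned int flashRateMs;   // current flash period
    unsigned int ledState;      // on/off
} FlashLEDData;

void flashLEDTask(void* taskDataPtr) {
    FlashLEDData* data = (FlashLEDData*) taskDataPtr;  // recover the real type
    data->ledState = !data->ledState;                  // toggle the LED state
    /* write data->ledState to the LED output pin */
}

/* The scheduler needs only the generic TCB fields:
       tcb->taskPtr(tcb->taskDataPtr);
   and can walk the nextPtr links to call each ready task in turn. */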
Thought question: What are your options for handling a user prompt longer than 1 character?
Keep ISRs Short
Keeping ISRs short is the rule of thumb, but also consider the alternatives.
In the provided example, minimal preprocessing was performed in the ISR fetching the data
Minimal immediate preprocessing may be warranted if not doing so creates more net complexity and effort elsewhere
In the provided example, multiple consumers (tasks that use data) require the preprocessing, meaning the additional complexity of managing extra buffers and flags for the RAW and preprocessed data might not be a good alternative to immediate pre-processing
In the example, the data chunk size was only one byte
The small chunk size keeps the ISR work minimal on each call, and means the preprocessing will cause minimal blocking delay
Baseline operation: infrequent ISR with a long routine doing collection, preprocessing, and processing
Alternative: infrequent ISR with a short routine; data is buffered and derivative tasks are flagged to complete in a low-priority task (see the sketch below)
The alternative is less disruptive to the Task 1 and Task 2 schedules.
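A rough sketch of the short-ISR pattern (buffer the raw byte, flag a low-priority task to do the rest); the function and variable names here are illustrative, not from the example:

#define RX_BUF_SIZE 64

extern unsigned char uart_rx_byte(void);   // hypothetical hardware read

volatile unsigned char rxBuf[RX_BUF_SIZE];
volatile unsigned int  rxHead = 0;
volatile int           dataReadyFlag = 0;

void rxISR(void) {                          // keep the ISR minimal:
    rxBuf[rxHead] = uart_rx_byte();         // grab the one-byte chunk
    rxHead = (rxHead + 1) % RX_BUF_SIZE;    // advance the buffer index
    dataReadyFlag = 1;                      // flag the low-priority task
}

void processRxTask(void) {                  // called later by the scheduler
    if (dataReadyFlag) {
        dataReadyFlag = 0;
        /* preprocess and consume the buffered bytes here */
    }
}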
Sleep and Conditional Waiting States for Processes
Notice that the prompt function is called forever even though it only acts once per input.
It would be better if the prompt function could signal that it should be taken out of the queue when it is finished, and re-enter the queue only when needed again.
e.g., REMOVE_FROM_ACTIVE_QUEUE(); another process must then wake the process
Rather than calling every task function every time through, it would be nice if a task had a way to tell the scheduler to "time out" itself for a while (take itself temporarily out of the calling or execution queue (ready-waiting)).
e.g. REMOVE_FROM_ACTIVE_QUEUE_FOR(500 rounds)
Alternatively, a task could tell the kernel to only call it again upon some condition being set, by defining a software flag and waiting for it to be set by another task.
e.g. REMOVE_FROM_ACTIVE_QUEUE_UNTIL(flagIndexData)
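A minimal sketch of how a simple cooperative kernel could support the two ideas above (the field names and flag array are illustrative, not a particular OS's API): each task entry carries a sleep counter and an optional flag index, and the scheduler pass skips tasks that are timed out or still waiting.

#define NUM_TASKS 4
#define NUM_FLAGS 8

typedef struct {
    void (*taskPtr)(void*);
    void* taskDataPtr;
    unsigned int sleepRounds;   // > 0: skip this task for that many scheduler passes
    int waitFlag;               // >= 0: skip until softwareFlags[waitFlag] is set
} MiniTCB;

MiniTCB tasks[NUM_TASKS];
volatile int softwareFlags[NUM_FLAGS];

void schedulerPass(void) {
    for (int i = 0; i < NUM_TASKS; i++) {
        MiniTCB* t = &tasks[i];
        if (t->taskPtr == 0) continue;                                  // empty slot
        if (t->sleepRounds > 0) { t->sleepRounds--; continue; }         // still "timed out"
        if (t->waitFlag >= 0 && !softwareFlags[t->waitFlag]) continue;  // condition not met
        t->taskPtr(t->taskDataPtr);                                     // run the ready task
    }
}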
An even more advanced and useful feature would be a call that a task could make to put itself to sleep somewhere in the middle of its own code and resume where it left off.
SLEEP_HERE()
SLEEP_HERE_FOR(500 ticks)
SLEEP_HERE_UNTIL(flagIndexData)
Most of these are addressed/enabled by the operating system