Midterm
Okay, I can help answer the practice questions presented in the Midterm Review slides (lec9_midterm_review.pdf). Here are the questions and their answers based on the provided material:
Note: a system call is an exception; a procedure call is neither an exception nor an interrupt.
1. Privileged Operations [cite: 61]
The question asks which operations should be privileged. Privileged operations are those that could affect system stability or security if executed directly by user applications; they must be executed in kernel mode. [cite: 7] Based on standard operating system principles discussed in the review [cite: 7, 8]:
- Executing I/O instructions (e.g., send a read operation to a disk): Privileged. Direct hardware access needs OS mediation.
- Invoking a system call: Not Privileged. Invoking a system call is the mechanism by which a user process requests a privileged operation, but the invocation itself (e.g., the SYSCALL instruction) starts in user mode and traps into kernel mode, where the OS then performs the privileged work. The slide marks this with an X, likely indicating the invocation is not itself a privileged instruction, even though the kernel's resulting action is privileged.
- Modifying memory protection information: Privileged. Only the OS should control which processes can access which memory regions.
- Destroying a process: Privileged. The OS manages processes and their resources. [cite: 15]
- Reading from memory (e.g., with the MOV instruction): Not Privileged. User processes need to read their own memory. The hardware memory protection, managed by the OS (a privileged operation), prevents reading other processes' memory. The slide marks this with an X. [cite: 61]
- Disabling interrupts: Privileged. Disabling interrupts affects the entire system's ability to respond to events and should only be done by the kernel for brief periods, often during critical sections. [cite: 13, 37]
- Calling join() in a user-level thread: Not Privileged. With a user-level threading library, operations like join are managed within the process's own context without direct kernel intervention (though any underlying kernel threads are managed by the OS). The slide marks this with an X. [cite: 24, 61]
2. Race Conditions [cite: 62]
The code involves two threads accessing a shared variable x initialized to 0. One thread increments x 10 times (AddToX), and the other decrements x 10 times (SubFromX). [cite: 63] Each increment (x++) or decrement (x--) is not atomic and typically involves separate read, modify, and write steps.
Because of potential interleaving of these steps (a race condition [cite: 30]), the final value of x can vary.
* If AddToX runs completely before SubFromX (or vice versa): x becomes 10, then returns to 0. Final value = 0.
* Worst case for AddToX (maximum positive value): suppose AddToX reads x (say 0) and is preempted before writing. SubFromX then runs some or all of its decrements, and AddToX resumes and writes back 1 based on its stale read, wiping out those decrements. If this pattern repeats in AddToX's favor, every decrement by SubFromX can be lost while every increment by AddToX survives.
* Extreme Positive: If AddToX performs all 10 increments without interference from SubFromX's decrements affecting its reads (e.g., AddToX reads x, increments its local copy, gets swapped out; SubFromX runs, then AddToX writes its value back), x could reach 10.
* Extreme Negative: Similarly, if SubFromX performs all 10 decrements without interference from AddToX's increments affecting its reads, x could reach -10.
The slide correctly states the possible range is -10 to 10. [cite: 63] This occurs because, in the worst case, one thread's operations might effectively overwrite the other's based on stale reads. For example, AddToX reads 0, SubFromX reads 0. AddToX calculates 1, SubFromX calculates -1. AddToX writes 1. SubFromX writes -1. One operation's result is lost. If this happens consistently, extreme values are possible.
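The lost-update interleaving described above can be made concrete by splitting each `x++` / `x--` into its separate read and write steps. Below is a minimal Python sketch; the names `AddToX`/`SubFromX` follow the slides, but the interleaving is scripted by hand (one stale read, then a full run of the other thread) rather than left to a real scheduler:

```python
# Deterministic sketch of one lost-update interleaving.
# Each x++ / x-- is modeled as an explicit read step and write step.

x = 0

# AddToX performs the read step of its first x++ and is then "preempted".
add_local = x            # stale read: sees 0

# SubFromX runs all 10 decrements to completion: x becomes -10.
for _ in range(10):
    sub_local = x        # read step of x--
    x = sub_local - 1    # write step of x--

assert x == -10

# AddToX resumes and writes back based on its stale read (0 + 1 = 1),
# wiping out all 10 decrements: a lost update.
x = add_local + 1
print(x)   # 1
```

Repeating this pattern for each of AddToX's increments is how the extreme value +10 (and symmetrically -10) can be reached.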
3. Scheduling State Transitions [cite: 64]
Based on the provided state diagram [cite: 65]:
- Ready -> Running: Caused by the scheduler selecting the process/thread to run (dispatching). [cite: 65]
- Ready -> Waiting: Not Possible. A process only moves to Waiting from the Running state when it needs to wait for a resource or event (like I/O). [cite: 65] A ready process is waiting only for the CPU.
- Running -> Ready: Caused by a timer interrupt (in preemptive scheduling), or the thread voluntarily yielding the CPU (yield). [cite: 65]
- Running -> Waiting: Caused when the process/thread blocks for a resource (e.g., initiates an I/O operation, waits for a lock, encounters a page fault). [cite: 65]
- Waiting -> Running: Not Possible. A process must first become Ready when the event it was waiting for completes (e.g., I/O completion interrupt, resource becomes free). The scheduler then later moves it from Ready to Running. [cite: 65]
- (Implicit) Waiting -> Ready: Caused by the event the thread was waiting for completing (e.g., I/O completion interrupt, resource becomes free). [cite: 65]
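The legal transitions above can be summarized as a small lookup table. This is only an illustrative sketch (state names follow the diagram; the cause strings are paraphrases of the answers above):

```python
# Legal transitions in the three-state scheduling diagram,
# mapped to the event that causes each one.
VALID_TRANSITIONS = {
    ("Ready", "Running"):   "scheduler dispatch",
    ("Running", "Ready"):   "timer interrupt or voluntary yield",
    ("Running", "Waiting"): "blocks on I/O, a lock, or a page fault",
    ("Waiting", "Ready"):   "awaited event completes",
}

def can_transition(src, dst):
    return (src, dst) in VALID_TRANSITIONS

assert can_transition("Ready", "Running")
assert not can_transition("Ready", "Waiting")    # must be Running to block
assert not can_transition("Waiting", "Running")  # must pass through Ready
```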
4. Deadlock Resource Allocation Graphs [cite: 66]
Analyzing the four graphs for deadlock (circles are threads/processes T, squares are resources L/S; an arrow T->R means T requests R, and an arrow R->T means T holds R) [cite: 52]:
- Graph (1): T2 holds L1 and requests S1; T1 holds S1 and requests L1; T3 requests S1. Invalid Graph. Read under the standard notation (T->R is a request, R->T is a hold), the diagram cannot be interpreted consistently: some edges point the wrong direction for either a request or a hold, and S1's instance count is ambiguous (T1 appears to hold one instance while T2 and T3 both request it). Because the graph is not well-formed, the slide labels it Invalid. [cite: 67]
- Graph (2): T1 holds L2, requests L1. T2 holds L1, requests L2. Deadlock. There is a circular wait between T1 and T2 involving resources L1 and L2. [cite: 51, 67]
- Graph (3): T2 holds L1 and requests S1. T1 holds S1 (one instance) and requests L1. T3 requests S1 (a second instance). No Deadlock. There is a cycle (T1 -> L1 -> T2 -> S1 -> T1), but because S1 has multiple instances, a cycle is necessary but not sufficient for deadlock: a free instance of S1 can be granted to T2, which can then finish and release L1, unblocking T1 and eventually T3. The slide confirms No Deadlock. [cite: 67]
- Graph (4): T1 holds L1 and requests S1 (instance 1); T2 requests S1 (instance 2). No Deadlock. There is no cycle: both threads are simply waiting for S1, and when an instance becomes available, one of them can proceed. The slide confirms No Deadlock. [cite: 67]
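For single-instance resources, the deadlock check reduces to finding a cycle in the allocation graph. A minimal sketch follows; the edge lists encode graphs (2) and (4) as described above (T->R for a request, R->T for a hold), and the DFS is an illustrative implementation, not the slides' algorithm:

```python
# Detect circular wait by cycle detection in a directed graph.
def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    def dfs(node, path):
        if node in path:                 # revisited a node on this path: cycle
            return True
        return any(dfs(nxt, path | {node}) for nxt in graph.get(node, []))

    return any(dfs(node, set()) for node in graph)

# Graph (2): T1 holds L2 and requests L1; T2 holds L1 and requests L2.
graph2 = [("L2", "T1"), ("T1", "L1"), ("L1", "T2"), ("T2", "L2")]
assert has_cycle(graph2)       # circular wait: deadlock

# Graph (4): T1 holds L1 and requests S1; T2 requests S1.
graph4 = [("L1", "T1"), ("T1", "S1"), ("T2", "S1")]
assert not has_cycle(graph4)   # no cycle: no deadlock
```

Note the caveat from graph (3): with multi-instance resources like S1, a cycle is necessary but not sufficient for deadlock, so this check alone would give a false positive there.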
5. Hikers-Bikers Problem [cite: 68, 70]
The goal is mutual exclusion between hikers and bikers, and limiting bikers to at most 2. The provided solution uses a lock (lock) and a condition variable (cv), along with shared counters hikers and bikers. [cite: 71]
- hiker(): Acquires the lock. Waits (wait(cv)) while there are any bikers (bikers > 0). Once it proceeds, it increments hikers, releases the lock, hikes, re-acquires the lock, decrements hikers, broadcasts (broadcast(cv)), and releases the lock. [cite: 72]
- biker(): Acquires the lock. Waits (wait(cv)) while there are already more than 1 bikers (bikers > 1) OR there are any hikers (hikers > 0). Once it proceeds, it increments bikers, releases the lock, bikes, re-acquires the lock, decrements bikers, broadcasts (broadcast(cv)), and releases the lock. [cite: 73]
Analysis Points from Slides: [cite: 72, 73, 74]
* Why while loops for wait? This follows Mesa semantics for condition variables, where a woken thread must re-check the condition because another thread might have changed the state between the signal/broadcast and the woken thread re-acquiring the lock. [cite: 47]
* Why broadcast instead of signal? Using broadcast is often safer (though potentially less efficient) when multiple threads might be waiting for different conditions or when you're unsure which specific waiting thread should be woken. Here, waking all allows waiting hikers (if bikers became 0) and waiting bikers (if hikers became 0 or bikers dropped below 2) to re-check the conditions. Using signal might only wake one thread, which might not be the right one, potentially leading to deadlock or starvation if, for instance, a biker is woken but the condition hikers > 0 is still true. [cite: 74]
* Checking counts: The checks (bikers > 0 for hikers, bikers > 1 || hikers > 0 for bikers) correctly enforce the exclusion and limit rules before allowing a thread to proceed. [cite: 72, 73]
* Decrementing and Broadcasting: It's crucial that the broadcast happens after decrementing the counter, ensuring the state reflects the thread leaving before others are woken to check.
6. Make Me a Pizza Problem [cite: 75, 78]
The goal is to coordinate three chefs using two semaphores (crust_done and toppings_done, both initialized to 0) to ensure sequential production for each pizza i. [cite: 76, 79]
- chef1(n) (Makes Crust): Loops n times. Prints "crust i". Signals crust_done, indicating crust i is ready for Chef 2. [cite: 79]
- chef2(n) (Adds Toppings): Loops n times. Waits on crust_done. Prints "toppings i". Signals toppings_done. This ensures Chef 2 waits for crust i and then signals that toppings are done for pizza i. [cite: 80]
- chef3(n) (Cooks): Loops n times. Waits on toppings_done. Prints "cooking i". This ensures Chef 3 waits for toppings i. [cite: 80]
Analysis Points from Slides: [cite: 80]
* Coordination: The semaphores correctly enforce the dependencies: Chef 2 waits for Chef 1 (crust_done.wait()), and Chef 3 waits for Chef 2 (toppings_done.wait()). The signals (.signal()) pass the "baton" for the specific pizza down the assembly line.
* Concurrency: Multiple pizzas can be in different stages simultaneously. For example, Chef 1 could be making crust 3, while Chef 2 is adding toppings to pizza 2, and Chef 3 is cooking pizza 1. The number of pizzas in progress at once depends on how quickly the chefs work and scheduling, but up to 3 pizzas could be actively worked on (one per chef). [cite: 80]
* Locks and Printing: The question asks if a lock is needed around print. In this setup the semaphores already order the three prints for any given pizza i (crust, then toppings, then cooking), so no lock is needed for correctness. A lock would matter only if interleaved output between different pizzas were undesirable, or if other shared state were updated alongside the printing. The current solution doesn't use locks for printing. [cite: 80]
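The three-chef pipeline can be sketched with Python counting semaphores initialized to 0. The semaphore names follow the slides; the shared `log` list and `log_lock` are assumptions added here so the output ordering can be checked instead of printed:

```python
import threading

crust_done = threading.Semaphore(0)
toppings_done = threading.Semaphore(0)
log = []                              # stands in for printing
log_lock = threading.Lock()           # protects appends to the log

def chef1(n):
    for i in range(1, n + 1):
        with log_lock:
            log.append(f"crust {i}")
        crust_done.release()          # signal: crust i is ready

def chef2(n):
    for i in range(1, n + 1):
        crust_done.acquire()          # wait for crust i
        with log_lock:
            log.append(f"toppings {i}")
        toppings_done.release()       # signal: toppings i are done

def chef3(n):
    for i in range(1, n + 1):
        toppings_done.acquire()       # wait for toppings i
        with log_lock:
            log.append(f"cooking {i}")

threads = [threading.Thread(target=f, args=(3,)) for f in (chef1, chef2, chef3)]
for t in threads: t.start()
for t in threads: t.join()

# For every pizza i: crust before toppings before cooking.
for i in (1, 2, 3):
    assert log.index(f"crust {i}") < log.index(f"toppings {i}") \
        < log.index(f"cooking {i}")
```

Running this also illustrates the concurrency point above: entries for different pizzas may interleave in the log, but the per-pizza order is fixed by the semaphores.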
These answers cover the practice problems presented in the lec9_midterm_review.pdf file. Let me know if you have other specific questions from the review!