---
type: theoretical
backlinks:
- "[[Overview#Multiprogramming]]"
---

## Intro
Processes frequently need to communicate with other processes.

### Independent
Cannot affect or be affected by other processes.

### Dependent
Can affect or be affected by other processes, for example through shared data.

### Why?
- Information sharing
- Computation speedup
- Modularity
- Convenience

## Methods
### Shared memory
```mermaid
block-beta
columns 1
a["Process A"] Shared b["Process B"] ... Kernel
```

Processes/threads exchange information by writing to shared memory variables. This is where [Concurrency](Inter-Process%20Communication.md#Concurrency) becomes an issue.

#### Synchronization
To remedy the race condition issue, we should synchronize shared memory access. In other words, **while a process is writing to shared memory, no other process may access it**. Multiple simultaneous reads are still allowed, though.

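A minimal sketch of this rule using POSIX read-write locks; the `shared_value` variable and the `reader`/`writer` functions are made up for illustration:

```c
#include <pthread.h>
#include <stdio.h>

/* Hypothetical shared variable protected by a read-write lock. */
static int shared_value = 0;
static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;

void *reader(void *arg) {
    (void)arg;
    pthread_rwlock_rdlock(&rwlock);   /* many readers may hold this at once */
    printf("read %d\n", shared_value);
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void *writer(void *arg) {
    (void)arg;
    pthread_rwlock_wrlock(&rwlock);   /* exclusive: blocks readers and writers */
    shared_value++;
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}
```
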
### Message passing (queueing)
```mermaid
block-beta
columns 1
a["Process A"] b["Process B"] ... ... Kernel
```

Processes communicate with each other by exchanging messages. A process may send information to a port, from which another process may receive it.

At a minimum, we need to be able to `send()` and `receive()`.

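A minimal sketch of sending and receiving over a POSIX pipe between a parent and a child process; the message text is made up, and the pipe's file descriptors stand in for the "port":

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {               /* child: receiver */
        char buf[32] = {0};
        read(fd[0], buf, sizeof(buf) - 1);
        printf("received: %s\n", buf);
    } else {                         /* parent: sender */
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);
    }
    return 0;
}
```
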
## Concurrency
Accessing the same shared memory at the same time might cause issues. This occurrence is called a **Race Condition**.

> [!IMPORTANT]- When does it occur?
> A race condition occurs when some processes or threads can access (read or write) a shared data variable concurrently and at least one of the accesses is a write access (data manipulation).

The result depends on when context switching[^1] happens.

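A minimal sketch of a race condition, assuming POSIX threads: two threads increment the same counter with no synchronization, so the final value depends on how the scheduler interleaves their read-modify-write updates.

```c
#include <pthread.h>
#include <stdio.h>

static int counter = 0;              /* shared variable */

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* unsynchronized write: race condition */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 2000000, got %d\n", counter);  /* often less: lost updates */
    return 0;
}
```
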
The part of the program where shared memory is accessed is called the **Critical Section (CS)**.

### Critical Regions/Sections
![](Pasted%20image%2020240815165813.png)

### Avoiding race conditions

> [!IMPORTANT]- Conditions
> 1. No two processes may be simultaneously inside their critical regions. (Mutual Exclusion)
> 2. No assumptions may be made about speeds or the number of CPUs.
> 3. No process running outside its critical region may block other processes.
> 4. No process should have to wait forever to enter its critical region. (No Starvation)

### Locks
A flag that tells us whether the shared memory is currently being written to; a process acquires the lock before entering its critical section and releases it when done.

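A minimal sketch of such a lock as a C11 spinlock built on an atomic test-and-set flag; the `acquire()`/`release()` names are made up for illustration:

```c
#include <stdatomic.h>

/* The flag is "set" while some thread holds the lock. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomically set the flag; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ; /* busy waiting */
}

void release(void) {
    atomic_flag_clear(&lock);
}
```
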
### Mutexes
Short for mutual exclusion: a type of **lock** used specifically to enforce point 1 in [Avoiding race conditions](Inter-Process%20Communication.md#Avoiding%20race%20conditions).

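A minimal sketch using a POSIX mutex; the `balance`/`deposit()` names are just an example, not from the notes:

```c
#include <pthread.h>

static int balance = 0;   /* hypothetical shared variable */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void deposit(int amount) {
    pthread_mutex_lock(&m);     /* only one thread may hold the mutex */
    balance += amount;          /* critical section */
    pthread_mutex_unlock(&m);
}
```
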
### Semaphores
Can allow more than one thread to access a resource, based on a number of permits. A binary semaphore is essentially just a lock; a counting semaphore allows only as many concurrent accesses as it has permits.

We use a `wait()`:
```c
void wait(int *S) {
    while ((*S) <= 0)
        ; // busy waiting
    (*S)--;
}
```

and a `signal()`:
```c
void signal(int *S) {
    (*S)++;
}
```

To avoid busy waiting, a semaphore can instead keep a list of the processes currently blocked on it: `wait()` puts the caller on the list and suspends it, and `signal()` wakes one of them up.

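POSIX semaphores work this way: `sem_wait()` blocks the caller instead of spinning, and `sem_post()` wakes a waiter. A minimal sketch; the two-permit value and the `worker` function are made up for illustration:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t sem;

void *worker(void *arg) {
    sem_wait(&sem);                  /* wait(): may block, no busy waiting */
    printf("thread %ld inside\n", (long)arg);
    sem_post(&sem);                  /* signal(): wakes one blocked waiter */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&sem, 0, 2);            /* counting semaphore with 2 permits */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}
```
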
#### Producer-Consumer Problem
- The producer produces items and places them in the buffer.
- The consumer removes items from the buffer for processing.

We need to ensure that while the producer is placing an item in the buffer, the consumer does not consume any item at the same time. In this problem, the buffer is the **critical section**.

> [!example]- How do we solve this?
> To solve this problem, we need two counting semaphores, Full and Empty. “Full” keeps track of the number of items in the buffer at any given time, and “Empty” keeps track of the number of unoccupied slots.

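A minimal sketch of that scheme with POSIX semaphores plus a mutex around the buffer itself; the buffer size, item counts, and names like `empty_slots`/`full_slots` are arbitrary choices for illustration:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                              /* buffer size (arbitrary) */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;    /* the "Empty" and "Full" semaphores */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);          /* wait for a free slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);           /* one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);           /* wait for an item */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);          /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);        /* all slots start empty */
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```
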
#### Readers-Writers Problem
* Multiple readers can access the shared data simultaneously without causing any issues because they are only reading and not modifying the data.
* Only one writer can access the shared data at a time to ensure data integrity.

We already mentioned this in [Synchronization](Inter-Process%20Communication.md#Synchronization).

> [!example]- How do we solve this?
> There are two solutions: we give priority either to readers or to writers, but we have to do so consistently. Whichever side has priority is allowed to go first, until none of its requests are left pending.

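A minimal sketch of the readers-priority variant using semaphores; the variable names are made up. The first reader locks writers out, the last reader lets them back in, and writers take the `rw` semaphore exclusively:

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t rw, count_mutex;
static int read_count = 0;        /* number of readers currently reading */
static int shared_data = 0;       /* hypothetical shared variable */

void reader(void) {
    sem_wait(&count_mutex);
    if (++read_count == 1)
        sem_wait(&rw);            /* first reader blocks writers */
    sem_post(&count_mutex);

    int value = shared_data;      /* read without modifying */
    (void)value;

    sem_wait(&count_mutex);
    if (--read_count == 0)
        sem_post(&rw);            /* last reader lets writers in */
    sem_post(&count_mutex);
}

void writer(void) {
    sem_wait(&rw);                /* exclusive access */
    shared_data++;
    sem_post(&rw);
}

int main(void) {
    sem_init(&rw, 0, 1);
    sem_init(&count_mutex, 0, 1);
    writer();
    reader();
    return 0;
}
```
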
### Peterson's Algorithm

Where `i` and `j` are two separate processes.

1. Two Boolean flags, one for each process (e.g. `flag[0]` and `flag[1]`). Each flag indicates whether the corresponding process wants to enter the critical section.
2. Process `i` sets its flag, gives the turn to `j`, and busy-waits while `j` also wants in and it is `j`'s turn:
```c
flag[i] = true;
turn = j;
while (flag[j] && turn == j) {
    // busy wait[^2]
}
```

3. Once the while loop condition fails, process `i` enters its critical section.
4. After finishing its critical section, process `i` sets `flag[i]` to false.
5. Repeat!

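Putting the steps together, a minimal sketch for two threads; C11 atomics with their default sequential consistency stand in for the textbook assumption that loads and stores are not reordered:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];          /* step 1: one flag per process */
static atomic_int turn;
static int counter = 0;              /* shared data touched in the critical section */

static void enter(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);    /* step 2: announce intent */
    atomic_store(&turn, j);          /* give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            /* busy wait */
}

static void leave(int i) {
    atomic_store(&flag[i], false);   /* step 4: allow the other process in */
}

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 100000; k++) {
        enter(i);
        counter++;                   /* step 3: critical section */
        leave(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", counter);  /* 200000 if mutual exclusion holds */
    return 0;
}
```
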
## Monitors
A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
It defines procedures (i.e. methods).

> [!WARNING]
> Only one process may be executing any of the monitor's procedures within the monitor at a time.

It uses **condition variables** (often with wait and signal[^3] operations) to allow threads to wait for certain conditions to be met before proceeding.

![](Pasted%20image%2020240815171219.png)

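C has no built-in monitors, so here is a minimal hand-rolled sketch of the same idea: a mutex ensures only one thread runs a "procedure" at a time, and a condition variable provides the wait/signal operations. The `ready` flag and procedure names are made up for illustration.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t item_ready = PTHREAD_COND_INITIALIZER;
static bool ready = false;           /* example condition the monitor guards */

void produce(void) {                 /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    ready = true;
    pthread_cond_signal(&item_ready);     /* signal: wake one waiting thread */
    pthread_mutex_unlock(&monitor_lock);
}

void consume(void) {                 /* monitor procedure */
    pthread_mutex_lock(&monitor_lock);
    while (!ready)
        pthread_cond_wait(&item_ready, &monitor_lock);  /* wait: releases the lock while blocked */
    ready = false;
    pthread_mutex_unlock(&monitor_lock);
}
```
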
---

[^1]: [Context switching](Processes%20and%20Threads.md#Context%20switching)
[^2]: Process repeatedly checks a condition (usually in a loop) until a desired event or state is reached. Not too different from polling.
[^3]: to the other threads