Process synchronization: the problem of getting processes to work together in a coordinated manner.
The execution of an independent process is reproducible.
Cooperating processes affect one another, so their relative execution speeds are indeterminate and their execution is not reproducible, as the sketch below illustrates.
On a single processor, concurrent execution is achieved by time slicing.
On a distributed system, processes may be running anywhere and may be
truly simultaneous.
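As a quick illustration (a sketch, not part of the notes), the following C program, assuming POSIX threads, starts two cooperating threads that share standard output. The thread names and iteration count are arbitrary choices for the demo. On a multiprocessor the interleaving of the two threads' output typically differs from run to run, which is the non-reproducibility described above; a single independent process printing the same lines produces identical output every time.
    /* interleave.c -- two cooperating threads whose combined output is not
       reproducible.  Build with: gcc -pthread interleave.c */
    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg) {
        const char *name = arg;                /* "A" or "B" */
        for (int i = 0; i < 1000; i++)
            printf("%s: step %d\n", name, i);  /* stdout is shared by the threads */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, "A");
        pthread_create(&b, NULL, worker, "B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }
Each thread's own lines always appear in order, but comparing the output of two runs (for example with diff) usually shows a different interleaving.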
The Producer/Consumer or Bounded Buffer Problem
A producer process produces information.
A consumer process consumes information.
A process can be both a producer and a consumer.
Communication between two processes is almost always buffered.
The buffer has a finite amount of space.
How do you implement this with shared variables?
The producer must wait if no buffer is available.
The consumer must wait if all buffers are empty.
A simple implementation:
Implement a shared circular buffer of size n as follows:
    item buffer[n];
    int in, out, counter;
in points to the next free buffer
out points to the first full buffer
counter contains the number of full buffers
There is no data available when counter = 0.
There is no available place to store data when counter = n.
Bounded Buffer 1
Producer loop:
    /* produce an item in nextp */
    while (counter == n) ;     /* busy wait while all buffers are full */
    buffer[in] = nextp;
    in = (in + 1) % n;
    counter++;
Consumer loop:
    while (counter == 0) ;     /* busy wait while all buffers are empty */
    nextc = buffer[out];
    out = (out + 1) % n;
    counter--;
    /* consume the item in nextc */
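Below is one way to wrap these two loops in a runnable C program; this is a sketch, not part of the notes, assuming POSIX threads, and the item type, iteration count, sleep, and final report are illustrative additions. The variables are declared volatile only so that the busy-wait loops re-read counter; volatile does not make counter++ or counter-- atomic, so the shared counter is still updated without protection, exactly as in the pseudocode above.
    /* boundedbuffer.c -- the busy-wait bounded buffer, deliberately left
       unsynchronized.  Build with: gcc -O0 -pthread boundedbuffer.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N     10                      /* number of buffer slots          */
    #define ITEMS 1000000                 /* items the producer will produce */

    typedef int item;

    item buffer[N];
    volatile int in = 0, out = 0, counter = 0;
    volatile long consumed = 0;           /* items the consumer has taken    */

    void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            item nextp = i;               /* produce an item in nextp        */
            while (counter == N) ;        /* busy wait: all buffers full     */
            buffer[in] = nextp;
            in = (in + 1) % N;
            counter++;                    /* unprotected read-modify-write   */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (;;) {
            while (counter == 0) ;        /* busy wait: all buffers empty    */
            item nextc = buffer[out];     /* consume the item in nextc       */
            out = (out + 1) % N;
            counter--;                    /* unprotected read-modify-write   */
            consumed++;
            (void)nextc;
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);            /* wait for the producer only      */
        sleep(1);                         /* let the consumer drain          */
        printf("produced %d, consumed %ld, counter = %d\n",
               ITEMS, (long)consumed, counter);
        return 0;                         /* process exit ends the consumer  */
    }
If counter were updated atomically, the report would always satisfy produced - consumed = counter; because the updates race, the numbers are usually inconsistent. The reason is shown below.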
Although each routine is correct by itself, they do not function
correctly when run concurrently.
Consider how the statements counter++ and counter-- are implemented on a RISC architecture, where memory can only be updated by separate load and store instructions.
counter++:
    R1 = counter
    R1 = R1 + 1
    counter = R1
counter--:
    R2 = counter
    R2 = R2 - 1
    counter = R2
Suppose counter = 5 with n = 10 and the producer loses the CPU just after executing R1 = counter.
After the producer has produced one item and the consumer has consumed one item, we have counter = 6, but it should be 5.
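Spelled out, one interleaving that produces this result is:
    counter is 5
    producer:  R1 = counter       (R1 is 5)
    (producer loses the CPU; the consumer runs)
    consumer:  R2 = counter       (R2 is 5)
    consumer:  R2 = R2 - 1        (R2 is 4)
    consumer:  counter = R2       (counter is 4)
    (the producer resumes)
    producer:  R1 = R1 + 1        (R1 is 6)
    producer:  counter = R1       (counter is 6)
The consumer's update is lost. If the consumer's store had happened last instead, counter would end up as 4; either way one of the two updates disappears, which is why the updates to counter must be made atomic or otherwise protected.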