An operating system virtualizes the CPU to make it easier to use. Even with a single CPU, the OS can make it appear as if multiple processes are running concurrently. To make that work, the OS needs to schedule which process runs and for how long. A scheduler may optimize for different metrics; one idea is the proportional-share scheduler, which tries to give each process a certain percentage of CPU time. Interestingly enough, one example of a fair-share scheduler is CFS (the Completely Fair Scheduler) from Linux. It is a pretty complex scheduler, weighing in at more than 10K lines of code (see the GitHub mirror).

Taking a step back, we have lottery scheduling [1]. From Wikipedia [2]:

Lottery scheduling is a probabilistic scheduling algorithm for processes in an operating system. Processes are each assigned some number of lottery tickets, and the scheduler draws a random ticket to select the next process. The distribution of tickets need not be uniform; granting a process more tickets provides it a relative higher chance of selection. This technique can be used to approximate other scheduling algorithms, such as Shortest job next and Fair-share scheduling.

Lottery scheduling solves the problem of starvation. Giving each process at least one lottery ticket guarantees that it has non-zero probability of being selected at each scheduling operation.

Why would you choose lottery scheduling over more deterministic approaches? The main advantage is simplicity: the algorithm is straightforward to implement and reason about. Additionally, it naturally prevents starvation—any process with at least one ticket will eventually run. The trade-off is that scheduling decisions are probabilistic rather than deterministic, which means actual CPU time percentages will only approximate the desired proportions over time.
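
The proportions come straight from the ticket counts: a process holding t tickets out of a total of T wins each draw with probability t/T, so t/T is also its expected share of CPU time. A quick sketch, using the ticket counts from the example further down (the helper name is mine):

const std = @import("std");

// Expected CPU share under lottery scheduling: a job's tickets
// divided by the total number of tickets in play.
fn expectedShare(tickets: u32, total_tickets: u32) f64 {
    return @as(f64, @floatFromInt(tickets)) /
        @as(f64, @floatFromInt(total_tickets));
}

test "ticket counts map to proportional shares" {
    // 75 of 179 tickets -> roughly 42% of the time slices on average.
    try std.testing.expectApproxEqAbs(
        @as(f64, 0.419),
        expectedShare(75, 179),
        0.001,
    );
}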

The “random” part is not as simple as it sounds: when you are developing an operating system, you need to make sure you have a good source of randomness available. To be really fair, that source, and any calculations you perform on it when picking tickets, should be unbiased. If we were writing an OS from scratch and opted for the lottery scheduler, we might need to implement such a random function ourselves. For now, I am building a simple simulator, so I will use Zig’s standard library. In std.Random (zig 0.16.0-dev.577+c50aa2b95), we have:

// Returns an evenly distributed random
// integer at_least <= i <= at_most. (...)
pub fn intRangeAtMost(
    r: Random,
    comptime T: type,
    at_least: T,
    at_most: T
) T

With random number generation handled by the standard library, we can create a simple structure for our jobs:

const Job = struct {
    id: usize,    // job identifier, used in the trace output
    length: u32,  // remaining run time, in time units
    tickets: u32, // lottery tickets held by this job
};

and create a random number generator for later use:

var prng = std.Random.DefaultPrng.init(seed);
const random = prng.random();
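
The seed itself is left open above. Two simple options, and this is my choice for the sketch rather than anything the simulator requires: hard-code the seed to make runs reproducible, or pull it from std.crypto.random to get a different lottery every time:

const std = @import("std");

pub fn main() void {
    // Reproducible: every run replays exactly the same lottery.
    var fixed = std.Random.DefaultPrng.init(42);

    // Varied: seed from the OS-backed CSPRNG exposed as std.crypto.random,
    // so every run draws a different sequence.
    var varied = std.Random.DefaultPrng.init(std.crypto.random.int(u64));

    std.debug.print("{} {}\n", .{
        fixed.random().int(u8),
        varied.random().int(u8),
    });
}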

Once the jobs are initialized, we need to implement the main loop that processes them. The scheduler will repeatedly select a job to run for a fixed time slice called a quantum, which represents how long a job runs before the scheduler makes another decision.

As an example, suppose we have two jobs:

  • Job 0: Length=8, Tickets=75
  • Job 1: Length=6, Tickets=104
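
In the simulator, these two jobs and the quantum could be set up roughly like this (a sketch of mine; the actual simulator may build its job list differently, e.g. from command-line arguments):

const quantum: u32 = 2;

var job_storage = [_]Job{
    .{ .id = 0, .length = 8, .tickets = 75 },
    .{ .id = 1, .length = 6, .tickets = 104 },
};

// The scheduling loop below reassigns `jobs` as jobs finish,
// so it has to be a mutable slice.
var jobs: []Job = job_storage[0..];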

For each scheduling decision, we need to:

  1. find how many tickets there are in total (75 + 104 = 179), and then
  2. run a lottery to get a number in the range [1, 179]
  3. loop over jobs while accumulating their tickets
    • if accumulated total is greater than or equal to winning ticket, run the job
    • otherwise, keep looping

In Zig, we could do that with the following:

while (hasPendingJobs(jobs)) {
    // Drop jobs that have already finished.
    jobs = clearPendingJobs(jobs);

    // Step 1: count how many tickets are in play.
    var total_tickets: u32 = 0;
    for (jobs) |job| {
        total_tickets += job.tickets;
    }

    // Step 2: draw the winning ticket.
    const winning_ticket = random.intRangeAtMost(u32, 1, total_tickets);
    var accumulated_tickets: u32 = 0;

    // Step 3: walk the jobs, accumulating tickets, until we reach
    // the winning ticket; that job runs for one quantum.
    for (jobs) |*job| {
        accumulated_tickets += job.tickets;
        if (winning_ticket <= accumulated_tickets) {
            if (job.length >= quantum) {
                job.length -= quantum;
            } else {
                job.length = 0;
            }
            break;
        }
    }
}
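
The loop relies on two helpers that are not shown here. A minimal sketch of what they could look like, assuming `jobs` is a mutable slice of Job and that “pending” means a job with remaining length:

// True while at least one job still has work left.
fn hasPendingJobs(jobs: []const Job) bool {
    for (jobs) |job| {
        if (job.length > 0) return true;
    }
    return false;
}

// Compacts the slice so that only unfinished jobs remain and returns
// the shortened slice; finished jobs simply fall off the end.
fn clearPendingJobs(jobs: []Job) []Job {
    var write: usize = 0;
    for (jobs) |job| {
        if (job.length > 0) {
            jobs[write] = job;
            write += 1;
        }
    }
    return jobs[0..write];
}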

If we run the full simulator with 2 jobs and a quantum of 2:

Job 0: Length=8, Tickets=75
Job 1: Length=6, Tickets=104
Quantum: 2
(tix=179, win=079, job=1) [0:L=08:T=075]  [1:L=04:T=104]*
(tix=179, win=143, job=1) [0:L=08:T=075]  [1:L=02:T=104]*
(tix=179, win=027, job=0) [0:L=06:T=075]* [1:L=02:T=104]
(tix=179, win=010, job=0) [0:L=04:T=075]* [1:L=02:T=104]
(tix=179, win=039, job=0) [0:L=02:T=075]* [1:L=02:T=104]
(tix=179, win=161, job=1) [0:L=02:T=075]  [1:L=00:T=104]*
(tix=075, win=054, job=0) [0:L=00:T=075]*

Each line shows one scheduling decision. The format is:

  • tix: total tickets available
  • win: the winning ticket number drawn
  • job: which job won the lottery
  • [id:L=length:T=tickets]: job state (remaining length and ticket count)
  • *: marks the job that ran in this time slice

Job 1 has more tickets (104 vs 75), so it has roughly a 58% chance of being selected each round. Over time, this translates to Job 1 receiving proportionally more CPU time, though the exact distribution varies due to randomness.

Summary

Lottery scheduling demonstrates how a probabilistic approach can achieve proportional-share scheduling with a simple algorithm. While it’s primarily of academic interest — production schedulers like CFS use more sophisticated deterministic approaches — understanding lottery scheduling provides valuable insight into the fundamental concepts of fair-share scheduling and the trade-offs between simplicity and predictability.


  1. “Lottery Scheduling: Flexible Proportional-Share Resource Management” by Carl A. Waldspurger and William E. Weihl. OSDI ’94, November 1994.

  2. https://en.wikipedia.org/wiki/Lottery_scheduling