Some topics from this lecture are also covered in the Introduction to Algorithms textbook:
7 Quicksort
10.1 Stacks and queues
31.2 Greatest common divisor
Here is a table reviewing the time complexities that we will commonly see. I’ve also listed recurrences and sums that typically lead to these complexities.
It may not be obvious how to solve a given recurrence to find the associated complexity. In the last lecture we saw how a recursion tree can sometimes point us to the solution for a recurrence, though that will not work for all of the cases here. You may study general methods for solving recurrences in a more advanced mathematics or computer science course. But for now, try to become familiar with these common cases.
notation    name         typical recurrences              typical sums

O(1)        constant     T(n) = k
O(log n)    logarithmic  T(n) = T(n / 2) + k
O(n)        linear       T(n) = T(n - 1) + k              O(1 + 1 + 1 + … + 1)  (n terms)
O(n log n)  log linear   T(n) = 2 ⋅ T(n / 2) + O(n)
O(n²)       quadratic    T(n) = T(n - 1) + O(n)           O(1 + 2 + 3 + … + n)
O(n³)       cubic        T(n) = T(n - 1) + O(n²)          O(1² + 2² + 3² + … + n²)
O(n^k)      polynomial   T(n) = T(n - 1) + O(n^(k - 1))
O(k^n)      exponential  T(n) = k ⋅ T(n - 1) + O(n)
O(n!)       factorial    T(n) = n ⋅ T(n - 1) + O(n)
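For instance, to see why the recurrence T(n) = T(n - 1) + O(n) leads to quadratic complexity, we can unroll it step by step (writing O(n) as c ⋅ n for some constant c):

```
T(n) = T(n - 1) + cn
     = T(n - 2) + c(n - 1) + cn
     = …
     = T(0) + c(1 + 2 + 3 + … + n)
     = O(1 + 2 + 3 + … + n) = O(n(n + 1) / 2) = O(n²)
```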
A function pointer is a value that refers to a function.
You can declare a function pointer type like this:
type intFun = function(a, b: integer): integer;
Here is a function whose signature matches this function pointer type:
function add(a, b: integer): integer; begin add := a + b; end;
Use the @ (“address-of”) operator to get a pointer to a named function in your program:
var p: intFun; begin p := @add;
You can call a function through a function pointer:
writeln(p(3, 4)); // writes 7
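Putting these pieces together, here is a minimal complete program (a sketch; the second function mul is our own addition for illustration):

```pascal
{$mode delphi}

type intFun = function(a, b: integer): integer;

function add(a, b: integer): integer;
begin
  add := a + b;
end;

function mul(a, b: integer): integer;   // a second function with a matching signature
begin
  mul := a * b;
end;

var p: intFun;

begin
  p := @add;
  writeln(p(3, 4));   // writes 7
  p := @mul;
  writeln(p(3, 4));   // writes 12
end.
```

The same variable p can point to any function whose signature matches the type intFun.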
We can often generalize a function by giving it a function pointer argument that indicates how some operation is to be carried out.
For example, consider the insertion sort function we saw in the last lecture. Here it is again, with minor changes so that it operates on an array of strings rather than of integers:
procedure insertionSort(var a: array of string);
var
  i, j: integer;
  v: string;
begin
  for i := 1 to length(a) - 1 do
    begin
      v := a[i];
      j := i;
      while (j > 0) and (v < a[j - 1]) do
        begin
          a[j] := a[j - 1];
          j := j - 1;
        end;
      a[j] := v;
    end;
end;
As written, this procedure always compares strings using Pascal’s built-in comparison operator ‘<’, which is case-sensitive. So if we sort the array
('apple', 'cherry', 'watermelon', 'Pear', 'Banana')
we will get
('Banana', 'Pear', 'apple', 'cherry', 'watermelon')
The words beginning with a capital letter come first, since capital letters precede lowercase letters in the ASCII encoding.
We might want to sort strings differently, e.g. case-insensitively, or ignoring diacritical marks, or in other ways. We can generalize our insertion sort procedure to work with any string ordering. To do this, we define a function pointer type:
type ordering = function(s, t: string): boolean; // returns true if s < t
And now we modify insertionSort so that any caller can pass a function that implements the ordering they wish to use. Here is the modified implementation:
{$mode delphi}

uses sysutils;

type ordering = function(s, t: string): boolean;  // returns true if s < t

procedure insertionSort(var a: array of string; less: ordering);
var
  i, j: integer;
  v: string;
begin
  for i := 1 to length(a) - 1 do
    begin
      v := a[i];
      j := i;
      while (j > 0) and less(v, a[j - 1]) do
        begin
          a[j] := a[j - 1];
          j := j - 1;
        end;
      a[j] := v;
    end;
end;

function less(s, t: string): boolean;
begin
  less := s < t;
end;

function lessIgnoreCase(s, t: string): boolean;
begin
  lessIgnoreCase := compareText(s, t) < 0;
end;

function reverseIgnoreCase(s, t: string): boolean;
begin
  reverseIgnoreCase := compareText(t, s) < 0;
end;

var a: array[1..5] of string = ('apple', 'cherry', 'watermelon', 'Pear', 'Banana');

begin
  insertionSort(a, @less);
  insertionSort(a, @lessIgnoreCase);
  insertionSort(a, @reverseIgnoreCase);
end.
In the last lecture we saw stacks, which are an abstract data type: we can implement a stack in various ways, such as using an array or using a linked list.
Queues are another abstract data type. The interface to a queue looks like this:
type queue = …

procedure init(var q: queue);
procedure enqueue(var q: queue; i: integer);
function dequeue(var q: queue): integer;
function isEmpty(q: queue): boolean;
The enqueue procedure adds a value to a queue, and dequeue removes a value from the queue. Unlike stacks, queues are a first-in first-out data structure: the first value added to a queue will be the first one to be removed. For example:
var q: queue;
begin
  init(q);
  enqueue(q, 4);
  enqueue(q, 77);
  enqueue(q, 12);
  writeln(dequeue(q));  // writes 4
  writeln(dequeue(q));  // writes 77
A queue, then, is like a set of values waiting in line. We say that the value that will next be removed is at the head of the queue, and the value that was last added is at the tail.
We can easily implement a queue using a linked list. We must keep pointers to the first and last nodes of the list, which represent the head and tail of the queue. A complete implementation appears below.
With this implementation, the enqueue and dequeue operations run in constant time, i.e. O(1).
type
  node = record
    i: integer;
    next: ^node;
  end;

  queue = record
    head: ^node;
    tail: ^node;
  end;

procedure init(var q: queue);
begin
  q.head := nil;
  q.tail := nil;
end;

procedure enqueue(var q: queue; i: integer);
var n: ^node;
begin
  new(n);
  n^.i := i;
  n^.next := nil;
  if q.head = nil then
    begin
      q.head := n;
      q.tail := n;
    end
  else
    begin
      q.tail^.next := n;
      q.tail := n;
    end;
end;

function dequeue(var q: queue): integer;
var p: ^node;
begin
  dequeue := q.head^.i;
  p := q.head;
  q.head := q.head^.next;
  dispose(p);
  if q.head = nil then
    q.tail := nil;
end;

function isEmpty(q: queue): boolean;
begin
  isEmpty := (q.head = nil);
end;
Another way to implement a queue is using a dynamic array. In addition to the array itself (a), we keep integer variables head and tail that indicate the array positions of the first and next-to-be-added elements. For example, if head = 3 and tail = 6 then there are currently three elements in the queue, with values a[3], a[4] and a[5].
As we add elements to the queue, they may wrap past the end of the array back to the beginning. For example, if length(a) = 8, head = 6 and tail = 2 then there are four elements in the queue: a[6], a[7], a[0] and a[1].
If the tail catches up to the head, the array is full and we need to expand it. (In the implementation below, we actually consider the array full when the tail is one position behind the head; this wastes one slot, but lets us distinguish a full queue from an empty one, since head = tail means empty.) Expanding the existing array in place would be awkward if the queue currently wraps past the end of the array, so we instead allocate a new array with double the size of the existing one. We then copy all elements from the old array to the new array.
Suppose that we insert N elements into an initially empty queue. The time to insert each element when the array is not expanded is O(1). We can assume that expanding the array to size S (including copying the elements) takes time O(S). Because we double the array size each time, the total array expansion time will be proportional to
1 + 2 + 4 + … + N = O(N)
So we can insert N elements in O(N) time. Then the average time to insert each element will be O(1) (just as when we implemented a stack using an array). The dequeue operation never changes the array size and always runs in O(1).
Here is the complete implementation:
type queue = record
  a: array of integer;
  head: integer;  // index of the first item in the queue
  tail: integer;  // index of the next item to be added
end;

procedure init(var q: queue);
begin
  setLength(q.a, 2);
  q.head := 0;
  q.tail := 0;
end;

function next(const q: queue; i: integer): integer;
begin
  next := (i + 1) mod length(q.a);
end;

procedure enqueue(var q: queue; i: integer);
var
  b: array of integer;
  size, count, j: integer;
begin
  if next(q, q.tail) = q.head then  // queue is full
    begin
      size := length(q.a);
      count := size - 1;   // number of elements currently in the queue
      setLength(b, size * 2);
      for j := 0 to count - 1 do
        b[j] := q.a[(q.head + j) mod size];
      q.a := b;
      q.head := 0;
      q.tail := count;
    end;
  q.a[q.tail] := i;
  q.tail := next(q, q.tail);
end;

function dequeue(var q: queue): integer;
begin
  dequeue := q.a[q.head];
  q.head := next(q, q.head);
end;

function isEmpty(const q: queue): boolean;
begin
  isEmpty := (q.head = q.tail);
end;
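As a quick check (a sketch, not part of the lecture code), this fragment exercises both array expansion and index arithmetic:

```pascal
var q: queue;
    j: integer;

begin
  init(q);
  for j := 1 to 5 do
    enqueue(q, j);          // forces two expansions (size 2 -> 4 -> 8)
  writeln(dequeue(q));      // writes 1
  writeln(dequeue(q));      // writes 2
  enqueue(q, 6);            // later enqueues may wrap past the end of the array
  while not isEmpty(q) do
    writeln(dequeue(q));    // writes 3, 4, 5, 6
end.
```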
In the last lecture we saw several sorting algorithms (bubble sort, insertion sort, merge sort). Quicksort is another sorting algorithm. It is efficient in practice and is very commonly used.
Quicksort sorts values in an array in place. In this discussion we will assume that array indices begin at 0, as in dynamic arrays in Pascal. Given an array a[0 .. n – 1] of values to sort, quicksort divides the array into two non-empty partitions a[0 .. p] and a[p + 1 .. n – 1] for some integer p. It does this in such a way that every element in the first partition (a[0 .. p]) is less than or equal to every element in the second partition (a[p + 1 .. n – 1]). After partitioning, quicksort recursively calls itself on each of the two partitions.
The high-level structure of quicksort is similar to merge sort: both algorithms divide the array in two and call themselves recursively on both halves. With quicksort, however, there is no more work to do after these recursive calls. Sorting both partitions completes the work of sorting the array that contains them. Another difference is that merge sort always divides the array into two equal-sized pieces, but in quicksort the two partitions are generally not the same size, and sometimes one may be much larger than the other.
There are two partitioning algorithms commonly used in quicksort. In this class we will discuss and use the Hoare partitioning method, which works as follows. The array to be partitioned must have at least two elements. We first choose some element in the array to use as the pivot. The pivot has some value v. All elements in the first partition will have values less than or equal to v, and all elements in the second partition will have values greater than or equal to v. In our initial implementation, we will choose the value at the midpoint of the array to use as the pivot, specifically a[(n – 1) div 2].
We define integer variables i and j representing array indexes. Initially i is positioned at the left end of the array and j is positioned at the right. We move i rightward, looking for a value that is greater than or equal to v (i.e. it can go into the second partition). And we move j leftward, looking for a value that is less than or equal to v. Once we have found these values, we swap a[i] and a[j]. After the swap, a[i] and a[j] now hold acceptable values for the left and right partitions, respectively. Now we continue the process, moving i to the right and j to the left. Eventually i and j meet. The point where they meet is the division point between the partitions, i.e. is the index p mentioned above.
Note that the pivot element itself may move to a different position during the partitioning process! There is nothing wrong with that. The pivot really just serves as an arbitrary value to use for separating the partitions.
For example, consider quicksort’s operation on this array:
The array has n = 8 elements. We choose a[(8 – 1) div 2] = a[3] = 3 as our pivot. Now we set i = -1 and j = 8. We move i rightward, looking for a value that is greater than or equal to 3. Immediately we find a[0] = 6 ≥ 3. We also move j leftward, looking for a value that is less than or equal to 3. We soon find a[6] = 2 ≤ 3. So we swap a[0] and a[6]:
Now i moves rightward until a[1] = 5 ≥ 3, and j moves leftward until j = 3, since a[3] = 3 ≤ 3. So we swap a[1] and a[3]:
Now i moves rightward until it encounters a value that is greater than or equal to 3; the first such value is a[3] = 5. Similarly, j moves leftward until it hits a value that is less than or equal to 3; the first such value is a[2] = 1. So i = 3 and j = 2, and j < i. i and j have crossed, so the partitioning is complete. The first partition is a[0..2] and the second partition is a[3..7]. Every element in the first partition is less than every element in the second partition. Quicksort will now sort both partitions, recursively.
Notice that in this example the pivot element itself moved during the partitioning process. As mentioned above, this is common.
A complete implementation of quicksort appears below.
procedure swap(var i, j: integer);
var k: integer;
begin
  k := i;
  i := j;
  j := k;
end;

// Partition an array of at least two elements.
// Return an index p < high(a) such that every element of
// a[0 .. p] <= every element of a[p + 1 .. high(a)].
function partition(var a: array of integer): integer;
var v, i, j: integer;
begin
  v := a[high(a) div 2];  // pivot value
  i := -1;
  j := high(a) + 1;
  while true do
    begin
      repeat
        i := i + 1;
      until a[i] >= v;
      repeat
        j := j - 1;
      until a[j] <= v;
      if i >= j then break;
      swap(a[i], a[j]);
    end;
  partition := j;
end;

procedure quicksort(var a: array of integer);
var p: integer;
begin
  if length(a) < 2 then exit;
  p := partition(a);
  quicksort(a[0 .. p]);
  quicksort(a[p + 1 .. high(a)]);
end;
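Here is a quick usage sketch (the test array is our own example, not from the lecture):

```pascal
var a: array[0..5] of integer = (5, 2, 8, 1, 9, 3);
    i: integer;

begin
  quicksort(a);
  for i := 0 to high(a) do
    write(a[i], ' ');    // writes 1 2 3 5 8 9
  writeln;
end.
```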
The partition function is simple, but it may not be immediately obvious that it is correct. We make three claims:
All array accesses that happen as partition runs are within the array bounds.
The returned value p is in the range 0 <= p < high(a) (and so both partitions contain at least one element).
All elements in the left partition are less than or equal to all elements in the right partition.
Here is an argument in support of these claims. On each iteration of the while loop, at the point where we test ‘if i >= j’, one of three things is true:
(a) i < j: the values will be swapped, and the loop will continue
(b) i = j: the indices are equal, and the loop will exit
(c) j < i: the indices have crossed, and the loop will exit
The condition (a) will hold for some number of iterations. During those iterations all array accesses will be within bounds, since we have 0 <= i < j <= high(a). Furthermore, the following will always be true at the beginning of the while loop:
for all k such that 0 <= k <= i, a[k] <= v
for all k such that j <= k <= high(a), a[k] >= v
This invariant is vacuously true at the beginning of the first iteration, and is preserved by the loop body. Immediately after the repeat loops, a modified invariant with the stricter conditions 0 <= k < i and j < k <= high(a) is true.
Eventually we will reach condition (b) or (c). If we reach (b), then certainly 0 <= i = j <= high(a), so i = j is within bounds and all array accesses have stayed within bounds as well. And in fact we must have j < high(a). That’s because if we reach (b) on the first iteration, we must have reached the pivot element, whose index is always less than high(a) due to the way we computed it: the array has at least two elements, so high(a) >= 1, so high(a) div 2 < high(a). And if we reach (b) on any other iteration, we have decremented j at least twice, so j < high(a) in that case as well. In any case the loop will exit, and the value i = j is a valid partition point.
Or we may reach condition (c), in which case the pointers have crossed. This cannot happen on the first iteration, where both pointers will stop at the pivot. So let i₀ and j₀ be the values of i and j at the end of the previous iteration. We know that 0 <= i₀ < j₀ <= high(a). Now, at the moment where we test ‘if i >= j’, we have j < i. It must be that i <= j₀, since a[j₀] >= v, so i will stop there (if not before) as it advances. Furthermore we must have j = i – 1, since a[i – 1] <= v, so j cannot have moved past that point. Also, i > i₀, so j = i – 1 >= i₀ >= 0. This shows that both i and j remain within bounds. The loop will exit. At this point, for any k <= j we have k < i, so by the invariant a[k] <= v. And for any k > j the invariant gives us a[k] >= v. The partition point p = j is valid.
To analyze the performance of quicksort, we may start by observing that partition(a) runs in time O(n), where n = length(a). This is because the number of swaps must be no more than n / 2, and because i and j stay within bounds so their values will increment or decrement at most n times.
In the best case, quicksort divides the input array into two equalsized partitions at each step. Then we have
T(n) = 2 T(n / 2) + O(n)
This is the same recurrence we saw when we analyzed merge sort in the last lecture. Its solution is
T(n) = O(n log n)
In the worst case, at each step one partition has a single element and the other partition has all remaining elements. Then
T(n) = T(n – 1) + O(n)
This yields
T(n) = O(n²)
So, then, how will quicksort typically perform?
This depends critically on the choice of pivot elements. Suppose that in the implementation above we had chosen a[0] as the pivot element at each step. Then if the input array was already sorted, the pivot element would always be the smallest in the array, and quicksort would run in the worst-case time of O(n²).
Choosing the element at the midpoint will generally be more robust, and should perform well almost all the time. Still, it is possible to construct input arrays for which quicksort will perform poorly even with this pivot choice. To be safer, we can choose a random pivot element whose index is from 0 to high(a) - 1. (The random element must not be the last, since then partition could return p = high(a)!) If we do so, then quicksort’s expected performance will be O(n log n) for any input array. We will not prove this fact in this class. Note, however, that it is somewhat similar to our (also here unproven) claim that if elements are inserted into a binary tree in a random order, then the expected height of the tree will be O(log n), where n is the number of elements in the tree.
Let us review a bit of number theory. We say that an integer d divides an integer a, written d | a, if a = kd for some integer k. Note that if d | a, then d | ka for any integer k. Furthermore, if d | a and d | b then d | (a + b).
A linear combination of integers a and b is any value of the form ua + vb for some integers u and v. Suppose that x is a linear combination of a and b, and that d | a and d | b. Then x = ua + vb for some u and v, and d | ua and d | vb, so d | x as well.
The division theorem states that for any integers a and b with b > 0, there exist unique integers q and r with 0 <= r < b such that a = bq + r. q is called the quotient and r is the remainder. We have seen these values before: as long as a >= 0, we have q = a div b and r = a mod b. So, rewriting the division theorem in Pascal’s terminology,
a = (a div b) * b + (a mod b)
For integers a and b, the greatest common divisor of a and b, also written as gcd(a, b), is the largest integer d such that d | a and d | b. The greatest common divisor is useful in many situations, such as simplifying fractions. For example, suppose that we want to simplify 35/15. gcd(35, 15) = 5, so we can divide both the numerator and denominator by 5, yielding the simplified fraction 7/3.
One way to find the greatest common divisor of two numbers is to factor the numbers into primes, then look for common primes. For example, consider finding gcd(252, 90). 252 = 9 ⋅ 28 = 2² ⋅ 3² ⋅ 7. And 90 = 9 ⋅ 10 = 2 ⋅ 3² ⋅ 5. Taking each common prime to the smaller of its two exponents gives 2 ⋅ 3² = 18, so gcd(252, 90) = 18.
Euclid’s algorithm is a much more efficient way to find the gcd of two numbers than prime factorization. It is based on the fact (proven below) that gcd(a, b) = gcd(b, a mod b). So, for example,
gcd(252, 90) = gcd(90, 72) = gcd(72, 18) = gcd(18, 0) = 18.
It’s easy to write Euclid’s algorithm in Pascal:
function gcd(a: integer; b: integer): integer;
begin
  if b = 0 then
    gcd := a
  else
    gcd := gcd(b, a mod b);
end;
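For example, with the numbers from above (a small sketch):

```pascal
begin
  writeln(gcd(252, 90));   // writes 18
  writeln(gcd(90, 252));   // also writes 18: the first recursive call swaps the arguments
end.
```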
To show that this works, we must prove that gcd(a, b) = gcd(b, a mod b).
Suppose that d | b and d | (a mod b). We know that a is a linear combination of b and (a mod b) by the division theorem above. So d | a.
Now, conversely, suppose that d | a and d | b. We can rewrite the division theorem as
a mod b = a – (a div b) * b
This shows that (a mod b) is a linear combination of a and b. So d | (a mod b).
We have shown that every divisor of b and (a mod b) is also a divisor of a and b. Conversely, every divisor of a and b is also a divisor of b and (a mod b). So the common divisors of a and b are the same as the common divisors of b and (a mod b). It follows immediately that gcd(a, b) = gcd(b, a mod b).
What is the running time T(a) of the above function gcd(a, b), where a > b? First note that if a > b, then a mod b < a / 2. For if a / 2 < b < a, then a div b = 1, so a mod b = a – b < a – a / 2 = a / 2. Or if 0 < b <= a / 2, then a mod b <= b – 1 < a / 2.
Suppose that Euclid’s algorithm run on gcd(a, b) recurses for at least two iterations. Then it computes
gcd(a, b) = gcd(b, a mod b) = gcd(a mod b, ...) = …
In two iterations the first argument has dropped from a to (a mod b) < a / 2. So after every two iterations of the algorithm, the first argument will be reduced by at least a factor of 2. This shows that the algorithm cannot iterate more than 2 log₂(a) times. It follows that T(a) = O(log a).
To find the gcd of several numbers, we can invoke Euclid’s algorithm more than once. The gcd function is associative, so we can compute either
gcd(gcd(a, b), c)
or
gcd(a, gcd(b, c))
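Since gcd is associative, we can also fold it across an array of numbers. A sketch (the helper gcdAll is our own name, not a standard function):

```pascal
function gcdAll(const a: array of integer): integer;
var
  d, i: integer;
begin
  d := a[0];
  for i := 1 to high(a) do
    d := gcd(d, a[i]);   // associativity means the grouping doesn't matter
  gcdAll := d;
end;
```

For example, gcdAll([252, 90, 12]) computes gcd(gcd(252, 90), 12) = gcd(18, 12) = 6.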
The least common multiple of integers a and b, written lcm(a, b), is the smallest positive integer m such that a | m and b | m.
We can compute the least common multiple by factoring a and b into prime powers, just like with the greatest common divisor. But there is an easier way. For all a and b,
lcm(a, b) = a ⋅ b / gcd(a, b)
To compute the lcm of several numbers, we can use the above formula more than once:
lcm(lcm(a, b), c)
or
lcm(a, lcm(b, c))
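In code it is better to divide before multiplying, so that the intermediate product does not overflow for large arguments. A sketch (the function name lcm is our own):

```pascal
function lcm(a, b: integer): integer;
begin
  // gcd(a, b) divides a, so a div gcd(a, b) is exact;
  // dividing first keeps the intermediate value small
  lcm := (a div gcd(a, b)) * b;
end;
```

For example, lcm(4, 6) = (4 div 2) ⋅ 6 = 12, and lcm(lcm(4, 6), 10) = lcm(12, 10) = 60.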