Lecture Notes by Anthony Zhang.

\[ \newcommand{\set}[1]{\left\{ #1 \right\}} \newcommand{\tup}[1]{\left\langle #1 \right\rangle} \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} \newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\left\lceil#1 \right\rceil} \newcommand{\mb}[1]{\mathbb{#1}} \newcommand{\rem}{\operatorname{rem}} \newcommand{\sign}{\operatorname{sign}} \newcommand{\imag}{\boldsymbol{i}} \newcommand{\dee}{\mathop{}\!\mathrm{d}} \newcommand{\lH}{\overset{\text{l'H}}{=}} \newcommand{\evalat}[1]{\left.\left(#1\right)\right|} \newcommand{\sech}{\operatorname{sech}} \newcommand{\spn}{\operatorname{Span}} \newcommand{\proj}{\operatorname{proj}} \newcommand{\prp}{\operatorname{perp}} \newcommand{\refl}{\operatorname{refl}} \newcommand{\magn}[1]{\left\lVert #1 \right\rVert} \newcommand{\rank}{\operatorname{rank}} \newcommand{\sys}[2]{\left[ #1 \mid #2\hskip2pt \right]} \newcommand{\range}{\operatorname{Range}} \newcommand{\adj}{\operatorname{adj}} \newcommand{\cof}{\operatorname{cof}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\formlp}{\operatorname{Form}(\mathcal{L}^P)} \]


Algorithm Design and Data Abstraction.

Instructor: Dave Tompkins
Section 003

Students are expected to check the course website at least once a day.

6/1/14 - Tutorial

This course covers Racket for the first part, and then C and abstract concepts for the rest.


The clicker ID for the room is CB, but the prof just said he'll use AA.

Assignments usually due every week Wednesday at noon.

If you don't want to use the clickers, enter your student number as the clicker ID and you can pass assignment 0.

Every clicker question is out of 2, with the best 75% of responses counted. Each tutorial attended increases the final grade by 0.1%, as part of a participation grade capped at 5%.

Assignments are to be submitted to Marmoset. MarkUs is used too for viewing comments, but students submit to Marmoset. There is less hand marking in this course and more automated testing.

Assignments must be done individually. Do not share or discuss code, not even the strategies.

We are using C with Clang/LLVM (in the RunC environment) and Racket (full Racket, not the teaching languages). Mostly C, though.

RunC works with C and Racket, helps testing, and offers improved error messages.

Lab hours are Tuesdays from 10am to 6pm in MC3003, where help is available for setting up the environment if needed.

Design Recipes

Design recipes help design new functions from scratch, and aid communication by providing documentation.

Because we are now apparently familiar with designing functions, we focus on documentation.

The new design recipe for this course is now:

You are also expected to test your code yourself, though tests are not required as part of submissions.

Main topics

The most common programming paradigms are functional, imperative, and object oriented.

Programming involves a lot of design choices and tradeoffs. Each decision has to be made based on the situation and the requirements.

An abstract data type is a data type with certain properties, where we don't really care how it works. An example would be a dictionary/mapping - we only care about lookup, insert, and remove, not whether it uses association lists or binary trees or anything else.


Racket, by default, outputs the resulting values of the top-level expressions to the interactions window.

But in the real world, we often want to do explicit input and output - keyboard, mouse, screen, speakers.

> "hello, world"
"hello, world"
> (printf "hello, world")
hello, world

In this course we are required to use printf rather than top-level expressions.

There is also the related function format that accepts values in the same form as printf, but rather than printing out the formatted value to standard output, produces the value as a string.

In CS135 the programs written were not interactive - the sessions in the interactions window were not part of the programs.

printf does not insert the newline at the end of the line:

> (printf "a") (printf "a")
> (printf "a\n") (printf "a")

\n here means a newline. To make a literal backslash, use two backslashes: \\.

What if we need to print a number? The first parameter of printf must be a string. We can use a placeholder:

> (printf "the number is ~a, though the others are ~a and ~a, plus ~a.\n" 4 'test123 "bla" '(1 2 3))
the number is 4, though the others are test123 and bla, plus (1 2 3).

printf is available only in full Racket. To print out a literal tilde (~), use two tildes (~~).

In Racket, we can do input by using (read). This lets the user enter a value, and (read) evaluates to the value entered:

> (printf "hello, ~a" (read))
% Anthony
hello, Anthony

Functional vs. Imperative vs. Object Oriented

A programming paradigm is an approach or philosophy to programming.

The functional programming paradigm (CS135) is to only use new values that stay constant. It is a mathematical way of computing: we produce new values rather than changing old ones. Racket is a multi-paradigm language, but is primarily used as a functional language.

The imperative programming paradigm (CS136) is to be able to modify state. We compute things based on changing values that we already have. In imperative programs, we give a sequence of statements that tell the computer what to do. C is a multi-paradigm language, but is primarily used as an imperative language.

The object oriented programming paradigm (CS246) is to have objects that we can send messages to and receive responses from.

This course focuses on imperative programming.


State is the value of some data/information at a moment in time. It is related to memory.

In Racket, the state consists of the definitions that have been made, but once defined, those values were never changed. In contrast, C allows existing state to be changed.

Actually, we can still change state in Racket, thanks to the special form (set! CONSTANT VALUE) ("set-bang").

> (define n 5)
> n
> (set! n "six")
> n

The constant then behaves as if it had been defined with the new value in the first place:

> (define n 5)
> (define (f) (add1 n))
> (f)
> (set! n 1)
> (f)

The ! suffix in the function name means that it changes state, and we should be careful when using it.

When we change state, we mutate it; manipulation of state is called mutation. Programmers working in imperative languages may not use the term "mutation", and tend to say "immutable" rather than "constant".

Now we will call constants in Racket "variables", since we can use set!. When the difference matters, we use "constants" for things that don't change, and "mutable variables" for things that can.

Section 2 is covered in Monday's tutorial.


In larger projects, we need to break our code into pieces to make it easier to work on. Modularization is the practice of doing so in such a way as to group related functionality together into modules.

A good real-world example of modularization is a battery - a self-contained package that provides electrical power. It has all three qualities of a good module:

Modularization is also called separation of concerns.

A program that allows students to order school supplies online might have the following modules:

Consider a tax processing module. The tax code is rather complex, so we want the module to have the three properties:

Modules are available in Racket too:

;; frobnicate.rkt

(provide frobnicates?) ; make `frobnicates?` visible in the program scope

;; frobnicates?: Int -> Bool
;; Determines whether `k` frobnicates the discombobulator
(define (frobnicates? k) (= k 2))

;; some_program.rkt

(require "frobnicate.rkt")

(frobnicates? 2)

provide and require are special forms that make variables visible externally to other programs and import them from other modules, respectively.

In other words, provide exports the specified variables from the module, and require imports the exports from a module.

All Racket files are already modules, but by default they have nothing provided, so the requiring program can't access anything in them.

Now we have three levels of scope (for this course):


Note that require will basically run the program too, so if there are any top-level values, they will get printed out.

Modules should not make unnecessary output. This might confuse the automated marking systems and cause assignment tests to fail:

;; this is a module
"test" ; unnecessary output

In modules, make sure to only put definitions, and not values.

The module interface is the list of functions the module provides, including the contract and purpose (documentation). It is everything a client would need to use the module.

The module implementation is the definitions of the functions in the module - the code itself.

The interface of an AA battery is a cylindrical container with two terminals at the ends. The implementation is nickel metal-hydride or something else.

A module interface should have some basic elements:

In this course function contracts have been extended from those in CS135 with preconditions and postconditions:

;; sum-first: Int -> Int
;;     PRE: k >= 1
;;     POST: produce an Int >= 1
;; (sum-first k) produces the sum of the integers 1..k
(define (sum-first k)
    (if (= k 1) 1 (+ k (sum-first (sub1 k)))))

Note that preconditions and postconditions are part of the contract, not their own element. The preconditions are the conditions that must be true before calling the function, and the postconditions are the conditions that will be true after the function is called - a specification of what the function produces.

More complex preconditions and postconditions should be written as English sentences describing what the data should look like.

This allows the contract to convey more information about what is allowed: following the preconditions and obeying the contract while calling the function results in the postconditions being met. These conditions are both logical statements.

A function that takes no parameters has a contract that starts off something like FUNCTION_NAME: -> RETURN_VALUE, or alternatively, FUNCTION_NAME: Void -> RETURN_VALUE.

A precondition that is always met can be written as PRE: #t. Likewise with postconditions.

In Racket, there are even built in facilities in the language that help enforce the preconditions and postconditions. However, we will not be using them - the conditions in our programs will be informal.

13/1/14 - Tutorial

Review of differences between the teaching languages used in CS135 and the full version of Racket:

The scope of an identifier is the region in the code where it is available for use - the locations where it is visible. local creates a new scope; definitions made in a local are visible only within that local expression.

Scopes of top-level definitions are global, and scopes of function parameters and local definitions are local.

The scope of a definition usually starts where it is defined, and ends at the end of the block where it resides in, excluding any nested local scopes where it is shadowed by a local definition in the nested scope:

; c is not available here
(define c 3)
; c is available here
(local [(define c 4)]
    c) ; this is a different c, and the original c is not available here
; c is available here

Definitions made inside a function body are implicitly local:

(define (f x)
    (define c 10) ; this is a local definition
    (add1 c))

The above is equivalent to:

(define (f x)
    (local [(define c 10)] ; this is a local definition
        (add1 c)))

There is an implicit local in the body of every function, so all definitions inside a function body are assumed to be local to the function.


Racket modules should have their interfaces at the top of the file.

In the implementation, we do not need to rewrite the documentation for the stuff that is provided.

Functions that are not provided should have their contract and purposes included with them.

The top of the file (the interface) should be everything the client needs to see, while the rest is for the developer only.

Example module (sum.rkt):

;; A module for summing numbers [description of module] ; description should appear on the first line

(provide sum-first sum-squares) ;; [list of functions]

;; sum-first: Int -> Int
;;     PRE: k >= 1
;;     POST: produce an Int >= 1
;; (sum-first k) produces the sum of the integers 1..k

;; sum-squares: ...

;;;;;;;;;; IMPLEMENTATION ;;;;;;;;;;

;; see interface above [no further info required] ; functions that are provided should not have their documentation rewritten
(define (sum-first n) ...)

;; private-helper: Int -> Int
;; [contract & purpose for private helper function]
(define (private-helper p) ...)

Locally defined functions should also have their contracts and purposes.

We cannot redefine something with the same identifier. To simulate redefinition, we can use local scopes, where an inner definition shadows an outer one.


Racket has no check-expect, and we are to avoid using top-level expressions. How do we do testing?

A good solution is to use testing modules. For example, for the sum.rkt module, we might write test-sum.rkt:

;; this is a simple testing module for sum.rkt

(require "sum.rkt")

; Each of the following should produce #t

(equal? (sum-first 1) 1)
(equal? (sum-first 2) 3)
(equal? (sum-first 3) 6)
(equal? (sum-first 10) 55)
(equal? (sum-first 99) 4950)

To test private functions, it might be useful to provide testing functions, like test-sum for the sum module.

Interface Design

The goals of module interface design are:

;wip: it seems like the Boolean in CS135 is changed to Bool

Abstract Data Types

On a related topic, an abstract data type (ADT) is simply a module that:

A collection ADT is one that can store an arbitrary number of items, and is generalized to be able to store many different types of items.

An example of this is a dictionary, which maps keys to values.

In some contexts, "ADT" refers to what we call a "collection ADT" - those contexts have no notion of non-collection ADTs.

Not all modules store data, and not all data storage modules are ADTs.


Security and flexibility are good. But so is transparency - exposing various details of the implementation to empower the client to do more with the module.

Interface design involves a lot of tradeoffs between things like the amount of information hiding used in order to make the best possible interface for a given set of requirements.

Functional C

We now move to C programming. We will only be using the features that also exist in Racket for now to make the transition easier.

C was designed to be a portable systems programming language that is easy to translate into machine code.

It is designed to give low-level access to computer memory, while still being easy to port to other machines.

We use the C99 standard in this course, finalized in 1999.

Just as Racket can be used in an imperative way, C can be programmed in a functional way, using the functional programming paradigm.

Comments in Racket:

; single line comment

Comments in C:

// single line comment


Racket constant: (define n 42)
C constant: const int n = 42;

const indicates the variable is immutable, and int indicates that the variable is an integer. The = 42 is the initializer. The whole thing together is a definition.

All definitions in C end with a semicolon.

Identifiers must start with a letter or underscore, and can contain only letters, numbers, and underscores.

We will use underscore_case for our variable names.

Racket uses dynamic typing - the type of an identifier is determined while the program is running, which allows us to change its type at runtime. C uses static typing - the type of an identifier is determined before the program runs, which means that the type of an identifier cannot be changed while the program is running.

For example, in C, we cannot define an integer constant, and later set it to a string. However, this is entirely possible in Racket.

Because types cannot be changed and are always explicit in the code, we no longer need to check the input types like we did in Racket (integer?, false?, etc.).

In C, all constants (immutable variables) must be initialized. All mutable variables should be initialized.


C uses infix notation for expressions, similar to math, and expression operators all have their own precedences (conceptually similar to order of operations in math):

const int k = 3 + 4 * 2; // `k` is now 11
const int l = (3 + 4) * 2; // `l` is now 14

Note that parentheses now serve only to group things together, and otherwise have no syntactic significance.

If an expression is ambiguous or confusing, use parentheses to make it clearer.


In C, operators such as addition and division are distinct from functions in syntax and functionality. The order of operations is similar to those in math.

Operators are distinct language constructs with their own rules for syntax. They are described in detail in the textbook. Operators may have different associativities and precedences.

C has almost 50 operators in total, many of which work on data.

In C, the a / b operator on integers divides two numbers, always rounding towards 0. It behaves similarly to quotient in Racket.

The a % b operator in C behaves the same as the remainder function in Racket. With negative numbers, C can give confusing results - avoid using negative numbers with this operator for this reason.

In Racket, we apply a function, which consumes arguments and produces a value.

In C, we call a function, which is passed arguments and returns a value.



;; my-sqr: Int -> Nat
;;     PRE: #t
;;     POST: produce square of `x`
;; (my-sqr x) produces the square of `x`
(define (my-sqr x)
    (* x x))


// my_sqr(x) squares `x`
//     PRE:  true
//     POST: return value >= 0
int my_sqr(int x) {
    return x * x;
}

We no longer need an explicit contract because the types are already specified in the function declaration.

The curly braces indicate the function block, the body of the function.

The type of every value must be specified explicitly. The return statement is mandatory for functions that have non-void return types.

As a result, errors like (my-sqr "a") in Racket give a runtime error, while in C the equivalent is caught by the compiler - type errors cannot happen at runtime (though precondition and postcondition violations still can).

We call functions in a way similar to defining them:

// my_add(x, y) adds `x` and `y`
//     PRE:  true
//     POST: true
int my_add(int x, int y) {
    return x + y;
}

// my_num() returns 42
//     PRE:  true
//     POST: return value == 42
int my_num(void) { // `void` here means there are no parameters
    return my_add(40, 2);
}

In C, our function documentation starts with the purpose and doesn't need the contract (due to static typing). The preconditions and postconditions are still necessary.

Entry Point

The entry point is the place where we start running the program.

In Racket, the entry point of the program is the top of the file being run.

In C, the entry point of the program is the body of the main function. Every C program must have a main function.

main has no parameters (for now) and an int return type to indicate if the program was successful (zero) or not (non-zero).

// no documentation is required!
int main(void) {
    // program starts here
    // this is the only typed function where we can omit the return statement, and it will assume 0 (success)
}

Modules don't need a main function, because they are not meant to have entry points.

The void means that the function does not accept any parameters, and is different from just not using anything (int main()).


C has no built-in functions. However, it has a lot of modules that are almost always available.

First C program:

// first C program

#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");
}

In C, printf uses different placeholders for each type, and the placeholder specifier (~ in Racket) is %. A literal percent sign is %%. As with Racket, the number of extra arguments needs to match the number of placeholders:

printf("I am 100%% sure that %d + %d = %d", 2, 2, 5);

printf is found in a lot of computer languages, and it has much more functionality than is covered here. For example:

printf("%04d", 42); // prints 0042, length 4 with "0" padding


In C, top-level expressions are invalid, except as initializers that are constant expressions:

// top level

const int a = 3 * 3; // valid
const int b = my_sqr(4); // invalid - initializer is not a constant expression
3 * 3; // invalid
my_sqr(4); // invalid

We can still use expressions inside definitions inside of functions: int main(void) { const int a = my_sqr(4); }.


In C, False is represented by 0, and any non-zero value is considered True.

Equality testing in C is done using a == b, resulting in 1 or 0. The a = b operator is for assignment only, and accidentally using it instead of == is one of the most common C errors.

Non-equality testing in C is done using a != b, resulting in 1 or 0. Negation is done using !a.

Logical AND is done using a && b, and logical OR is done using a || b, both returning 1 or 0. Like in Racket, they short-circuit with the same rules. Remember to use && and || rather than & and |, which are bitwise operators and do different things.

The <, >, <=, >= operators are self-explanatory.

Note that > has a higher precedence than ==, so 1 == 3 > 0 is 1 == (3 > 0), which could be confusing. Always use parentheses when there is the possibility of confusion.

The a ? b : c operator is known as the ternary operator (conditional operator). It works like (if a b c) in Racket: if a is true, it evaluates and returns b, and otherwise c. It short-circuits, so only one of b or c is evaluated: x >= 0 ? x : -x results in the absolute value of x.

Like in Racket, we must define a function before we can call it. However, C needs to know the type signature of the function before being able to compile it.

As a result, in C, a function must be declared before any call to it. The compiler scans the code from top to bottom, and whenever it reaches a function call, it needs to know that function's type signature before proceeding.

20/1/14 - Tutorial

Dictionary interface:

(struct dict (alist))

;; dict-new: -> Dict
(define (dict-new)
    (dict empty))

;; dict-lookup: Dict Int -> Any
(define (dict-lookup d k)
    (define (al-search al)
        (cond [(empty? al) #f]
              [(= k (first (first al))) (second (first al))]
              [else (al-search (rest al))]))
    (al-search (dict-alist d)))

;; dict-add: Dict Int Any -> Dict
(define (dict-add d k v)
    (dict (cons (list k v) (dict-alist d))))

;; dict-remove: Dict Int -> Dict
(define (dict-remove d k)
    (define (al-remove al)
        (cond [(empty? al) empty]
              [(= k (first (first al))) (al-remove (rest al))]
              [else (cons (first al) (al-remove (rest al)))]))
    (dict (al-remove (dict-alist d))))

;; dict-size: Dict -> Nat
(define (dict-size d)
    (length (dict-alist d)))

Consider the following Racket function:

;; gcd: Nat Nat -> Nat
;;     PRE: #t
;;     POST: produce the GCD of `a` and `b`
;; (gcd a b) calculates the GCD of `a` and `b` using the Euclidean algorithm
(define (gcd a b)
    (if (= b 0) a (gcd b (remainder a b))))

We can translate this into C:

// gcd(a, b) finds the GCD of `a` and `b` using the Euclidean algorithm
//     PRE:  true
//     POST: return value is the GCD of `a` and `b`
int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}


What about mutual recursion in C? A function must always be defined before it is called, yet this is impossible in mutual recursion:

int a(int x) {
    return b(x - 1);
}

int b(int x) {
    return a(x - 1);
}

There is no possible ordering of functions that allows this to be valid.


Here, we must use a forward declaration:

int b(int x); // note that we can also omit the parameter name, like `int b(int)`, but this is not recommended

int a(int x) {
    return b(x - 1);
}

int b(int x) {
    return a(x - 1);
}

This is a function declaration (something that specifies the type of an identifier). It gives the types of the parameters and return values.

The header is present, but the body code block is omitted.

The actual functions with code blocks are definitions (something that specifies the information necessary to create the identifier). Every definition also acts as a declaration.

We can also declare variables before defining them, using the extern keyword: extern const int x;. Declarations cannot be initialized.

Identifiers can be declared as many times as needed, but can only be defined once. Declaring something is the same thing as saying that it will be defined at some point.

We can use this to implement modules:

// sum.c
// define `p` and `sum`
const int p = 2;
int sum(int k) {
    return (k <= 1) ? 1 : k + sum(k - 1);
}

// client.c
// declare `p` and `sum` - specify that they are defined elsewhere
extern const int p;
int sum(int k);
int double_sum(int n) {
    return p * sum(n);
}

Note that we did not actually define the functions in the client code. They are simply declared to exist somewhere, and then we add them in at the linking stage.


const int a = 5; // GLOBAL/PROGRAM SCOPE - at the top level of the program
static const int s = 17; // MODULE SCOPE - `static` keyword makes declaration in the module scope

int f(int b) { // GLOBAL/PROGRAM SCOPE - at top level of the program
    // `b` is a parameter, so it has LOCAL SCOPE
    const int c = 3; // LOCAL SCOPE - inside a block
    { // a bare block creates a nested local scope
        const int d = 11; // LOCAL SCOPE - inside a local scope inside the function
    }
    // `d` is no longer available here
    return b + c;
}

static int g(int e) { // MODULE SCOPE - `static` keyword makes declaration in the module scope
    return e;
}

// `b` and `c` are no longer available here

Note that a block created using { ... } creates a new local scope inside. This works similarly to Racket's local.

Note that static can be used to make a variable or function visible only in the current module.

By default, all functions and variables exist in the program scope, and other modules (C source files) can access them.

The scope rules in C are roughly the same as those in Racket - the innermost binding occurence shadows all the outer ones.


Modules, as stated earlier, have an interface and an implementation. In C, we can put the interface in one file, and the implementation in another.

The interface goes in a header file (x.h) and the implementation in a source file (x.c). They should have the same name for RunC to know that they are related.

The source file should always #include the header file, because the header file might define constants or structures or similar that we want to use in the implementation, and because it allows us to catch differences between the interface and implementation.

The #include statement is a preprocessor directive. The preprocessor is the first stage of compilation, an informal part of the C standard, and works on the source code as text. Preprocessor directives are conventionally written starting at the first column of a line.

The preprocessor does not know anything about C and works purely on text. In fact, it can be used with other programming languages. Preprocessor directives are special instructions that modify the source code text in some way.

For example, #define X Y performs a string replacement, replacing all instances of X with Y.

When we #include a file, we basically paste its contents into the current file at that location.

Now in client code, when we need a module, we just need to #include its header file to include its interface. Note that when we include the header file, we no longer need to have the forward declarations at the top of the client code. This is, in practice, similar to what Racket's require does, but they are conceptually very different - here, we are copy/pasting the interface rather than running a file.

The interface documentation for functions that aren't provided to clients (helper functions) should not go in the header file. Instead, it goes directly above the function definition.

If asked to write a module, but only allowed to submit a C source file (*.c), put the interface documentation right above its definition.

Otherwise, for modules, above the definition of the function, put a comment to the effect of // see header file.

If asked to write a C program, put the interface documentation right above its definition. Note that the main function does not require any interface documentation.


C has basically no useful built-in functionality - a lot of functionality is provided through the library. The standard modules can be included by using #include <NAME>, where NAME is the filename of the module header. Some examples would be #include <stdio.h> (standard I/O) and #include <math.h> (math functions).

The < and > simply change the place the preprocessor searches to find the specified file.

The stdbool.h header provides the Boolean type bool and the constants true and false (for example, const bool x = true;). These are still just the integers 0 and 1, but look cleaner and are clearer.

If we need to use Boolean types in the interface, then we can simply put the #include <stdbool.h> into the header file.

C has no symbols, but we can simulate them by defining constants with unique integer values, and using those constants as we would use symbols.


The assert.h module provides assertion functionality, similar to check-expect in Racket. For example:

#include <assert.h>

int main(void) {
    assert(1 == 1);
    return 0;

The header provides assert(x) (a macro, though it looks like a function call), which terminates the program if given a false value, and does nothing otherwise.

The advantage of using this module is that we can compile our programs in debug and release modes. In debug mode, the program will catch any assertion errors and report them for easy fixing. In release mode, the function can be made to do nothing at all, which improves performance.

We do testing with a separate testing module. This module does not really need a header, and only really needs a main function and to include the assert library and the module we need to test. In the main function, we have our assertions:

// test-thing.c: testing module for thing.h
#include <assert.h>
#include "thing.h"
int main(void) {
    assert(do_something(2) == 5);
    assert(do_it(3) == 1);
}

Whenever possible, we should assert the preconditions of our functions. For example, if a function accepts only positive values for its parameter a, then it should have assert(a > 0); near the beginning.


struct posn {
    int x;
    int y;
};

This is a structure declaration. Unlike Racket, it does not define any functions to help operate on these structures. The semicolon at the end is required.

We can define a new instance of this structure by using it as a type: const struct posn p = {2, 3};. Note that the type is struct posn, as one atomic unit.

The initializer can also be written as {.y = 3, .x = 2}. Using this syntax it is not necessary to remember the order of the fields.

The initializer syntax is valid only in these forms. The curly braces cannot be used in expressions or other places.

All fields that are not initialized are automatically set to 0.

We can access elements of this structure using the . operator: p.x and p.y access one of each field in the structure.

We cannot compare structures using ==, or print them out with printf. Structures must be compared or printed field-by-field, and it may be useful to write helper functions to do this.

Structure declarations are in the module scope by default, which makes them opaque to client code; if we put the declaration in the interface (header) file instead, the structure is transparent.


C Memory Model

The most common number systems in use today are binary, decimal, and hexadecimal.

Hexadecimal has the special property that each hex digit represents exactly 4 binary digits, so converting between the two is a simple digit-by-digit substitution. This is because \(2^4 = 16\), and hexadecimal has 16 digits.

In C, hex numbers are written as 0x1234. Without the 0x prefix, the number is assumed to be decimal.

Computers use bits internally, which are grouped into groups of 8 called bytes. Because a byte has 8 bits, it has \(2^8 = 256\) possible states.

The size of a kilobyte can be 1000 or 1024 bytes, depending on the context. In scientific literature, the kilobyte (KB) is 1000 bytes, and a kibibyte (KiB) is 1024. Likewise with megabytes, gigabytes, etc. The difference grows with the prefix: a gibibyte (\(2^{30} = 1073741824\) bytes) is about 7.4% larger than a gigabyte (\(10^9\) bytes).

Most computer architectures have primary memory (RAM) and secondary memory (HDD, SSD, flash drives, optical disks, etc.). When we say memory, we usually refer to primary memory.

Primary memory is very fast, smaller, higher cost, and volatile. Contrast this with secondary storage, which is slower, larger, cheaper, and persistent.

Programs can only really run in primary memory - programs need to be copied into primary memory before executing. In this course, we will only need to think in terms of primary memory.

The smallest unit of memory we can access is a byte. The location of the byte in memory is the address of the byte. For example, with 1 MB of memory, the address of the first byte is 0 and the last is \(2^{20} - 1\). We usually represent addresses in hex.

In this course, we will now need to be aware of the memory our program is using. In CS135 we had stepping rules. Now we introduce the C memory model.

When C gets to a variable definition, it reserves the storage for that variable, keeps track of the address of that storage location to refer to it later, and stores the initial value of the variable in that location. Then, when we reference a variable, we retrieve or set it from the address we determined earlier. Note that variable declarations do not do this.

In the CS135 model, variables are basically aliases for values. Now, we consider them as names for locations where a value is stored. When a variable appears in an expression, C obtains the contents of that location.

We can determine how much memory a variable uses by using the sizeof(X) operator. This looks like a function but is actually an operator. Usually sizeof(int) is 4. sizeof also works on variables and literals. We can assume integers are 4 bytes in this course.

If we define an int, C finds 4 consecutive bytes of memory to store it, and then keeps track of the first address. Then, it updates those 4 bytes to the initial value.

Note that because integers only have 4 bytes, we can only represent the integers from \(-2^{31}\) to \(2^{31} - 1\).

If we #include <limits.h>, INT_MIN and INT_MAX are defined to these limiting values.

If we use unsigned int, we can represent integers from 0 to \(2^{32} - 1\).

We should always try to work within these limits. If we work outside of these ranges, an overflow occurs. For example, if we add 1 to \(2^{31} - 1\), we wrap around to \(-2^{31}\), like in modular arithmetic.

In C, we can also have variables that use only 1 byte. The char type can store any integer from -128 to 127. They are not very useful for calculations, but we often use them to store characters, like ASCII character codes.

In ASCII, upper case characters start with 'A' at 65, lower case starts with 'a' at 97, and a space is 32. 0 to 31 are the control characters. For example, 7 is a bell and 10 is a line feed (newline or \n).

On Windows, the end of line marker is a carriage return followed by a line feed. On Linux the end of line marker is just a line feed. This can cause confusion when working on files made on different operating systems.

Char types are internally just numbers:

const char a = 'a'; // single quotes mean char types
const char other_a = 97; // this is the same thing as the above

printf("%d %c", a, a); // %d prints the integer value, while %c prints the ASCII character - this prints "97 a"

It is important to note that single quotes ('x') denote characters and double quotes ("x") denote strings.

char to_lowercase(char c) {
    return (c >= 'A' && c <= 'Z') ? c - 'A' + 'a' : c;
}

27/1/14 - Tutorial

Didn't attend this one, but it was about translating Racket into C - pretty trivial stuff.


Note that a structure declaration does not reserve any memory. Memory is reserved only when a structure type variable is defined.

The sizeof a structure is at least the sum of the sizeof every field. However, it may be more due to padding or alignment - extra space used to improve efficiency. The exact size of a structure is not deterministic and can depend on a lot of things.

Floating point numbers in C are inexact numbers. They have a certain amount of precision and operations on these numbers do not necessarily preserve all of the precision. They work a lot like inexact numbers in Racket.

Floating point values can lose precision in a lot of operations, including addition, subtraction, and many others. It is similar to using significant digits: everything after a certain number of significant digits gets cut off.

Floating point numbers can result in bugs and precision errors, and are a common source of issues.

For example, the square root of 2 actually results in a number that is close to, but not exactly, the square root of 2.

Floating point numbers are called floating point because they have a number of bits (24 on our 32-bit architecture) called the mantissa, and an 8-bit exponent, which represents the position of the point. Because the exponent lets this point move around, the point is said to "float", which gives us the name.

We can also use a double type, which is usually larger than a float, and therefore has a larger mantissa. Therefore, it can represent numbers with more precision.

The placeholder for a float in printf format strings is %f. For example, printf("%f", 1.05).

Integers are always exact, and we should always try to use integers when possible. For example, to represent money, instead of using a float for the number of dollars, we can use an integer for the number of cents.


We model our memory with 5 regions known as sections, ordered from lower addresses to higher addresses:

  1. Code - the machine code of the program.
  2. Read-only data - constant global data.
  3. Global data - mutable global (and static) data.
  4. Heap - dynamically allocated memory.
  5. Stack - the call stack.

Computer hardware groups these sections into segments. If we try to access memory outside of the segment we're currently in without explicitly allowing it, then we get a segmentation fault or segfault.

There is also another section, temporary storage, that stores the intermediate results of expressions.

All global variables are initialized before main is called. First the memory for global variables is reserved and initialized, then the main function is called. The sizes and initial values of these sections are determined at compile time.

The sections can have different sizes depending on the program, but each program has a fixed size for each section.

As a result, once main is called, no more global variables can be defined.

Running Programs:

  1. For each module:
      1. Verify all the functions and variables are defined and only defined once.
      2. Create read-only and global data sections for global variables.
      3. Convert the program to machine code.
  2. Perform RunC testing (more in section 7).
  3. Run main.

Control Flow

In Racket, our semantic model is based on substitution rules - we replaced something with other things based on patterns that we looked for. In C, we use control flow to model our programs.

Call Stack

While running our program, we keep track of the program location, which is where we currently are in the code.

However, when we call a function, we need to keep track of the new program location (the function we called) and the place we want to go back to when we use return (right after the place we called the function). The place we want to go back to is called the return address.

If \(f\) calls \(g\), and \(g\) calls \(h\), we need to keep track of where we were in \(f\) when we called \(g\), and where we were in \(g\) when we called \(h\), and where we currently are in \(h\).

In this course, we represent return addresses with CALLING_FUNCTION:LINE_NUMBER, except for main. The return address for the main function is OS, for the operating system.

We can store these chunks of context information in a stack. This stack we call the call stack. Every time a function is called, we push an entry onto the stack. Every time a function returns, we pop an entry off the top of the stack and go back to the return address specified in the entry.

The entries stored on the stack are known as stack frames, or frames of reference. Every stack frame contains the return address, all currently existing local variables, and all arguments to the function. Note that the return value goes into the temporary storage section.

Local variables include variables in block scopes:

void f(void) {
    const int a = 5; // this is one local variable
    {
        const int a = 6; // this is another local variable, in an inner block scope
    }
}

A function must be called with simple values, like with our Racket stepping rules. Calling a function makes a copy of all these simple values and puts them into the new stack frame. This is called a call-by-value (pass-by-value) calling convention.

Recall that global variables have their memory reserved before the program starts. In contrast, local variables have their memory reserved on the stack, inside a stack frame. They are created when the containing function is called.

Because local variables are created automatically when a function is called, they are sometimes called automatic variables.

In hardware, the program location is called the program counter. It is the address of the machine code instruction that is currently executing.

Programming has a duality between what we want to do, and what it is actually doing. Programming is the task of making these two match up, either by changing what you want, or sitting down and writing code.

Using this new control flow model, self-recursion is simply jumping over to the beginning of the function, but with a new stack frame. The only unusual part is that the return address is inside the same function.

The call stack can be found in the stack section of the memory model seen earlier. Note that it starts at the highest available address with the stack frame for main, and new stack frames are created at lower addresses - the stack grows downwards.

If the stack gets too big, it will grow all the way into the heap section. This might cause a segmentation fault, or worse, corrupt data and cause undefined behaviour. This is called a stack overflow, and is usually caused by infinite or very deep recursion.



The address of an identifier is the location in memory where its first byte occurs. We can obtain this address using the & operator:

#include <stdio.h>
#include <stdlib.h>

const int f = 2;
int g = 1;
int main(void) {
    const int h = 6;
    int *i = malloc(sizeof(int));
    printf("CODE: %p, READ-ONLY: %p, GLOBAL DATA: %p, STACK: %p, HEAP: %p", &main, &f, &g, &h, i);
}

The pointer placeholder is "%p" in C. It prints out the pointer address in hex.

This allows us to get low-level access to the memory and work with pointers - addresses in memory.

A pointer is a type in C that stores an address in memory:

const int i = 42;
const int *p = &i; // `p` points to `i`
const int *q = p; // `q` now points to the same thing that `p` does - to `i`
printf("%p %p %p", &i, p, q); // these are all the same thing

The asterisk * is part of the type, and denotes a pointer type.

Every type like char or int has a corresponding pointer type char * or int *. These are read as "char pointer" (a pointer to a char) and "int pointer" (a pointer to an int).

The size of a pointer is the same as the number of bits that the computer has - a 32-bit computer will use 32-bit addresses. The sizeof a pointer is always the same size regardless of the size of the data that it points to. In this course we assume that each pointer is 4 bytes (32 bits).

The value of the pointer is the address that it points to. We represent this as p, with no asterisk or any other operator. This results in a memory address.

The indirection/dereference operator is also the asterisk, but should not be confused with the asterisk in the variable declaration. They are two different things.

Pointers also have addresses. We can obtain their address just like any other type of variable, using &p.

Now there are three different ways to look at a pointer:

// `p` is a pointer
p // the value of the pointer - the address it points to
*p // the value being pointed to by the pointer
&p // the address of the pointer itself

Note that *&i is the same thing as i. We are dereferencing the address of i, which simply results in i.

The * in a declaration belongs to the identifier. This becomes apparent when we use multiple declarations on one line: const int *a = &i, *b = &i;.

We can also make pointers to pointers, and pointers to pointers to pointers, or any level of pointing:

const int i = 42;
const int *pi = &i;
const int **ppi = &pi;
const int ***pppi = &ppi;

The null pointer is essentially a pointer to the address 0, which is reserved and means "nothing" or "invalid".

We use the null pointer to represent "nothing". This is generally used to indicate an error or a lack of a value.

In code, we represent null with NULL, which is defined in the stdlib module (as well as several others). To use it, we must first make it available using something like #include <stdlib.h>.

Usually, we will want to assert that a pointer is not null in functions that accept pointers, and state that null pointers are not accepted in preconditions. We can do this by using assert(p), since null is essentially 0.

Imperative C

In our model of C, state is a combination of the location in the code, and the contents of the memory at the moment.

If we have these two pieces of information, we can track what the program will do at any moment in the future. However, this can be difficult because the state can change quite often.

The imperative paradigm is based on the manipulation of state. Functional programming has no side effects, while imperative does.

A side effect is when an expression does something more than just producing a value. Side effects change the world - the state or something external to the program. For example, printf does not produce any value at all, and is used only for its side effect.

Racket supports some imperative features:

begin is not used very often because each function definition in Racket implicitly uses begin, just like they implicitly use local.

We list the side effects of a function in the postconditions. However, the implementation details should not be exposed in doing so.

In Racket, a function that returns nothing (like printf) actually produces #<void>. In contracts, this is represented with Void.

In C, printf actually produces the number of characters printed. This is useful for checking for printing errors, or getting the string length.

A function in C can simply not return a value, or not accept any parameters:

void f(void) { // this function neither accepts nor returns any values
    printf("Look ma, no arguments!");
    return; // this is optional for void functions
}

Before, we saw that printf("Look ma, no arguments!") is an expression. When an expression is the only thing inside a statement, the statement becomes an expression statement: printf("Look ma, no arguments!"); (note the semicolon, which denotes that the code represents a complete statement).

Other expression statements are 3 + 2; or a * 7;.

The values of expression statements are simply ignored, like those in Racket's begin. Therefore, they are really only useful for their side effects.

In C, a statement is a construct such that it gets executed, but the result is ignored. Statements cannot be used inside expressions, but expressions can be used inside statements.

3/2/14 - Tutorial

Another example of floats being horrible:

const float big = 1000000000.0;
const float small = 100.0;
const float diff = big - small;
// diff is now 999999872.0

When we compare floats, we usually want ones that are very close to each other to be considered equal. We can compare them using an epsilon (error):

const float a = 10.0/3.0;
const float b = 3.333333;

assert(a == b); // failure

const float epsilon = 0.000001;
assert((a - b < 0 ? b - a : a - b) < epsilon); // make sure we compare the absolute value of the difference

// alternatively, we can make two assertions:
assert(-epsilon < a - b);
assert(a - b < epsilon);

// or combine them into one
assert(-epsilon < a - b && a - b < epsilon);

Consider a function to check if an overflow will occur when adding two integers:

#include <limits.h>
#include "safe_overflow.h"

bool safe_add(int a, int b) {
    int abs_a = a < 0 ? -a : a;
    int abs_b = b < 0 ? -b : b;
    return ((abs_a == a && abs_b == b) || (abs_a != a && abs_b != b)) ? // check if signs are the same (only case in which overflow can occur)
        abs_a <= INT_MAX - abs_b : true;
}

Lockstep is when multiple arguments in structural recursion approach the base case at the same rate.


Terminology summary:

#include <stdio.h> // preprocessor directive
int add1(int x); // function declaration
int add1(int x) { // function definition
    // and block statement
    const int y = x + 1; // local definition
    printf("add1 called\n"); // expression statement (side effect)
    2 + 2 == 5; // expression statement (no side effect)
    return y; // control flow statement
}

The substitution rule for begin is that each argument is evaluated, and then the simplified value of the last argument is produced.

The three different types of statements are:

  1. Expression statements, which evaluate an expression and ignore the result.
  2. Compound (block) statements, which group statements together.
  3. Control flow statements, which change the program location or call stack.

Compound Statements

In C, the block { CODE_GOES_HERE } creates a compound statement - it groups a bunch of statements together and makes it appear like one statement. Blocks can also have local definitions, which are not statements.

The block is similar to Racket's begin. However rather than producing the last value, it produces no value at all, because it is a statement.

Some functions have only side effects, and don't produce a value. These have the void return type.

Some functions have no side effects, and only produce a value.

It is not useful to have a function that has neither side effects nor a return value.

In expressions, we can also use a comma operator (,) to evaluate multiple expressions but produce only the last one. For example, a == 1 ? (a = 2, 5 + 6) : 7 + 8. This is very similar in functionality to begin in Racket.

Control Flow Statements

Control flow statements are those that change the control flow - they can change the program location and call stack.

Function calls also modify control flow. They change the program location to the target and push a stack frame to the call stack.

An example of a control flow statement is return, which pops an entry off the top of the call stack and moves the program location to the return address in the popped off stack frame.

Another control flow statement is if, which takes the form if (CONDITION) STATEMENT;:


This executes STATEMENT if and only if CONDITION is truthy. STATEMENT is known as the body of the if statement, while CONDITION is the condition.

Note that STATEMENT can be a compound statement to execute multiple statements in the body of the if. Note that the parentheses around the condition are required.

We should always put braces around the body of the if, even if there is only one statement. This is a matter of style and improves readability:

if (n < 0) {
    // ...
}

Since if doesn't produce a value, it is only really useful to do control flow or side effects in the body of an if statement.

if also supports an optional else clause that specifies another statement that gets executed if CONDITION fails:

if (x < 10) {
    // ...
} else {
    // ...
}

The else always belongs to the most recently entered if: if (a) if (b) x; else y; will only do y when a is true and b is false. This is called a dangling else.

Note that we can also use another if statement as the body of the else clause:

if (x < 10) {
    // ...
} else if (x < 20) {
    // ...
} else if (x < 30) {
    // ...
} else {
    // ...
}

We can now have a use for multiple return statements in one function:

if (k < 0)
    return k;
    return -k;

Note that since the return statement will stop the rest of the function from executing, the above code is the same as the following:

if (k < 0)
    return k;
return -k;


When we declared variables earlier, we used const TYPE IDENTIFIER .... Declared without the const keyword, variables are mutable - the value of these variables can be changed as often as needed.

Even though we may now use mutation, it is still good practise to use const whenever possible. This is because const communicates to readers and the compiler that the variable is not to be modified, which helps prevent accidental mutation and improves performance.

Mutable variables can be declared without initializing them: int k;. This is generally considered bad style, however.

Global variables, if not initialized, are automatically initialized to 0. Local variables, if not initialized, can hold any value - usually whatever entries were left over in memory from old stack frames.


The substitution stepping model stops working with imperative features like set! - these forms have a different meaning when substituted.


In C, we mutate with the assignment operator (=). It takes the form of IDENTIFIER = EXPRESSION, where IDENTIFIER is a variable or structure field:

int m = 5; // this is NOT an assignment, it is part of the initialization syntax
// here, m is 5
m = 28;
// here, m is 28
m = 3;
// here, m is 3

Note that the = in the first line is part of the initialization syntax, not an assignment. They often behave the same, but the difference becomes clear with things like syntax for structures, strings, and arrays:

// we can also use structure fields
struct posn n = {1, 2};
n.x = 4; // this is how we set structure fields

The assignment operator takes the value on the right side of the = and puts it in the address specified on the left side.

So in x = x + 1, x is treated as a value on the right side, and as an address on the left side.

As a result, the left hand side must be addressable - like variables, structure fields, and array elements. These values are called lvalues. So x = x + 1 is valid, but x + 1 = x is not.

The assignment operator changes state because it directly modifies memory (unless we use an identity like x = x;).

The assignment operator is an operator like any other, so it actually produces the right hand side as a value. This is idiomatic for things like a = b = c = 0, which sets all 3 variables to 0. We can then do things like z = 1 + (z = z + 1), but this is considered very bad style.

This is the cause of a lot of bugs. If we accidentally use = instead of ==, we can create some very subtle bugs:

if (x = 1) {
    printf("A");
} else {
    printf("B");
}

This code actually assigns 1 to x, and the assignment produces 1 (which is truthy), so this code always prints "A".

To mitigate this, we can write equalities like 1 == x. This way, if we write = by accident, the resulting code will result in an error, 1 = x, since there isn't a valid lvalue on the left.

In C, operations like x = x + y or x = x * y are so common that there are special operators that do it in a shorter way:

x += y // equivalent to `x = x + y`
x -= y // equivalent to `x = x - y`
x *= y // equivalent to `x = x * y`
x /= y // equivalent to `x = x / y`
x %= y // equivalent to `x = x % y`
++ x // equivalent to `x = x + 1`
-- x // equivalent to `x = x - 1`
x ++ // ALMOST equivalent to `x_old = x, x = x + 1, x_old`
x -- // ALMOST equivalent to `x_old = x, x = x - 1, x_old`

The postfix x ++ form is different from ++ x in that it produces the original value of x, before incrementing the value. In contrast, ++ x increments it, and then produces the new incremented value. So the produced value differs by 1.


int i = 5;
int j = 6;

int *p = &i;
int *q = &j;

p = q; // pointer assignment

After the last line, p now points to the same place as q. It changes the value of p to the value of q, so p then points at j.

Now suppose we instead do the following:

int i = 5;
int j = 6;

int *p = &i;
int *q = &j;

*p = *q; // pointed value assignment

When *p appears on the left side of the assignment, it behaves like we actually wrote the thing it was pointing at, which is i. When *q appears on the right side of the assignment, it resolves to the value of q, which is j.

So after the last line, the value of what p points to is changed to the value q points to. In other words, it changes i to 6, even though i doesn't appear in the assignment.

Aliasing is when the same memory location can be accessed from more than one variable. Pointers allow aliasing in C.

Note that arguments to functions are copies of the original values we passed to the function. Therefore, when we change arguments in functions, the original value is unchanged:

void f(int i) {
    ++ i; // this is a useless function since it has no persistent side effects and produces no values
}

int main(void) {
    int j = 5;
    f(j);
    // `j` is still 5 even though `i` changed inside `f`
}

This is a consequence of using the pass-by-value convention, since a copy of the variable is made in the new stack frame.

Another convention is pass-by-reference, where a variable passed to a function can be changed by that function. Some languages support both.

In C, we can sort of simulate this using pointers. If we pass the function the address of the variable we want it to be able to change, the corresponding parameter is a pointer to the variable. This allows us to change the variable using pointed value assignment:

void f(int *p) {
    // now `p` is a pointer pointing to `j`
    ++ *p; // this increments the value of the thing being pointed to by `p`
}

int main(void) {
    int j = 5;
    f(&j);
    // now `j` is 6
}

This is still pass-by-value because we passed the address as a value. By passing the address of j, we can change the value of j. This is also referred to as "passing a pointer to j".

One thing to be aware of is that *p ++ is the same as *(p ++), due to the order of operations. To make it work properly, we can use parentheses, like (*p) ++. When in doubt, always use parentheses to make the code clearer.

We can use this to produce multiple values from a function, or work with multiple values at once:

void swap(int *x, int *y) {
    int temp = *x;
    *x = *y;
    *y = temp;
}

Note that the following will not work:

void swap(int *x, int *y) {
    int *temp = x;
    *x = *y;
    y = temp;
}

This is because the last line only changes the local pointer y, not the value it points to. The first value has already been overwritten by *x = *y, so nothing is actually swapped.

For example, a division function might produce the quotient and remainder with by-reference parameters. This would free up the return value to allow us to give information such as error codes or statistics:

int divide(int n, int d, int *q, int *r) {
    if (d == 0) return 1;
    *q = n / d;
    *r = n % d;
    return 0;
}

We can return 1 on invalid input to let the client know something went wrong.

The return type of a function can also be a pointer. However, we must not return pointers to addresses in the current stack frame. Whenever we return from a function, the data in the stack frame should be considered invalid and off-limits. In other words, we should not return pointers to local variables or parameters.

If a function has implementation-specific side effects, such as modifying internal variables, we do not want to show this information in the interface. Instead, we can put these specific details with the implementation of the function.

10/2/14 - Tutorial

// collatz(n) returns the length of the Collatz sequence of `n` that ends with 1
//     PRE: n >= 1
//     POST: produces the length of the Collatz sequence starting at the number `n`
int collatz(int n) {
    assert(n >= 1);
    if (n == 1)
        return 1;
    else if (n % 2 == 0)
        return 1 + collatz(n / 2);
    return 1 + collatz(3 * n + 1);
}

cond also implicitly uses begin for the answers:

(cond [#t (display "hello") 'return-value])

The above will display the string but produce return-value.

Postconditions in the design recipe now need to focus on the side effects of mutation and are explained in natural language. They should not include too many implementation-specific details.

Note that the following two lines are equivalent:

(*p).x = 5;
p->x = 5;
The -> operator is a special operator that accesses the structure field of a structure given by a pointer.


When we use the static keyword on local variables, we can make it behave like a global variable that is only accessible inside of the function - we make a persistent local variable:

int f(void) {
    static int counter = 0; // this is initialized at the start of the program
    return counter ++;
}

Just like global variables, they are stored in the global data section in memory. As a result, the global data section is also known as the "global/static data" section.

int *p = NULL is equivalent to int *p; p = NULL;, not int *p; *p = NULL;.

When we call a function, all of the arguments are copied into the local stack frame. When we pass larger structures to a function by value, this can get very inefficient - we have to copy the entire structure into the new stack frame. In addition to reducing performance, this causes our stack frames to get much larger and may even contribute to a stack overflow.

Therefore, we can pass pointers to structures instead to reduce overhead:

int sqr_dist(struct posn *p1, struct posn *p2) {
    const int xdist = p1->x - p2->x;
    const int ydist = p1->y - p2->y;
    return xdist * xdist + ydist * ydist;
}

Note the use of the second structure selection operator, ->, which works on pointers to structures rather than the structures themselves.

If a function accepts a pointer parameter, it should clearly state whether the parameter gets mutated in the documentation.

We can also use the const keyword to denote that the parameter isn't to be changed:

void immutable_posn(const struct posn *p) {
    p->x = 5; // this does not compile, because the parameter is a pointer to const - the compiler will catch this
}

It is good style to use const for pointer parameters whenever possible.

int *p; // pointer and pointed value mutable
const int *p = &i; // pointer mutable, pointed value immutable
int * const p = &i; // pointer immutable, pointed value mutable
const int * const p = &i; // pointer and pointed value immutable

Note that const int i is the same as int const i. const always applies to the type on its left, unless there is nothing on the left, in which case it applies to the type on the right.


Looping combines mutation with control flow. Looping allows us to do things multiple times.

Instead of recursion, we now have iteration. The interesting thing about iterating is that it will never result in a stack overflow.

While loop

The while control flow statement takes the form of while (CONDITION) BODY;. It works as follows:

  1. Evaluate CONDITION. If false, we are done with the loop and move on.
  2. Evaluate BODY.
  3. Go back to step 1.

For example, printing numbers from 1 to 100:

int i = 1;
while (i <= 100) {
    printf("%d\n", i);
    i ++;
}

It is easy to accidentally make an infinite loop. Common forms include:

while (i < 100) f(x); i ++; // bad nesting - the update is outside the loop body
while (i < 100); f(x); // extra semicolon
while (i = 100) f(x); // assignment rather than equality
while (true) f(x); // best way to make an infinite loop intentionally

Do-While Loop

The do-while control flow statement takes the form of do STATEMENT while (CONDITION);. It works almost exactly like the while loop, except the condition is evaluated at the end:

  1. Evaluate BODY.
  2. Evaluate CONDITION. If false, we are done with the loop and move on.
  3. Go back to step 1.

For example, ensuring success:

int i = 1;
int success; // uninitialized variable
do {
    success = g(); // this is guaranteed to run at least once
    i ++;
} while (!success);

This loop is occasionally useful because it guarantees that its body will be run at least once.

For Loop

The for control flow statement takes the form of for (SETUP; CONDITION; UPDATE) BODY;. It is simply a shortcut for writing a very common pattern used in while loops.

This is almost exactly equivalent to { SETUP; while (CONDITION) { BODY; UPDATE; } }.

The only difference is that when we use continue, the for loop will also execute UPDATE before continuing with the next iteration.

  1. Evaluate SETUP.
  2. Evaluate CONDITION. If false, we are done with the loop and move on.
  3. Evaluate BODY. If continue is encountered, evaluate UPDATE and go back to step 2.
  4. Evaluate UPDATE.
  5. Go back to step 2.
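As a small sketch of how continue interacts with UPDATE, the following function (the name sum_odds is made up for this example) sums the odd numbers from 1 to n; even when continue fires, the i ++ update still runs before the next condition check:

```c
// Sum the odd numbers from 1 to n.
int sum_odds(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i ++) {
        if (i % 2 == 0) continue; // skip even numbers; UPDATE (i ++) still runs
        sum += i;
    }
    return sum;
}
```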

This allows us to write an extremely common looping pattern as one concise statement. For example, printing the numbers from 1 to 100:

for (int i = 1; i <= 100; i ++) printf("%d\n", i);

Note that in C99, we can use definitions too in our statements, like in SETUP. These definitions are in the scope of the for loop.

We can also omit the statements. If we omit the condition, it is assumed to be true:

for (;;) f(x); // infinite loop
for (; i < 50;) f(x); // omit the setup and update
for (i = 1, j = 2;;) f(x); // statements can use the comma operator to cram in multiple side effects

Extra Control Flow

Inside of loops, additional control flow statements are available that allow us to do some extra things.

The break; statement ends the current innermost loop immediately and moves directly past the end of the loop.

The continue; statement skips to the end of the body of the innermost loop and goes to the next iteration of the loop. It means that we want to continue looping, but want to skip the rest of this particular iteration.

These statements are useful, but they can always be avoided by restructuring the code.
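As an illustrative sketch (the function name and signature are assumptions for this example), find_first uses break to stop scanning an array as soon as a match is found:

```c
// Produce the index of the first occurrence of value in a, or -1 if absent.
int find_first(const int a[], int len, int value) {
    int found = -1;
    for (int i = 0; i < len; i ++) {
        if (a[i] == value) {
            found = i;
            break; // no need to scan the rest of the array
        }
    }
    return found;
}
```

The same function could be restructured to avoid break entirely, for example by adding found == -1 to the loop condition.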

Using loops to solve problems is called using an iterative approach. Before, we used the recursive approach. Both have their own advantages. Iteration is simply an alternative to recursion, used much more often in imperative programming. Sometimes recursion is easier to understand than iteration, and sometimes the other way around. They are different tools that are useful in different places.



I/O stands for input/output. It refers to how programs interact with the real world.

This includes receiving input via a keyboard or mouse, or sending things to a screen or printer, or even to and from a hard drive.

With Racket we had printf with ~a (basic placeholder) and ~v (placeholder with extra type information). With C we have printf as well, with placeholders %d (decimal integer), %c (character), %f (float), %p (pointer/address).

In C, writing to files is quite similar to writing to the standard output:

#include <stdio.h>
int main(void) {
    FILE *file_ptr = fopen("hello.txt", "w"); // "w" for write
    fprintf(file_ptr, "Hello World!\n");
    fclose(file_ptr);
}

Tracing code is the process of adding logging output for the program to aid debugging and diagnostics.

We can use things like trace variables to turn tracing on and off:

const bool DEBUG = true; // set to false to turn off tracing
// ...
if (DEBUG) printf("The value of i is: %d\n", i);

In Racket, '"string" is the same thing as "string", and '14 is the same as 14. The quotes do not change numbers and strings.


The read function in Racket attempts to read a value from the keyboard. If there are no values, it pauses the entire program until there is. Read can also produce #<eof>, which means that there is no more input (End Of File).

In RunC, we can send EOF using Ctrl-D. EOF is a special character in the operating system that is treated specially by many programs.

>>> (read)
#> abc
>>> (read)
#> 14
>>> (read)
#> "string"
>>> (read)
#> (a
#> b c
#> d)
'(a b c d)

Read has rather complicated behaviour, but for now it is sufficient to think of it as prepending a single quote ' to our input and then interpreting it as a Racket literal value.

We can test for read giving EOF by using the type predicate eof-object?, which returns a boolean indicating whether a value is an EOF.


In C, scanf is the input counterpart to printf. It "scans" the standard input for certain patterns like %d for a decimal integer, similar to printf's placeholders.

It usually takes the form of scanf("%d", &i), where "%d" specifies the pattern to scan for, and &i is a pointer to where the scanned value should be stored.

In this course, we will only be using one placeholder at a time. scanf places the result at the specified address, and returns an integer representing the status. The return value is the number of items actually read in. So a value of 0 means no values were read in (error or invalid input) and a value of 1 means we successfully read one value.

All whitespace is ignored, including spaces, newlines, and tabs.

scanf can also return EOF (usually defined as -1) when it encounters an EOF in the input. Basically, we have an error if we get 0, success if we get 1, and end of input if we get EOF.
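To make these return values easy to check, the sketch below uses sscanf, the string variant of scanf, which follows the same status convention (the helper name status_of is made up for this illustration):

```c
#include <stdio.h>

// Produce the scanf-style status for reading one decimal integer:
// 1 on success, 0 on invalid input, EOF when the input is exhausted.
int status_of(const char *input) {
    int value;
    return sscanf(input, "%d", &value);
}
```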

We could use scanf("%c", &x), but there is also a function getchar() designed specifically for this purpose. getchar accepts no parameters and produces the next character typed by the user on the keyboard as an integer representing the ASCII value of that character, or EOF if an EOF is given to the program.

getchar returns an integer because it could return any of the 256 possible characters, plus EOF.

int x = getchar();
int y;
if (scanf("%d", &y) == 1)
    printf("You entered: %d", y);


Using these I/O functions, we can now create interactive testing modules - modules that let us interactively test some functionality provided by our module:

#include <stdio.h>
#include "addsqr.h"
int main(void) {
    char func;
    int x, y;
    do {
        func = getchar();
        if (func == 'a') {
            scanf("%d", &x);
            scanf("%d", &y);
            printf("add %d %d = %d\n", x, y, add(x, y));
        } else if (func == 's') {
            scanf("%d", &x);
            printf("sqr %d = %d\n", x, sqr(x));
        }
    } while (func != 'x');
}

This is a very barebones testing module that can get the job done. It doesn't need to handle invalid inputs or edge cases, since it is only meant to be used by testers.

Now when we run it, we can use it to test various values of our functions:

>>> s 10
sqr 10 = 100
>>> a 1 1
add 1 1 = 2

In fact, RunC has facilities to automate this sort of interactive testing module.

If the interactive testing module is located in test.c, then RunC will look for test.in.1 in the same directory, and then run the program with the contents of that file as the input, as if the user was typing the contents into the terminal directly. The resulting output of the program is stored in test.out.1. This also works for test.in.2, test.out.2, etc.

If there is a test.expect.1 file, then RunC will compare the actual output with the output in this file, and if there are any differences, it generates test.check.1, which shows the differences side by side.

This allows us to automate testing in a much more straightforward way, and also lets us test program output.

If we are using non-interactive testing (using assert, for example), we can just use an empty test.in.1 file and still use a test.expect.1 file to test output.

When we are testing code, there are several kinds of behaviour we want to check for.

When testing a module with side effects (like the shopping cart question in assignment 4), it is sometimes not sufficient to test each function by itself - we have to test sequences of function calls, testing the interaction between functions.

Integration testing is when we test interactions between modules and groups of modules all at once.


Arrays and Strings

Arrays are, in addition to structures, one of the only compound data structures in C.

They take the form of:

int some_array[6] = {4, 8, 15, 16, 23, 42};

Once again, the curly brackets { and } are part of the initialization syntax, and cannot be used in all expressions.

Arrays have a fixed size, and all elements must have the same type. This is very different from lists in Racket.

To define an array, we have to know the length of the array at compile time. Each value inside the array is known as an element, and elements are referenced through an index.

The first element of an array some_array is some_array[0] (C is zero-indexed). The second is some_array[1], and the last is some_array[5]. Note that there is no some_array[6].

If we try to access memory outside of the array bounds, like some_array[6], we will simply access the memory location outside of the array, which may cause a segmentation fault, corrupt data, or worse. It is important to ensure indices are in the array bounds.

We can treat array accesses like variables. For example, we can do some_array[*&some_array[0]] ++.

Uninitialized global arrays are initialized to 0. Uninitialized local arrays are filled with arbitrary values from the stack. We should always initialize arrays.

If there are fewer elements in the braces than there are elements in the array, then the rest get initialized to 0. For example, int a[5] = {1, 2} has the last 3 elements initialized to 0. We often see int a[5] = {0}.

If we are initializing the array, we can also omit the array size: int a[] = {4, 8, 15, 16, 23, 42}.

We can also use the partial initialization syntax, just like with structures: int a[100] = {[50] = 45, [99] = 8}.

Memory Model

The length of an array is the number of elements in the array.

The size of an array is the number of bytes it occupies in memory.

For example, an array of 100 integers has length 100, but size 400, because each integer occupies 4 bytes of memory.

Arrays have their elements stored adjacently in memory. So the addresses of each element of the array are all offset by the size of the type.
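This layout can be checked directly by comparing element addresses as byte pointers (a minimal sketch; the function name is made up for this example):

```c
// Adjacent int elements are exactly sizeof(int) bytes apart in memory.
int adjacent_offset(void) {
    int a[4] = {10, 20, 30, 40};
    const char *first = (const char *) &a[0];
    const char *second = (const char *) &a[1];
    return (int) (second - first); // the size of the element type
}
```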

For example, if an array a has length n and size s, then s == n * sizeof(a[0]).

C has no facility for obtaining the length of the array. We often need to keep track of it separately:

const int a_length = 6;
int a[a_length];

Note that we cannot determine the length of an array passed to a function by using sizeof(a) / sizeof(a[0]): inside the function, a is just a pointer, so sizeof(a) is the size of a pointer rather than the size of the array.


Arrays are a lot like pointers. For example, a is the same thing as &a[0], and &a yields the same address (even though its type is different).

Basically, the address of an array is the same as the address of the first element, which is the same as just writing the array's identifier.

So dereferencing the array is the same as the first element of the array: *a == a[0].

However, the array itself cannot be changed; we cannot do a = b, even if b is another array. We can think of this as the array address itself being constant.

Arrays are basically constant pointers to blocks of memory.


When we pass an array to a function, only the address is copied into the new stack frame. This saves memory if we have a lot of elements, since we don't need to copy all those elements into the stack frame.

Conventionally, we also pass the length of the array to the function if it is not always the same. This is because we cannot determine the length of the array inside the function otherwise.

void array_add1(int a[], int len) {
    for (int i = 0; i < len; i ++)
        a[i] ++;
}

int main(void) {
    int a[] = {4, 2, 7};
    array_add1(a, 3);
}

If a function doesn't need to mutate the array, it is good practice to make the parameter constant using the const keyword to reflect this, just as with structures:

int array_sum(const int a[], int len) {
    int sum = 0;
    for (int i = 0; i < len; i ++)
        sum += a[i];
    return sum;
}

Pointer Arithmetic

Let p be a pointer, pointing at memory location 1000. Let i be an integer.

Then p + 1 is a pointer to the memory location 1000 + sizeof(*p). In other words, the amount we offset the pointer by depends on the type of data the pointer points to. If we add 1 to, say, an int pointer, it then points to the "next slot for an int", rather than the next memory location.

In general, p + i is a pointer to the memory location 1000 + i * sizeof(*p).

Also, addition of a pointer and an integer is commutative - p + i == i + p.

In the same way, we can do p - i, p += i, p ++, and p --.

Let q be a pointer, pointing at memory location 1100.

Then p + q is invalid, because we cannot add two pointers. This is because pointers are often rather large numbers, and could easily overflow if added, resulting in an invalid pointer. It is simply not useful to add pointers.

Then p - q is valid as long as p and q point to the same type of data, like int and int. p - q would result in (1000 - 1100) / sizeof(*p), or -25 if we assume p and q are int pointers.

This allows pointer arithmetic to satisfy basic algebraic identities: given q == p + 25, we get p - q == -25, since (1000 - 1100) / sizeof(int) == -25. Basically, subtracting two pointers results in the number of slots of the data type that fit between the two.

Then p < q, p > q, p <= q, p >= q, p != q, and p == q are valid, as long as p and q point to the same type of data. These compare the memory locations of the pointers as if they were abstract integers.

Officially, pointer arithmetic is only valid when p and q and the result of the pointer arithmetic are all inside the same array or structure. Formally, there must be an array or structure that fully contains p, q, and the result of the pointer arithmetic.
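A small sketch of this "slot" arithmetic, with both pointers inside the same array as required (the function name is made up for this example):

```c
// Subtracting two int pointers into the same array counts the number of
// int-sized slots between them, not the number of bytes.
int slots_between(void) {
    int a[10];
    int *p = &a[2];
    int *q = &a[7];
    return (int) (q - p); // 5, regardless of sizeof(int)
}
```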


The array indexing operator a[i], where a is an array and i is an integer, is exactly equivalent to *(a + i).

In fact, this operator works on pointers too - p[0] is the same thing as *p.

Because of this equivalence, it is always possible to rewrite code using the [] operator to use pointer arithmetic, and vice versa:

int sum_array(const int *a, int len) {
    int sum = 0;
    for (const int *p = a; p < a + len; p ++)
        sum += *p;
    return sum;
}

Is exactly equivalent to:

int sum_array(const int a[], int len) {
    int sum = 0;
    for (int i = 0; i < len; i ++)
        sum += a[i];
    return sum;
}

As a result, there is also no difference between using const int a[] and const int *a in function declarations. However, there are some small differences between pointers and arrays, which will be discussed later.


Array indices can be negative if the pointer is not at the beginning of the array. For example:

int a[] = {1, 2, 3, 4, 5};
int *p = a + 4;
int x = p[-2];

Our computer memory is one dimensional. Multidimensional data can be represented using 1D arrays by mapping the higher dimensions as offsets on the lower dimensions. For example:

int a[] = {1, 2, 3,
           4, 5, 6}; // 2 by 3 array (2 rows, 3 columns)
a[r*3 + c]; // element at row `r` and column `c` (3 == number of columns)

In general, the element in array a at row r and column c is a[r*NUMBER_OF_COLUMNS + c].
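For instance, summing such a row-major 2D array might look like the following sketch (rows and cols are assumed dimensions, and the function name is made up):

```c
// Sum a conceptual rows-by-cols 2D array stored row-major in a 1D array.
int sum_2d(const int a[], int rows, int cols) {
    int sum = 0;
    for (int r = 0; r < rows; r ++)
        for (int c = 0; c < cols; c ++)
            sum += a[r*cols + c]; // offset row r by the number of columns
    return sum;
}
```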

Function Pointers

In Racket, functions are first class values. We can store them in variables and structures, or pass them as arguments and return them from functions.

In C, functions are not first class values, but we can do the same thing with function pointers. This is a pointer to the beginning of the function in the code section.

We obtain a function pointer by just writing the name of the function:

int add1(int x) {
    return x + 1;
}

void array_map(int (*f)(int), int a[], int len) {
    for (int i = 0; i < len; i ++)
        a[i] = f(a[i]);
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    array_map(add1, a, 4);
}

The syntax to declare a function pointer is FUNCTION_RETURN_TYPE (*POINTER_IDENTIFIER)(PARAMETER_1_TYPE, PARAMETER_2_TYPE, ...);. For example, int (*fp)(int); is a pointer to any function that accepts an int and returns an int. In this case, we can set it to add1, since it follows this type signature.
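A short sketch of declaring a function pointer variable and reassigning it (the helper functions here are illustrative only):

```c
int add1(int x) { return x + 1; }
int negate(int x) { return -x; }

// A function pointer variable: declare it, assign any function with a
// matching signature, then call through it like a normal function.
int demo(void) {
    int (*fp)(int); // pointer to a function taking an int, returning an int
    fp = add1;
    int a = fp(5); // 6
    fp = negate;
    int b = fp(5); // -5
    return a + b; // 1
}
```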


Null Termination

In ASCII, the character at index 0 is known as the "null character". It means "nothing". ASCII precedes C, and in C, we call it the null terminator.

In C, there are no built in string types. As a convention only, most people use an array of characters with a null character at the end:

char my_string[] = {'c', 'a', 't', '\0'}; // we use `'\0'` instead of just `0` because it makes it clear that it should be a null terminator

We define a string in this course to be an array of characters, ending at the first null character. If there are any other values after this null character, they are ignored.

This is called the null-terminated string convention.

The advantage of this approach is that it is no longer necessary to pass the length of a string to a function along with the string itself. A function can take a string and determine where it ends by itself, by looking for the first null character:

// counts the number of "e"'s in a string
int e_count(const char s[]) {
    int count = 0;
    while (*s) { // stop at the first null terminator
        if (*s == 'e' || *s == 'E') count ++;
        s ++;
    }
    return count;
}

Since this pattern is so common, there are easier ways to write character arrays:

char a[]  = {'c', 'a', 't', '\0'};
char b[]  = {'c', 'a', 't', 0};
char c[4] = {'c', 'a', 't'};
char d[]  = {99, 97, 116, 0};
char e[4] = "cat";
char f[]  = "cat";

All of the above are equivalent.

If we do something like char a[3] = "cat", the null terminator gets cut off. This is because we explicitly specified the length, so the null terminator isn't added automatically.

In the array initialization syntax, the double quote "str" notation adds the null terminator automatically if the length is not explicitly specified. It is exactly equivalent to writing out the characters individually using array initialization syntax. This is different from the string literal notation in expressions.

We must be careful to include the null terminator. If a null terminator is forgotten, then functions like strlen might try to keep reading past the end of the string, which can cause segmentation faults and worse.


In expressions, the double quote "str" notation actually allocates a constant char array in the read-only section, and substitutes the array for the literal. This means that printf("abc"); is equivalent to:

const char string_literal_1[] = "abc";

As a result, the read-only data section is also known as the literal pool.

If we want to print out a string, we use printf with the placeholder %s. This allows it to accept a null-terminated string and substitute it for the placeholder:

const char pattern[] = "the %s in the hat";
const char value[] = "cat";
printf(pattern, value);

printf prints out characters until a null terminator is encountered.


The string.h library provides utility functions for working with strings.

The int strlen(const char s[]) function in string.h returns the length of the string s. Note that this is only the number of values before the first null terminator - not the length of the character array itself.

We can implement our own strlen as follows:

int my_strlen(const char *s) {
    int len = 0;
    while (s[len]) len ++;
    return len;
}

Or alternatively:

int my_strlen(const char *s) {
    const char *p = s;
    while (*p) p ++;
    return p - s;
}

When we compare strings directly using string1 == string2, we are simply comparing the array pointers. However, we usually actually want to compare them in lexicographic order.

Because "less" and "greater" are ambiguous - they could refer to the length of the string or other properties - we use precedes and follows to represent one string coming before or after another in lexicographic order, respectively.

Characters are compared based on their ASCII code. A smaller ASCII value means one character precedes another. So numbers precede capital letters, which precede lowercase letters.

In lexicographic order, we compare the characters one by one. If the character at the same index in both strings is the same, we move to the next character and try again. If one is less than the other, then that string lexicographically precedes the other. If we reach the end of both strings, then they are equal.

If we reach the end of only one string, the shorter string lexicographically precedes the other. This rule isn't strictly needed, because the null terminator lexicographically precedes all other characters.

The int strcmp(const char a[], const char b[]) function in string.h compares two strings in lexicographical order. It returns a negative value if a < b, zero if a == b, and a positive value if a > b. An easier way to think about it is that strcmp(a, b) COMPARISON_OPERATOR 0 is equivalent to a COMPARISON_OPERATOR b.

Lexicographic order can actually be used for any kind of sequence of elements, like arrays or lists.

We must be careful to not use comparison operators on strings. This generally does not give the intended behaviour. Instead, we use the following:

strcmp(a, b) == 0 // `a` and `b` are equal
strcmp(a, b) < 0 // `a` precedes `b`
strcmp(a, b) > 0 // `a` follows `b`


Strings and I/O

We can use strings with printf and scanf using the %s placeholder. For example, printf("%s, %s!", "Hello", "World").

printf simply prints out characters in the specified string, stopping when a null character is encountered.

scanf first ignores preceding whitespace, then copies the characters from the input until a whitespace character is encountered:

char name[81]; // this is enough for 80 characters and the null terminator.
scanf("%s", name);
printf("You entered: %s", name);

It is extremely important to reserve enough space in the char array to hold the input. However, it is impossible to know in advance how many characters the user will input.

For example, if we ran the above program and entered more than 80 characters, scanf would simply keep copying data into memory past the end of the array, which can result in overwriting the null terminator, memory errors, or worse. This can be a major security issue since we can overwrite memory arbitrarily.

This is called a buffer overflow/buffer overrun. In practice, we would never use scanf in a real application for this reason. However, in this course this is good enough.

We can also use the gets(char result[]) ("Get S") function, which works like scanf("%s", result), except it stops only at a newline rather than at any whitespace. This is useful for reading in an entire line of input:

char sentence[81]; // this is enough for 80 characters and the null terminator.
gets(sentence);
printf("You entered: %s", sentence);

There are library functions that can help mitigate this. We can also use a loop with getchar that checks for the input being longer than the buffer and stops if this happens.
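One possible sketch of such a getchar loop, which stores at most max characters, always null-terminates, and consumes the rest of the line (the name read_line and its contract are assumptions for this example):

```c
#include <stdio.h>

// Read one line into dest, keeping at most max characters.
// dest must have room for max + 1 characters (including the null terminator).
void read_line(char dest[], int max) {
    int i = 0;
    int c = getchar();
    while (c != '\n' && c != EOF) {
        if (i < max) dest[i ++] = c; // only store what fits
        c = getchar(); // but still consume the rest of the line
    }
    dest[i] = '\0';
}
```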

Buffer overruns can also happen for reading strings:

char a[3] = "cat"; // not null terminated
printf("%s", a);

This would continue to print out characters until a segmentation fault or a null character is encountered in memory.

string.h also has the useful functions strcpy(char dest[], const char src[]) (copy a string into another buffer overwriting the previous value) and strcat(char dest[], const char src[]) (copy a string to the end of another buffer without the first null terminator - concatenate the string to the end), but these are both susceptible to buffer overruns. We simply need to make sure dest is large enough to hold the result.

There are "safe" versions of many string library functions that avoid buffer overruns by allowing the user to specify the maximum number of characters to process. Some examples are strnlen, strncmp, strncpy, and strncat.
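For example, strncpy copies at most n characters, but it does not null-terminate dest when src is too long, so a common pattern is to terminate manually (safe_copy is a name made up for this sketch):

```c
#include <string.h>

// Copy at most n characters of src into dest, always null-terminating.
// dest must have room for n + 1 characters.
void safe_copy(char dest[], int n, const char src[]) {
    strncpy(dest, src, n); // copies at most n chars; may omit the terminator
    dest[n] = '\0'; // so we always terminate ourselves
}
```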

Pointers vs Arrays

void f(void) {
    char a[] = "pointers are not arrays"; // this is array initialization syntax
    char *p = "pointers are not arrays"; // this is a pointer initialized to point to a string literal
}

The first one stores the character array in the stack frame.

The second one stores the character array in the read-only data section like a string literal, and makes p point to it.

So the first one is mutable, and the second is not - we can do a[0] = 'H', but not *p = 'H'.

So the first one stores 24 bytes in the stack frame, while the second stores only 4 (the pointer itself).

Overall, the second one actually uses 4 more bytes in total: the 24-byte string literal in the read-only data section plus the 4-byte pointer. Arrays are like aliases for a chunk of memory, while pointers can point to these chunks of memory. However, pointers and arrays have very similar operations and can be treated very similarly.

a itself has a constant value: we cannot assign another array to it, like a = p. We can, however, change p to point at something else, like p = a.

Given an array such as char a[] = "pointers are not arrays", a pointer char *p = a behaves almost the same as a. However, a == &a while p != &p, and sizeof(a) == 24 while sizeof(p) == 4.


An algorithm is a step-by-step description of how to solve a problem. For example, which clothes to wear in the morning.

For this course, problems are often function interfaces, and the solutions are often function implementations.

An example of an algorithm for checking if a list of numbers is greater than a given k is:

If the first element of the list (call this f) is greater than k, produce #t, otherwise reduce k by f and then repeat for each remaining element. If no more elements remain, produce #f.

We want to be able to evaluate and compare different algorithms. For example, we can compare algorithms based on understandability, robustness, accuracy, adaptability, and efficiency.

The most common measure of efficiency is time efficiency - how long it takes an algorithm to solve a problem given certain inputs. In this course, efficiency will mean time efficiency by default.

Space efficiency - how much memory the algorithm requires to solve a problem - is also important.

Time and space efficiency can depend on how we actually implement the algorithm. As a result, efficiency is always measured in terms of a specific implementation.

Measuring Time Efficiency

We are interested in the running time of an algorithm. However, it is not useful to simply use a stopwatch and measure how long the algorithm takes to run, since the result would depend on the hardware, the current load on the machine, and other factors unrelated to the algorithm itself.

In Racket, we might measure time efficiency by counting the number of substitution steps needed to evaluate the program. However, this is misleading because each step might take a varying amount of time. For example, foldr can take a very long time for a large list, yet is only one substitution step to evaluate.

In C, we might measure the number of "operations" performed - the C equivalent of a step. For example:

sum = 0; // 1 operation
i = 0; // 1 operation
while (i < 5) { // 1 * 6 operations
    sum = sum + 1; // 2 * 5 operations
    i = i + 1; // 2 * 5 operations
}

Here, there are a total of 28 operations.


We are always interested in the time taken with respect to the size of the input. We represent this using \(n\), \(m\), or \(k\), for different inputs.

For example, consider the following:

(define (sum lon)
    (cond [(empty? lon) 0]
        [else (+ (first lon) (sum (rest lon)))]))

The number of steps needed to run this function is dependent on (length lon). As an aside, it happens to be \(7n + 2\) steps, where \(n\) is the number of elements in lon.

What \(n\) or other variables represent can depend on the context. For example, a list of numbers might have \(n\) represent the length of the list.

Another example is a list of strings. Here, \(n\) can represent the length of the list, the total length of all the strings, or even the length of the longest string.

A good example is for trees. We may have \(n\) represent the number of nodes, or the depth of the tree, or a variety of other properties.

The idea is that sometimes the time taken is dependent on more than just the size of something, and that there can be multiple variables. Also, there can be ambiguity as to what the variables represent, and it is important to clarify if it is unclear.

This type of analysis is known as running time analysis, and we denote the running time of a function as \(T(n)\).

Consider now another function, rsum>?:

(define (rsum>? lon k)
    (cond [(empty? lon) #f]
        [(< k (first lon)) #t]
        [else (rsum>? (rest lon) (- k (first lon)))]))

Notice that the running time now depends on both the length of lon and the value of k. We write this as \(T(n, k) = \ldots\).

We sometimes also care about the amount of memory used for any given input.

Asymptotic Complexity

We are interested in the best case and worst case - the smallest possible running time and the largest possible running time, over all possible inputs.

In other words, we want to minimize or maximise \(T(n)\).

For example, for rsum>?, the best case is \(T(n) = 5\) (if \(k\) is less than the first element), and the worst case is \(T(n) = 10n + 2\) (if \(k\) is greater than the sum of the entire list).

When we say running time, by default we mean the worst case running time. The average case running time is also useful to know about, but this is much harder to determine.

Big-O Notation

In practice, we do not really care about these small variations in running time - they are not significant and are highly dependent on hardware and implementation. Instead, we care about the behaviour of the running time as the input grows arbitrarily large.

This is the order (growth rate) of the running time - the term that gets the largest as \(n \to \infty\), without any constant coefficients. For example, \(7n + 4\) has order \(n\), and so does \(10n + 2\).

We can represent an order \(f(n)\) with \(O(f(n))\). This is big-O notation. So the order \(n^2\) is represented as \(O(n^2)\).

In this course, we will only encounter the following orders: \(O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(n^3) < O(2^n)\).

For example:

Note that the base of a logarithm does not affect the order: \(\log_a n = \frac{\log_2 n}{\log_2 a} = O(\log n)\), since \(\frac{1}{\log_2 a}\) is just a constant factor that can be removed. The base of an exponential, however, does matter: \(3^n = 2^{n \log_2 3}\) grows strictly faster than \(2^n\), so exponentials with different bases have different orders.

Big-O notation is much more abstract than running time analysis. Big-O notation can then be associated with an entire algorithm rather than just an implementation of an algorithm.

Given orders \(O(f(n)), O(g(n))\), the following are identities:

For example, \(T(n) = 1 + 2n + 3n^2 = O(1) + O(n) + O(n^2) = O(n^2)\).

Note that at this point, this is all informal notation. Big-O notation can and will be defined more formally later on.

Big-O notation isn't just for time complexity - we can find the order of any function.


For functions that are either:

We add an additional field to the contract to denote the time complexity:

;; merge-sort: (listof Int) -> (listof Int)
;;     PRE: true
;;     POST: produces lon sorted in ascending order
;;     TIME: O(n \log n), n is the length of `lon`
(define (merge-sort lon) ...)

Efficiency Analysis

The lower the order of an algorithm, the more efficient it is. So a \(O(n \log n)\) algorithm is more efficient than a \(O(n^2)\) algorithm. If two algorithms have the same order, they are equivalent for our purposes.

We can intuitively analyze an algorithm to determine its time complexity. For example, rsum>? is clearly \(O(n)\), simply by inspection. However, we can also find the order algorithmically:

  1. If the function does not use recursion or iteration:
    1. Operators in C are all \(O(1)\).
    2. Functions can have their own orders, like the \(O(n)\) length in Racket. Make sure to account for the input size being changed, like passing a constant list to length.
    3. The order of this function is the sum of all the orders in the function - the largest order. This ensures we always consider the worst case.
  2. If the function uses recursion:
    1. Determine the order of the function if it did not use any recursion.
    2. Determine the size of the input given to the next recursive call.
    3. Write the running time function as a recurrence relation.
    4. Look up the closed form of the recurrence relation in the table.
  3. If the function uses iteration:
    1. Work from the innermost loop to the outermost, nesting their summations.
    2. Determine the number of iterations in each loop in the worst case as a function of the input size or outer loop counter.
    3. Determine the worst case running time of each iteration.
    4. Write the iteration using summation notation and simplify it.
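As a sketch of steps 2-4 for iteration, consider a doubly nested loop where the inner loop runs \(i\) times with an \(O(1)\) body. Nesting the summations and simplifying gives:

\[ T(n) = \sum_{i=0}^{n-1} \sum_{j=0}^{i-1} O(1) = \sum_{i=0}^{n-1} O(i) = O\left(\frac{n(n-1)}{2}\right) = O(n^2) \]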

Racket Built-in Efficiencies

Function time complexities in Racket:

The differences between lists and arrays can now be made clear. Array lookup is \(O(1)\), while list lookup is \(O(n)\) (lookup means retrieving a given element in the collection), which makes arrays much more efficient in this aspect. However, arrays cannot be expanded, while lists can.

Racket has arrays as well, but they are known as vectors. We create them with the (vector ...) function (for example, (vector 1 2 3)) and look up elements using (vector-ref some-vector position).

Functions like max and list in Racket have their running time determined by the number of parameters they are given. Since at this point, we can only pass a constant number of parameters to a function, these are always \(O(1)\).

We assume in this course that all operators in C are \(O(1)\), and all arithmetic functions in Racket (like +, *, expt) are also \(O(1)\). In practice, since Racket supports arbitrarily large numbers, things like + are actually \(O(\log n)\), where \(n\) is the larger of the numbers.

As a result of the above, = is \(O(1)\). Note that string=? is \(O(n)\) where \(n\) is the length of the shortest string, since it must compare lexicographically. Likewise, equal? is \(O(n)\) where \(n\) is the size of the smallest argument. However, symbols are special in that symbol=? is \(O(1)\).

(member e lst) is \(O(mn)\) where \(n\) is (length lst) and \(m\) is the size of e. This is because it could potentially call equal? on every element of lst.

C Built-in Efficiencies

Function time complexities in C:

Fixed Inputs

If the length of the input is fixed or has an upper bound, then it takes constant time by definition.

However, practical limits such as the size of computer memory are not considered upper bounds, because technology allows us to basically arbitrarily increase this as time goes on.

For example, (length l) is \(O(n)\) where \(n\) is the length of l, but if we know that l has no more than 25 elements, then \(n \le 25\) and \(O(n) \le O(25)\), so this is \(O(1)\).

The time complexity is only a function of \(n\) when it can become arbitrarily large.

Recurrence relations

Recurrence relations are used to analyze the running time of recursive functions.

It is possible to derive closed forms for many recurrence relations - forms that are non-recursive. In this course, we encounter the following:

Where \(k_1, k_2 \ge 1, c > 1\). It is not necessary to memorize this table, though it should be intuitively obvious how it is derived.
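The standard closed forms used in this course are along these lines (a typical table, reconstructed here with the constants \(k_1, k_2 \ge 1\) and \(c > 1\) as stated above):

\[ \begin{array}{ll}
T(n) = k_1 + T(n - 1) & \implies T(n) = O(n) \\
T(n) = k_1 n + T(n - 1) & \implies T(n) = O(n^2) \\
T(n) = k_1 + T(n / c) & \implies T(n) = O(\log n) \\
T(n) = k_1 n + 2T(n / 2) & \implies T(n) = O(n \log n) \\
T(n) = k_1 + k_2 T(n - 1), k_2 \ge 2 & \implies T(n) = O(k_2^n)
\end{array} \]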

Consider the following code:

(define (find-max lon)
    (cond [(empty? (rest lon)) (first lon)]
          [(> (first lon) (find-max (rest lon))) (first lon)]
          [else (find-max (rest lon))]))

In the worst case, when the list lon is sorted in ascending order, find-max gets recursed on twice, so \(T(n) = O(1) + 2T(n - 1) = O(2^n)\).

However, if we avoid calling the function twice, the function can be made significantly more efficient:

(define (fast-max lon)
    (cond [(empty? (rest lon)) (first lon)]
          [else (max (first lon) (fast-max (rest lon)))]))

The worst case for this function is still when the list lon is sorted in ascending order, but since the function is only recursed on once, \(T(n) = O(1) + T(n - 1) = O(n)\).

;wip: do an example


Iterative summation

Iterative functions are summed up over each iteration. ;wip

For example:

sum = 0;
for (i = 0; i < n; i++) {
    sum += i;
}

We can write the time complexity of this code as \(\sum_{i = 0}^n O(1)\), since the body of the loop is \(O(1)\).

Some commonly used summations are:
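A few standard ones (reconstructed here for reference) are:

\[ \sum_{i = 1}^n 1 = n \qquad \sum_{i = 1}^n i = \frac{n(n + 1)}{2} \qquad \sum_{i = 1}^n i^2 = \frac{n(n + 1)(2n + 1)}{6} \qquad \sum_{i = 0}^n c^i = \frac{c^{n + 1} - 1}{c - 1} \]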

Note that \(\sum_{i = 1}^n (n - i) = \sum_{i = 0}^{n - 1} i\)

The summation index does not have to perfectly represent the actual loop counter. For example:

k = n; // n is size of the input
while (k > 0) {
    printf(" * ");
    k /= 10;
}

This code is \(\sum_{k = 0}^{\log n} O(1) = O(\log n)\).

When we have nested loops, we can analyze them individually:

for (j = 0; j < n; j++) {
    for (i = 0; i < j; i++)
        printf(" * ");
}

The time complexity for this is \(\sum_{j = 0}^n \left(O(1) + \sum_{i = 0}^j O(1)\right) = \sum_{j = 0}^n \left(O(1) + O(j)\right) = \sum_{j = 0}^n O(j) = O(n^2)\).


Sorting

Sorting is a common way to put efficiency analysis into practice.

Insertion sort

In insertion sort, we start with the empty sorted result, and then for each element in the list, we insert it into the result, maintaining sorted order.

Basically, we go one by one through the list, putting each element in its proper place.

In Racket, this can be written as:

(define (insert n slon)
    (cond [(empty? slon) (cons n empty)]
          [(<= n (first slon)) (cons n slon)]
          [else (cons (first slon) (insert n (rest slon)))]))

(define (insertion-sort lon)
    (cond [(empty? lon) empty]
          [else (insert (first lon) (insertion-sort (rest lon)))]))

Clearly, insert is \(T(n) = O(1) + T(n - 1)\), or \(O(n)\) worst case and \(O(1)\) best case, so insertion-sort is \(T(n) = O(n) + T(n - 1)\), \(O(n^2)\) worst case (when the numbers are sorted in reverse order) and \(O(n)\) best case (when the numbers are already sorted).

We may also want to have an in-place version - one that sorts the array without building another array in memory. For an in-place insertion sort, we start at index 1 and assume everything to the left is sorted, then for each element, we shift elements from the left rightward until the new element can be put into the correct position:

void insertion_sort(int a[], int len) {
    for (int i = 1; i < len; i ++) { // go through each element
        int value = a[i];
        int pos = i;
        while (pos > 0 && a[pos - 1] > value) { // shift elements rightward until the element can be placed in sorted order
            a[pos] = a[pos - 1];
            pos --;
        }
        a[pos] = value;
    }
}

It is easy to see that the worst case time complexity is \(\sum_{i = 1}^n \sum_{j = 0}^i O(1) = O(n^2)\), where \(n\) is the numerical value of len, and the best case is \(O(n)\).

Selection Sort

In selection sort, we start with the empty result, pick the lowest element in the list, append it to the result, and repeat this picking and appending process until the list is empty and the result is sorted:

(define (find-min lon current-min)
    (cond [(empty? lon) current-min]
          [(< (first lon) current-min) (find-min (rest lon) (first lon))]
          [else (find-min (rest lon) current-min)]))

(define (selection-sort lon)
    (cond [(empty? (rest lon)) lon]
          [else (define m (find-min (rest lon) (first lon))) ; O(n)
                (cons m (selection-sort (remove m lon)))]))

selection-sort is always \(O(n^2)\), even in the best case, since find-min is \(O(n)\).

The in-place version of this can be implemented by going left to right through the array, finding the smallest element among the current element and those to its right, and swapping the current element with that smallest one:

void selection_sort(int a[], int len) {
    for (int i = 0; i < len - 1; i ++) {
        int pos = i;
        for (int j = i + 1; j < len; j ++)
            if (a[j] < a[pos]) pos = j;
        swap(&a[i], &a[pos]); // see Section 06
    }
}

This implementation can be analyzed to have a worst and best case of \(O(n^2)\).

Merge Sort

In merge sort, the list is split into halves, and each of these two sublists are sorted using merge sort (merge sorting a list with one element is the same list), then merged back together:

(define (merge slon1 slon2)
    (cond [(empty? slon1) slon2]
          [(empty? slon2) slon1]
          [(< (first slon1) (first slon2))
           (cons (first slon1) (merge (rest slon1) slon2))]
          [else
           (cons (first slon2) (merge slon1 (rest slon2)))]))

(define (merge-sort lon)
    (define len (length lon))
    (define mid (quotient len 2))
    (define left (drop-right lon mid))
    (define right (take-right lon mid))
    (cond [(<= len 1) lon]
          [else (merge (merge-sort left)
                       (merge-sort right))]))

Note that when merge recurses, either the length of slon1 or slon2 decreases by 1. It is difficult to analyze these two values since they depend on the input, but if we define \(n\) to be the total length of both lists, (+ (length slon1) (length slon2)), then we can simply say that \(n\) decreases by 1 on each recursive call.

So merge has a time complexity of \(T(n) = O(1) + T(n - 1) = O(n)\).

So merge-sort has a time complexity of \(T(n) = O(n) + 2T(n / 2) = O(n \log n)\).

Merge sort has a best and worst case time complexity of \(O(n \log n)\). In fact, this is the fastest possible time complexity for comparison sorting algorithms.

Because merge sort has a reliable time complexity (as well as due to its stability and parallelizability), merge sort is very commonly used in practice. In fact, Racket's built in sort uses merge sort.

An in-place version of merge sort is more difficult to implement than the other sorting algorithms, but it is still possible:

;wip: implement it using the bottom-up approach

Quick Sort

In quick sort, an arbitrary element is selected as a pivot, and then all the other elements are partitioned into those above the pivot and those below it. These two subsets are then sorted using quick sort again, and combined together afterwards:

(define (quick-sort lon)
    (cond [(empty? lon) empty]
          [else (define pivot (first lon))
           (define less (filter (lambda (x) (<= x pivot)) (rest lon)))
           (define greater (filter (lambda (x) (> x pivot)) (rest lon)))
           (append (quick-sort less) (list pivot) (quick-sort greater))]))

The worst case, when the pivot is always the lowest element, makes this function have a time complexity of \(T(n) = O(n) + T(n - 1) = O(n^2)\).

The best case, when the pivot is always the median of the elements, makes this function have a time complexity of \(T(n) = O(n) + 2T(n / 2) = O(n \log n)\).

The in-place version works by selecting the pivot, partitioning the array, and then recursively sorting the partitions:

void quick_sort_range(int a[], int len, int first, int last) {
    if (last <= first) return; // size is <= 1
    int pivot = a[first]; // first element is the pivot
    int pos = last; // where to put the next larger element
    for (int i = last; i > first; i --) { // move everything bigger than the pivot to the end
        if (a[i] >= pivot) {
            swap(&a[pos], &a[i]);
            pos --;
        }
    }
    swap(&a[first], &a[pos]); // put pivot in correct place
    quick_sort_range(a, len, first, pos - 1); // sort the left side
    quick_sort_range(a, len, pos + 1, last); // sort the right side
}

void quick_sort(int a[], int len) {
    quick_sort_range(a, len, 0, len - 1);
}

This works the same way as the Racket version, so it also has \(O(n^2)\) worst case time complexity and \(O(n \log n)\) best case time complexity.

Quick sort is used a lot in practice because it is very fast for a lot of real world data sets - its average case time complexity is good. There are ways to mitigate the worst case, such as by choosing a random pivot.
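One such mitigation can be sketched by swapping a randomly chosen element into the first position before partitioning, so the existing code can keep using a[first] as the pivot. This is an illustrative sketch, not a course function; swap is redefined here to keep it self-contained:

```c
#include <stdlib.h>

void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

// choose a random position in [first, last] and move that element to the
// front, so the partitioning code above can use a[first] as the pivot
void randomize_pivot(int a[], int first, int last) {
    int p = first + rand() % (last - first + 1);
    swap(&a[first], &a[p]);
}
```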


Binary Search

Consider the member function for arrays of integers in C:

bool member(int item, int a[], int len) {
    for (int i = 0; i < len; i++)
        if (a[i] == item)
            return true;
    return false;
}

This runs in \(O(n)\) time. However, if we know that the array is sorted before calling this function, we can use binary search:

bool sorted_member(int item, const int a[], int len) {
    int low = 0;
    int high = len - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == item)
            return true;
        else if (a[mid] < item)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return false;
}

Since high - low (the size of the range we are still considering) is halved on each iteration, there are \(O(\log n)\) iterations. Since each iteration is \(O(1)\), the function is \(O(\log n)\).
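The halving is perhaps even clearer in a recursive version, which matches the recurrence \(T(n) = O(1) + T(n / 2) = O(\log n)\) directly. This is a sketch, not the course implementation:

```c
#include <stdbool.h>

// search a[low..high] for item; the range is halved on each call
bool sorted_member_range(int item, const int a[], int low, int high) {
    if (low > high) return false; // empty range
    int mid = low + (high - low) / 2;
    if (a[mid] == item) return true;
    if (a[mid] < item) return sorted_member_range(item, a, mid + 1, high);
    return sorted_member_range(item, a, low, mid - 1);
}
```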

Formal Big-O Notation

\(O(g(n))\) is the set of all functions with "order" less than or equal to \(g(n)\).

So \(O(1)\) is actually the set of all constant functions.

Since the order is less than or equal, we can say things like \(3 \in O(2^n)\) and \(n^2 \in O(n^{100})\). Even though this is mathematically correct, this isn't very useful.

In this course, we will always use the smallest possible order to represent a given function.

Formally, \(f(n) \in O(g(n)) \iff \exists c > 0, n_0 > 0, \forall n \in \mathbb{R}, n \ge n_0 \implies f(n) \le c \cdot g(n)\).

This basically means that we can ignore constant coefficients. Also, that after a certain threshold value of \(n\), the order function multiplied by a constant is an upper bound for \(f(n)\).

The \(c\) is necessary (rather than just using 1) in order to account for functions with the same order. For example, if we have two constant functions with different constants, then one would always be above the other, yet it is still in the same order.

Prove that \(0.001n^3 + 1000n \in O(n^3)\):

Let \(c = 1001\) and \(n_0 = 1\). For all \(n \ge 1\), \(1000n \le 1000n^3\), so \(0.001n^3 + 1000n \le 0.001n^3 + 1000n^3 \le 1001n^3 = c \cdot n^3\).
So \(0.001n^3 + 1000n \in O(n^3)\).

Big-O notation is the asymptotic behaviour of a function - the behaviour of a function as its input approaches infinity.

In future courses, we will use the formal definition to prove algorithm complexities.

There are also other notations for various asymptotic behaviours. For example, \(\Omega(n)\) (big-omega) is the asymptotic lower bound, just as \(O(n)\) (big-O) is the asymptotic upper bound, and it means there exists some \(c\) such that \(c \cdot g(n) \le f(n)\) whenever \(n \ge n_0\).

An especially useful one is \(\Theta(n)\), which defines the exact asymptotic behaviour: \(f(n) \in \Theta(g(n))\) means that there exist \(c_1 > 0, c_2 > 0, n_0\) such that \(c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)\) whenever \(n \ge n_0\). This means that if \(f(n) \in \Omega(g(n))\) and \(f(n) \in O(g(n))\), then \(f(n) \in \Theta(g(n))\).

We often use \(O(n)\) instead of \(\Theta(n)\), even though \(\Theta(n)\) is more appropriate. Maybe because it's easier to type.

There's also \(\omega(n)\) (little-omega) - \(f(n) \in \omega(g(n))\) means that for all \(c > 0\), there exists an \(n_0\) such that \(c \cdot g(n) \le f(n)\) whenever \(n \ge n_0\).

There's also \(o(n)\) (little-o) - \(f(n) \in o(g(n))\) means that for all \(c > 0\), there exists an \(n_0\) such that \(f(n) \le c \cdot g(n)\) whenever \(n \ge n_0\).

;wip: slide 67 to 78

Space Complexity

We can also analyze the amount of memory a function uses as the input grows arbitrarily large. If two algorithms have the same time complexity, the one using less memory is often preferable, and it is in many cases faster as well.

For example, consider a summing function using structural recursion:

(define (sum lst)
    (cond [(empty? lst) 0]
          [else (+ (first lst) (sum (rest lst)))]))

Compare it with one that uses accumulative recursion:

(define (asum lst)
    (define (asum/acc lst sofar)
        (cond [(empty? lst) sofar]
              [else (asum/acc (rest lst) (+ (first lst) sofar))]))
    (asum/acc lst 0))

Both functions have \(O(n)\) time complexity, but the first one uses \(O(n)\) memory, while the second one uses \(O(1)\) memory.

Tail Recursion

This is because the second one uses tail recursion.

Tail recursion is when the function directly returns the value of its recursive call. For example, the following are tail recursive:

(define (f n) (f n))
(define (f n) (f (sub1 n)))
(define (f n) (define v 5) (f (some-other-function n)))

But the following are not:

(define (f n) (add1 (f n)))
(define (f n) (- 1 (f n)))

In the non-tail recursive examples, the return value is not directly the value of the recursive call - it performs some additional processing on the result before returning.

Tail recursion is important because of tail recursion optimization/tail recursion elimination. Note that at the moment we call the recursive function, there is no longer any need for the current stack frame, since after the function returns, we aren't doing anything except returning that value. Therefore, we can simply destroy the current stack frame and make the recursed function return its value directly to the function that called us.

Not all tail recursive functions can be optimized this way. For example, in C, a function can pass a pointer to a value in the current stack frame, which prevents us from destroying the current stack frame.

Tail recursion elimination is supported in Racket and in many modern implementations of C.
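For example, the accumulator pattern from asum can be written in C as well. This is a sketch (the helper names are not from the course), and whether the stack frame is actually eliminated depends on the compiler and its optimization settings:

```c
// sum a[i..len-1], carrying the partial sum in an accumulator;
// the recursive call is in tail position, so a compiler that performs
// tail call elimination can run this in O(1) stack space
int sum_acc(const int a[], int i, int len, int sofar) {
    if (i == len) return sofar;
    return sum_acc(a, i + 1, len, sofar + a[i]);
}

int sum(const int a[], int len) {
    return sum_acc(a, 0, len, 0);
}
```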

Dynamic Memory

In C, the length of an array is always fixed. In Racket, lists can grow to an arbitrary size.

This is possible because Racket lists are constructed with dynamic memory.

Dynamic memory is memory allocated from the heap, while the program is running.

Previously, we have used a maximum size for an array, like 80 characters for a last name. This was reasonable when we could define a clear upper bound for our input. In practice though, this is not always possible. Also, it can be very wasteful of memory.

When working with these arrays, we have to keep track of both the actual length of the data stored in the array, and the maximum length of the array.

This approach makes the length of the array "dynamic" (changeable at runtime), but the problem is that the amount of items cannot exceed the maximum. We call this the maximum length array.
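A maximum length array can be sketched as a structure pairing the storage with the length in use (illustrative only; these names are not from the course):

```c
#include <stdbool.h>

#define MAX_ITEMS 25 // illustrative upper bound

struct maxlen_array {
    int items[MAX_ITEMS]; // storage for at most MAX_ITEMS elements
    int len;              // number of elements actually in use
};

// add an item; produce false if the array is already full
bool maxlen_add(struct maxlen_array *a, int item) {
    if (a->len == MAX_ITEMS) return false;
    a->items[a->len] = item;
    a->len++;
    return true;
}
```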

Also, the exit(EXIT_CODE) function, defined in stdlib.h, exits the program immediately. stdlib.h also defines EXIT_SUCCESS (0) and EXIT_FAILURE (1) for convenience.

Heap memory

The heap is used as a "pool" or "pile" of memory. Memory can be dynamically borrowed and returned from the heap.

Borrowing memory from the heap is known as allocation. Returning memory to the heap is known as deallocation.

Allocating too much memory - when the heap is exhausted - causes allocation to fail.

The heap is located after the code, read-only data, and global data sections, and grows downwards. The stack is located at the end and grows upwards. This allows them to grow as much as possible before colliding.

The heap is also known as the memory pool or free store, in order to avoid confusion with an abstract data structure also known as the heap.

Likewise, there is an abstract data type known as the stack, but this is very conceptually similar to the stack in the memory model, so it is not that confusing.


void *malloc(size_t s) is a function that requests s bytes of memory from the heap. It can be found in stdlib.h.

It returns a pointer to a block of s bytes. If there isn't enough memory available, then the function returns NULL.

Good implementations of malloc are \(O(1)\). In this course, we will assume that malloc is always \(O(1)\).

Since malloc returns NULL on failure, it is highly recommended that we check for NULL after every malloc:

int *p = malloc(sizeof(int));
if (p == NULL)
    exit(EXIT_FAILURE); // handle the allocation failure

The values at the allocated memory are uninitialized and can contain anything. calloc works identically to malloc, except it initializes the memory to 0 first. calloc is \(O(n)\), where \(n\) is the amount of memory requested.

void * is known as a generic pointer. It is a pointer that can point to any type in memory - a pointer that does not have an associated type.

Since the result of malloc is a generic pointer, we can store any type of value in the heap.

size_t is an integer-like type that represents the size of something. The sizeof operator returns a size_t value. The placeholder for size_t in printf and similar is %zd.

Although the return type is a generic pointer, we can assign it to a typed pointer:

int *pi = malloc(sizeof(int));
struct posn *pp = malloc(sizeof(struct posn));

The typed pointer can then be used just like any other pointer. Note that the memory pointed to stays alive all the way until it is freed - it can be returned from functions or passed around as needed without worrying about going out of scope, unlike memory on the stack.
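For example, a function can safely return a pointer to heap memory, while returning a pointer to one of its own stack variables would be invalid (a sketch):

```c
#include <stdlib.h>

// BAD: the pointer would refer to a dead stack frame after returning
// int *make_int_bad(void) {
//     int i = 42;
//     return &i;
// }

// OK: heap memory stays alive until it is explicitly freed
int *make_int(int value) {
    int *p = malloc(sizeof(int));
    if (p == NULL) return NULL;
    *p = value;
    return p;
}
```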


After we are done with the memory we allocated, we should always free it again so it is available for future allocations. This is done using the void free(void *p) function, which returns the allocated memory at p back to the heap. It can be found in stdlib.h.

malloc remembers how much memory we actually allocated at the address, so we do not need to give it the amount of memory we originally allocated - just the address of the memory we allocated.

The pointer p must have been allocated by a previous malloc, and must be heap memory. After calling it, the memory pointed to by p is no longer safe to access or modify. This pointer is no longer safe to perform any operations on, but it may be returned by a future call to malloc.

In RunC, it is required that every malloc is paired with a corresponding free. Freeing a null pointer does nothing.

free(p) is \(O(1)\) in this course.

Common Errors

Some common mistakes associated with dynamic memory are:


Garbage Collection

In Racket, we did not have to worry about freeing unused memory because the language has a built in garbage collector, like many modern languages.

This is the component that automatically detects memory that is unused and returns it to the heap.

The disadvantage is that the garbage collector is relatively slow, and can affect performance negatively.

As a result, things like games and scientific computing use manual memory management, like in C. This is changing now due to threaded garbage collectors, which can run on separate processors.

Dynamic memory allocations and deallocations are considered side effects and must be documented in the postconditions of the function.

Dynamic Arrays

Since the length parameter of malloc is a variable, we can actually allocate a dynamic amount of memory:

// POST: allocates an array on heap (caller must free)
// produce a pointer to the array or NULL if not possible
int *create_array(int len) {
    assert(len > 0);
    int *a = malloc(sizeof(int) * len); // array size
    return a; // array pointer can be returned from functions

Then in the client code, we would be responsible for freeing the memory after using it:

int *a = create_array(25);
// do stuff with `a` here
free(a); // we have to free it afterwards

strdup(char s[]) is an unofficial but common function found in string.h, that duplicates a string. It does the same thing as the following:

// POST: caller must free
// returns NULL if there is not enough memory
char *my_strdup(const char s[]) {
    char *new = malloc(sizeof(char) * (strlen(s) + 1)); // we add 1 to account for the null terminator
    if (new == NULL) return NULL; // not enough memory
    strcpy(new, s); // copy the old string into the new memory
    return new;
}

This technique is useful if we don't know the length of the array we need before we start storing the elements.

We can resize our array by making a new array, copying the items from the old array to the new one, and then freeing the old array. This lets us make arrays bigger or smaller:

// resize_array(old, oldlen, newlen) changes the length
// of array old from oldlen to newlen
// PRE: old must be a malloc'd array of size oldlen
// POST: returns new array or NULL if out of memory
// frees the old array, caller must free new array
// if larger, new elements are uninitialized
// TIME: O(n), where n is min(oldlen,newlen)
int *resize_array(int *old, int oldlen, int newlen) {
    int *new = malloc(sizeof(int) * newlen);
    if (new == NULL) return NULL; // out of memory; old is left intact
    int copylen = newlen < oldlen ? newlen : oldlen;
    for (int i = 0; i < copylen; i++)
        new[i] = old[i];
    free(old); // the POST says we free the old array
    return new;
}

void *realloc(void *ptr, size_t s) is a function designed to make it easier to resize arrays. It allows a block of memory allocated previously by malloc to be made bigger or smaller, preserving the contents of the memory. New space is uninitialized.

We assume that the running time of realloc is \(O(s)\), where \(s\) is s. realloc(NULL, s) is the same as malloc(s), and realloc(ptr, 0) is the same as free(ptr) (though it returns NULL in this case).

Although we can now use realloc to implement resize_array, in practice most people just directly use realloc.

Note that the original pointer passed to realloc should no longer be used. It is common to overwrite the original pointer, like my_array = realloc(my_array, sizeof(int) * length), but in practice this could cause a memory leak due to what happens when there is an error.

If realloc fails to allocate enough memory, then it returns NULL and does not free the original memory block.
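A common way to avoid that leak is to store the result of realloc in a temporary pointer first, and only overwrite the original on success (a sketch; grow_array is a hypothetical helper):

```c
#include <stdbool.h>
#include <stdlib.h>

// grow an int array to new_len elements; produce true on success,
// leaving *arr untouched (and still valid) on failure
bool grow_array(int **arr, int new_len) {
    int *tmp = realloc(*arr, sizeof(int) * new_len);
    if (tmp == NULL) return false; // *arr is still valid and must still be freed
    *arr = tmp;
    return true;
}
```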

Amortized Analysis

Amortized time complexity analysis focuses on analyzing the worst case of the average time complexity of a function over multiple calls.

Consider the following implementation of dynamically expanding arrays:

int *numbers = NULL;
int numbers_len = 0;

void append_number(int i) {
    numbers = realloc(numbers, sizeof(int) * (numbers_len + 1)); // increase the length by 1
    numbers[numbers_len] = i;
    numbers_len ++;
}

This works, but the function is \(O(m)\) where \(m\) is numbers_len, so using the function to add up \(n\) numbers is \(\sum_{i = 1}^n O(i) = O(n^2)\) - it performs rather poorly.

An interesting strategy is to allocate more memory than necessary, and only call realloc when the array is full.

Consider a doubling strategy. Every time we hit the limit of the array length, we simply double the length of the array.

struct dyn_array {
    int *data;
    int len;
    int max;
};

struct dyn_array *make_dyn_array(void) { // we usually return pointers to structures rather than structures themselves in order to avoid copying them to stack frames
    struct dyn_array *a = malloc(sizeof(struct dyn_array));
    a->data = malloc(sizeof(int));
    a->len = 0;
    a->max = 1; // we would usually use a size like 64 in practice to avoid calling `realloc` so much for smaller arrays
    return a;
}

void destroy_dyn_array(struct dyn_array *da) {
    free(da->data);
    free(da);
}

void append_number(struct dyn_array *da, int i) {
    if (da->len == da->max) {
        da->max *= 2; // double the maximum size of the array
        da->data = realloc(da->data, sizeof(int) * da->max);
    }
    da->data[da->len] = i;
    da->len ++;
}

Now the function is \(O(n)\) only when we are resizing, and \(O(1)\) otherwise. For example, for the first 33 calls to append_number, we copy the following number of elements: 0, 1, 2, 0, 4, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 16, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 32.

We can see that for \(n + 1\) calls, we have \(n + \frac{1}{2}n + \frac{1}{4}n + \ldots + 1 = n\sum_{i = 0}^{\lceil\log_2 n\rceil} \frac{1}{2^i} = n\frac{1 - \left(\frac{1}{2}\right)^{\lceil\log_2 n\rceil + 1}}{1 - \frac{1}{2}} = 2n\left(1 - \frac{1}{2}\frac{1}{2^{\lceil\log_2 n\rceil}}\right) = O\left(n - n\frac{1}{n}\right) = O(n)\) copying operations.

Since everything else in append_number takes \(O(1)\) for one call and \(O(n)\) for \(n\) calls, append_number has a \(O(n)\) worst case time complexity, averaged over \(n\) calls.

So the amortized runtime (averaged worst case) of append_number per call is \(O(n) / n = O(1)\).
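We can sanity-check this by counting the copies directly. The following simulation (illustrative, not course code) counts the element copies performed by the doubling strategy over n appends; the total stays below \(2n\):

```c
// simulate n appends with the doubling strategy and count how many
// element copies the reallocations perform in total
long count_copies(long n) {
    long max = 1, len = 0, copies = 0;
    for (long i = 0; i < n; i++) {
        if (len == max) {
            copies += len; // resizing copies every existing element
            max *= 2;
        }
        len++;
    }
    return copies;
}
```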

For shrinking arrays, we might shrink when the length of the data is less than or equal to a quarter of the maximum capacity. This would also have \(O(1)\) amortized time complexity.

This is actually what the vector data type in Racket uses internally.

Linked Data Structures

In computer science terms, Racket's lists are known as linked lists. A linked list is a set of nodes that store data and are connected by links.

This is usually implemented by having the nodes be structures that store a value, and a pointer to the next element. This is a singly linked list - it only has a single link to the next one. There is also the doubly linked list, which has a forward link and backwards link, so we can travel forward and backward in the list very easily.

We usually have something along the lines of the following:

struct llnode {
    int item;
    struct llnode *next;
};

This is a recursive data structure - it contains a pointer to another instance of itself. We can represent the end of a list by using NULL as the pointer for llnode.next.

We usually expose linked lists through a pointer to the first element. So if the list is empty, we would simply have a null pointer. Otherwise, it would be a pointer to an llnode instance.

So a linked list is a pointer to a node.


We can implement linked data structures using a recursive or iterative approach, or a hybrid approach combining both.

Functional Approach

The functional approach avoids mutation and side effects, and can be implemented something like the following:

int first(struct llnode *lst) {
    assert(lst != NULL);
    return lst->item;
}

struct llnode *rest(struct llnode *lst) {
    assert(lst != NULL);
    return lst->next;
}

struct llnode *empty(void) {
    return NULL;
}

bool is_empty(struct llnode *lst) {
    return lst == empty();
}

struct llnode *cons(int f, struct llnode *r) {
    struct llnode *new = malloc(sizeof(struct llnode));
    new->item = f;
    new->next = r;
    return new;
}

struct llnode *clone_list(struct llnode *lst) {
    if (is_empty(lst)) return empty();
    return cons(first(lst), clone_list(rest(lst)));
}

void free_list(struct llnode *lst) {
    while (lst != NULL) {
        struct llnode *backup = rest(lst);
        free(lst);
        lst = backup;
    }
}

This is very similar to how lists work in Racket.

However, it is important to avoid memory leaks by always keeping the result of cons so we can free the allocated memory later. All return values from cons must have a corresponding free. We can free an entire list at the same time by using free_list.

We can now write very functional code using this:

struct llnode *my_list = cons(10, cons(3, cons(5, cons(7, empty()))));

And even write programs that are much more functional:

int length(struct llnode *lst) {
    if (is_empty(lst))
        return 0;
    return 1 + length(rest(lst));
}

Alternatively, we could write it in a more iterative style:

int length(struct llnode *lst) {
    int length = 0;
    while (!is_empty(lst)) {
        length ++;
        lst = rest(lst);
    }
    return length;
}

We must be very careful to avoid memory leaks from list functions. The old list must always be freed when we produce a new list intended to replace it.

Lists in Racket are immutable. There is an mcons type that constructs a mutable version of the list that behaves identically to regular lists. In Scheme, all lists are simply mutable.

Consider the following:

struct llnode *a = cons(7, empty());
struct llnode *b = cons(5, a); // WARNING: `b` SHARES THE SAME NODES AS `a`
a = cons(3, a);

This is an interesting case because part of the list is new, and part of it is old. If we were to continue with the functional approach, we might replace the second line with struct llnode *b = cons(5, clone_list(a));. This makes a copy so both lists do not share any nodes.

In Racket, this was not noticeable because the lists are immutable and it makes no difference whether we use the same part of the same list for two different things. Also, the garbage collector makes sure we don't leak any memory.

However, free is a function that causes mutation - it causes a piece of memory to be made invalid. The above code has problems because we made use of assumptions that hold for immutable values, but were forced to use mutation to free up the memory.

Imperative Approach

In the imperative approach, we go through each element of the list and mutate it. We can implement things like list squaring as follows:

void sqr_list_inplace(struct llnode *lst) {
    while (lst != NULL) {
        lst->item *= lst->item;
        lst = lst->next;
    }
}

If we need to work on a copy of the list, we can use clone_list before calling the function, and pass it the copy.

We want to move away from the cons function, which is functional, to a more imperative approach. We then introduce add_to_front, which mutates a linked list to add a node to the front:

void add_to_front(int n, struct llnode **ptr_front) {
    struct llnode *new = malloc(sizeof(struct llnode));
    new->item = n;
    new->next = *ptr_front;
    *ptr_front = new;
}

This approach also makes it easy to remove nodes:

int remove_from_front(struct llnode **ptr_front) {
    struct llnode *front = *ptr_front;
    int retval = front->item;
    *ptr_front = front->next;
    free(front); // the node is no longer part of the list, so free it
    return retval;
}

We can even directly insert a value into a sorted list:

void insert(int n, struct llnode **ptr_front) {
    // find location for insertion
    struct llnode *cur = *ptr_front;
    struct llnode *prev = NULL;
    while ((cur != NULL) && (n > cur->item)) {
        prev = cur;
        cur = cur->next;
    }
    // set up the new node
    struct llnode *new = malloc(sizeof(struct llnode));
    new->item = n;
    new->next = cur;
    // update pointers
    if (prev == NULL) // insertion at the beginning
        *ptr_front = new;
    else // insertion at middle or end
        prev->next = new;
}

Note that it is still easy to accidentally produce double-free bugs:

struct llnode *a = NULL;
add_to_front(5, &a);
struct llnode *b = a;
add_to_front(7, &a);
add_to_front(9, &b);

The solution is simply to avoid having lists that share the same nodes.

Wrapper Strategy

Sometimes, we might wrap the whole list in a structure. This has several key advantages, discussed below: it gives a simpler interface, and it lets us cache information such as the list's length and a pointer to its last node.

For example:

struct llist {
    struct llnode *front;
    struct llnode *back;
    int length;
};

Using this strategy, a list is now a pointer to an llist structure. This works with both the functional and imperative approaches.

We would then implement our functions such that they accept wrappers. For example, constructors and destructors:

struct llist *create_list(void) {
    struct llist *lst = malloc(sizeof(struct llist));
    lst->front = NULL;
    lst->back = NULL;
    lst->length = 0;
    return lst;
}

void destroy_list(struct llist *lst) {
    while (lst->front != NULL) { // free each node, as with a plain linked list
        struct llnode *next = lst->front->next;
        free(lst->front);
        lst->front = next;
    }
    free(lst); // then free the wrapper itself
}

This allows us to have a simpler interface, because we aren't passing pointers to pointers:

struct llist *lst = create_list();
add_to_front(5, lst); // this implementation of `add_to_front` is different from the one described in the imperative approach


There are two ways to mutate a list structure: changing the structure of nodes (adding nodes, etc.), and changing the values (mutating the contents of nodes).

Notice that the structure contains a length field. If we start this off at 0, increment it whenever we add an element, and decrement it whenever we remove an element, then it is always the exact length of the list.

This allows us to obtain the length of the list in \(O(1)\) time, rather than the \(O(n)\) time it would usually take to traverse the list and count the nodes. In effect, we cache the length of the list in the list structure.

The back field allows us to directly reference the last node in the list in \(O(1)\) time. This allows us to implement things like list appending in \(O(1)\) time. In this case, the list removal function would need to update this back pointer when there are no items left, and the appending function would need to update this pointer to the new item as well:

void add_to_back(int n, struct llist *lst) {
    struct llnode *new = malloc(sizeof(struct llnode));
    new->item = n;
    new->next = NULL;
    if (lst->length == 0) { // empty list
        lst->front = new;
        lst->back = new;
    } else {
        lst->back->next = new; // make the last item point to the new last item
        lst->back = new; // update the new last item
    }
    lst->length++; // update the length
}

Using the structure makes it harder to point to the middle of a list accidentally. However, we now risk integrity/consistency issues - when the same information stored in different places becomes inconsistent.

Note that before, when we built a list in a loop, we ended up with a list with the contents in reverse order. Now that we have add_to_back, we can build linked lists in order.

For example, an implementation of sqr_list that duplicates the list:

struct llist *sqr_list(struct llist *lst) {
    struct llnode *node = lst->front;
    struct llist *new_list = create_list();
    while (node != NULL) {
        add_to_back(node->item * node->item, new_list);
        node = node->next;
    }
    return new_list;
}

Augmentation Strategy

The augmentation strategy stores additional information in each node. For example, a doubly linked list stores in each node a pointer to the previous node in addition to the next node. This allows insertion and removal at a known node in \(O(1)\) time.

Things like skip lists and trees are all simply linked lists that are augmented.


The functional approach works best when no memory management or mutation is required. It has the advantage of being simple, clear, and concise.

The imperative approach works best when memory management or mutation is required.

The wrapper strategy provides a cleaner interface and can store additional information per list.

The augmentation strategy can store additional information per node.


A tree is a linked data structure, conceptually the same as trees in CS135.

In this course, trees will only store keys. The values can be considered an augmentation. We define a node of a binary search tree as follows:

struct bstnode {
    int item;
    struct bstnode *left;
    struct bstnode *right;
};

A reference to a tree is a pointer to a node. An empty tree would be a null pointer.

The size of a tree is the number of nodes in it.

The ordering property of the binary search tree is part of its definition.

The depth of a node is the number of nodes from the root to the node, including the root node. The depth of the root node is 1, and the depth of its children is 2.

The height of a tree is the maximum depth of any node in the tree. The height of an empty tree is 0.

Consider an implementation of BST insertion using the functional approach:

struct bstnode *bst_insert(int i, struct bstnode *t) {
    if (t == NULL)
        return make_bstnode(i, NULL, NULL); // insert item
    else if (i == t->item) // duplicate item
        return t;
    else if (i < t->item) // insert into left child
        return make_bstnode(t->item, bst_insert(i, t->left), t->right);
    else // insert into right child
        return make_bstnode(t->item, t->left, bst_insert(i, t->right));
}

This has memory management issues, similar to linked list insertion using the functional approach, because some nodes are newly allocated while others are reused directly from the original tree. In other words, the produced tree will share some nodes with the original tree.

This means that if we free the original tree, parts of the new tree will be made invalid, which makes it impossible for us to safely free it.

As a result, we usually write it using the imperative approach:

void bst_insert(int i, struct bstnode **ptr_root) {
    struct bstnode *t = *ptr_root;
    if (t == NULL) // tree is empty
        *ptr_root = make_bstnode(i, NULL, NULL);
    else if (t->item == i)
        ; // duplicate item: do nothing
    else if (i < t->item)
        bst_insert(i, &(t->left));
    else
        bst_insert(i, &(t->right));
}


A balanced tree is one where the height is \(O(\log n)\), where \(n\) is the number of nodes; this is asymptotically the minimum possible height. A tree whose height is not \(O(\log n)\) is unbalanced, and in the worst case has a height of \(O(n)\).

For example, (node 1 empty (node 2 empty (node 3 empty (node 4 empty (node 5 empty (node 6 empty (node 7 empty empty))))))) is an unbalanced tree, while (node 4 (node 2 (node 1 empty empty) (node 3 empty empty)) (node 6 (node 5 empty empty) (node 7 empty empty))) is balanced.

Note that if we insert elements into the tree in sorted order, we get the worst case: a completely unbalanced tree, which is essentially a linked list.

The worst case time complexity of BST insertion, removal, and search is \(O(h)\), where \(h\) is the height of the tree. As a result, if the tree is balanced, then these operations are all \(O(\log n)\), and otherwise \(O(n)\).

There are trees that are self balancing, like AVL trees or red-black trees. These balance themselves automatically and have \(O(\log n)\) time complexity for all operations.

Many trees are also augmented with the number of nodes they contain in total. This allows us to implement a \(O(h)\) select(k) operation, which produces the kth smallest or largest element in the tree.


Trees and linked lists are all simply special cases of graphs. Graphs are more flexible, allowing things like reference cycles, directed edges (or undirected ones), and even labelled edges.

Graphs are, as in CS135, collections of nodes and the edges that link nodes together.


Abstract Data Types

All data structures are a combination of a few core data structures, primarily arrays and linked nodes (structures containing pointers to other structures).

We choose between data structures based on their characteristics, such as time complexity for the most common operations that will be performed, whether duplicate items are supported, or whether the data is ordered.

For example, a BST might not be a good choice for ordered data, because it would sort the data and destroy the order. For ordered data, we might choose a dynamic array or a linked list, based on which operations we need to be faster. If we needed really fast insertion but don't really care about lookup, then a linked list is a good choice.

For unordered data, good choices are a sorted dynamic array or a BST (often self balancing).

Arrays are good choices for data where we frequently access elements at specific indices - arrays have the best random access performance.

Linked lists are good choices for data where we frequently add/remove items to the front or back.

Self-balancing BSTs are good choices for data where we frequently add, remove, or search for items. They are a very well rounded data structure.

Sorted arrays are good choices for data where we frequently select or search for items.

An abstract data type (ADT) is a mathematical model for storing and accessing data through operations. ADTs are not specific to any language or implementation.

Implementations of ADTs are simply modules where data can only be accessed through its interface.

Opaque Structures

Opaque structures are those where the fields are not accessible to the client.

In C, we can make these using incomplete declarations:


We would use the incomplete declaration in the header file, and then the complete declaration in the implementation file.

This allows us to expose the structure to the user, but the user will not be able to access or set any of the fields. This improves our information hiding.

Type Aliases

In C, we can define new types from existing types, by aliasing a new name to an existing type. This is done using the typedef keyword:

typedef int Integer; // `Integer` is now the same thing as `int`
typedef int *IntegerPtr; // `IntegerPtr` is now the same thing as `int *`

This is extremely useful for cleaning up code and simplifying complex declarations like function pointer types.

Conventionally, we use CamelCase for these custom types.

Following the previous example:

typedef struct account *Account;

Account a = create_account();
// ...

Some people advise against using typedef to define pointer types, because hiding the fact that a value is a pointer makes it easy to forget to free it.

Collection ADTs

Collection ADTs are often what people mean when they say "ADT". They are simply ADTs that can store an arbitrary number of items.

A stack ADT implements LIFO (Last In First Out) storage. Common operations include pushing, popping, and peeking. Stacks are often implemented using linked lists where the top is the front of the list.

A queue ADT is similar to a stack, but implements FIFO (First In First Out) storage. Common operations include adding to the back, removing from the front, and peeking at the first item. Queues are often implemented using linked lists with the wrapper strategy to keep track of the back pointer.

A sequence ADT implements an ordered collection that can be randomly accessed or modified. Common operations include inserting, removing, and retrieving the item at a given index. Sequences are often implemented using BSTs.

A dictionary ADT (also known as a map, symbol table, or associative array) is similar to a sequence, but the index can be any kind of data, not just numbers. Dictionaries associate a unique key with a value. Common operations include lookup, insertion, and removal. Dictionaries are often implemented using hash tables or BSTs.

A set ADT implements something very similar to a mathematical set, where all values must be unique and order does not matter. Common operations include membership testing, adding items, set union, set difference, set intersection, and subset testing. Sets are often implemented using hash tables or BSTs.

The advantage of using ADTs is that the implementations are widely reusable and can have their component algorithms swapped out to suit the use case. For example, a dictionary might be implemented using a BST, but if we wanted a faster lookup operation we might swap out the internals for a hash table without affecting the functionality of the program.


Generic/Void Pointers

Hash tables implement a Dictionary-like interface. Hash tables are often implemented as dynamic arrays of linked lists. Each key is hashed into a number modulo the size of the dynamic array, and the corresponding value is stored at that index.

Our ADT implementations, to this point, only work with integers. An ADT should work with any data type. We can do this by creating a new implementation for every type, using typedef, or using generic pointers.

Making a new implementation for every type results in code duplication and is unsustainable.

When we use the typedef approach, the client defines a custom type and the module is written with that type in mind. For example, if the client declares typedef float StackEntry, we can implement the ADT using StackEntry and make it work with this specific type. However, this only allows one type of stack in the entire program. This is also problematic if the type is a pointer, since we don't know whether or not to free it.

The void pointer is basically a pointer to anything - we don't know or care what kind of thing is being pointed at.

We cannot dereference a void pointer directly, since the compiler does not know the type of the value it points at:

int i = 5;
void *p = &i;

We do have type casting, which lets us tell the compiler what type the pointer points at, without creating a new variable: int j = *(int *)p.

This is often used to avoid integer division: float half = (float)1 / 2. The type cast makes C interpret something of one type as another type.

We can simply store void pointers in our ADT and manipulate these pointers. However, who is responsible for freeing the memory? How do we compare items?

If the client is responsible for freeing memory, the client code becomes rather messy, or at least requires more effort to use.

If the ADT is responsible for freeing memory, there can be memory leaks if the entry itself points to additional memory, like when storing a linked list in a stack: if the ADT frees only the first node, the rest of the nodes are leaked. A solution to this is to have the client pass in a callback that frees a given entry.

In a similar fashion, we can have the client pass in a callback for comparing items, for ADTs that care about the order, like a Set.

Function callback declarations are rather complex, so we can use typedef to simplify them: typedef int (*CompFuncPtr)(const void *, const void *);.

The advantage of this approach is that we can store different types in the same ADT. For example, we can push an integer and a string onto the same stack, though we would need to consider memory management issues.

Now we can store these callbacks in the ADT itself, so that it is a self contained structure capable of all the standard operations without any additional parameters.


// push(s, i) puts item i on top of the stack
// NOTE: The caller should not free the item until it is popped
void push(Stack s, void *i);

// top(s) returns the top but does not pop it
// NOTE: The caller should not free the item until it is popped
void *top(Stack s);

// pop(s) removes the top item and returns it
// NOTE: The caller is responsible for freeing the item
void *pop(Stack s);

// destroy_Stack(s) destroys the stack
// PRE: The stack must be empty (all items popped)
void destroy_Stack(Stack s);

With this, we can now implement the Dictionary ADT:

// dictionary.c (partial implementation)
struct bstnode {
    void *key;
    void *value;
    struct bstnode *left;
    struct bstnode *right;
};

struct dictionary {
    struct bstnode *root;
    DictKeyCompare key_compare; // function pointer
};

Dictionary create_Dictionary(DictKeyCompare f) {
    struct dictionary *newdict = malloc(sizeof(struct dictionary));
    newdict->key_compare = f;
    newdict->root = NULL;
    // ...
    return newdict;
}

void *lookup(Dictionary d, void *k) {
    struct bstnode *t = d->root;
    while (t) {
        int result = d->key_compare(k, t->key);
        if (result < 0)
            t = t->left;
        else if (result > 0)
            t = t->right;
        else
            return t->value;
    }
    return NULL;
}

The qsort function, found in stdlib.h, sorts an array in place using a comparison function: void qsort(void *arr, size_t len, size_t size, int (*compare)(const void *, const void *));. We pass it the array, the number of elements, the size of each element, and the comparison callback.

This is a generic algorithm, because it is an algorithm that works on any type that can be compared.

The bsearch function, also found in stdlib.h, performs a binary search over a sorted array for a given key, producing NULL if not found or a pointer to the matching element otherwise: void *bsearch(const void *key, const void *arr, size_t len, size_t size, int (*compare)(const void *, const void *));.




Compilation is the process of converting source code into machine code. C owes much of its popularity to how easy it is to convert into machine code.

Languages such as Racket are interpreted - the source code is read and executed directly by the interpreter.

Languages such as C are compiled - the source code is converted directly into machine code, which can then be executed by the processor.

Many languages also allow bytecode compilation - compiling the source code into a lower level but not machine-specific bytecode that can be executed by a virtual machine, which is basically a much smaller and lighter interpreter.

In C, the three stages of compilation are preprocessing, compiling, and linking.

In the preprocessing stage, source files (.c) are read in and preprocessing directives such as #include are processed, to give the final source code ready for compilation (.c). The preprocessor is not specific to C, however, and can even be used with other languages.

In the compiling stage, the source code (.c) is read in and turned into object code (.o), while being checked for errors and warnings. Many forms of automatic code optimization happen in this stage.

Object code is almost complete machine code, but things like the addresses of functions and global variables are left as placeholders - the object file contains the object code for all the functions and the identifiers it provides or requires.

In the linking stage, the object files (.o) are combined into a proper executable (.out), with each global identifier and function given an address, while ensuring there is an entry point and that there are no duplicate identifiers.

Now this executable file can be run on the computer!

The most common compilers are GCC and Clang. Clang is used in more modern distributions because it has a superior design, being extensible, clean, and efficient. GCC is older and has had more time to implement things like advanced code optimization techniques.

The command gcc test.c, equivalent to gcc test.c -o a.out, will preprocess, compile, and link test.c in the working directory. gcc --help shows the available options.

The command gcc -c test.c will simply preprocess and compile test.c, and skip linking. This produces an object file, which is useful because if we distribute them with the header file, they are basically modules where the client can't access the source code.

The command gcc module1.o module2.o main.c -o program.out will link all three files together into one executable.

Command Line Parameters

The main function actually does accept parameters, but they are optional:

int main(int argc, char *argv[]) {
    // `argv` is an array of strings with the first element being the name of the executable, and the rest of the elements being the command line parameters
    // `argc` is the length of `argv`, and is always one or more since the first element is the name of the executable
    // the number of command line parameters is `argc - 1`
}


I/O data is often represented as a stream of data that flows from a source to a destination.

Sources or destinations can be programs, devices, files, or even other computers. This means we have a consistent interface for everything from writing to files to sending data to another program.

There is a standard source and destination known as stdin (standard input) and stdout (standard output), respectively. By default, stdin is connected to the keyboard, and stdout is connected to the output window.

Streams can be redirected to change their source or destination.

For example, myprogram.exe > output.txt redirects the stdout of myprogram.exe to output.txt, so all printf outputs will be written to that file. Additionally, myprogram.exe < input.txt will act as though we typed the contents of input.txt as input to myprogram.exe. We can do both at once as well: myprogram.exe < input.txt > output.txt.

We can also redirect the stdin and stdout to other programs. For example, if choose_a_file outputs a chosen file name, and cat prints the contents of the file name given as input, then choose_a_file | cat will print the contents of the chosen file.

Creative Commons License This work by Anthony Zhang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright 2013-2017 Anthony Zhang.