SOLID STATE PRESS
Big-O Notation cover
Buy on Amazon
US list price $2.99
Computer Science

Big-O Notation

A High School & College Primer on Algorithm Efficiency

Algorithm efficiency questions show up on AP Computer Science exams, college coding interviews, and intro CS courses — and most textbooks bury the concept in 80 pages of theory before you see a single useful example. This guide skips the filler.

**TLDR: Big-O Notation** covers exactly what a high school or early college student needs to understand algorithmic complexity and feel confident using it. You'll learn what Big-O actually measures (hint: it's about growth, not clock time), how to read and compare the most common growth rates from O(1) to O(n!), and how to derive Big-O directly from code — loops, nested loops, sequential blocks, and recursive functions. The guide also clears up the confusion between Big-O, Big-Theta, and Big-Omega, and explains best, average, and worst case in plain language.

If you're studying for an AP Computer Science exam, working through an intro data structures and algorithms course, or just trying to understand why one solution is faster than another, this primer gives you the tools without the padding. Every concept is explained in plain language, backed by worked examples with real numbers, and trimmed to what actually matters.

At 10–20 pages, it's designed to be read in one sitting — before a class, before a test, or whenever you decide it's time to finally get this straight.

Pick it up, read it once, and walk into your next exam ready.

What you'll learn
  • Explain what Big-O notation measures and what it ignores
  • Rank the common growth rates from O(1) to O(n!) and recognize them in code
  • Analyze loops, nested loops, and simple recursive functions to determine their Big-O
  • Distinguish Big-O (upper bound) from Big-Theta and Big-Omega, and best/average/worst case
  • Apply Big-O thinking to compare algorithms and predict which will scale
What's inside
  1. What Big-O Actually Measures
    Introduces Big-O as a way to describe how an algorithm's running time or memory grows as input size grows, and why we care about growth rate rather than exact time.
  2. The Big-O Zoo: Common Growth Rates
    Walks through O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!) with concrete examples and a comparison of how they scale.
  3. How to Analyze Code: Loops, Nesting, and Sequences
    Teaches the practical rules for deriving Big-O from code: count loop iterations, multiply for nested loops, add for sequential blocks, and drop constants and lower-order terms.
  4. Recursion and Divide-and-Conquer
    Shows how to analyze recursive functions using recurrence relations and the intuition behind why binary search is O(log n) and merge sort is O(n log n).
  5. Big-O vs. Big-Theta vs. Big-Omega, and Best/Average/Worst Case
    Clarifies that Big-O is technically an upper bound, distinguishes it from Theta and Omega, and explains how best, average, and worst case interact with these notations.
  6. Why It Matters: Choosing Algorithms in the Real World
    Connects Big-O to real decisions like choosing between data structures, why O(n^2) algorithms break at scale, and what to study next.
Published by Solid State Press · May 2026
TLDR STUDY GUIDES

Big-O Notation

A High School & College Primer on Algorithm Efficiency
Solid State Press

Who This Book Is For

If you are a high school student looking for an AP Computer Science A study guide, a college freshman grinding through an intro to data structures and algorithms course, or a self-taught coder who keeps seeing O(n log n) and nodding politely — this book is for you. It also works for parents and tutors who need to get up to speed before a study session.

This is Big-O notation explained for beginners who are serious about understanding it, not just memorizing it. In about 15 pages, you will learn how to analyze algorithm efficiency from scratch: reading and writing Big-O, Big-Theta, and Big-Omega; tracing loops and recursion; and comparing growth rates like O(n), O(n²), and O(log n). Every section targets the algorithm complexity concepts that show up in computer science exam prep and in technical interviews. No filler, no padding.

Read straight through in one sitting, work every example as you go, then test yourself with the problem set at the end. That is the fastest way to learn Big-O notation and actually retain it.

Contents

  1. What Big-O Actually Measures
  2. The Big-O Zoo: Common Growth Rates
  3. How to Analyze Code: Loops, Nesting, and Sequences
  4. Recursion and Divide-and-Conquer
  5. Big-O vs. Big-Theta vs. Big-Omega, and Best/Average/Worst Case
  6. Why It Matters: Choosing Algorithms in the Real World
Chapter 1

What Big-O Actually Measures

Suppose you write two programs that both sort a list of numbers. On your laptop, Program A finishes in 0.003 seconds. Program B finishes in 0.005 seconds. Program A wins, right?

Now run both programs on a list one million times longer. Program A takes three hours. Program B takes eight seconds.

This reversal is not a fluke — it happens because the two programs respond differently to growth in the input. Big-O notation is the tool we use to capture that difference precisely, before we ever run the programs.
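To see how such a reversal can happen, here is a small sketch. The cost models are invented for illustration (they are not the book's actual programs): Program A is modeled as a quadratic algorithm with very cheap steps, Program B as an n log n algorithm whose steps cost 200 times more.

```python
import math

# Hypothetical cost models, in seconds. The per-step constants (1e-9 and
# 2e-7) are made up for illustration.
def cost_a(n):
    # quadratic algorithm, cheap steps
    return n * n * 1e-9

def cost_b(n):
    # n log n algorithm, expensive steps
    return n * math.log2(n) * 2e-7

small, big = 2_000, 2_000_000
print(cost_a(small), cost_b(small))  # A is faster on the small input
print(cost_a(big), cost_b(big))      # B is far faster on the large input
```

On the small input the cheap steps win; on the large input the n² term overwhelms any per-step advantage. That flip is exactly what Big-O predicts without running anything.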

Big-O notation describes how the resource usage of an algorithm — a step-by-step procedure for solving a problem — scales as the input size grows. Input size is almost always written as $n$: the number of elements in a list, the number of nodes in a graph, the number of characters in a string. The resource we measure is usually time (how many steps the algorithm takes), though it can also be memory.

The key word is scales. We are not asking "how long does this take on my machine right now?" We are asking "if $n$ doubles, what happens to the cost?"
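The "what happens when n doubles" question can be answered directly by counting abstract steps instead of measuring clock time. A sketch (the step-count functions are stand-ins, not code from the book):

```python
import math

def steps_log(n):
    # e.g., binary search: one step per halving of the input
    return math.log2(n)

def steps_linear(n):
    # e.g., a single scan: one step per element
    return n

def steps_quadratic(n):
    # e.g., comparing every pair: n steps for each of n elements
    return n * n

for n in (1_000, 2_000, 4_000):
    print(f"n={n}: log={steps_log(n):.1f}  "
          f"linear={steps_linear(n)}  quadratic={steps_quadratic(n)}")

# Doubling n adds just 1 to the log count, doubles the linear count,
# and quadruples the quadratic count.
```

Those three responses to doubling — barely grows, doubles, quadruples — are the signatures of O(log n), O(n), and O(n²).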

Why not just measure exact time?

Exact running time depends on things that have nothing to do with the algorithm itself: how fast your CPU is, which programming language you used, whether your data happens to be cached in memory. Two programmers implementing the same algorithm on different hardware will measure different times, but their programs will have the same Big-O because the shape of their growth is identical.

This is the idea behind asymptotic analysis — we zoom out and look at behavior as $n$ gets large, stripping away machine-specific details. The word "asymptotic" just means "what happens as we approach a limit" (in this case, the limit of very large $n$).

What we keep is the growth rate: the mathematical relationship between $n$ and the number of steps. What we deliberately throw away are constant factors — multipliers that shift the cost up or down by a fixed amount but do not change the shape.
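To see why constant factors don't change the shape, compare two linear scans where one does five times as much work per element (an invented example, not from the book):

```python
def scan_cheap(n):
    steps = 0
    for _ in range(n):
        steps += 1   # 1 unit of work per element
    return steps

def scan_costly(n):
    steps = 0
    for _ in range(n):
        steps += 5   # 5 units of work per element
    return steps

# The costly scan is always exactly 5x the cheap one, but doubling n
# doubles BOTH counts. The constant 5 shifts the cost up; it does not
# change the growth rate, so both scans are O(n).
```

One implementation might be in C and the other in Python, or one might do a little extra bookkeeping per element; Big-O deliberately ignores that multiplier and keeps only the linear shape.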

Keep reading

You've read the first half of Chapter 1. The complete book covers six chapters in roughly fifteen pages — readable in one sitting.

Continue reading on Amazon