Efficiency of algorithms
When we say one algorithm is "more efficient" than another, we usually mean it does less work — fewer comparisons, fewer instructions, fewer trips around a loop — for the same input. GCSE doesn't formalise efficiency with Big-O notation (you'll meet that at A-Level), but it does expect you to compare two algorithms by counting steps or by reasoning about how the work scales with the size of the input.
What "efficiency" means in GCSE terms
- Time efficiency — how many basic steps are executed.
- Space efficiency — how much memory the algorithm uses.
Most exam questions are about time. The two key questions to ask yourself:
- How many steps in the worst case for an input of size $n$?
- What happens to that count when $n$ doubles?
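Both questions can be checked by instrumenting code with a step counter. Here is a minimal Python sketch of the habit (the summing loop is just a stand-in algorithm; the counting idea is what matters):

```python
# Count the basic steps an algorithm performs, then double the
# input size and count again to see how the work scales.

def count_steps(items):
    steps = 0
    total = 0
    for item in items:    # one pass over the input
        total += item     # one basic step per item
        steps += 1
    return steps

for n in (1000, 2000):
    print(n, count_steps(list(range(n))))
# prints: 1000 1000, then 2000 2000 -- doubling n doubles the work
```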
Linear vs binary search example
Searching a list of $n$ items:
| Algorithm | Worst-case comparisons | Doubling $n$ does what? |
|---|---|---|
| Linear search | $n$ | doubles the work |
| Binary search | $\lceil \log_2 n \rceil$ | adds 1 more step |
For $n = 1024$: linear takes 1024, binary takes 10. For $n = 2048$: linear takes 2048, binary takes 11. The growth rates are very different.
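A quick way to verify these counts is to give both searches a comparison counter. A Python sketch follows; these are illustrative implementations, and the binary-search count can come out one higher than the table's $\lceil \log_2 n \rceil$ depending on how the comparison in each halving is counted:

```python
# Worst case for both searches: the target is not in the list.

def linear_comparisons(items, target):
    count = 0
    for item in items:
        count += 1                 # one comparison per item
        if item == target:
            break
    return count

def binary_comparisons(items, target):
    count = 0
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        count += 1                 # one comparison per halving
        if items[mid] == target:
            break
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

for n in (1024, 2048):
    data = list(range(n))          # binary search needs a sorted list
    print(n, linear_comparisons(data, -1), binary_comparisons(data, -1))
# linear: 1024 then 2048; binary: 11 then 12 (one halving per step)
```

Note how doubling $n$ doubles the linear count but adds exactly one to the binary count.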
Sorting example
Sorting a list of $n$ items:
| Algorithm | Worst-case comparisons |
|---|---|
| Bubble sort | up to $\tfrac{n(n-1)}{2}$ |
| Merge sort | $\approx n \log_2 n$ |
For $n = 1000$: bubble makes ≈ 500,000 comparisons (exactly 499,500), merge ≈ 10,000, so merge is about 50× quicker. As $n$ grows, the gap widens.
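The same counter trick works for sorting. Here is a Python sketch counting comparisons in a straightforward bubble sort and merge sort; these are illustrative implementations, and the merge-sort count varies with the input order, so a shuffled list is used to get a typical figure:

```python
import random

def bubble_comparisons(items):
    items = items[:]
    count = 0
    for i in range(len(items) - 1, 0, -1):
        for j in range(i):
            count += 1                      # one comparison per pair
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return count

def merge_comparisons(items):
    count = 0
    def sort(seq):
        nonlocal count
        if len(seq) <= 1:
            return seq
        mid = len(seq) // 2
        left, right = sort(seq[:mid]), sort(seq[mid:])
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            count += 1                      # one comparison per merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]
    sort(items[:])
    return count

data = list(range(1000, 0, -1))             # reverse order: bubble's worst case
print(bubble_comparisons(data))             # 499500 = n(n-1)/2

shuffled = list(range(1000))
random.shuffle(shuffled)
print(merge_comparisons(shuffled))          # typically ~8,700: same order as n*log2(n) ~ 10,000
```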
Why we count comparisons
Comparisons are the expensive part of search and sort algorithms because the work in each pass is dominated by the test (e.g. `IF a > b`). Counting comparisons gives a fair basis for comparing algorithms, even when they are written in different programming languages.
Constants don't matter as much
Doubling the speed of a linear-time algorithm is fine for small inputs but doesn't help for large ones — you'd still be limited by the linear growth. Switching to a logarithmic algorithm wins as $n$ grows. GCSE marks the qualitative comparison, not the constants.
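A quick numeric check of this point, assuming a linear algorithm sped up by a constant factor of 2 against a logarithmic one:

```python
import math

# n/2 steps (a 2x-faster linear algorithm) vs ceil(log2 n) steps
for n in (10, 1_000, 1_000_000):
    print(n, n // 2, math.ceil(math.log2(n)))
# n=10: 5 vs 4 (close); n=1,000,000: 500,000 vs 20 (no contest)
```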
Trade-offs
Sometimes a more time-efficient algorithm uses more memory:
- Caching results you have already computed so they are not recomputed (memoisation): fast, but uses more RAM.
- Building an index: fast lookups, but extra storage.
Conversely, a memory-efficient algorithm may take longer.
You don't need to memorise specific trade-offs, but you should be able to discuss "what would change if the list got 10× larger" or "what if memory was tight".
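To make the memoisation trade-off concrete, here is a Python sketch using the Fibonacci sequence (a stock illustration, not something the GCSE spec requires): the cached version trades a dictionary's worth of memory for a large saving in time.

```python
cache = {}

def fib_memo(n):
    # stores every result it has computed: fast, but uses extra RAM
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

def fib_slow(n):
    # memory-light, but recomputes the same subproblems repeatedly
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

print(fib_memo(30))   # instant: each value is computed once
print(fib_slow(30))   # noticeably slower: roughly 2.7 million calls
```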
Worked example: counting steps
SET total TO 0
FOR i FROM 1 TO n
SET total TO total + 1
ENDFOR
This runs the loop body $n$ times, so the work scales linearly with $n$. Compare with:
SET total TO 0
WHILE n > 1
SET n TO n DIV 2
SET total TO total + 1
ENDWHILE
The loop halves $n$ each pass, so the number of iterations is about $\log_2 n$. For $n = 1024$ that's 10 iterations vs 1024 for the first algorithm, a huge difference.
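The same two loops, sketched in Python with counters so the figures can be checked (Python's `//` plays the role of the pseudocode's `DIV`):

```python
def linear_loop(n):
    total = 0
    for i in range(1, n + 1):   # body runs n times
        total += 1
    return total

def halving_loop(n):
    total = 0
    while n > 1:                # n halves each pass
        n //= 2
        total += 1
    return total

print(linear_loop(1024))    # 1024
print(halving_loop(1024))   # 10
```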
Common pitfalls
- Saying "shorter code = more efficient". Code length and efficiency are unrelated.
- Forgetting that binary search needs a sorted list — the cost of sorting first matters.
- Confusing best-case and worst-case (linear search finds an item at the start in 1 step, but worst-case is $n$).
- Comparing only one input size and generalising — always think about how the work scales.
How exams test efficiency
Common shapes:
- "Algorithm A makes $n$ comparisons, algorithm B makes $\log_2 n$. Compare for $n = 1{,}000{,}000$." (A: 10⁶, B: ~20.)
- "If the list doubles in size, what happens to the number of steps?" (linear: doubles; binary: +1.)
- "Algorithm A is shorter; algorithm B is faster. When is A preferred?" (Small inputs, simpler code, lower memory, faster to write.)