How to use this
The skill tree maps out the conceptual dependencies on the path to understanding large language models. Each node represents a topic; nodes unlock as their prerequisites are completed, so you can track your own progress through the curriculum.
- Click any node to see a summary and the key sub-topics
- Mark Complete to unlock dependent nodes
- Green glow = available to learn now
The five tiers
Tier 1 — Foundations are always unlocked: linear algebra, calculus, probability & stats, and Python/NumPy. Everything else traces back to these.
Tier 2 — Core ML covers the classical toolkit: supervised and unsupervised learning, optimisation, and model evaluation. If you’ve taken an introductory ML course, most of this will be review.
Tier 3 — Deep Learning starts with neural networks, then branches into CNNs, RNNs/Seq2Seq, and practical frameworks (PyTorch/JAX). RNNs are worth spending real time on — understanding why they struggle with long sequences directly motivates the attention mechanism.
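The long-sequence problem can be seen in a toy scalar example (an illustration, not a real RNN): in a linear recurrence h_t = w · h_{t-1}, the gradient of h_T with respect to h_0 is w**T, which vanishes or explodes as T grows.

```python
# Toy linear RNN: h_t = w * h_{t-1}.
# The gradient of h_T w.r.t. h_0 is w**T, so it shrinks toward zero
# when |w| < 1 and blows up when |w| > 1 as the sequence gets longer.
for w in (0.9, 1.1):
    for T in (10, 100):
        print(f"w={w}, T={T}: gradient factor = {w ** T:.3g}")
```

Attention sidesteps this by connecting every position to every other position directly, instead of routing information through T multiplicative steps.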
Tier 4 — Transformers is the conceptual heart. The attention mechanism comes first, then the full transformer architecture.
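The core idea is compact enough to sketch in the NumPy you learn in Tier 1. Below is a minimal single-head scaled dot-product attention (shapes and the random inputs are illustrative, not from any particular model):

```python
import numpy as np

def attention(Q, K, V):
    # Similarity of each query to each key, scaled by sqrt(key dimension)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Row-wise softmax over keys (subtract the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 positions, 8-dim queries
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because the softmax weights in each row sum to 1, every output is a convex combination of the value vectors: that is the "soft lookup" intuition behind attention.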
Tier 5 — LLMs builds on transformers to cover tokenisation, pretraining & scaling laws, fine-tuning & RLHF, and inference & prompting.
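To make the tokenisation topic concrete, here is a toy byte-pair-encoding (BPE) merge loop, the idea behind most LLM tokenisers. This is a hypothetical minimal sketch, not any real tokeniser's implementation:

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count adjacent symbol pairs across the token sequence
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge(tokens, pair):
    # Replace every occurrence of the pair with a single merged symbol
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Start from characters and repeatedly merge the most frequent pair
tokens = list("low lower lowest")
for _ in range(4):
    tokens = merge(tokens, most_frequent_pair(tokens))
print(tokens)
```

After a few merges, frequent substrings like "low" become single tokens, which is how BPE trades vocabulary size against sequence length.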