Neural Networks
Large Language Models
Open-weight models you can download, use, and even modify with your own data
A snapshot of who's shipping what in open-weights LLMs right now: parameter counts, architecture, context length, license, and whether the weights are really open or just open with friction.

How Much Should You Trust What an LLM Tells You?
I fine-tuned a model on 50 wrong answers to see what it does with a question it hasn't seen. The result changed how I think about trusting AI.
LLMs Explained, Part 4: The Platform Era
From GPT-4 to today's reasoning models, agents, and coding tools. How LLMs went from a single chatbot to the platform every product is built on.
LLMs Explained, Part 3: How LLMs Got Useful
From GPT-3 to ChatGPT. The story of how scale, instruction tuning, and human feedback turned a text-completion engine into the assistant a hundred million people started using overnight.
Compute
GPUs Explained, Part 3: The Hardware Behind Modern AI
What an H100, H200, B200, and a GB200 NVL72 rack actually are. Plus HBM, NVLink, training vs inference, and the alternatives like TPUs and AMD MI300X.
GPUs Explained, Part 2: Inside a GPU
What is actually on a GPU die: streaming multiprocessors, CUDA cores, threads, warps, memory hierarchy, and tensor cores, in plain English.
GPUs Explained, Part 1: Why AI needs GPUs
A simple story of why every modern AI service runs on GPUs and not CPUs. From mainframes in the 1950s to the H100, written for software engineers.
