# physics.md — Founder Memo

## What we are building

physics.md is a physics-grounded reasoning layer for AI-native learners and builders.

Instead of treating physics like a school subject, it helps people understand how physical principles turn into real-world technologies, constraints, bottlenecks, and leverage.

More specifically, the product should feel like a **decoder for modern hardware and infrastructure**, starting with compute systems the user already cares about.

## The problem

Generic AI can explain physics concepts, but usually in a way that is:
- textbook-flattened,
- weak on history,
- weak on bottlenecks,
- weak on modern technological relevance,
- and weak on why anyone should care today.

That leaves the learner with explanations, but not a map.

The practical failure mode is simple: someone asks about GPUs, memory bandwidth, cooling, fiber, batteries, or photonics and gets a concept summary instead of a systems explanation.

## The wedge

The wedge user is:

> an AI-native software builder trying to make a training or inference system faster, cheaper, or easier to scale, and realizing that hardware details suddenly matter.

The public-facing wedge should stay **compute hardware first**, especially:
- GPUs,
- memory movement,
- interconnect,
- switching,
- power delivery,
- and heat.

This is the narrowest wedge because it starts from a system people already care about and immediately exposes physical bottlenecks that matter for AI.
It also gives the user a real pain they already feel, not a vague desire to “learn more physics.”

## The promise

**physics.md turns pre-quantum physics into a guide for understanding modern and frontier technology.**

A stronger product-facing promise is:

**Ask about a real system, then see what physical mechanism is doing the work, what bottleneck actually matters, and why that changes how you think about the technology.**

A sharper judgment about the public product matters too:

**The public product is not the markdown file by itself. It is the proof artifact that makes the markdown file feel necessary.**

## What makes it different

physics.md should help an AI answer every important topic through the same useful structure:
- what the system is trying to do,
- what physical principle is doing the work,
- what historical path led here,
- what bottleneck matters,
- why it matters today,
- where the value is.

The canonical first-use pattern should be:
1. start from a real system the learner already cares about,
2. trace the mechanism,
3. identify the bottleneck,
4. connect that bottleneck to present-day relevance,
5. leave the learner with clearer leverage.

Short version:

> real system → mechanism → bottleneck → why now → leverage
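The five-step pattern above can be sketched as a data shape an AI answer would have to fill in. This is a minimal illustration only; the field names and example values are assumptions, not part of the spec:

```python
# Sketch: the canonical decode pattern as a typed record.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decode:
    system: str      # the real system the learner starts from
    mechanism: str   # the physical principle doing the work
    bottleneck: str  # the constraint that actually matters
    why_now: str     # present-day relevance of that bottleneck
    leverage: str    # what the learner can now do differently

example = Decode(
    system="GPU training cluster",
    mechanism="massively parallel arithmetic fed by DRAM",
    bottleneck="memory bandwidth, not raw FLOPs",
    why_now="model working sets outgrow on-chip memory",
    leverage="optimize data movement before buying more compute",
)

print(example.bottleneck)
```

A structure like this is one way to keep every answer on the real system → mechanism → bottleneck → why now → leverage rail instead of drifting into textbook mode.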

## Best topic tracks

- compute physics
- sensing and control
- energy and materials
- waves and optics

## Canonical first proof sequence

The first proof artifact should be:
1. **Why GPUs are physics, not just linear algebra**

That first proof now exists as an initial draft page:
- `why-gpus-are-physics.html`

The next likely proof artifacts after that:
2. **Why memory movement is often harder than the compute**
3. **Why optics keeps showing up in modern systems**

That sequence keeps the front door concrete instead of abstract.
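The claim behind proof artifact 2, that moving the data is often harder than doing the math, can be shown with a back-of-envelope roofline check. The hardware numbers below are illustrative assumptions, not specs of any real chip:

```python
# Back-of-envelope check: when does memory movement dominate compute?
# The 100 TFLOP/s and 2 TB/s figures are illustrative assumptions.

def arithmetic_intensity_matmul(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for a dense (m x k) @ (k x n) matmul,
    assuming each matrix crosses the memory bus exactly once (best case)."""
    flops = 2 * m * n * k  # one multiply + one add per output contribution
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

# Hypothetical accelerator: 100 TFLOP/s of compute, 2 TB/s of bandwidth.
machine_balance = 100e12 / 2e12  # FLOPs per byte needed to stay compute-bound

ai_large = arithmetic_intensity_matmul(4096, 4096, 4096)  # big training matmul
ai_small = arithmetic_intensity_matmul(1, 4096, 4096)     # batch-1 inference

print(f"machine balance: {machine_balance:.0f} FLOPs/byte")
print(f"large matmul:    {ai_large:.0f} FLOPs/byte (compute-bound)")
print(f"small matmul:    {ai_small:.2f} FLOPs/byte (memory-bound)")
```

Under these assumed numbers the large matmul comfortably clears the machine balance while the batch-1 case sits around 1 FLOP per byte, which is the shape of the argument the second proof artifact needs to make concrete.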

## Why now

The technologies shaping the next decade are constrained by physical bottlenecks.
AI-native builders increasingly need to understand those bottlenecks, not just consume software abstractions.

As models, chips, energy systems, sensing stacks, and frontier hardware become more important, the people building on top of them need a better mental model of what the machine is actually doing.

## Risks

1. Too broad
2. Too educational
3. Too detached from actual technology
4. Too easy to replace with generic LLM explanations
5. Too memo-heavy before the proof artifacts exist

## What matters next

1. Improve the first activation artifact for the compute-first wedge, `why-gpus-are-physics.html`, until it reads as real conversion proof for AI-native software builders trying to diagnose slow, expensive, or hard-to-scale AI systems
2. Make the landing page behave like a front door into that proof, with the flagship proof path as the primary CTA and the spec/docs as secondary support
3. Treat the first public success condition as recognition plus belief shift: the user should quickly feel both “this is my bottleneck” and “AI hardware is a physical system shaped by memory movement, interconnect, power, and heat”
4. Tighten the flagship artifact further: it should start from the job AI compute is trying to do, make memory movement feel like the main fight, and make power and heat feel unavoidable rather than secondary
5. Build the second proof artifact, **Why memory movement is often harder than the compute**
6. Tighten physics.md so the AI is even less likely to answer in textbook mode
7. Work on first-user distribution only after the proof surface is stronger
