Built for YouTube

The study layer for YouTube.

Paste a link. Get a synced transcript, a TL;DR you can edit, and flashcards that remember what you watched, so the video doesn't just disappear into your history.

What happens after you paste a link

Anatomy of a Loop

Link: Any YouTube URL.

01 · Transcript: Synced, searchable, click to seek.

02 · Chat: Grounded answers with [mm:ss] citations.

03 · Notes: TL;DR + sections you can edit & export.

04 · Flashcards: Anki-ready Q&A from what you watched.

One video → four ways to study it.

01 · Transcript

A transcript that follows the video

Sentence-grouped, searchable, time-synced. The active line highlights as the video plays. Click any sentence to jump.

transcript
0:12 Welcome. Today we'll talk about how attention works in transformers.
0:34 Each token attends to every other token, which is what makes the model context-aware.
1:02 The query, key, and value vectors are linear projections of the input embedding.
1:28 Now, multi-head attention lets the model look at different relationships in parallel.
chat
What's the key idea behind multi-head attention?
The model runs attention several times in parallel, each focusing on different relationships between tokens, then concatenates the results [1:28].

02 · Chat

An AI that watched it for you

Every answer is grounded in the actual transcript. Citations link back to the exact moment in the player.

Frequently asked