Simon Eskildsen: Building a Learning Machine in the Age of AI
I recently watched another podcast about what I love most: memory, learning, AI, and how these things are reshaping our brains. This time it was Simon Eskildsen, co-founder of Turbopuffer (a vector search startup that powers tools like Cursor and Notion), talking about how he’s turned himself into what I’d call a “learning machine.”
I first heard about Simon from a 2020 interview where he talked about reading 50-70 books a year, taking obsessive notes, and turning everything into flashcards. Four years later, with a startup and a newborn baby, his systems have condensed, but they’ve also gotten smarter thanks to LLMs.
The Flashcard Obsession That Actually Works
Simon has been using Anki flashcards for over a decade. Not just for vocabulary or technical documentation, but for everything - from his colleague’s kids’ names to whether you should roll your car windows down or use A/C at different speeds. He’s aiming for 10,000 cards total.
What got me was the philosophy behind it. Simon deliberately creates cards that “bring you a little bit of joy” and nostalgia. There’s a card about a waiter from a restaurant that doesn’t even exist anymore, just because the guy had a great radio voice. This isn’t about optimizing memory; it’s about creating touchstones to moments in your life.
He uses a dead-simple card template: question on one side, answer on the other, option to reverse it, and always a source with a date. “This was in 2017, I talked to this person who said this thing.” That metadata turns each flashcard into a time capsule.
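Just to make that template concrete, here’s a minimal sketch of the same card shape using the genanki Python library - my own illustration, not Simon’s actual setup - with a Source field carrying the date and provenance, and a second template for the reversed card.

```python
# Sketch of a Simon-style card: question, answer, optional reverse,
# and always a source with a date. Uses genanki (pip install genanki).
import genanki

MODEL = genanki.Model(
    1607392319,  # arbitrary but stable model ID
    'Q/A with Source',
    fields=[
        {'name': 'Question'},
        {'name': 'Answer'},
        {'name': 'Source'},  # e.g. "Conversation with X, 2017"
    ],
    templates=[
        {   # forward card: Question -> Answer
            'name': 'Forward',
            'qfmt': '{{Question}}',
            'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}<br><small>{{Source}}</small>',
        },
        {   # reversed card: Answer -> Question
            'name': 'Reverse',
            'qfmt': '{{Answer}}',
            'afmt': '{{FrontSide}}<hr id="answer">{{Question}}<br><small>{{Source}}</small>',
        },
    ],
)

deck = genanki.Deck(2059400110, 'Life')
deck.add_note(genanki.Note(
    model=MODEL,
    fields=[
        'Windows down or A/C at speed?',
        'Roughly: windows at city speeds, A/C at highway speeds (illustrative answer)',
        'Podcast note, 2024',
    ],
))
genanki.Package(deck).write_to_file('life.apkg')  # import the .apkg into Anki
```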
The Startup as Ultimate Learning Machine
Want to force yourself to learn fast? Simon’s advice: start a company.
He co-founded Turbopuffer in 2023, and he says nothing challenges your breadth and skills more than building something from zero. His reading dropped from 50-70 books a year to maybe a dozen. He stopped writing extensive notes. But the learning accelerated because he had to - legal documents, accounting terms, technical infrastructure at scale, customer conversations. The stakes turned every gap in knowledge into immediate homework.
Exactly right. When you’re building something real, you can’t afford to be theoretical. Every conversation with a lawyer or accountant becomes a mini-crash course because you need to understand just enough to make the right decision.
How LLMs Changed Everything About Learning
Here’s where it gets really interesting. Simon sees LLMs as “an average of the internet” - not superintelligent, but incredibly useful for making associations and jumping into unfamiliar domains.
Google works when you know what you’re looking for. But what about when you’re exploring, when you need associations? That’s where LLMs shine. Simon asks things like: “Hey, I think this can be done like this, I don’t know much about this area, can you riff on this with me?” The model places your question in latent space, finds related concepts, and pumps them back to you.
A perfect example: He needed to build a retaining wall at his cabin in rural Quebec. Legislation in French, no expertise, didn’t want to read 100 pages of regulations. He talked it through with ChatGPT, which not only helped him understand the requirements but suggested a gabion-style retaining wall (grid with rocks) that his contractor hadn’t even mentioned. Problem solved for $50 instead of $1000+.
He converted an old freezer into a fridge using a homebrewing temperature controller—again, something he’d never have thought of without asking an LLM. For physio exercises (tennis elbow, tight shoulders from desk work), he followed ChatGPT’s suggestions for two weeks and his chronic problems improved.
The Tools That Actually Stuck
Most of Simon’s LLM interaction happens through Raycast - a Spotlight replacement on steroids. Command-space, type your question, tab, and you get an answer from GPT-4 instantly. No opening browsers, no separate apps. 80-90% of his LLM use flows through this single interface.
But the real power comes from Raycast’s “AI Commands” - basically pre-set prompts with detailed instructions (I’ve sketched the pattern in code after the list). Simon has commands for:
Recipe: Generates recipes in a specific condensed format that respects dietary restrictions (his wife is sensitive to fructans, so it automatically suggests substitutes). No chef’s life story, just ingredients and steps.
Define: This one is brilliant for non-native speakers like him (and me). You give it a word, and it returns six example sentences - historically interesting ones, using well-known figures from physics, computer science, geography. It also provides related words, synonyms, and sometimes an image. When he looked up “lambent” in the demo, it gave sentences about the Golden Gate Bridge’s glow and Isaac Newton’s candle experiments. Way better than a dictionary definition.
Friendlier: Adds warmth and emojis to text because, as he says, “as a Northern European, I sometimes write too directly.”
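None of this is really Raycast-specific: an AI Command is just a named prompt template wrapped around whatever you type or select. Here’s a rough sketch of a Define-style command in Python with the openai client - the prompt wording, model choice, and output format are my guesses at the pattern, not Simon’s actual command.

```python
# Sketch of a "Define"-style preset command: a fixed prompt template
# plus one word of user input. Uses the openai package (pip install openai)
# and assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

DEFINE_PROMPT = """You are a concise dictionary for a non-native English speaker.
For the word "{word}":
1. Give a one-line definition.
2. Write six example sentences, each referencing a well-known figure or place
   from physics, computer science, or geography.
3. List a few related words and synonyms."""

def define(word: str) -> str:
    """Run the preset prompt for a single word and return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works; this choice is an assumption
        messages=[{"role": "user", "content": DEFINE_PROMPT.format(word=word)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(define("lambent"))
```

The point isn’t the code; it’s that a one-keystroke command with a carefully written prompt beats retyping the same instructions into a chat window every time.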
He subscribes to everything - ChatGPT, Claude, Perplexity, and jumps between them. Part of “being in AI” is spending $100/month on these subscriptions and getting inspired. I do the same thing.
Notion AI for Contextual Writing
Recently Simon started using Notion AI heavily for writing and thinking through problems. Unlike ChatGPT where you’re starting from scratch, Notion AI pulls in context from your entire workspace - past notes, related documents, conversations.
He’ll write in his journal about a difficult conversation and ask: “Hey, I was discussing this with someone and I feel like I didn’t represent myself well - give me feedback.” The AI understands not just that entry but related notes and discussions from weeks ago. That contextual intelligence makes feedback much more useful.
This is exactly where we’re heading: tools that don’t just respond to what you type but understand your entire knowledge graph and can surface relevant connections automatically.
The Future: Language Learning
For his daughter, Simon is thinking about language acquisition. She needs to speak Danish (his mother tongue), but they live in Canada with almost no Danish speakers around. What if she could have conversations with an AI tutor in Danish? Or Mandarin on Tuesdays, Thai on Thursdays? Kids’ brains are primed for language learning before age 10, but they need exposure to those sounds. LLMs could democratize multilingual fluency.
He’s also thinking about AR/VR - not for games but for learning. If you’re learning the word “eigengrau” (the dark gray you see when you close your eyes), imagine seeing it visualized in 3D space while the AI explains neuroscience. Visual stimuli + context = much stronger memory.
Other Tools in the Stack
Quick hits on what else Simon uses:
Superwhisper: Voice-to-text transcription for journaling (still experimental, a bit too slow)
Superhuman: Email client with AI-assisted replies
Readwise Reader: Read-it-later tool with AI for looking up definitions and asking questions about content (I use this too and love it)
Supermaven: AI code completion
Cursor: Best AI code editor (uses Turbopuffer under the hood), though Simon still can’t give up Neovim’s speed
Notice a pattern? Lots of “Super” products in his toolkit.
What This Means for All of Us
Simon’s approach isn’t about replacing human intelligence with AI, it’s about removing friction from learning. Google required you to know what you were looking for. LLMs let you explore adjacent possibilities. They don’t make experts 10x faster (yet), but they make novices dramatically more capable by letting them converse with expertise instantly.
Think about what reading did to human brains: it literally rewired our visual cortex for pattern recognition and analytical thinking. LLMs will do something similar. Kids growing up with instant answers to every question, with AI tutors that never get tired, and with tools that help them visualize complex concepts will think differently than we do. Not worse, not better. Different.
And that’s not scary. That’s exciting.
As someone who constantly translates Russian content to English and works across multiple knowledge domains, I see Simon’s systems as a model: combine human curiosity with AI’s associative power, create lightweight rituals that stick, and never stop asking questions.

