Will Turso Be The Better SQLite? (with Glauber Costa)
SQLite is embedded everywhere - phones, browsers, IoT devices. It's reliable, battle-tested, and feature-rich. But what if you want concurrent writes? Or change data capture (CDC) for streaming changes? Or vector indexes for AI workloads? The SQLite codebase isn't accepting new contributors, and the test suite that makes it so reliable is proprietary. So how do you evolve an embedded database that's effectively frozen?
Glauber Costa spent a decade contributing to the Linux kernel at Red Hat, then helped build Scylla, a high-performance rewrite of Cassandra. Now he's applying those lessons to SQLite. After initially forking SQLite (which produced a working business but failed to attract contributors), his team is taking the bolder path: a complete rewrite in Rust called Turso. The project already has features SQLite lacks - vector search, CDC, browser-native async operation - and is using deterministic simulation testing (inspired by TigerBeetle) to match SQLite's legendary reliability without access to its test suite.
The conversation covers why rewrites attract contributors where forks don't, how the Linux kernel maintains quality with thousands of contributors, why Pekka's "pet project" jumped from 32 to 64 contributors in a month, and what it takes to build concurrent writes into an embedded database from scratch.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Turso: https://turso.tech/
Turso GitHub: https://github.com/tursodatabase/turso
libSQL (SQLite fork): https://github.com/tursodatabase/libsql
SQLite: https://www.sqlite.org/
Rust: https://rust-lang.org/
ScyllaDB (Cassandra rewrite): https://www.scylladb.com/
Apache Cassandra: https://cassandra.apache.org/
DuckDB (analytical embedded database): https://duckdb.org/
MotherDuck (DuckDB cloud): https://motherduck.com/
dqlite (Canonical distributed SQLite): https://canonical.com/dqlite
TigerBeetle (deterministic simulation testing): https://tigerbeetle.com/
Redpanda (Kafka alternative): https://www.redpanda.com/
Linux Kernel: https://kernel.org/
Datadog: https://www.datadoghq.com/
Glauber Costa on X: https://x.com/glcst
Glauber Costa on GitHub: https://github.com/glommer
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--
0:00 Intro
3:16 Ten Years Contributing to the Linux Kernel
15:17 From Linux to Startups: OSv and Scylla
26:23 Lessons from Scylla: The Power of Ecosystem Compatibility
33:00 Why SQLite Needs More
37:41 Open Source But Not Open Contribution
48:04 Why a Rewrite Attracted Contributors When a Fork Didn't
57:22 How Deterministic Simulation Testing Works
1:06:17 70% of SQLite in Six Months
1:12:12 Features Beyond SQLite: Vector Search, CDC, and Browser Support
1:19:15 The Challenge of Adding Concurrent Writes
1:25:05 Building a Self-Sustaining Open Source Community
1:30:09 Where Does Turso Fit Against DuckDB?
1:41:00 Could Turso Compete with Postgres?
1:46:21 How Do You Avoid a Toxic Community Culture?
1:50:32 Outro
Can Google's ADK Replace LangChain and MCP? (with Christina Lin)
How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus just making one LLM request? And how do you debug, test, and deploy these things? The industry is clearly in exploration mode—we're seeing good ideas implemented badly and expensive mistakes made at scale. But Google needs to get this right more than most companies, because AI is both their biggest opportunity and an existential threat to their search-based business model.
Christina Lin from Google joins us to discuss Agent Development Kit (ADK), Google's open-source Python framework for building agentic pipelines. We dig into the fundamental question of when agent pipelines make sense versus traditional code, exploring concepts like separation of concerns for agents, tool calling versus MCP servers, Google's grounding feature for citation-backed responses, and agent memory management. Christina explains A2A (Agent-to-Agent), Google's protocol for distributed agent communication that could replace both LangChain and MCP. We also cover practical concerns like debugging agent workflows, evaluation strategies, and how to think about deploying agents to production.
If you're trying to figure out when AI belongs in your processing pipeline, how to structure agent systems, or whether frameworks like ADK solve real problems versus creating new complexity, this episode breaks down Google's approach to making agentic systems practical for production use.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Google Agent Development Kit Announcement: https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/
ADK Documentation: https://google.github.io/adk-docs/
Google Gemini: https://ai.google.dev/gemini-api
Google Vertex AI: https://cloud.google.com/vertex-ai
Google AI Studio: https://aistudio.google.com/
Google Grounding with Google Search: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview
Model Context Protocol (MCP): https://modelcontextprotocol.io/
Anthropic MCP Servers: https://github.com/modelcontextprotocol/servers
LangChain: https://www.langchain.com/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--
0:00 Intro
2:48 Working at Google on AI Innovation
6:00 Google's AI Leadership and Responsible AI
9:34 What Is an Agentic Pipeline?
13:00 Building Agent Toolkits with ADK
15:00 Understanding the Agent Architecture
19:31 How Agents Discover and Use Tools
23:48 Parameter Extraction and Tool Execution
27:00 Structured vs Natural Language Outputs
29:00 Using Grounding for Real-Time Data
32:00 Managing Token Costs and Context Limits
35:42 When Not to Use LLMs
37:00 The Challenge of Edge Cases with LLMs
40:00 Testing Agentic Systems
42:00 Defining Test Criteria for Agents
44:58 Running Test Suites and Evaluations
47:06 Deploying Agents Like Python Apps
50:33 Building Safety Guardrails for Agents
53:00 Authentication and Authorization for Agents
55:09 MCP vs A2A Protocols
58:00 Agent Discovery and Communication
1:04:14 Outro
Building Observable Systems with eBPF and Linux (with Mohammed Aboullaite)
How do you monitor distributed systems that span dozens of microservices, multiple languages, and different databases? The old approach of gathering logs from different machines and recompiling apps with profiling flags doesn't scale when you're running thousands of servers. You need a unified strategy that works everywhere, on every component, in every language—and that means tackling the problem from the kernel level up.
Mohammed Aboullaite is a backend engineer at Spotify, and he joins us to explore the latest in continuous profiling and observability using eBPF. We dive into how eBPF lets you programmatically peek into the Linux kernel without recompiling it, why companies like Google and Meta run profiling across their entire infrastructure, and how to manage the massive data volumes that continuous profiling generates. Mohammed walks through specific tools like Pyroscope, Pixie, and Parca, explains the security model of loading code into the kernel, and shares practical advice on overhead thresholds, storage strategies, and getting organizational buy-in for continuous profiling.
Whether you're debugging performance issues, optimizing for scale, or just want to see what your code is really doing in production, this episode covers everything from packet filters to cultural changes in service of getting a clear view of your software when it hits production.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
eBPF: https://ebpf.io/
Google-Wide Profiling Paper (2010): https://research.google.com/pubs/archive/36575.pdf
Google pprof: https://github.com/google/pprof
Continuous Profiling Tools:
Pyroscope (Grafana): https://grafana.com/oss/pyroscope/
Pixie (CNCF): https://px.dev/
Parca: https://www.parca.dev/
Datadog Continuous Profiler: https://www.datadoghq.com/product/code-profiling/
Supporting Technologies:
OpenTelemetry: https://opentelemetry.io/
Grafana: https://grafana.com/
New Relic: https://newrelic.com/
Envoy Proxy: https://www.envoyproxy.io/
Spring Cloud Sleuth: https://spring.io/projects/spring-cloud-sleuth
Mohammed Aboullaite:
LinkedIn: https://www.linkedin.com/in/aboullaite/
GitHub: https://github.com/aboullaite
Website: http://aboullaite.me
Twitter/X: https://twitter.com/laytoun
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
---
0:00 Intro
3:01 The Evolution of System Monitoring
9:26 The Challenge of Continuous Profiling
13:40 What Is eBPF?
19:20 The eBPF Verifier and Safety
28:38 eBPF vs Traditional Profiling Approaches
32:18 What's the Overhead of Continuous Profiling?
36:41 Profiling Tools: Parca and Pixie
40:08 Managing the Volume of Profiling Data
47:57 Flame Graphs and Visualization Tools
49:35 The Three Pillars of Observability
56:30 Distributed Tracing with Session IDs
1:03:00 Getting Buy-In for New Monitoring Tools
1:06:25 Which Tools Should You Choose?
1:10:24 Outro
Solving Git's Pain Points with Jujutsu (with Martin von Zweigbergk)
Git might be the most ubiquitous tool in software development, but that doesn't mean it's perfect. What if we could keep Git compatibility while fixing its most frustrating aspects—painful merges, scary rebases, being stuck in conflict states, and the confusing staging area?
This week we're joined by Martin von Zweigbergk, creator of Jujutsu (JJ), a Git-compatible version control system that takes a fundamentally different approach. Starting from a simple idea—automatically snapshotting your working copy—Martin has built a tool that reimagines how we interact with version control. We explore the clever algebra behind Jujutsu's conflict handling that lets you store conflicts as commits and move freely through your repository even when things are broken. We discuss why there's no staging area, how the operation log gives you powerful undo/redo capabilities, and why rebasing becomes trivially easy when you can edit any commit in your history and have changes automatically propagate forward.
Whether you're a Git power user frustrated by interactive rebases, someone who's lost work to a botched merge, or just curious about how version control could work differently, this conversation offers fresh perspectives on a tool we all take for granted. And if you're working with large monorepos or game development assets, Martin's vision for the future of Jujutsu might be exactly what you've been waiting for.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Jujutsu (JJ): https://github.com/martinvonz/jj
Jujutsu Documentation: https://martinvonz.github.io/jj/
Git: https://git-scm.com/
Mercurial: https://www.mercurial-scm.org/
Rust: https://www.rust-lang.org/
Watchman: https://facebook.github.io/watchman/
Google Piper: https://research.google/pubs/why-google-stores-billions-of-lines-of-code-in-a-single-repository/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
---
0:00 Intro
2:39 Why Create a New Version Control System?
10:42 Working Copy as a Commit & No Staging Area
20:29 Storing JJ Commits & Change IDs
30:00 Evolution Log & Storing Conflicts
40:07 Git Compatibility & Co-Located Mode
50:08 Target Users & Simpler Workflows
1:00:12 Copy Tracking & Large File Support
1:10:18 Outro
Getting New Tech Adopted (with Dov Katz)
Getting new technology adopted in a large organization can feel like pushing water uphill. The best tools in the world are useless if we're not allowed to use them, and as companies grow, their habits turn into inertia, then into "the way we've always done things." So how do you break through that resistance and get meaningful change to happen?
This week's guest is Dov Katz from Morgan Stanley, who specializes in exactly this challenge - driving developer productivity and getting new practices adopted across thousands of developers. We explore the art of organizational change from every angle: How do you get management buy-in? How do you build grassroots developer enthusiasm? When should you use deterministic tools like OpenRewrite versus AI-powered solutions? And what role does open source play in breaking down the walls between competing financial institutions?
Whether you're trying to modernize a legacy codebase, reduce technical debt, or just get your team to try that promising new tool you've discovered, this conversation offers practical strategies for navigating the complex dynamics of enterprise software development. Because sometimes the hardest part of our job isn't writing code - it's getting permission to write better code.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Morgan Stanley: https://www.morganstanley.com/
OpenRewrite: https://docs.openrewrite.org/
Spring Framework: https://spring.io/
Spring Integration: https://spring.io/projects/spring-integration
Apache Camel: https://camel.apache.org/
FINOS (FinTech Open Source Foundation): https://www.finos.org/
Linux Foundation: https://www.linuxfoundation.org/
Moderne (Code Remix conference organizers): https://www.moderne.io/
Code Remix Conference: https://www.moderne.io/events
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
---
0:00 Intro
2:21 Getting New Tech Adopted at Large Companies
6:23 Aligning Innovation with Management Strategy
10:00 Scaling Technical Debt Remediation
12:20 Building Developer Platform Stacks
14:08 The Automation Marketplace Approach
18:08 Automated Code Migrations and OpenRewrite
19:57 Internal Framework Upgrades at Scale
24:14 Making Technical Debt Visible
28:00 The Mandate Problem: Carrots vs Sticks
32:20 Measuring Developer Productivity
36:00 AI in the Enterprise: Security and Policy Challenges
40:00 Using AI for Code Modernization
44:10 Cultural Resistance to AI Tools
48:00 Building vs. Buying: When to Use External Solutions
50:02 Enterprise-Ready Software Requirements
53:40 SaaS Adoption in Regulated Industries
57:00 Selling to Enterprise: The Reality Check
59:30 Open Source as an Enterprise Strategy
1:02:00 The Trust Factor in Technology Adoption
1:04:17 Outro
From Unit Tests to Whole Universe Tests (with Will Wilson)
How confident are you when your test suite goes green? If you're honest, probably not 100% confident - because traditional testing only catches the problems we anticipate. Most bugs come from scenarios we never thought to test: unexpected interactions, timing issues, and edge cases we never imagined.
In this episode, Will Wilson from Antithesis takes us deep into the world of autonomous testing. They've built a deterministic hypervisor that can simulate entire distributed systems - complete with fake AWS services - and intelligently explore millions of possible states to find bugs before production. Think property-based testing, but for your entire infrastructure stack. The approach is so thorough they've even used it to find glitches in Super Mario Bros. (seriously).
We explore how deterministic simulation works at the hypervisor level, why traditional integration tests are fundamentally limited, and how you can write maintainable tests that actually find the bugs that matter. If you've ever wished you could test "what happens when everything that can go wrong does go wrong," this conversation shows you how that's finally becoming possible.
---
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Antithesis: https://antithesis.com/
Antithesis testing with Super Mario: https://antithesis.com/blog/sdtalk/
...and with Metroid: https://antithesis.com/blog/2025/metroid/
MongoDB: https://www.mongodb.com/
etcd (Linux Foundation): https://etcd.io/
Facebook Hermit: https://github.com/facebookexperimental/hermit
RR (Record-Replay Debugger): https://rr-project.org/
TSan (ThreadSanitizer): https://clang.llvm.org/docs/ThreadSanitizer.html
Toby Bell's Strange Loop Talk on JPL Testing: https://www.youtube.com/results?search_query=toby+bell+strange+loop+jpl
Andy Weir - Project Hail Mary: https://www.goodreads.com/book/show/54493401-project-hail-mary
Andy Weir - The Martian: https://www.goodreads.com/book/show/18007564-the-martian
Antithesis Blog (Nintendo Games Testing): https://antithesis.com/blog/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
---
0:00 Intro
3:12 The Problem with Traditional Testing
6:28 Property Testing & Antithesis Overview
9:57 Specifying Tests for Distributed Systems
15:00 Nintendo Games & Practical Examples
20:00 The Deterministic Hypervisor
28:45 Nintendo Emulation & Technical Details
30:00 Deterministic Replay & System Isolation
40:00 Advanced Techniques: Forking from Seeds
50:00 Coverage-Guided Testing & Smart Exploration
1:00:00 Real-World Applications & Client Stories
1:05:15 Deep Dive: Nintendo Games & Emulation
1:10:25 NASA, Space Systems & Future Applications
1:11:08 Outro
Building a Modern Cloud Platform (with Anurag Goel)
How would you build a Heroku-like platform from scratch? This week we're diving deep into the world of cloud platforms and infrastructure with Anurag Goel, founder and CEO of Render.
Starting from the seemingly simple task of hosting a web service, we quickly discover why building a production-ready platform is far more complex than it appears. Why is hosting a Postgres database so challenging? How do you handle millions of users asking for thousands of different features? And what's the secret to building infrastructure that developers actually want to use?
We explore the technical challenges of building enterprise-grade services—from implementing reliable backups and high availability to managing private networking and service discovery. Anurag shares insights on choosing between infrastructure-as-code and configuration, why they built Render in Go, and how they handle 100 billion requests per month.
Plus, we discuss the impact of AI on platform adoption: Are LLMs already influencing which platforms developers choose? Will hosting platforms need to actively support agentic workflows? And what does the future hold for automated debugging?
Whether you're curious about building your own platform, want to understand what really happens behind your cloud provider's dashboard, or just enjoy hearing war stories from the infrastructure trenches, this episode has something for you.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Render: https://render.com/
Render’s MCP Server (Early Access): https://render.com/docs/mcp-server
Pulumi: https://www.pulumi.com/
Victoria Metrics: https://victoriametrics.com
Loki (via Vector sink): https://vector.dev/docs/reference/configuration/sinks/loki/
Vector: https://vector.dev/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--
0:00 Intro
4:16 What is Render?
7:38 Building a Postgres Service Is Hard
13:20 How Do You Decide Which Features To Prioritize?
17:45 Defining Infrastructure In Code vs. Configuration
30:43 Feelings About Terraform
33:06 Dealing With The Myriad Demands Of The Userbase
39:05 How Do You Manage The User Interface?
42:13 How Do You Handle Internal Asynchronous Messaging?
50:59 Was Go The Right Choice For Building Render?
58:10 How Do You Handle Observability?
1:03:07 Is Billing A Separate Logging Concern?
1:04:10 How Are You Acquiring 150,000+ Users A Month?
1:07:42 Are You Trying To Influence LLMs To Pick Your Platform?
1:10:59 Are AI Companies Going To Enter Into Pay-To-Play for SaaS Companies?
1:13:37 Will Platforms Need To Actively Support LLM Users?
1:19:10 What's The Future Of Agentic Debugging?
1:22:55 Outro
This is why you don't do a full rewrite of your software...
The classic rewrite trap! 🪤 Paul Dix from InfluxDB explains why starting fresh often means losing years of hard-earned solutions to problems you forgot you even had.
#SoftwareDevelopment #Rewrite #TechDebt #DeveloperVoices #Programming #InfluxDB #CodingTips
Go vs Rust: What Go Does Better
Paul Dix (InfluxDB) argues where Go clearly beats Rust - compile times and learnability. Would you agree? #GoVsRust #Programming #SoftwareDev
Why InfluxDB Chose Go #programming #podcast #golang #influxdb #startupdecisions
Paul Dix, CTO of InfluxData, describes why version 1 was written in Go. Though they've since released a version written in Rust, Go was probably the right choice at the time. And shipping is *always* the right choice. 😁
InfluxDB: The Evolution of a Time Series Database (with Paul Dix)
How hard is it to write a good database engine? Hard enough that sometimes it takes several versions to get it just right. Paul Dix joins us this week to talk about his journey building InfluxDB, and he's refreshingly frank about what went right, and what went wrong. Sometimes the real database is the knowledge you pick up along the way…
Paul walks us through InfluxDB's evolution from error-logging system to time-series database, and from Go to Rust, with unflinching honesty about the major lessons they learnt along the way. We cover everything from technical details like Time-Structured Merge Trees to business issues like what happens when your database works but your pricing model is broken.
If you're interested in how databases work, this is full of interesting details, and if you're interested in how projects evolve from good idea to functioning business, it's a treat.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
InfluxData: https://www.influxdata.com/
InfluxDB: https://www.influxdata.com/products/influxdb/
DataFusion: https://datafusion.apache.org/
DataFusion Episode: https://www.youtube.com/watch?v=8QNNCr8WfDM
Apache Arrow: https://arrow.apache.org/
Apache Parquet: https://parquet.apache.org/
BoltDB: https://github.com/boltdb/bolt
LevelDB: https://github.com/google/leveldb
RocksDB: https://rocksdb.org/
Gorilla: A Fast, Scalable, In-Memory Time Series Database (Facebook paper): https://www.vldb.org/pvldb/vol8/p1816-teller.pdf
Paul on LinkedIn: https://www.linkedin.com/in/pauldix/
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--
0:00 Intro
4:42 What Problem Does A Time Series Database Solve?
19:25 What's The Biggest Implementation Challenge?
22:57 How Does A Time-Structured Merge Tree (TSM Tree) Work?
31:02 Compacting TSM Indexes
42:16 Rewriting From Go Into Rust
52:42 The Challenge of Learning A Very Different Language
54:46 Why Do A Full Rewrite In Rust?
1:05:44 Using DataFusion for a Query Engine
1:14:35 Big Rewrites and Big Regrets
1:30:16 Multi-Tenant vs Single-Tenant Architecture
1:34:50 Managing Multiple Product Versions
1:42:37 How Do You Avoid Getting Sucked Into Management?
1:48:29 Outro
AI Coding is Like Managing Junior Developers
"It's a bit more like managing a team of junior developers" - Zach Lloyd explains how AI-powered coding changes your role from writing every line to orchestrating multiple AI agents.
The future of programming isn't about watching one AI work - it's about running 2-3 agents in parallel and thinking strategically about how to prompt them effectively.
From our full episode with Zach Lloyd, CEO of Warp: "Beyond AI Hype, What Will Developers Actually Use?"
#AIcoding #programming #developers #agentic #Warp #ZachLloyd #AI #softwaredevelopment #coding #tech #productivity #future #prompt #LLM #developertools