PostgreSQL + Drizzle ORM: My Favorite Stack for AI Projects
How Drizzle ORM transformed my backend development in AI projects with TypeScript type safety and production performance.
Mario Inostroza
PostgreSQL + Drizzle ORM has become my preferred backend stack for AI projects. It’s not just another ORM — it’s a paradigm shift that solved critical issues in my development workflow.
The Problem: Prisma in an AI World
When I started with Examya, I used Prisma. It works well for standard CRUD applications, but AI is different:
- Data changes constantly: embeddings, vectors, complex JSON
- Relationships aren’t always clear between entities
- Data models evolve with each iteration
- Performance on complex queries is critical
Prisma, with its focus on static types and explicit migrations, became a bottleneck. Every schema change required:
- Writing a manual migration
- Regenerating the client
- Updating all types
- Ensuring compatibility with existing data
In an agile AI environment, this was too slow.
The Solution: Drizzle ORM
Drizzle changes the game with its development-time vs runtime approach. It’s not just “Prisma but faster” — it’s a completely different design.
Inferred Types at Development Time
Drizzle doesn’t require type generation. TypeScript infers everything from the schema at development time:
```typescript
// This works without an `npx prisma generate` step
const result = await db.select().from(users).where(eq(users.age, 30))
```
This means I can change my schema and see TypeScript errors immediately — without waiting to generate the client.
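The principle can be illustrated without Drizzle at all: TypeScript can derive row types directly from a plain schema object, which is the core idea behind Drizzle's inference. The sketch below is a deliberately simplified stand-in, not Drizzle's actual internals:

```typescript
// A toy column/table model -- a simplified stand-in for Drizzle's pgTable(),
// illustrating type inference only, not Drizzle's real implementation.
type Column<T> = { _type: T };

const integer = (): Column<number> => ({ _type: 0 });
const text = (): Column<string> => ({ _type: "" });

const usersTable = {
  id: integer(),
  name: text(),
  age: integer(),
};

// Derive the row type directly from the schema object: no codegen step.
type InferRow<S> = { [K in keyof S]: S[K] extends Column<infer T> ? T : never };
type UserRow = InferRow<typeof usersTable>;

// Renaming or retyping a column in usersTable changes UserRow immediately,
// so stale usages become compile errors without regenerating a client.
const row: UserRow = { id: 1, name: "Ada", age: 30 };
console.log(row);
```

Because the type flows from the value, editing the schema object is the whole workflow; the compiler does the rest.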
More Expressive Queries
Drizzle’s API is more natural for complex queries:
```typescript
// Prisma: relation filters plus nested includes
const paidOrders = await prisma.orders.findMany({
  where: {
    status: 'paid',
    user: { createdAt: { gte: new Date('2024-01-01') } }
  },
  include: {
    user: true,
    items: {
      where: { price: { not: null } }
    }
  }
})
```
```typescript
// Drizzle: the same filters expressed as a flat, explicit join
const paidOrders = await db
  .select()
  .from(orders)
  .leftJoin(users, eq(orders.userId, users.id))
  .where(and(
    eq(orders.status, 'paid'),
    gte(users.createdAt, new Date('2024-01-01'))
  ))
```
Production Performance
Drizzle uses “query builders” that compile to optimized SQL. In Examya, where I have queries processing thousands of orders daily, the difference is significant:
- Less RAM per query
- Faster SQL generation
- Prepared statements that skip rebuilding hot queries
Integration with AI
Embedding Storage with pgvector
Drizzle handles pgvector perfectly, which is crucial for AI:
```typescript
// Schema definition with vector support (requires the pgvector extension)
export const embeddings = pgTable('embeddings', {
  id: serial('id').primaryKey(),
  content: text('content').notNull(),
  embedding: vector('embedding', { dimensions: 1536 }).notNull(),
  metadata: json('metadata').default({}),
  createdAt: timestamp('created_at').defaultNow()
})
```
```typescript
// Semantic similarity: rows within cosine distance 0.3 of a query vector
// (`embedding` is the query vector, e.g. from an embeddings API)
const similar = await db
  .select()
  .from(embeddings)
  .where(sql`${embeddings.embedding} <=> ${embedding} < 0.3`)
  .limit(5)
```
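For intuition, pgvector's `<=>` operator computes cosine distance (1 minus cosine similarity). A plain TypeScript equivalent is handy for sanity-checking thresholds like the 0.3 above locally, without a database:

```typescript
// Cosine distance, as computed by pgvector's <=> operator:
// 1 - (a . b) / (|a| * |b|)
function cosineDistance(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineDistance([1, 0], [1, 0])); // 0 (same direction)
console.log(cosineDistance([1, 0], [0, 1])); // 1 (orthogonal)
```

Distances near 0 mean near-identical direction, so `< 0.3` keeps only fairly close semantic matches.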
Handling Complex Data
AI agents generate structured but unpredictable data:
```typescript
// Drizzle handles JSON naturally
export const agentContexts = pgTable('agentContexts', {
  id: serial('id').primaryKey(),
  session: text('session').notNull(),
  context: json('context').$type<Record<string, any>>(),
  metadata: json('metadata').$type<AgentMetadata>(),
  lastUsed: timestamp('last_used').defaultNow()
})
```
```typescript
// Flexible queries via Postgres's @> (jsonb containment) operator;
// Drizzle has no built-in containment helper, so we drop to raw SQL
const sessionData = await db
  .select()
  .from(agentContexts)
  .where(sql`${agentContexts.context} @> ${JSON.stringify({ agent: 'shuri' })}::jsonb`)
```
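Postgres's jsonb containment operator (`@>`) asks whether the left value contains every key/value of the right one, recursively. A rough TypeScript model of that check, covering objects and scalar leaves (the full jsonb semantics also handle arrays):

```typescript
// Simplified jsonb containment: does `doc` contain every key/value in `pattern`?
// A sketch of Postgres @> semantics for objects and scalars; real jsonb
// containment additionally covers arrays.
function jsonContains(doc: unknown, pattern: unknown): boolean {
  if (typeof pattern !== "object" || pattern === null) {
    return doc === pattern; // scalar leaf: must match exactly
  }
  if (typeof doc !== "object" || doc === null) return false;
  return Object.entries(pattern).every(([key, value]) =>
    jsonContains((doc as Record<string, unknown>)[key], value)
  );
}

const context = { agent: "shuri", step: 3, state: { phase: "plan" } };
console.log(jsonContains(context, { agent: "shuri" })); // true
console.log(jsonContains(context, { state: { phase: "act" } })); // false
```

Because the check is structural, agents can keep adding fields to their context without breaking existing queries.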
Real Case: Examya
Technical Decision
For Examya’s backend, I chose:
- PostgreSQL: ACID consistency is non-negotiable for medical data
- Drizzle ORM: Flexibility for rapid schema evolution
- pgvector: For semantic search of medical guides
- NestJS: For business logic structure
Practical Results
1. Faster Iteration
I can change the schema and see errors immediately:
```typescript
// Add new fields and sync with `drizzle-kit push`, no hand-written migration
export const medicalOrders = pgTable('medical_orders', {
  // ... existing fields
  aiAnalysis: json('ai_analysis').$type<AiAnalysisResult>(),
  priority: integer('priority').default(0)
})
```
2. Easier Debugging
Query builders are more transparent:
```typescript
// Generated SQL is visible during development
console.log(
  db.select()
    .from(medicalOrders)
    .where(eq(medicalOrders.status, 'pending'))
    .toSQL()
)
```
3. Production Performance
Queries compile to optimized SQL and memory usage is predictable.
Migration Strategy
Migrating from Prisma to Drizzle was simpler than expected:
Step 1: Schema Compatibility
The concepts map almost one-to-one; Prisma declares them in its schema DSL, Drizzle in plain TypeScript:
- Prisma: `@id`, `@default`, `@relation`
- Drizzle: `primaryKey()`, `defaultNow()`, `references()`
Step 2: Query Migration
Most queries translate directly:
```typescript
// Prisma
await prisma.user.findMany({
  include: { orders: true }
})

// Drizzle
await db
  .select()
  .from(users)
  .leftJoin(orders, eq(users.id, orders.userId))
```
Step 3: Immediate Benefits
- 40% less time on type generation
- 30% faster queries in development
- Fewer bugs from incorrect types
Future-Proof
Drizzle is designed to scale. Examya’s future plans include:
- Time-series data: For medical trend analysis
- Graph features: With pg_graphql for complex relationships
- Distributed transactions: For multi-service operations
Drizzle’s flexibility allows preparing for these scenarios without changing stacks.
Conclusion: Why Drizzle + PostgreSQL?
For AI projects, Drizzle offers:
- Speed: Iterate fast without type friction
- Flexibility: Schema evolves with your needs
- Performance: Optimized queries for production
- Type Safety: Without losing TypeScript benefits
- Ecosystem: Compatible with entire PostgreSQL ecosystem
It’s not the perfect solution for every case, but for projects where data is dynamic and performance is critical, like AI applications, Drizzle ORM has transformed the way I build backends.
Do you use Drizzle? What’s been your experience with ORMs in AI projects? Leave me a comment on WhatsApp or X.
If you want to see the complete implementation, check the Examya repo where we apply this stack in production with thousands of daily queries.