It’s 1998. I’m crawling under desks swapping out hard drives and flashing BIOS chips so the world doesn’t end when the calendar rolls over to 2000. Novell is running the file servers, Windows 98 just dropped, and a few holdouts in the office are still clinging to WordPerfect — the MS-DOS version — like it’s a life raft.
I’m not thinking about insurance. I’m not thinking about building a platform. I’m just a kid who’s good with computers trying not to brick someone’s machine before Y2K gets the chance to do it first.
But I must’ve done something right, because the guy I was working under knew someone at Westrope — a wholesale insurance broker — who needed an IT person. And just like that, I stumbled into an industry I’d spend the next 27 years never leaving.
I spent 15 years at Westrope as their Senior IT Administrator — the person who kept everything running, learned every system inside and out, and quietly automated anything that sat still long enough. I was the IT generalist at every stop: the guy who figures out how the policy admin system works, then builds the data bridge to the next one, then automates the migration when leadership decides to switch platforms entirely. Jack of all trades, master of none — except that’s been my superpower.
When Ryan Specialty came along, I didn’t walk in as a director. I earned it. Senior Systems Administrator. IT Analyst and Development Manager. Senior Technical Operations Architect. Custom Solutions Manager. Each role a little closer to the intersection of insurance domain expertise and software engineering, until I landed as Software Development Director overseeing three separate development teams.
That’s the role that crystallized everything — years of understanding what insurance operations actually need, combined with the authority and ability to build it. Software is where my heart is now. It’s the thing that lets me build, automate, create. It’s my outlet. The place where 27 years of domain knowledge meets the pure joy of making something from nothing.
Plcy.io is the culmination of all of it. Every migration I’ve run, every data bridge I’ve built, every time I’ve stared at a legacy system and thought “this could be so much better” — this is the answer. It’s my magnum opus.
The Actual Problem
Here’s the thing — and this took me an embarrassingly long time to appreciate — insurance is wild. It’s the financial instrument that makes the rest of the economy possible. You can’t get a mortgage without it. You can’t ship cargo without it. You can’t open a restaurant, run a construction project, or launch a satellite without someone, somewhere, pricing the risk of everything going sideways. Insurance is the quiet load-bearing wall of civilization, and most of the software running it was written before some of your developers were born.
The P&C (property and casualty) insurance market in the US is $1.4 trillion annually. A significant chunk of that flows through Managing General Agents — MGAs and MGUs — who are essentially the specialty underwriters of the industry. They take on complex, non-standard risks that standard carriers won’t touch: excess and surplus lines, specialty programs, hard-to-place commercial accounts. This is the world I’ve spent my entire career in.
These are sophisticated insurance professionals operating in a genuinely complex domain. And many of them are managing their books of business on a combination of spreadsheets, shared email inboxes, legacy policy admin systems from 2003, and — I promise I’m not making this up — fax machines.
I’ve seen a $200M book of business tracked in a 47-tab Excel workbook. I’ve watched an underwriter manually re-key the same data into three different systems because none of them talk to each other. I’ve heard the phrase “we’ll add it to the tracker” said in a tone that suggests the tracker is a Google Sheet that has been alive longer than some junior employees.
The technology gap here is enormous. And that gap is precisely where Plcy.io lives.
Why Build in Public?
A few reasons, none of them particularly altruistic.
Accountability. When you tell the internet what you’re building, you have to actually build it. There’s no “we’re heads-down, big announcement soon” escape hatch when you’ve posted your architecture decisions and roadmap for anyone to read. The public commitment is a forcing function, and I respond well to forcing functions.
Community. Every engineer I respect learned from people who shared their work. Open source, technical blogs, conference talks where someone admits they made a terrible mistake and here’s what they learned — that’s the curriculum that actually matters. We’ve benefited enormously from other people building in public. This is us paying that forward.
Honest marketing. The insurance tech space is full of vendors who will tell you their platform does everything and show you a demo of the happy path. We’d rather show you how we build, let you evaluate the quality of our thinking, and earn your trust the slow way. If you read this blog for six months and think “these people seem rigorous and honest,” that’s worth more than any marketing deck.
The AI angle — and this one is genuinely new. We’re not just building software. We’re developing a methodology for building with AI as a full development partner. Claude Code isn’t autocomplete for us — it’s an actual collaborator that designs architectures, writes and runs test suites, debugs production issues, and catches the class of mistake that happens when you’re three context windows deep and tired. We want to share what we’re learning because the playbook for this doesn’t exist yet. We’re writing it as we go.
The Stack (The Fun Part)
Let’s talk about what’s actually running under the hood, because this is where it gets interesting.
The platform is a TypeScript monorepo — currently 200+ Prisma models across 35 schema files, React 19 frontends, Express APIs, Kafka event streaming, and a Redis layer for caching and service discovery. Standard enterprise stuff. The kind of architecture that either fills you with confidence or dread depending on how many standups you’ve attended this week.
Then there’s the routing layer. We wrote it in Rust.
Specifically, we’re using Pingora — Cloudflare’s open-source proxy framework — as our sole HTTP entry point. The reasoning: we needed a reverse proxy with sub-millisecond overhead, JWT validation at the edge, and rate limiting that doesn’t add latency spikes. Rust with Pingora checks all those boxes. Is it more complex than nginx + some Lua scripts? Yes. Do we have a router that can handle tens of thousands of concurrent connections with predictable latency? Also yes.
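To make the “rate limiting that doesn’t add latency spikes” claim concrete, here’s the idea behind it: a token bucket per client, refilled at a fixed rate, where admitting a request is a constant-time check with no queueing. This is an illustrative sketch in TypeScript, not the production router (which is Rust/Pingora), and the class and parameter names are ours, not Pingora’s API.

```typescript
// Illustrative sketch only -- the production router is Rust/Pingora.
// A token bucket admits bursts up to `capacity` and a sustained rate of
// `refillPerSec`; the admit decision is O(1), so it adds no latency spikes.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,     // max burst size
    private readonly refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  /** Returns true if the request is admitted, false if rate-limited. */
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The decision path allocates nothing and never blocks, which is exactly the property you want on a proxy’s hot path.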
We also use this blog itself as a living example of what we’re building. The content validation for this very post looks like this:
```typescript
// How this post was validated at build time
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';

const posts = defineCollection({
  loader: glob({ pattern: '**/*.mdx', base: './src/content/posts' }),
  schema: z.object({
    title: z.string(),
    description: z.string(),
    publishDate: z.coerce.date(),
    tags: z.array(z.string()),
    series: z.string().optional(),
    draft: z.boolean().default(false),
  }),
});
```
Zod schemas enforcing frontmatter correctness at build time. If a post has a malformed date or missing required field, the build fails. No silent bad data. This is the kind of thing that seems like overkill until the third time it saves you from publishing something broken.
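The contract behind “no silent bad data” is simple: parsing either returns a fully typed object or throws, with no partially valid state in between. Here’s a dependency-free sketch of that contract (not the real Zod pipeline; the interface and function names are illustrative):

```typescript
// Dependency-free sketch of fail-fast frontmatter validation.
// The real blog uses Zod; this just shows the contract:
// return fully typed data, or throw and fail the build.
interface Frontmatter {
  title: string;
  publishDate: Date;
  tags: string[];
  draft: boolean;
}

function parseFrontmatter(raw: Record<string, unknown>): Frontmatter {
  const { title, publishDate, tags } = raw;
  if (typeof title !== 'string') {
    throw new Error('title must be a string');
  }
  const date = new Date(String(publishDate)); // mirrors z.coerce.date()
  if (Number.isNaN(date.getTime())) {
    throw new Error(`invalid publishDate: ${publishDate}`);
  }
  if (!Array.isArray(tags) || !tags.every((t) => typeof t === 'string')) {
    throw new Error('tags must be an array of strings');
  }
  return { title, publishDate: date, tags, draft: raw.draft === true };
}
```

A post with `publishDate: 'not-a-date'` throws at build time instead of rendering a broken date to readers.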
The development experience looks like this:
```shell
# Running this blog locally
pnpm --filter @plcy-io/blog dev   # You're reading the result right now
```
Scoped package commands. No `pnpm run dev` at the root, no “oops I accidentally ran a format pass across 400 files including the infrastructure configs.” Discipline about scope isn’t just good hygiene — in a monorepo this size, it’s self-defense.
What’s Coming
We’re planning three recurring content series, each targeting a different kind of reader.
Claude Code Tips — the tricks, patterns, and workflows we’ve developed for building with AI agents at scale. Things like: how to structure prompts for architectural work versus mechanical code changes, when to use subagents versus a single context window, how to build a quality gate system that catches AI-generated code mistakes before they hit production. This is the series I wish existed six months ago.
InsurTech Deep Dives — the domain knowledge that makes insurance tech actually hard. What’s the difference between an MGA and a carrier? How does the excess and surplus lines market work? What is a coverage tower and why do you need structured layer management instead of just a spreadsheet? If you’re building in this space, you need to understand the domain. We’ll share what we’ve learned.
Engineering Playbook — our TDD-first approach, monorepo management, and the quality standards we hold ourselves to. We’re religious about test coverage, about code reviews, about not letting “we’ll clean it up later” become “this is now load-bearing technical debt.” The playbook behind the platform.
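As a small taste of the InsurTech series: a coverage tower is a stack of policies layered on top of each other, each attaching where the one below exhausts. The reason structured layer management beats a spreadsheet is that with explicit attachment points and limits, gaps and overlaps become a computation instead of an eyeball check. A minimal sketch (illustrative types and helper, not Plcy.io’s actual data model):

```typescript
// Illustrative types only -- not Plcy.io's actual schema. A layer covers
// losses between its attachment point and attachment + limit.
interface Layer {
  carrier: string;
  attachment: number; // where this layer starts paying (USD)
  limit: number;      // how much this layer pays (USD)
}

/** Returns the ranges where a tower has no coverage (gaps between layers). */
function findGaps(tower: Layer[]): Array<{ from: number; to: number }> {
  const sorted = [...tower].sort((a, b) => a.attachment - b.attachment);
  const gaps: Array<{ from: number; to: number }> = [];
  let covered = 0; // exhaustion point of the coverage seen so far
  for (const layer of sorted) {
    if (layer.attachment > covered) {
      gaps.push({ from: covered, to: layer.attachment });
    }
    covered = Math.max(covered, layer.attachment + layer.limit);
  }
  return gaps;
}
```

A $5M primary plus a $10M-excess-of-$10M layer leaves a $5M hole that a 47-tab workbook will happily hide; here it falls out of a ten-line function.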
Posts won’t be on a schedule. We’ll write when we have something worth saying, which has historically been “not often enough” and “all at once right before a deadline.” We’re working on that.
Stick Around
If you’re building something ambitious — a platform, a company, a system that has to actually work when real money is on the line — we’re building one too, and we’re going to show you how.
If you’re figuring out how to work with AI as a genuine development partner rather than a fancy autocomplete, we’re a few months ahead of you on that journey and we’ll share everything.
If you just enjoy watching someone try to make insurance interesting — honestly, same. Stick around.
Subscribe via RSS, follow along at plcy.io, or just bookmark this and come back when you remember it exists. We’ll be here, shipping.