A team of four of us built a healthcare AI platform that today serves more than 78,000 clinicians across 108 facilities, live, every day — and not one of us is the kind of engineer Silicon Valley would have hired to do it.
I cannot write a for-loop. I cannot deploy a Docker container. When my team talks about "rate limiting the API," I have to context-clue my way through the meeting.
We built the entire thing on Lovable. End to end. By describing what we wanted in plain English — and then learning, painfully, how to make Lovable actually work for us at production scale.
This is not a story about how AI makes coding obsolete. It's a story about something more interesting. I want to explain why I think that, what we learned, and what I think most AI founders are getting wrong.
The team that built this
There were four of us at the start: me, Sasha — Oleksandra Matkovska — running product, Dmitri Batulin running technical execution, and Kole Nelson on the founding team, working a different layer of the problem.
Kole is no longer with the company, but he's part of the origin story and I want to say that clearly. The first version of what is now ShiftNex was built by the four of us, and that matters — both because it's true, and because it tells you everything about how this company got built. No engineering team. No CTO. A founder who couldn't code, a product lead working from Odessa, a technical executor alongside her, and a teammate carrying the AI systems work who has since moved on.
I genuinely don't think anyone has built a healthcare staffing platform this way before. Not at the size we've reached. Not in the time it took us. Not with this team shape.
The wrong question
For the last two years, every AI conversation has been organized around the wrong question: "How good is the model?" GPT vs. Claude vs. Gemini. Benchmarks. Eval scores. Context windows. Reasoning traces.
It's the wrong question because the model isn't the bottleneck anymore. The bottleneck is knowing what to build, and knowing how to make the tools actually do it.
I know what to build because I spent nearly a decade running a healthcare staffing company called Actriv. We've been on the Inc. 5000 four consecutive years, peaking at #40 nationally in 2021. Tens of thousands of nurses on real shifts in real hospitals across the US. I know exactly what's broken in healthcare staffing because I've been the one filling those shifts at 2am when a hospital calls panicking.
When I sat down with Lovable, I didn't have to figure out what to build. I'd been living the problem for nearly a decade. The hard part wasn't the vision — it was getting the tools to execute it.
That's the part nobody is talking about. The moat isn't the model. The moat is the operator who knows where the model should be pointed, and the team that learns how to point it.
What actually happened
A year ago, three of us — me, Sasha, and Dmitri — opened Lovable and started describing the platform I'd been sketching on whiteboards for years. A staffing system that could match the right nurse to the right shift instantly, handle credentialing, manage compliance, and process payments — without the 47 phone calls and 12 spreadsheets it currently takes.
We didn't build it the "right" way. We built it the way non-engineers build: by describing the outcome, watching the AI fumble, and learning what to tell it next. The first version was ugly. The second version broke. The third version worked for one customer.
Then something more important happened: we learned how to train Lovable to work for us specifically. Not "use Lovable" — every founder demo does that. Train it. Build the patterns, the prompts, the conventions, the architectural choices that meant Lovable was no longer a generic AI builder; it was our AI builder. That muscle — figuring out how to make a generic AI tool behave like a senior engineer who knows your codebase — is the thing nobody talks about, and it's where most of the real work was.
Twelve months later, ShiftNex serves more than 78,000 clinicians across 108 facilities, live, every day, and has delivered over 20,000 care hours through the platform. A year in, the platform is still entirely Lovable. No rewrite. No "now we're hiring real engineers to do it properly." Every line that runs in production was generated by describing what we wanted. That's not a transitional state. That's the architecture.
Then we built the workforce
Here's where the story stops being a Lovable story and starts being something bigger.
Once the platform was running, we started building agents on top of it. Real ones. Not chatbots. Digital workers — built with Claude Code, orchestrated through Cowork — that handle the operational load a staffing company would normally hire dozens of people for. Credentialing checks. Shift confirmations. Compliance audits. Document processing. Customer follow-up. Payroll reconciliation.
These agents run day and night. They don't sleep, they don't churn, they don't quit for a $2-an-hour raise from a competitor. They scale with the platform instead of against it. Three years ago, this was a deck slide. Today it's how we operate.
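To make the pattern concrete, here is a minimal sketch of the shape of an always-on digital worker: an agent that polls a queue of operational tasks and processes each one. Everything in it — the task names, the `handle` function, the queue — is hypothetical and invented for illustration; it is not ShiftNex code, and the real LLM-backed handlers (Claude Code, Cowork) are not shown.

```python
# Hypothetical sketch of a "digital worker" loop, assuming a simple
# in-process task queue. Real agents would dispatch each task kind to
# an LLM-backed handler and run continuously; this just shows the shape.
from dataclasses import dataclass, field
from queue import Queue, Empty

@dataclass
class Task:
    kind: str              # e.g. "credentialing_check", "shift_confirmation"
    payload: dict = field(default_factory=dict)

def handle(task: Task) -> str:
    # Placeholder handler: record what would have been done.
    return f"handled {task.kind} for {task.payload.get('clinician', 'unknown')}"

def run_worker(queue: Queue, max_idle_polls: int = 1) -> list[str]:
    """Drain the queue, processing tasks until it stays empty."""
    results, idle = [], 0
    while idle < max_idle_polls:
        try:
            task = queue.get_nowait()
        except Empty:
            idle += 1
            continue
        results.append(handle(task))
    return results

queue = Queue()
queue.put(Task("credentialing_check", {"clinician": "RN-1042"}))
queue.put(Task("shift_confirmation", {"clinician": "RN-2217"}))
print(run_worker(queue))
```

The point of the sketch is the loop, not the handlers: the worker never goes home, and adding a new task kind is a new handler, not a new hire.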
What was a patent is now in production
The same compounding happened on the voice side, and that one is personal — because I filed the patents.
In September 2023, I filed two patent applications with the USPTO for voice-controlled workforce management systems. Voice as the primary interface for managing healthcare schedules, identifying clinicians, resolving conflicts in shift assignments — the things every staffing company does manually with phones and spreadsheets, but spoken.
In April 2024, the USPTO granted the first one: US 11,947,875 — voice-activated event listing management with target-entity identification and command interpretation. In June 2025, they granted a continuation: US 12,327,067 — adding machine learning models and event-conflict resolution. Additional applications are pending.
Two years ago, that voice layer was a patent application — paper. Today, the voice layer of our platform is in production, calling clinicians at 5am when a shift opens, in a voice they trust, and booking them in 90 seconds. The thing I sketched in a patent filing is the thing answering the phone now.
That stack — Lovable-trained platform, Claude Code agents running 24/7, voice IP in production — doesn't exist anywhere else. I know because we tried to buy it before we built it.
Why regulated industries are the real prize
Most AI founders are competing in the wrong markets. LLM wrappers, agent frameworks, dev tools, productivity copilots — these are saturated graveyards. The barriers to entry are zero, the differentiation is cosmetic, and the buyers are exhausted. Every YC batch ships forty of them.
Meanwhile, the most valuable AI opportunities are sitting in industries that are too boring for most founders to study seriously: healthcare staffing, commercial insurance underwriting, freight brokerage, clinical trial recruitment, municipal permitting, mortgage processing, agricultural compliance.
These industries share three traits that make them unfair advantages for domain operators:
- Compliance moats keep tourists out. A 23-year-old Stanford CS dropout cannot ship a healthcare staffing platform from a coffee shop, because they don't know what HIPAA, CMS, OIG, and state nurse licensing boards will do to them. I do. I've been audited. That knowledge is the moat.
- The buyers are starved. Healthcare staffing software is, almost universally, terrible. Logistics dispatch is terrible. Insurance broker tools are terrible. The incumbents are bloated, expensive, and 15 years out of date. A modern AI-native product looks like magic by comparison.
- The total addressable market is enormous and underestimated. US healthcare staffing alone is a $40B+ market. Nobody's writing TechCrunch articles about it because it's not sexy. That's the opportunity.
Every "boring" industry has a $10B+ wedge sitting wide open, waiting for someone who understands both the domain and the new tools.
What I think most AI founders get wrong
Three things.
They optimize for the model when they should optimize for the workflow. The model is a commodity. The workflow it sits inside is the product. I spent more time figuring out what should happen between steps 4 and 5 of a hospital onboarding than I did on any prompt.
They build for other founders instead of for operators. Twitter rewards content that sounds clever to other AI builders. The market rewards software that solves problems for people who don't know what an LLM is. Those audiences want completely different things. Choose the second.
They think technical depth is a moat. It isn't anymore. The hard part of building software in 2026 is not writing the code. The hard part is knowing what code is worth writing, and teaching your AI tools to write it the way your business needs it written. If your only edge is that you can write the code yourself, you're competing against me — and I have Lovable trained on our patterns, agents running 24/7, two granted voice-AI patents, nearly a decade of domain experience, and a customer base that already trusts me.
What this means
If you're an operator in a regulated industry — a nurse, an insurance broker, a logistics dispatcher, a paralegal, a compliance officer — and you've spent years thinking "this software is terrible, I could design something better," I want you to take that seriously.
You don't need to learn to code. You don't need to raise a seed round. You don't need a technical co-founder. You need three things: a clear picture of the workflow you'd fix first, a willingness to ship something embarrassing in week one, and the patience to train your AI tools — not just use them — until they work the way your business actually works.
The thing we built isn't impressive because of the technology. It's impressive because we were the wrong people to have built it, and we built it anyway. That should tell you something about what's possible right now.
The next decade of AI isn't going to be won by the people who train the models. It's going to be won by the people who know exactly where to point them.
I'm betting my company on it.
Founder of Actriv Healthcare and ShiftNex AI. Lives in Lake Tapps, Washington; born in Nairobi, Kenya. Named inventor on US patents 11,947,875 and 12,327,067, with additional applications pending.