Shift-left, but for AI coding assistants
Notes on what changes (and what doesn't) when ~14K developers start writing code with an LLM in the loop.
The first thing you notice when AI coding assistants land at scale is that the shape of your developer loop changes faster than your security tooling does.
The second thing you notice is that most of your existing security tools weren't built for this loop.
What actually changes
The traditional shift-left story goes: catch issues earlier by running the same checks closer to the developer. Linters in the editor. SAST in CI. SCA on the PR. Dependency review at merge.
That story still works for the output — the code is still code, the checks still apply. But the input is different now. Developers are no longer typing every character of every change. Code arrives in larger blocks, often with subtle behavior the author didn't fully reason through. The cost of generating an insecure pattern is now near zero.
Three concrete shifts I keep coming back to:
- Quantity, not just quality. A single developer can produce 5–10× the volume of code per day. Security review queues that worked at the old throughput silently fall over.
- Confidence asymmetry. Generated code looks finished in a way hand-written code rarely does. Reviewers (and authors) are biased toward accepting it.
- New failure modes. Insecure patterns the model has memorized. Dependency hallucinations (a cheap guard is sketched after this list). Prompt-injectable comments. Subtle changes to authn/authz that pass tests but break invariants.
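To make the dependency-hallucination failure mode concrete: the cheapest guard is an existence check against the package registry before anything gets installed. A minimal sketch against PyPI's public JSON API; the function name and the bare-bones error handling are mine, not from any particular scanner.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a real package on PyPI.

    Hallucinated dependencies usually fail this bare existence test.
    A 404 from the JSON API means the package has never been published.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are infrastructure problems, not answers
```

Existence alone isn't sufficient: attackers deliberately register the plausible-sounding names models tend to invent, so a real pipeline would also weigh package age, download counts, and maintainer history.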
Where the leverage is
The thing that actually moves the needle isn't "more scans." It's putting the security signal as close to the moment of generation as possible — in the editor, before the developer commits, ideally before they even read the suggestion.
That's most of what I work on these days. The mechanics:
- Inline checks in the IDE — fast enough to run on every accepted suggestion, not just on save. A minimal version is sketched after this list.
- Secure-by-default building blocks — give devs a pre-hardened path that's easier than the insecure one. The second sketch below shows the shape.
- Telemetry from the loop itself — what suggestions are being accepted, by whom, in which contexts. The data is gold for improving rules.
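To ground the first and third items: "fast enough to run on every accepted suggestion" mostly means a handful of compiled regexes over the suggestion text, with one telemetry event per acceptance. A minimal sketch in Python; the rule set and event fields are illustrative assumptions, not any real product's schema.

```python
import json
import re
import time
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    message: str

# Illustrative rules only; a real engine would load a maintained policy pack.
RULES = [
    ("PY-SUBPROCESS-SHELL", re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
     "shell=True invites command injection when input is interpolated"),
    ("PY-YAML-LOAD", re.compile(r"\byaml\.load\((?![^)]*Loader=)"),
     "yaml.load without an explicit Loader can construct arbitrary objects"),
    ("PY-TLS-OFF", re.compile(r"verify\s*=\s*False"),
     "TLS certificate verification disabled"),
]

def check_suggestion(code: str) -> list[Finding]:
    """Cheap regex pass over an accepted suggestion; runs in microseconds."""
    return [Finding(rule_id, msg) for rule_id, pattern, msg in RULES
            if pattern.search(code)]

def record_accept(suggestion_id: str, code: str, context: str) -> list[Finding]:
    """Check one accepted suggestion and emit a telemetry event for it."""
    findings = check_suggestion(code)
    event = {
        "ts": time.time(),
        "suggestion_id": suggestion_id,
        "context": context,  # e.g. language or repo; field names are made up
        "rules_fired": [f.rule_id for f in findings],
    }
    print(json.dumps(event))  # stand-in for whatever telemetry sink you use
    return findings
```

Regexes miss anything that needs data flow, but they cost microseconds, which is the budget that matters at this point in the loop; heavier analysis can run asynchronously over the same event stream.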
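And for the second item, a secure-by-default building block is often nothing fancier than a thin wrapper whose defaults are the hardened ones. A sketch assuming Python and the requests library; the class name is hypothetical.

```python
import requests

class HardenedSession(requests.Session):
    """requests.Session with safer defaults baked in.

    Callers get finite timeouts and TLS verification without having
    to remember either.
    """

    def request(self, method, url, **kwargs):
        kwargs.setdefault("timeout", 5.0)  # never hang forever
        kwargs.setdefault("verify", True)  # explicit, even though it's the default
        return super().request(method, url, **kwargs)

# Usage: a drop-in replacement for requests.Session
# session = HardenedSession()
# resp = session.get("https://example.com")
```

The design constraint is ergonomics: the safe path has to be shorter to type than the unsafe one, or both the model and the developer will route around it.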
The unsexy truth is that most of this isn't new — it's the same shift-left philosophy, applied with the assumption that the developer is now operating at 5–10× speed.
What hasn't changed
Threat modeling still beats every other security activity per hour invested. Code review by a human who understands the system still catches things tools can't. Security culture — engineers who want to ship secure code — still matters more than any single control.
AI in the loop doesn't replace any of that. It just raises the cost of not having it.