Compliance Standards Are Not Carved in Stone, Stop Treating Them Like They Are: A 2026 Field Report

Your compliance standards are not carved in stone, your competitors are not waiting for your next change board meeting, and your security team has 18 months to pick a side. A 2026 field report from a conversation that should never have happened.

I am French. Complaining is my day-to-day sport. And once again, I am sitting down to write yet another long, sarcastic complaint about something that happened last week.

Specifically: I read a conversation that, I think, every CTO, CISO, and head of engineering in 2026 should print out, frame, and tape to the door of their next leadership offsite. Not because it was a great conversation. Because it was the perfect, almost theatrical example of how a company murders its own velocity in real time, with a smile, while citing a policy document nobody has read past page one.

What made me grumpy again?

The setup is the most ordinary thing in the world. An engineer builds a working replacement for an embarrassing piece of public-facing internal software. The original runs on PHP 7. Seven. The framework upstream is dead. Patches happen in-house, by the same engineer who built the replacement, in his free time, because nobody else wants to touch it. The replacement is up, it runs in pre-prod, it has features, it has a clean migration path, the stack is up to date and conventional. He announces it. He asks for feedback.

And then, predictably, the same actors who never lifted a finger on the original show up to explain why his work is unsafe. Why was no impact analysis performed. Why no RFC. Why no security review. Why this "vibe coded by one person" replacement should not be trusted to displace, and I am not making this up, the actively rotting PHP 7 thing they have all been ignoring for three years.

If you have lived through a version of this conversation in the last twelve months, you are not alone. If you have not, congratulations, you are either in a healthy company or you are not paying attention. Either way, sit down. Let us talk.

The Conversation (Anonymized)

Names changed, pattern intact.

Engineer S. drops a link on the internal channel: new tool, pre-prod, "please play with it, tell me what is broken". Within an hour, positive replies, bug reports, live iteration. Exactly how software is supposed to ship in 2026.

Then security lead L. enters the room with the standard audit-form checklist: impact analysis, security review, was this validated, are we comfortable replacing "an open-source tool" with "a vibe-coded thing maintained by one person". S. patiently explains that the "open-source tool" in question has been abandoned upstream for years and the person currently patching it in production is, well, him. Alone. On weekends. That is the "maintained open-source tool" we are protecting.

An SSO debate breaks out and dies (the tool deliberately has no auth dependency on the company's own stack, because the whole point of it is to keep working when the stack is down). Then L. drops the line that broke the thread:

"We have processes. An explicit directive from the CEO does not bypass our processes."

Read that sentence again. Just sit with it. In a single line it tells you: the speaker believes governance flows from compliance, not from leadership. The speaker believes their checklist outranks the person who signs the payroll. The speaker would rather be technically correct than commercially alive.

Several of us, myself included, lose patience. The CEO, eventually, also weighs in. His message is more diplomatic than mine, which we will come back to.

A couple of days later, while the meta-thread is still arguing about whether the original work was rigorous enough, S. casually ships a working WebAuthn login on the same pre-prod. Passwordless. Hardware-key based. Live-debugged a password manager compatibility issue in front of everyone. Did not file an RFC. Did the work. Showed the work.

That is the engineer. That is the situation. Now let us extract the lessons, because they apply to your company too. Yes, yours.

Mistake #1: Greeting a Working Demo With an Audit Form

Let me say this in capitals so it gets through your corporate spam filter:

WHEN AN ENGINEER SHIPS A WORKING REPLACEMENT FOR A KNOWN-BROKEN PIECE OF YOUR INFRASTRUCTURE, ON HIS OWN TIME, AS A SIDE QUEST, YOUR FIRST MESSAGE IS "THANK YOU".

Not "did you do an impact analysis". Not "where is the RFC". Not "this should have started in the change board". Thank you. Cheers. Happy emoji. Then, and only then, you ask questions.

This is not a "be polite" point. It is a load-bearing point about how organizations function. If your default reaction to delivery is bureaucratic friction, you train your engineers to never deliver. They learn, fast, that the safe move is to do nothing and write a memo about it. After a few cycles of this, the people who actually build leave, and you are left with the people who write the memos. Your competitor, meanwhile, hires the people who left. You are now slower than them on the things that matter. You die. Not next week. But you die.

Process exists to enable delivery. The day process starts blocking delivery, you have inverted the contract. The compliance function is supposed to be a service organization for the people who ship, not a tollbooth that the people who ship have to bow to. Repeat that sentence in your head until it sticks, especially if you happen to lead a compliance function.

The engineer in the story did not ship to prod. He shipped to pre-prod, opened the floor, and asked for feedback. That is, textbook, the moment to challenge the design, including the boring administrative parts, with a concrete artifact in front of everyone. It is the cheapest possible moment to challenge the work, because there is something to challenge. The "you should have opened the change ticket first" reflex is not just rude. It is technically wrong. You cannot write a useful change ticket without a prototype. The prototype reveals the actual scope. Anyone who has worked in a real engineering team for more than three years knows this. Anyone who tells you otherwise is selling consulting.

Mistake #2: Citing Standards You Have Not Read

Let me dwell on this one, because it is the most fun.

The "you cannot bypass our processes" framing is almost always backed, somewhere in the speaker's head, by a vague reference to ISO 27001, SOC 2, PCI DSS, NIS2, HIPAA, DORA, or whatever three-letter (or four-letter) compliance regime is fashionable in their geography this quarter. Translation in plain English: "a sticker on a wall told me to say this".

Here is the catch. Every serious change management framework on the market, ISO 27001, PCI DSS, SOC 2, NIS2, the lot of them, explicitly includes provisions for emergency changes. The exact policy of the company in our story has a section that reads roughly:

Some urgent changes may temporarily bypass the formal process, provided the risk of inaction is greater than the risk of deploying without prior approval. Typical cases: critical vulnerability, active threat or malware, urgent regulatory compliance, risk to personal data, risk to physical safety.

This quote is a paraphrase of ISO 27001 (A.8.32 Change Management, with ISO 27002:2022 §8.32 guidance).

Cool. Now let me ask the obvious question. A piece of public-facing infrastructure running on PHP 7, abandoned upstream, patched in-house by a single engineer, with known security debt accumulating month after month, is what exactly, if not "critical vulnerability and risk to personal data"?

I will wait.

The point I am making is not "the engineer should have invoked the emergency clause". The point is that the person citing the policy in the thread clearly did not know the policy contains that clause. She invoked "process" as a universal block. The policy itself, the actual document, does not say what she said it says.

This is so common it deserves a name. Let us call it compliance cosplay. You read the title of the standard, you internalize the vibe of the standard, you wave it around at engineers who are too busy delivering to fight you, but you have not read the standard. The standard, almost without exception, is more nuanced than you are. The standard explicitly tells you when to bypass itself. The standard has a risk-based section. The standard says, in writing, if inaction is worse, ship.

None of these standards is carved in stone. ISO 27001, PCI DSS, SOC 2, NIS2, HIPAA, DORA, pick your acronym: they are all, at most, useful starting kits. They are frameworks, designed to be tuned to the risk profile of your business, by people who understand the business. A company that treats any of them as an immutable specification has not implemented the standard. It has implemented a cargo cult of the standard. Which, ironically, is precisely the failure mode the standard is supposed to prevent.

And let me preempt the obvious objection. Yes, PCI DSS has prescriptive controls. Yes, HIPAA has hard rules on PHI. Yes, some clauses are non-negotiable. Nobody is arguing you should ship credit card numbers in plaintext or store medical records on a public S3 bucket. The argument is that even the most prescriptive standards contain risk-based logic, emergency provisions, and explicit room for compensating controls. Read the document you are weaponizing. You may discover it is not the wall you thought you were hiding behind.

If your security team interprets compliance as "the most restrictive reading of every clause, applied uniformly", they are not protecting the company. They are protecting themselves from an audit finding, and they are doing it at the expense of every other team in the building. That is not security work. That is liability transfer from the security team to engineering.

Mistake #3: A Security Team That Does Not Code Is Not a Security Team

I wrote this in my 2026 security article and I am going to write it again because apparently it needs repeating until it sinks in:

If your security team's first move when looking at a piece of code is to ask for a process artifact instead of opening the repo, you do not have a security team. You have a compliance team larping as a security team.

A real security review starts in the code. You clone. You read the auth flow. You grep for dangerous patterns. You look at the dependency tree, the deployment scripts, the secrets handling. You file specific issues. You PR a fix when you can. You bring evidence.
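
To make "you grep for dangerous patterns" concrete, a first pass can literally be a script this small. The patterns and the `*.py` glob below are my own illustrative choices, a deliberately naive sketch, not a substitute for a real SAST tool:

```python
import re
from pathlib import Path

# Illustrative patterns only; a real review uses proper SAST tooling on top.
DANGEROUS = {
    "eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "shell=True": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "hardcoded secret": re.compile(
        r"(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def first_pass(repo_root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, finding) triples for a reviewer to triage."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in DANGEROUS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings
```

Ten minutes of this produces specific issues with file and line numbers, which is already more evidence than "was a security review done?".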

Asking "was a security review done" without doing one yourself is a tell. It is the security equivalent of a manager saying "give me a status update" instead of, you know, reading the document the engineer wrote yesterday.

And here is the kicker that should worry every CISO reading this: the engineer in our story, the supposedly lone "vibe coder", had put his code through multiple LLM audit passes. He had already done more deliberate adversarial review than 80% of the SOC 2-compliant shops I have audited. He just did it with a tool the security team does not know how to use.

That is the gap. The engineer is using 2026 tooling. The security team is using 2015 forms. Only one of them is delivering.

If your security team's KPI is "number of changes blocked", they are not protecting the company. They are protecting their bonus. If they have zero Git activity in the last 90 days, they are not protecting the company. They are decorating it.

What the Security Team Is Actually Supposed to Do in 2026

OK enough roasting. Let us be constructive for a paragraph or two.

The job description of a security team in 2026 is not "gatekeeper at the end of the line". It is not "the people you call before going to prod". It is, and I cannot stress this enough, continuous and automated.

Continuous means: every commit goes through SAST in CI. Every dependency change goes through SCA. Every deployment publishes signed SBOMs that get diffed automatically. Every container scan runs on every build. Every secret rotation is automated. None of this requires a human in the loop on the happy path. None.
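
One of those steps, the SBOM diff, is genuinely a few lines once the SBOMs are parsed. A minimal sketch, assuming CycloneDX-style JSON already loaded into dicts (the `components` / `name` / `version` field names are the only assumption here):

```python
# Compare two CycloneDX-style component lists and surface what changed
# between deployments. Adapt field names to whatever SBOM format your
# pipeline actually emits.

def sbom_components(sbom: dict) -> dict[str, str]:
    """Map component name -> version from a parsed SBOM document."""
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

def diff_sboms(previous: dict, current: dict) -> dict[str, list]:
    old, new = sbom_components(previous), sbom_components(current)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(n for n in set(old) & set(new) if old[n] != new[n]),
    }
```

Wire that into CI, alert on "added" and "changed", and you have an automated dependency watch that never asks an engineer to fill in a form.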

Automated means: the boring 20-tab Excel spreadsheet that nobody wants to fill in, the half-broken risk matrix that the security team passes around quarterly, the RFC template with 47 mandatory sections, all of that is generated from the artifacts. Not requested from the engineer. Generated. The repo, the deployment manifest, the access logs, the inventory database, those are the source of truth. The compliance documents are an export of that source of truth, not a parallel reality maintained by hand.
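
As a toy illustration of "generated, not requested": the change ticket can be rendered straight from data the pipeline already has. The manifest fields below are invented for the sketch; map them to whatever your ticketing system expects.

```python
# Hedged sketch: render a change-management document from the deployment
# manifest instead of asking the engineer to retype it. Field names are
# illustrative assumptions, not a real schema.

def render_change_ticket(manifest: dict) -> str:
    lines = [
        f"# Change: {manifest['service']} {manifest['version']}",
        f"Commit: {manifest['commit']}",
        f"Deployed by: {manifest['author']}",
        "",
        "## Files touched",
        *[f"- {f}" for f in manifest["files"]],
        "",
        "## Rollback plan",
        f"Redeploy previous version: {manifest['previous_version']}",
    ]
    return "\n".join(lines)
```

The document is an export of the source of truth. If the manifest is wrong, the ticket is wrong in the same way, which is exactly the property you want: one reality, not two.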

Who, honestly, in 2026, has the time to fill in a 20-sheet Excel with sections like "describe the data flow", "describe the threat model", "describe the rollback plan", all by hand, in prose, when the actual data flow lives in a diagram generated from the code and the actual rollback plan is one PyInfra command? Nobody who is also delivering. That is the answer. Nobody who is also delivering.

If your security and compliance posture cannot survive contact with high-velocity engineering, the posture is broken. You do not fix it by slowing engineering down. You fix it by automating the posture. Generate the artifacts from the systems. Verify them in CI. Sign them. Publish them. Free the humans to do the only thing humans are actually good at: judgment on the hard cases.

If you ask "yes, but who builds that automation?", you have just found your security team's job description for the next two years. Build the pipeline, not police the engineers.

Two Choices: Build the Automation, or Be Replaced By It

OK, time for the awkward conversation nobody in the security industry wants to have out loud.

A huge chunk of what your security team does today, hand on heart, is automatable. Not "automatable in the futuristic AGI sense". Automatable today, with tooling that already exists, by an engineer with a free afternoon and an LLM tab open. Filling risk matrices. Generating audit reports. Drafting RFC reviews. Mapping controls to frameworks. Producing the quarterly compliance summary nobody reads. Cross-referencing CVEs with the inventory. Writing the "describe the data flow" prose in change tickets. Verifying that the SOC 2 evidence is still where you put it last quarter. Building the access review spreadsheets. All of it. Automatable. Now.
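
Take one item from that list, cross-referencing CVEs with the inventory, and look at how little code it is. The data shapes below are my invention; real feeds (OSV, NVD) and real inventories are richer, but the join is the same:

```python
# Sketch: join a parsed vulnerability feed against the package inventory
# you already maintain. Data shapes are illustrative assumptions.

def affected_hosts(inventory: dict[str, dict[str, str]],
                   advisories: list[dict]) -> list[dict]:
    """inventory: host -> {package: version};
    advisories: [{"package", "bad_versions", "cve"}]."""
    findings = []
    for adv in advisories:
        for host, packages in inventory.items():
            version = packages.get(adv["package"])
            if version in adv["bad_versions"]:
                findings.append({"host": host, "cve": adv["cve"],
                                 "package": adv["package"], "version": version})
    return findings
```

That is an afternoon of work, scheduled nightly, replacing a quarterly spreadsheet exercise.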

Security professionals reading this, I am going to be direct, because I respect you too much to soften it:

You have two choices. Either you become the person who builds that automation, or you become the person whose job is eaten by it.

There is no third option. There is no "I will keep doing it manually because the AI is not perfect yet". The AI does not need to be perfect. It needs to be 10x faster than you and 80% as accurate, and your competitors will fire the slow 100%-accurate human in favor of the fast 80% AI plus one supervising engineer. That is the math. It is not a prediction. It is happening, right now, in the shops I audit this quarter.

The good news is that the path forward is fantastic for the people who take it. The security professional who learns to write code, build pipelines, use LLMs as power tools instead of being threatened by them, instrument their own job out of existence and then go solve harder problems? That person is going to be the most valuable hire in every company for the next five years. They get to leave behind the soul-crushing Excel work and spend their days on what humans are actually good at: judgment on hard cases, threat modelling on weird edge cases, incident response when something goes properly sideways, hunting attackers who are themselves now AI-augmented. The career upside is enormous.

The bad news is that the path is brutal for the people who refuse it. The compliance officer who keeps insisting that the 20-tab Excel "needs to be filled in by a human for traceability" is going to discover, in 18 months, that the engineer next to them generates the same Excel in 8 seconds, gets it signed off automatically, ships their feature, and goes home. The compliance officer becomes the slow one in the team. The expensive one. The one whose role keeps getting "restructured". They will write LinkedIn posts about how the industry has lost its way. They will not be entirely wrong about some of it. They will also not be employed.

This applies to me too, by the way. My job, infrastructure firefighting, is also being automated. A solid chunk of what I did manually in 2020, I now hand to Jinn, PyInfra, and the LLM tooling on top. I have spent the last three years making sure I am the person building the automation, not the person it replaces. That is the deal. That is the deal for everyone. Take it.

The security industry that emerges on the other side of this is going to be smaller, faster, more technical, and frankly better at protecting companies than the bloated compliance-heavy version we have today. That is good news for businesses, good news for customers, and good news for the security people who saw it coming. Pick a side. Pick it this quarter, not next year.

AI Changed the Game. "Vibe Coded" Is Now a Tell, Not an Argument.

Let me get to the AI angle, because it is the undercurrent of the whole conversation that nobody in the thread named directly.

In 2015, "one engineer, one weekend, full app replacement" was, genuinely, suspicious. The maintenance load alone would crush a solo engineer over time. The bus factor of one was real. The chance that the code was held together with luck and Stack Overflow was, statistically, not low. The "we need a small team to make this trustworthy" reflex made sense.

In 2026, that math has shifted. A single competent engineer with modern LLM tooling can, in a weekend, produce a clean, idiomatic, type-safe, test-covered application in a well-known stack. The code is generated, yes. It is also reviewed, refactored, and audited by the same tooling, multiple times, in ways human reviewers have never had the patience to do. The bus factor argument is also weaker: the code is documented inline, in a form the next maintainer (human or LLM) can read in minutes. The lone engineer of 2026 is, in operational terms, closer to a four-person team of 2018 than to a lone engineer of 2018.

Does that make "vibe coded" software automatically safe? Of course not. There are bad LLM outputs, hallucinated dependencies, subtle injection vectors, lazy auth, copy-pasted secrets. But there are bad human outputs too, and the bad human outputs are usually worse, because nobody re-reads them.

The correct security posture, in 2026, is not "vibe coded equals untrusted, hand-written equals trusted". That mental model is from a previous decade. The correct posture is artifact-first: the code, however it was produced, lives in a repo, gets reviewed, gets scanned, gets deployed via a known pipeline, gets logged, gets monitored, gets patched. The provenance of the keystrokes is not the security boundary. The behavior of the code is.

Treating "vibe coded" as a slur, in a thread, in 2026, has the same energy as treating "open source" as a slur in 2005. It is a marker, not of caution, but of being culturally three or four years behind.

And, again: the same tooling the security team is using as a punchline is exactly what the engineer used to make his code more rigorously reviewed than yours. Sit with that. Then go install the tool.

The LLM-In, LLM-Out Spiral: Where We Are Heading If Nobody Pumps the Brakes

Here is the future I am genuinely worried about. Not because of AI. Because of how teams reorganize around AI without thinking.

Picture the workflow we are sliding toward, in too many companies, right now:

  1. Engineer asks Claude to generate the code.
  2. Engineer asks Claude to generate the RFC justifying the code.
  3. Security person asks Claude to generate the review questions.
  4. Engineer asks Claude to generate the answers.
  5. Manager asks Claude to summarize the review.
  6. Compliance officer asks Claude to generate the change ticket.
  7. Auditor asks Claude to validate the change ticket.
  8. Everybody puts Co-Authored-By: Claude at the bottom of their commits, sticks a sparkle emoji on their LinkedIn bios, and goes home.

LLM in, LLM out, nobody knows anything. The artifacts pile up. The understanding does not. The first time something actually breaks in production, the four people responsible will discover that none of them can read the code, none of them know why a specific design decision was made, and the LLM that wrote it is currently being deprecated by the vendor.

This is the failure mode the industry is sleepwalking into. The fix is not "ban AI". The fix is to use AI for the parts where you do not need to deeply understand the artifact, and to refuse to use it for the parts where you do. Boring scaffolding, test generation, log triage, dependency upgrade chores: full speed ahead. Security-critical paths, architectural decisions, threat models, anything that someone will need to argue about at 3 AM during an incident: humans, in the loop, with their full attention.

The early signal of this failure is exactly the situation in our conversation. The compliance side cited a policy without having read it. The engineer audited his code with LLMs. Both sides leaning on tooling instead of comprehension, mirroring each other across the gap. The compliance side has been doing this for years, just with checklists instead of LLMs. The engineering side is starting to do it too, with the same risks. The healthy move is for both sides to stop performing diligence and start doing it.

The compliance team's accountability is to understand the policy they cite. The engineering team's accountability is to understand the code they ship. Both, in 2026, are easier to skip than ever. Both are also, paradoxically, more important than ever.

"We're Not Getting Eaten by Competition" Is the Most Expensive Cope in the Industry

There was a moment in the conversation, almost a throwaway, that I want to come back to.

I made the argument that this style of process friction was a serious risk because "competitors are catching up fast". The response, from the security side, was, paraphrased: "we are clearly not at that point AT ALL, you should ask the leadership team, that is not what I heard at the last quarterly, that is not what the CEO said in the most recent live AMA".

This is the moment I want every reader to freeze on. Because this exact sentence is being said, right now, in every company that is about to get its lunch eaten. It is the most expensive sentence in the modern enterprise. It is the cope of choice.

Let me spell out the cognitive trap. The company is growing. The metrics, on the slide deck, are up and to the right. The CEO says velocity is the priority. The town hall is upbeat. Therefore, the reasoning goes, we are fine. Therefore the engineer who is panicking about competitors is overreacting. Therefore the security team has the luxury of taking three months to evaluate a pre-prod tool.

The flaw in this reasoning is the difference between absolute growth and relative growth. You can be growing 30% a year and still be losing market share, if your competitors are growing 80% a year. You can be selling more this quarter than last quarter and still be heading for bankruptcy in four years, because every customer you onboard takes 50% longer than your competitor's onboarding, and every renewal feels harder than it used to. The death of a company in the AI era will not look like a crash. It will look like a slow, polite, well-documented loss of market share, narrated by upbeat quarterlies, until one day the numbers stop being good and nobody can quite say when the inflection happened.

The single best predictor of who wins this decade is how fast you can turn a working pre-prod into a production deployment. If your answer to that question is "depends on the next change board meeting", you are losing. Quietly, politely, in the most ISO-compliant way possible, but losing.

AI is going to amplify this gap. The companies that figure out how to integrate AI into product, into engineering, into security, into compliance, are going to compound that advantage week over week. The companies that treat AI as a thing engineers should be policed about will also compound. In the wrong direction. The gap will not be linear. It will be exponential. By the time the slide deck reflects it, it will be too late to course-correct.

If you find yourself denying that this dynamic applies to your company, with a wave of the hand and a reference to the latest town hall, please go and sit somewhere quiet, and read your competitor's release notes from the last six months. Read them honestly. Then come back.

The Right Reaction, As a Template

For the avoidance of doubt, here is what the security lead's first message in our conversation should have been. I will donate the template to any compliance professional who needs it.

"Hey S., this is great. Genuinely. The current thing has been embarrassing for years and I am glad someone is finally moving on it. I will clone the repo this afternoon and do a first pass: auth flow, dependency tree, secrets handling, deployment story. Expect a list of issues by tomorrow. In parallel, I will open the change management entry myself so you do not have to context-switch. Let us push this to prod ASAP."

Read that twice. Notice what it does. It thanks the engineer. It commits the security person to actual work. It removes a process burden from the engineer instead of adding one. And critically, it points the whole conversation at production, fast.

That last part matters most. Every day the legacy stays in production is a day the company looks bad, a day the embarrassing screenshots circulate, a day the sales team has to apologize for the look of the thing. The risk of moving fast is bounded, because the engineer is competent and the stack is conventional. The risk of moving slow is unbounded, because the legacy is rotting in production right now.

A security lead who optimizes for "no change happens" is not being prudent. They are extending the lifetime of the known bad state. That is the opposite of security work.

The CEO Got It Right

The CEO eventually jumped in. The core of his message: thank you to the engineer; velocity is our top priority; a working pre-prod is the correct moment to discuss things; the design decision on auth dependencies is policy, not negotiation; change management opened after the fact, on a concrete artifact, is healthy and not a violation; and to the security team specifically, your job is to be the people engineers want to come and talk to early, not the people they hide things from.

That is the right message. That is what leadership looks like in 2026. Not "I am the CEO, I follow the process". Rather: "I am the CEO, and part of my job is to override the process when the process is producing the wrong outcome, and the process is going to stop the right thing from happening this week."

A company governed by its own compliance, instead of by its product and engineering instincts, is a dead company that has not yet been informed. Compliance is the brakes on the car. The car still needs to move.

Closing: The Real Threat Model

If you take one thing from this article, take this.

In 2026, the threat to your company is not the engineer who shipped a working pre-prod over a weekend with LLM tooling. The threat is the layer of people around that engineer who do not know how to engage with what just happened.

The security person who cites a standard without reading the emergency clause.

The compliance officer who treats RFCs as the work, instead of the documentation of the work.

The middle manager who is more afraid of an audit finding than of being out-shipped by a faster competitor.

The colleague who is convinced "we are not at risk", because the last town hall was upbeat.

These are the people who are killing companies in 2026. Not "vibe coders". Not "rogue engineers". The professionally cautious. The aggressively compliant. The people who use the word "governance" as a shield against having to do anything difficult.

If your company has a single engineer like S. on payroll, your job, whatever your title, is to make his life easier. Open the repo. Read the code. Open the change ticket on his behalf. Bring him coffee. Push for prod by Friday.

If you do not, someone else will. That someone else will eat your customers, your sales pipeline, and eventually your engineers. They will do it in a fraction of the time it took you to write your last RFC. They will do it with a small team, modern tooling, and a CEO who treats velocity as a survival trait, not a buzzword.

And by the way, the same engineer, in the same thread, a couple of days later, casually shipped passwordless WebAuthn login. Fixed a real-world compatibility bug live. Did not ask permission. Did not open an RFC. Did the work. Showed the work.

That is the company you want to be in. That is the engineer you want to thank.

And if your first instinct on reading this is to draft an email to your security team asking them to "tighten the AI usage policy", please, for the sake of your shareholders, your customers, and your remaining engineers: log off, and go read a repo.

If you are reading this and thinking "we don't do that at our place", congratulations, you have joined an exclusive club. If you are reading this and thinking "oh god, that's our last Tuesday", you know what to do. The rest of us will be in the repo, probably with Claude open in another tab, doing the actual work.