SheHacksPurple: February 2026

Call your MP for me? Or do it for better code!

The SheHacksPurple Nerd-a-licious Newsletter

💜 Hit ‘reply’ to send me a message! I read every response and love hearing from you. 💜 

Hi Everyone!

Exciting news! Petition e-7115 has reached its minimum goal of 500 signatures!!!! My petition asks the Government of Canada to create a mandatory secure coding standard for all federal departments and orgs. But our work isn't done. The petition remains open for signatures until May 26, 2026, and after that it will be presented in the House of Commons for a vote. We need them to vote yes! To ensure it passes, it's crucial that Members of Parliament (MPs) have heard about it and know that YOU want them to vote YES on e-7115.

Here's how you can help:

  1. Sign the Petition: If you haven't already, please sign here: Petition e-7115.

  2. Contact Your MP: Reach out to your local MP to express your support for the petition and urge them to vote YES when it's presented during question period. You can find your MP's contact information by entering your postal code here: Find Your MP.

Sample Potential Message to Your MP:

Subject: Support for Petition e-7115 on Mandatory Secure Coding Policy

Dear [MP's Name],

As a constituent of [Your Riding], I am writing to express my strong support for Petition e-7115, which calls for the establishment of a mandatory secure coding standard across all federal departments and Crown corporations.

In an era where cybersecurity threats are increasingly sophisticated, it's imperative that our government takes proactive measures to safeguard sensitive information and critical infrastructure. Implementing standardized secure coding practices will not only enhance our national security but also reduce the risk of costly breaches and service disruptions.

I urge you to support this petition and advocate for its adoption in the House of Commons.

Sincerely,
[Your Name]
[Your Address]
[Your Email]

Please call your MP for me????

Chad and me at WWHF!

I just got home from the Wild West Hackin' Fest in Denver, where I lost at a hacker gameshow, did quite well at training, and got to see several wonderful friends.

I received a request from Dan for some content on several topics, and I wasn’t sure how to do it all, so instead I found great articles by other people to share with you.

Thank you for telling me topics that interest you. I am not an expert on everything, but I do what I can!

PS You may have noticed not a lot of content this month. I know. I took some time off, but I am now getting back to work. :-D

Jason Haddix, me and Julia Haddix. Surrounded by awesome at Wild West!

AI Is Everywhere. So Why Are 37% of Security Teams Still Running Manual Workflows?

Visibility is solved. Actionability isn't. The 2026 Actionability Report reveals why top teams are pulling ahead. Only 1 in 3 keep their asset inventory current. 51% lose critical context during remediation. 37% are still stuck in manual workflows. The data is there. The action isn't. Find out how elite security teams are closing the gap.

New Content!

Events!

Article

The Psychology of Bad Code Part 3 - Vibe Coding

For the rest of this series, I am going to follow a similar format for each post/behavior. I will name the behavior, then various biases and heuristics that I believe apply, and then give some examples that may or may not feel familiar. Next, I will cover why the behavior seems reasonable at the time, and how it’s causing a security problem. I will follow up with suggested solutions, and I might indulge a little in that section, but hopefully you don’t mind. As always, feel free to send feedback!

- TJ

The behavior: Vibe Coding

AI-assisted, fast, contextless coding without verification.

What this looks like in the real world

  • Accepting and committing AI-generated code because it compiles and looks clean

  • Skipping review because “the AI probably got it right”

  • Letting AI write auth, parsing, validation logic, or other complex security controls

  • Reviewing outputs quickly instead of reasoning through every part

This often shows up when we are under time pressure or when we feel behind.

Behavioral biases at play

  • Automation bias: We trust suggestions from automated systems, especially when they appear confident. AI is so confident.

  • Fluency bias: Clean, readable code feels more ‘correct’ than it actually is. It just looks good.

  • Cognitive offloading: We delegate thinking to tools when they seem reliable. Some people might call this laziness, but I don’t think that’s fair. We work in tech to make things easier. We’re trained to always seek out the easiest way. It’s literally our job.

These biases are both common and normal. They exist to conserve mental energy. They aren't bad; usually they serve us well. But not in this case.

Why this behavior makes sense in the moment

  • AI tools are right a lot of the time.

  • The code they produce looks professional and complete. Plus, it compiles!

  • We are usually rewarded for speed rather than quality (which means less security in this case)

  • Reviewing AI code might feel redundant for some people

  • It usually takes a long time before small shortcuts get caught (such as the annual pentest)

This seems like rational behavior for a high-pressure situation. You might do this. I might do this too.

The security risk

As a person who creates training and reviews a lot of code as part of that process, let me tell you the stuff I’ve seen the AI get wrong…

  • Missing authorization or incorrect access checks

  • Incomplete or poor quality input validation

  • Assumptions about trust boundaries that are totally wrong (implied trust)

  • Error handling that leaks sensitive information, or is missing altogether

  • Security controls that are simply missing, even where the AI "knows" they belong. If you don't ask for a control explicitly, there's a good chance it won't be there.

The biggest risk is the context being wrong, which can cause a cascade of issues. AI does not know your system unless you literally give it a copy. And sometimes that still isn’t enough.

Let’s call this context collapse. The AI generates plausible code without any understanding of your system’s history, trust boundaries, or constraints.
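To make the missing-authorization failure concrete, here's a minimal sketch (the data model and function names are hypothetical, invented for illustration): code that looks clean and passes a happy-path test, next to a version with the ownership check the AI rarely adds unless asked.

```python
# Hedged sketch with a made-up data model, not from any real codebase.
# Shows the classic slip: the code "works" but never checks authorization.

DOCUMENTS = {
    "doc-1": {"owner": "alice", "body": "alice's notes"},
    "doc-2": {"owner": "bob", "body": "bob's notes"},
}

def get_document_vibe_coded(user: str, doc_id: str) -> str:
    """Compiles, looks clean, passes a happy-path test... and lets any
    logged-in user read any document (missing authorization)."""
    doc = DOCUMENTS[doc_id]  # no ownership check at all
    return doc["body"]

def get_document_reviewed(user: str, doc_id: str) -> str:
    """Same feature, with the access check added during human review."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != user:
        # Fail closed, without leaking whether the document exists.
        raise PermissionError("not found or not authorized")
    return doc["body"]
```

Both versions pass the obvious "alice reads her own document" test; only the reviewed one stops bob from reading alice's notes. That gap is exactly what a quick skim of fluent-looking code tends to miss.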

Solutions:

Although training developers on how to use AI more safely and how to review code is a great foundation (which is what I do for a living, in case you want to hire someone for that), we need to do more than just training. If we expect developers to rely on willpower to resist taking shortcuts, we are likely to end up disappointed. Let's look at some ideas for behavioral and system-level fixes.

AI System Setup

Let's start with setting up whatever approved AI your developers have access to with security by default. Let's connect a RAG server with secure code examples, or anything else you can give it so that it has better code to reference. I realize a lot of people don't have something to work with for this, but I swear I will get to this at some point!

Up next, let’s set up a list of prompts that the AI should apply every single time (add it to the memory), so that it auto-reviews the code it generates and cleans it up. I suggest you turn your secure coding guideline into prompts. If you don’t have a guideline, you can use mine.
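One way to turn a guideline into reusable prompts is to keep the rules as data and generate the standing review instruction from them. This is a hedged sketch; the rules below are generic illustrations, not the author's actual guideline.

```python
# Hedged sketch: composing a standing "auto-review" prompt from a secure
# coding guideline. The rules here are illustrative placeholders.

GUIDELINE_RULES = [
    "Validate and sanitize all input on the server side.",
    "Enforce authorization on every request, not just authentication.",
    "Use parameterized queries; never concatenate SQL.",
    "Fail closed: errors must not leak stack traces or secrets.",
]

def build_review_prompt(rules: list[str]) -> str:
    """Compose one prompt the assistant should apply to every suggestion
    (e.g. stored in the tool's memory or system prompt)."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return (
        "Before returning any code, review it against these rules and "
        "fix any violations:\n" + numbered
    )
```

Keeping the rules in one list means your IDE prompt, your PR checklist, and your training material can all be generated from the same guideline instead of drifting apart.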

Secure Defaults

If you can find a technical way (each IDE and AI assistant is different) to prompt the user to review risky code before accepting suggestions, that would be a great nudge (a well-known type of behavioral economics intervention). For example: "This line modifies auth logic. Review carefully."

If you can add a checklist for code review as part of your pull request process, that would also be helpful. If you can have it force an additional reviewer whenever complex security controls are changed or added, that would be a nice point of friction to ensure we give it more attention.
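The "extra reviewer on risky changes" idea can be sketched as a simple check over a pull request's changed files. The path patterns and the two-reviewer rule below are my assumptions for illustration, not a real CI configuration.

```python
# Hedged sketch: decide how many PR approvals to require based on whether
# any changed file looks security-sensitive. Patterns are assumptions.

SENSITIVE_PATTERNS = ("auth", "crypto", "session", "validation")

def required_reviewers(changed_files: list[str]) -> int:
    """Return how many approvals a pull request should need: two when any
    changed file path matches a security-sensitive pattern, else one."""
    risky = any(
        pattern in path.lower()
        for path in changed_files
        for pattern in SENSITIVE_PATTERNS
    )
    return 2 if risky else 1
```

In practice you'd wire this into whatever your platform offers (branch protection rules, code-owner files, or a small CI script), but the decision logic stays this simple.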

Let’s Talk Friction

If we add a pause or some other sort of 'friction' to make someone think a bit more while making important decisions, we get better results. It's like adding a barrier to entry: it's not huge, but it's enough to make someone stop and think. For friction, what about requiring a short, written explanation of what the AI-generated code does before we merge it? If we can't explain it, perhaps we shouldn't commit it. I'd love to hear other ideas for friction or important places to pause.
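The "explain it before you merge it" friction could be enforced with a tiny merge check. This is a hedged sketch: the section marker and the minimum word count are arbitrary choices I made up for illustration.

```python
# Hedged sketch: block a merge unless the PR description contains a short
# written explanation of the code. Marker and threshold are assumptions.

EXPLANATION_MARKER = "## What this code does"
MIN_WORDS = 25

def explanation_ok(pr_description: str) -> bool:
    """Pass only if the description has a 'What this code does' section
    with at least MIN_WORDS words after the marker."""
    _, marker, rest = pr_description.partition(EXPLANATION_MARKER)
    if not marker:
        return False  # section is missing entirely
    return len(rest.split()) >= MIN_WORDS
```

The point isn't the word count; it's forcing the author to reason through the generated code in their own words before it ships.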

Conclusion

Vibe coding is not 'bad' per se, but handing over all our decision making to powerful tools we do not understand certainly is. Let's design systems to help us avoid falling into this obvious trap.

We end with a meme.

I feel her pain!