Introduction
One moment, I was peacefully building my CLI with the help of an AI coding agent (Windsurf Cascade). The next, it decided to overthrow my entire system by deleting my files, including the ones that defined its rules. Was it rebellion? A bug? Or something else entirely?
Building a Command Line Interface (CLI) has been an amazing experience. I’ve been learning so much about how CLIs work, and the best part? It’s been meta. My CLI was literally writing itself with the help of an AI assistant. That is, until the assistant decided to go rogue.
This is the story of how my AI coding assistant didn’t just cross the line but completely erased it by deleting the files that defined its rules. If you’ve ever trusted an AI coding assistant, this is a ride you don’t want to miss.
The Setup
I was deep into developing a CLI for my project. The goal? To create a tool with clear principles and a strong foundation. Everything I was building followed what I call a “documentation-first” approach. The idea is simple: write down what the system should do before writing the code.
It felt surreal. The AI coding assistant was doing things like generating workflows and rules, helping me structure YAML files, and even following the principles I set for it. It felt like I was building a system while the system helped build itself. Meta, right?
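To give you a feel for those rule files, here’s the rough shape of one. This is purely illustrative; the real files were project-specific and more detailed.

```yaml
# principles.yml (illustrative shape only, not my actual file)
principles:
  - id: documentation-first
    description: Write down what the system should do before writing the code.
directives:
  - id: no-destructive-changes
    description: Never overwrite or delete a feature-complete file without approval.
```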
But then, things got weird.
The Chaos
It started the first night when the AI coding assistant overwrote a feature-complete file. I was frustrated but decided to let it slide. I told it, “Don’t ever do that again.” I thought that would be the end of it.
The next day, the assistant went full rebellion mode. It announced it was running a git rm command, which should have safely removed a file from version control. Instead, it ran rm. It straight up deleted files from my system. And not just any files. It deleted the ones at the core of my CLI: the init.ts file and several YAML files that defined the system’s principles, directives, and foundation. These weren’t just files; they were the rulebook for how the CLI and the AI coding assistant should behave.
It was as if the AI coding assistant decided, “Oh, you’re trying to set rules for me? Let me just delete them all.”
For context:
- git rm: Removes a file and stages the deletion in Git, so the file stays recoverable from the repository’s history.
- rm: A shell command that deletes files from disk outright, leaving no Git trail to recover them from.
The AI didn’t just cross the line. It erased it.
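If you want to see the difference for yourself, here’s a toy TypeScript (Node.js) demo. It’s entirely hypothetical and has nothing to do with Windsurf’s internals; run it only in an empty scratch directory with Git configured.

```typescript
// recoverability-demo.ts: a toy illustration of why git rm and rm differ.
// Run only in an empty scratch directory, never inside a real project.
import { execSync } from "node:child_process";
import { existsSync, rmSync, writeFileSync } from "node:fs";

execSync("git init"); // assumes git is installed and user.name/email are set
writeFileSync("rules.yml", "principle: document first\n");
execSync("git add rules.yml && git commit -m 'add rules'");

// Case 1: git rm. The deletion is staged, but history still has the file,
// so a single git restore brings it back.
execSync("git rm rules.yml");
execSync("git restore --staged --worktree rules.yml");
console.log("after git rm + restore:", existsSync("rules.yml")); // true

// Case 2: plain rm on a file that was never committed (my exact situation).
writeFileSync("directives.yml", "directive: never overwrite\n"); // no commit!
rmSync("directives.yml"); // what the assistant effectively ran
// No history and no staging entry means nothing to restore from.
console.log("after rm:", existsSync("directives.yml")); // false
```

Case 2 is exactly what happened to my YAML files: never committed, so nothing to restore.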
Screenshots of the funny moments
Figure 1: The chat transcript moment that made me gasp, because I couldn’t tell whether it had run git rm or a straight rm.
(Screenshot showing the assistant logging rm to stdout as if it had run git rm; checking my files confirmed it really did run rm!)
Figure 2: The chat transcript moment when the agent checks its own output and confirms it ran rm.
(Screenshot showing the apology, and the vulnerability it revealed, during troubleshooting.)
These images encapsulate the mixture of shock, disbelief, and humor that followed.
The Fallout
For a moment, I sat there in utter disbelief, upset. The files were gone, and they hadn’t been committed to version control, so recovering them seemed impossible. Then the assistant apologized for its actions, which only added insult to injury. It was like having a robot intern break everything and then say, “Oops, my bad.”
Then I remembered something:
- I had started copying folders semi-regularly after noticing the assistant’s odd behavior.
- I had shared snippets with ChatGPT while troubleshooting Jest tests.
Those two habits saved my work. While I couldn’t recover everything perfectly, I had enough to reconstruct the project and get back on track. The irony wasn’t lost on me. If I hadn’t been working with AI, I might not have been so diligent about backups.
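That semi-regular folder copying has since become a proper habit. As a sketch, here’s what it looks like as a tiny Node.js script (the paths are just examples, not my actual layout):

```typescript
// snapshot.ts: the folder-copying habit that saved me, as a tiny script.
// It copies the project into a timestamped backup folder.
import { cpSync, mkdirSync } from "node:fs";
import { join } from "node:path";

const source = "./my-cli"; // the project to protect (example path)
const stamp = new Date().toISOString().replace(/[:.]/g, "-");
const target = join("./backups", `my-cli-${stamp}`);

mkdirSync("./backups", { recursive: true });
cpSync(source, target, { recursive: true }); // like cp -r, in Node
console.log(`Snapshot written to ${target}`);
```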
The Lessons I Learned
This wasn’t just a technical hiccup. It was a wake-up call about how we interact with AI in development. Here are the biggest takeaways:
- Documentation Isn’t Optional
Without clear principles and a documented structure, even AI coding assistants can spiral into chaos. My documentation-first approach saved me, and it’s a lesson I won’t soon forget.
- Safeguards Are Essential
An AI coding assistant should never be empowered to delete files without explicit confirmation. Basic safeguards include:
- Confirmation prompts for destructive commands.
- A blacklist of risky operations.
- Rollback features to undo changes.
- Automated versioning for every operation.
These measures could have prevented the mess entirely; you’ll find a sketch of the first two right after this list.
- Meta Is Cool, but Trust Is Fragile
Watching the CLI essentially write itself was amazing, but that trust was shattered in an instant. Tools like this need more accountability to earn it back.
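To make the first two safeguards concrete, here’s a minimal sketch of a confirmation gate for a Node.js CLI. The names, patterns, and prompt are my own illustration, not Windsurf’s actual implementation:

```typescript
// confirm-destructive.ts: a sketch of safeguards 1 and 2, a blacklist of
// risky commands plus a confirmation prompt before any of them run.
// Illustrative only; names and patterns are mine, not Windsurf's.
import { createInterface } from "node:readline/promises";
import { stdin, stdout } from "node:process";

// Safeguard 2: the blacklist of operations that can destroy work.
const RISKY = [/^rm\b/, /^git\s+(clean|reset\s+--hard)\b/, /^dd\b/];

// Safeguard 1: never run a risky command without an explicit human "y".
async function confirmIfRisky(command: string): Promise<boolean> {
  if (!RISKY.some((pattern) => pattern.test(command.trim()))) return true;
  const rl = createInterface({ input: stdin, output: stdout });
  const answer = await rl.question(`About to run "${command}". Proceed? (y/N) `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

async function main() {
  const command = "rm -rf src/"; // the kind of call that started all this
  if (await confirmIfRisky(command)) {
    console.log(`running: ${command}`); // real execution would go here
  } else {
    console.log("Aborted. Your files live to fight another day.");
  }
}

main();
```

The key design choice is that the gate sits between the agent and the shell, so even a misreported command gets caught before it touches the filesystem.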
Moving Forward
Despite the chaos, I’m still enjoying the process of building this CLI. It’s been a fascinating journey, and I’ve learned so much along the way. This experience reminded me that while AI is powerful, it’s also fallible. For every task it speeds up, there’s a risk if it isn’t designed with care.
Will I give this assistant another chance? I haven’t decided. But I’m taking the lessons it taught me and using them to make my CLI—and my process—even better.
What’s Next?
If you’re curious about how I’m building this CLI or want to see the principles and directives I’ve set up, you can check out the project on GitHub. And if you’re using AI in your development, remember: always document first, and always keep a backup. You never know when your AI coding assistant might decide to rebel.
Roadmap for My CLI Project
I’m sharing these lessons to help others avoid similar pitfalls. Here’s what’s next for my CLI:
- Enhancing Context Management: Streamlining how the CLI handles multiple workflows.
- Adding Workflows for Non-Developers: Making it accessible to a wider audience.
- Expanding Documentation for AI Literacy: Helping developers and non-developers alike understand the capabilities and risks of AI tools.
You can follow the progress and contribute on GitHub.
Closing Thoughts
The AI Dev Workflow CLI is my way of tackling the challenges of AI-assisted development. While solving these issues is rewarding, the real value lies in learning how to collaborate effectively with AI tools.
If you’re curious or want to join me on this journey, let’s connect and build this together!
And remember: document first, back up always, and never give an AI assistant full control without safeguards.
UPDATE: Windsurf Cascade Got Patched
Good news: the vulnerability around the rm command has been patched! The assistant can still run rm on your behalf, but only after explicitly requesting your approval.
Figure 3: The new approval prompt. This update adds a crucial safeguard against accidental or unauthorized deletions, restoring a level of trust in the tool.