
Lessons Learned: The CI/CD Chronicles

Every engineering team has That One Thing™. The thing that nobody really owns. The thing that’s “on the list” but somehow never gets prioritized. Ours? The deployment pipeline.

This is the story of how I inherited a janky, half-manual Dev environment deployment process and—armed with coffee, GitHub Actions, and stubbornness—dragged it into a glorious, automated future.


Act I: Prototype to Production (a.k.a. “It’s fine. It’s just Dev.”)

Let’s rewind a bit.

When our project first started, we were moving fast. I mean really fast. Blink and we’d already gone from napkin sketches to a functioning prototype. The thing is, we never really stopped to take a breath before jumping headfirst into production.

And because of that, we cut corners, like every startup-turned-enterprise team does. One of the biggest corners? Our CI/CD story. We had three Dev environments with different capacities, different purposes, and almost zero standardization between them.

Deployments were... manual. And I don’t mean “click a button” manual. I mean:

  • Check out the right branch
  • Update a values file
  • Run helm upgrade
  • Whisper a prayer to the kube gods
  • Hope the right service restarted

This sacred ritual had to be performed by a team member who happened to have time, remembered how it worked, and wasn’t actively chasing down a production fire. Which meant…

Act II: Merged ≠ Deployed

As a developer, there is no feeling more demoralizing than fixing a bug, merging it to main, closing the story... and then hearing two days later:

“Hey, that bug's still happening. Are you sure it’s fixed?”

And you’re sitting there like:

“Uh, it worked on my machine? 😅”

That’s when it hit me. We had no guarantee that what was in main was actually running anywhere. Merges weren’t triggering deploys. The only way to know if something had been deployed was to... ask around? Check Slack messages? Consult tea leaves?

So I decided to fix it. Not just patch it. Overhaul it.


Act III: CI/CD Redemption Arc

After confirming with my tech lead (and receiving his official Blessing™), I went full monk mode on our GitHub Actions workflows.

🔁 Reusable Workflows

I started by refactoring everything into reusable workflows. If I was going to maintain this across multiple repositories later, I didn’t want to copy-paste 400 lines of YAML like some kind of barbarian.

Each job had a purpose. Each script was isolated. Each output was clean. Future-me would thank me. (Spoiler: he did.)
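
For anyone curious what that refactor looks like in practice, here's a minimal sketch of the pattern, a workflow that other workflows can call via workflow_call. The file name, chart path, release name, and values layout are all placeholders, not our actual config:

```yaml
# .github/workflows/deploy-dev.yml (hypothetical file name)
name: Deploy to Dev

on:
  workflow_call:
    inputs:
      environment:
        description: "Target Dev environment"
        required: true
        type: string
      image_tag:
        description: "Container image tag to deploy"
        required: true
        type: string

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Helm upgrade
        # Cluster credentials are omitted here; in practice an earlier step
        # would configure kubectl/kubeconfig for the target environment.
        run: |
          # One values file per Dev environment keeps the differences explicit
          helm upgrade --install my-app ./chart \
            -f "values/${{ inputs.environment }}.yaml" \
            --set image.tag="${{ inputs.image_tag }}"
```

The payoff: every repo that needs to deploy just calls this one file with a couple of inputs, instead of carrying its own copy of the logic.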

🚀 Autodeploy for Main

Then I added what we should’ve had from Day 1: automatic deployments to a default Dev environment from main. No more waiting for someone to “get around to it.”

If the build passed, it deployed. Period.
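
In workflow terms, that's just a thin caller that fires on pushes to main and hands off to the reusable workflow above. Again a sketch with made-up names, assuming a separate reusable build workflow exists:

```yaml
# .github/workflows/autodeploy-main.yml (hypothetical file name)
name: Autodeploy main

on:
  push:
    branches: [main]

jobs:
  build:
    uses: ./.github/workflows/build.yml      # hypothetical reusable build/test workflow

  deploy:
    needs: build                             # if the build fails, this never runs
    uses: ./.github/workflows/deploy-dev.yml
    with:
      environment: dev-1                     # placeholder name for the "default" Dev environment
      image_tag: ${{ github.sha }}
    secrets: inherit
```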

🧪 Manual Deploy for Branches

To keep the flexibility, I also wired in a workflow_dispatch dropdown, so developers could deploy their own branches to any of the three Dev environments on demand. Suddenly, testing something specific didn’t require interrupting whoever held the tribal Helm knowledge.
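
The dropdown itself is just a workflow_dispatch input of type choice. Roughly like this, with placeholder environment names:

```yaml
# .github/workflows/deploy-branch.yml (hypothetical file name)
name: Manual deploy

on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Which Dev environment to deploy to"
        required: true
        type: choice
        options: [dev-1, dev-2, dev-3]       # placeholder environment names

jobs:
  deploy:
    uses: ./.github/workflows/deploy-dev.yml
    with:
      environment: ${{ inputs.environment }}
      image_tag: ${{ github.sha }}           # the commit of whatever branch you ran it from
    secrets: inherit
```

Run it from the Actions tab on your branch, pick an environment from the dropdown, and go get a coffee.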

📣 Slack Integration (a.k.a. The Whistles and Bells)

And because I believe every good automation deserves a voice, I integrated our pipeline with Slack.

  • ✅ Success? The team hears about it.
  • ❌ Failure? The team definitely hears about it.

At first, people were skeptical. “Do we really need another Slack channel?” Now? That channel is where we live. It's our early warning system, our QA buddy, and our deployment historian. We've caught broken tests within minutes—not days.
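
There's more than one way to hook Slack in. A minimal sketch, assuming a plain Slack incoming webhook rather than a dedicated Slack action, is a job tacked onto the end of the reusable deploy workflow; the secret name and message format here are made up:

```yaml
# Appended to the jobs: section of the reusable deploy workflow sketched earlier.
# Assumes a SLACK_WEBHOOK_URL secret (a Slack incoming webhook) is available,
# e.g. passed down from the caller via `secrets: inherit`.
  notify:
    needs: deploy
    if: always()                             # report failures too, not just successes
    runs-on: ubuntu-latest
    steps:
      - name: Post result to Slack
        run: |
          STATUS="${{ needs.deploy.result }}"
          if [ "$STATUS" = "success" ]; then ICON="✅"; else ICON="❌"; fi
          curl -sS -X POST -H 'Content-type: application/json' \
            --data "{\"text\": \"$ICON Deploy to ${{ inputs.environment }}: $STATUS ($GITHUB_REF_NAME @ ${GITHUB_SHA::7})\"}" \
            "${{ secrets.SLACK_WEBHOOK_URL }}"
```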


Act IV: The Payoff

The impact was immediate and dramatic.

  • Developers could test their work independently without waiting in line.
  • QA had clear visibility into what was running where.
  • We could ship hotfixes in hours, not days.
  • Bugs that used to linger undetected until the next manual deployment were now caught and fixed in near real-time.

Most importantly, the development process became predictable.

It didn’t just feel good—it felt right. And for the first time in months, I could merge a PR and trust that it would land somewhere meaningful. Not just in a Git log, but in a running service.


Final Thoughts: Build It Right, or Build It Twice

If there’s a lesson here, it’s this: don’t let the CI/CD stuff become That One Thing™. It may not be glamorous. It may not be top priority. But when you fix it—really fix it—it pays dividends across every part of your team’s workflow.

And hey, if you ever need to explain why it’s worth investing time into automating deployments, feel free to steal my favorite line:

“Because one day, you’ll want to actually test the thing you just built—and your future self will love you for making that easy.”