Lessons Learned: When Your UI Looks Fine… Until It Doesn’t
(Or, How I Stopped Worrying and Added Cypress to the Pipeline)
Let’s be honest: there’s no worse feeling than hearing from a client that a feature you know was working is suddenly… not.
“Hey, that button doesn’t do anything anymore.” “It worked yesterday!” “Well, it doesn’t today.”
Cue the cold sweat and the quick scroll through Git logs. Welcome to the wonderful world of UI regressions.
The Context: Backend Was Covered. UI… Not So Much.
In our project, we had a solid foundation of backend tests. The pipeline would catch broken APIs, failed integration chains, even those sneaky edge cases someone tried to smuggle through on a Friday evening.
If a dev made a change that introduced a side effect? Boom. Pipeline failure. That’s the dream, right?
But the frontend? Different story.
We had no automated UI testing. And that meant regressions would slip through like ninjas—completely undetected—until they ambushed us in production.
The Symptoms: Everything Works... Except When It Doesn't
We saw the signs:
- Bugs that only showed up after deployment.
- Features mysteriously broken in one environment but working in another.
- Devs saying, “It works for me,” followed by a pipeline yelling, “No it doesn’t.”
Worse? Even our builds weren’t always reproducible. We’d see a test pass on one developer’s machine and then fail in CI/CD.
That, to me, was a big red flag.
The Diagnosis: No Visibility = No Confidence
When your UI changes break things silently, you're flying blind. And clients don’t care if your backend has 97% test coverage. If the dropdown doesn’t work, you’re the villain.
So I made the case to the team:
“Let me add Cypress tests. Let me bake them into the pipeline. Just a few smoke tests. If we hate it, we roll it back. Deal?”
They agreed. And I got to work.
The Solution: Cypress + CI + Sanity
🧪 Step 1: Add Cypress
I started with core user flows:
- Login
- Navigation
- Button clicks
- Input validation
- State changes
It wasn’t about 100% coverage; it was about risk coverage. Cover what breaks often, as in the sketch below.
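The routes, data-cy selectors, and copy here are invented for this post rather than lifted from our codebase, but the shape is the same for every flow on that list:

```js
// cypress/e2e/smoke.cy.js (routes and data-cy selectors are
// hypothetical; adjust to your own app)
describe('smoke: core user flows', () => {
  it('logs in and lands on the dashboard', () => {
    cy.visit('/login');
    cy.get('[data-cy=email]').type('user@example.com');
    cy.get('[data-cy=password]').type('correct-horse');
    cy.get('[data-cy=submit]').click();
    // The state change we actually care about: the URL and a visible greeting
    cy.url().should('include', '/dashboard');
    cy.contains('Welcome back').should('be.visible');
  });

  it('flags an invalid email before submitting', () => {
    cy.visit('/login');
    cy.get('[data-cy=email]').type('not-an-email');
    cy.get('[data-cy=submit]').click();
    cy.contains('Please enter a valid email').should('be.visible');
  });
});
```

Five flows, two assertions each. Boring on purpose: a smoke test's job is to be cheap, stable, and loud.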
🔄 Step 2: Integrate with the Pipeline
I wired Cypress into our GitHub Actions workflow (a rough sketch follows the list), so every PR now ran:
- Frontend build
- Cypress smoke tests
- Slack notification on failure (because nothing says “fix your PR” like a ping at 9 a.m.)
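Roughly, that workflow looked like the sketch below. Treat it as a sketch under assumptions: the job name, port, and Slack webhook secret are placeholders, and the install/build/start commands are the generic yarn ones, not necessarily ours.

```yaml
# .github/workflows/frontend-ci.yml (a sketch; names, port, and the
# Slack secret are placeholders, not our exact config)
name: frontend-ci
on: pull_request

jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Pin Node to the version in .nvmrc so CI matches local dev
      - uses: actions/setup-node@v4
        with:
          node-version-file: .nvmrc
      # The official Cypress action installs deps, builds the app,
      # boots it, waits for it, then runs the specs
      - uses: cypress-io/github-action@v6
        with:
          install-command: yarn install --frozen-lockfile  # fail loudly on lockfile drift (yarn v1)
          build: yarn build
          start: yarn start
          wait-on: 'http://localhost:3000'
      # On failure, post to a Slack incoming webhook stored in secrets
      - name: Notify Slack
        if: failure()
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: >
          curl -sS -X POST -H 'Content-type: application/json'
          --data '{"text":"Cypress smoke tests failed: ${{ github.repository }} ${{ github.ref_name }}"}'
          "$SLACK_WEBHOOK_URL"
```

The --frozen-lockfile flag was a deliberate choice: it makes CI fail the moment yarn.lock drifts from package.json, which pays off in the next section.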
🔍 Step 3: Track Consistency
Once every PR ran the same build and the same specs in the same environment, we finally had a baseline to compare against. And then we started seeing the real problems.
Bonus Problems: Dependency Drift and the npm vs. yarn War
Once we had consistent builds in CI, we realized just how inconsistent local dev environments were.
- Some devs used npm
- Some used yarn
- Both seemed innocent... until they weren’t
One particular regression baffled us.
Cypress passed locally. It failed in the pipeline.
Turned out: someone had run npm install on a repo originally set up for yarn.
Here’s what we learned:
- npm's legacy dependency resolution pulled in a package version that yarn refused to build with.
- yarn.lock got out of sync.
- The app failed silently on dev machines, and loudly in CI.
We traced it to one rogue package that wasn’t yarn-compatible in its latest release. Boom. Mystery solved. Rage contained.
We fixed it by:
- Locking the Node version via .nvmrc
- Enforcing yarn-only installs via a preinstall check in package.json (snippet below)
- Documenting it in our onboarding guide: “Do NOT run npm install unless you’re ready for pain.”
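If you want the same guard, the preinstall hook can lean on the community only-allow package, which aborts any install not driven by the package manager you name. A minimal package.json excerpt (the Node version is illustrative, and JSON won't let me comment inline):

```json
{
  "engines": { "node": ">=18" },
  "scripts": {
    "preinstall": "npx only-allow yarn"
  }
}
```

And .nvmrc is just a one-line file with the Node version: nvm users pick it up locally, and actions/setup-node can read it in CI, so local and pipeline builds at least agree on the runtime.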
The Result: Less Guessing, More Confidence
After all this:
- Regressions dropped noticeably.
- Our confidence in each release increased.
- Developers stopped playing “Does it break in Prod?” roulette.
- And the UI finally had a seat at the grown-up testing table.
Final Thoughts: Test Your UI, or Prepare to Babysit It
Look—automated tests won’t save you from everything. But without them, you're one typo away from embarrassment.
The real lesson?
If your frontend has no tests, you don’t have a frontend—you have a coin toss.