A platform the team had given up on
Postmedia
The situation
After a major acquisition, Postmedia ended up running a legacy platform at 100% capacity, with chronic outages. The team was exhausted. They'd been close to the system for so long that they'd stopped seeing it. Their honest belief was that they'd hit a hard physical limit — that the only way out was a multi-year, multi-million-dollar rebuild.
What I did
I sat with the system for a few weeks before doing anything. Just looking. Talking to people. Reading code. There was a caching configuration the team had written off as too risky to change. I thought it was worth trying. We piloted it in a low-stakes corner of the platform first, where if it broke we could roll it back in minutes.
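For the curious, the shape of that kind of pilot fits in a few lines. This is a sketch with made-up names, not the actual legacy configuration: the risky setting applies to one quiet route, behind a flag that can be flipped back off in minutes.

```ts
// Sketch only; the real change lived in a legacy platform's cache config.
// The pattern: apply the risky setting to one low-stakes route, behind a
// flag that can be turned off in minutes. All names here are hypothetical.
// (Assumes a Node-style runtime for process.env.)

const CANARY_CACHING = process.env.CANARY_CACHING === "on"; // rollback: unset the variable

function cacheControlFor(path: string): string {
  // The quiet corner of the platform where a failure is cheap.
  const lowStakes = path.startsWith("/archives/");

  if (CANARY_CACHING && lowStakes) {
    return "public, max-age=600"; // the setting the team had written off
  }
  return "no-cache"; // everywhere else keeps the old, known behavior
}
```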
The outcome
Utilization dropped 40% almost overnight. The company got eighteen months of breathing room to plan the real migration on its own terms — instead of in a panic. The team's confidence came back, which mattered as much as the technical fix.
What I'd want a reader to take from this
Sometimes the team closest to a problem is the team least able to see it. That's not a failure. That's just how proximity works. Bringing in someone with no history with the system isn't about being smarter — it's about being able to ask "have we tried X?" without it feeling like a criticism.
A year of failure, turned into a flagship
Anheuser-Busch
The situation
Anheuser-Busch had spent a year trying to make a Salesforce Lightning portal work for their global employees. It was non-functional. The team was demoralized, the executives were frustrated, and corporate IT and the project team were talking past each other across multiple time zones. Nobody could agree on what to do next.
What I did
The platform was new to me too — I learned it on the way in. But the actual problem wasn't really technical. It was that nobody had been in the room with the authority to make decisions and the willingness to talk to everyone, on every continent, in their own time zone. I took that role. Brokered alignment between IT and the executives. Got a remote global team pointed in the same direction. Then we built.
The outcome
The portal launched in months and became a flagship piece of internal infrastructure for the company.
What I'd want a reader to take from this
"Failed for a year" rarely means "the technology is impossible." Usually it means "nobody had the standing to make a call." That's not a code problem; it's a presence problem. And presence is one of the things a fractional CTO actually brings.
A national campaign, fourteen days from disaster
Labatt
The situation
Labatt was fourteen days out from a national designated-driver campaign — the kind that's been booked, paid for, and announced. The plan was to ship native iOS and Android apps. App store review wouldn't clear in time. Everyone knew, and nobody wanted to be the one to say it.
What I did
I came in, looked at what was actually possible in the days we had, and said the quiet part out loud: drop the apps, build a high-performance web app instead. We did. It included a custom head-detection feature that let people put their own faces onto superhero costumes, years before the AI tools that would make that easy.
The outcome
The campaign launched on time. Zero downtime through the LCBO event deployment and the national rollout. A campaign that almost missed its date became one of the company's flagships.
What I'd want a reader to take from this
Most disasters in technical projects aren't surprises. They're things people know but can't make themselves say. Part of the job is to be the person who can say them — and then offer a real plan immediately afterward, so the saying doesn't feel like criticism.
A global conference that had to go virtual
Autodesk University
The situation
COVID forced Autodesk to cancel their flagship in-person user conference. They needed to pivot to a fully virtual event within a few months — one that could handle more than 100,000 simultaneous attendees across 200+ countries. They wanted personalization (who you are, what you're attending, what's relevant to you) at the same time as caching (what makes a site fast across the world). Those two things usually fight each other.
What I did
We decoupled them. Personalization happened in one layer, caching in another, and they didn't trip over each other. It required a different architecture than what was standard at the time, but it wasn't exotic, just deliberate. We had a few months to deliver: enough time to get it right, not enough to second-guess it.
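Here are the bones of the pattern, sketched with stand-in names and assuming a fetch-style runtime like a CDN worker; this is not Autodesk's actual stack. The page shell is identical for every visitor, so the edge can cache it worldwide, while the per-user agenda comes from a small API that's marked uncacheable.

```ts
// Minimal sketch of the decoupling; every name here is a stand-in.
// The shell is byte-identical for all visitors, so the edge can cache it
// globally. Personalization arrives as a separate, uncacheable API call.

type Agenda = { userId: string; sessions: string[] };

// Stand-in for the per-user lookup.
async function agendaFor(userId: string): Promise<Agenda> {
  return { userId, sessions: ["Keynote", "Hands-on Lab"] };
}

// Stand-in for the page renderer: no user data baked into the HTML.
function renderShell(): string {
  return `<!doctype html><div id="agenda">Loading…</div>
<script>
  fetch("/api/agenda")
    .then((r) => r.json())
    .then((a) => { document.getElementById("agenda").textContent = a.sessions.join(", "); });
</script>`;
}

export async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);

  if (url.pathname === "/api/agenda") {
    // Personalization layer: tiny, per-user, never cached at the edge.
    const user = req.headers.get("x-user-id") ?? "anonymous";
    return new Response(JSON.stringify(await agendaFor(user)), {
      headers: {
        "content-type": "application/json",
        "cache-control": "private, no-store",
      },
    });
  }

  // Caching layer: identical for everyone, so a long edge TTL is safe.
  return new Response(renderShell(), {
    headers: {
      "content-type": "text/html",
      "cache-control": "public, s-maxage=3600",
    },
  });
}
```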
The outcome
Zero performance degradation, anywhere on the planet, for the duration of the event.
What I'd want a reader to take from this
Crises don't require exotic solutions — they require someone who's seen the shape of the problem before and can make the right architectural decisions quickly, before the event, not after.
A headless CMS, a decade early
Southam Newspapers
The situation
In the early 2000s, Southam Newspapers (later Postmedia) needed a way to run dozens of newspaper sites without rebuilding each one from scratch. The standard approach at the time was monolithic — one big CMS doing everything, locked to a specific front-end. We tried something different.
What I did
I architected a headless platform — content lived in one place and could be presented anywhere. It was years before "headless" was a category the industry talked about. I grew the team from one person (me) to more than fifty as it scaled.
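The core idea is small enough to sketch. Everything below is hypothetical, written with modern conveniences the original platform predated; what it shows is the principle: content stored once as structured data, with presentation left to whichever front end asks for it.

```ts
// Sketch of the headless idea with hypothetical names; the real platform
// predated TypeScript entirely. Content is stored once, as structured
// data, and every front end renders it however it likes.

type Article = { slug: string; headline: string; body: string };

// Stand-in for the shared content store used by dozens of papers.
const store = new Map<string, Article>([
  ["city-budget", {
    slug: "city-budget",
    headline: "Council passes budget",
    body: "The vote passed seven to two.",
  }],
]);

// The "headless" part: the platform serves structured content, not pages.
function getArticle(slug: string): Article | undefined {
  return store.get(slug);
}

// Presentation is somebody else's decision, made per property.
function renderForWeb(a: Article): string {
  return `<article><h1>${a.headline}</h1><p>${a.body}</p></article>`;
}

function renderForWirefeed(a: Article): string {
  return `${a.headline}\n\n${a.body}`;
}

const story = getArticle("city-budget");
if (story) {
  console.log(renderForWeb(story)); // one paper's site
  console.log(renderForWirefeed(story)); // another channel, same content
}
```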
The outcome
It became the backbone of what would grow into Canada's largest digital media network, supporting millions of daily visitors.
What I'd want a reader to take from this
Sometimes the right architecture is unfashionable when you build it and obvious in retrospect. The trick is being willing to build the unfashionable version when you can see why it matters — and being willing to defend it for the years it takes for the rest of the industry to catch up.
Enterprise tools in a fast-moving market
A startup pivot
The situation
A startup I worked with had inherited an enterprise architecture that didn't fit the market they were actually selling into — the Shopify ecosystem, where iteration speed matters more than enterprise polish. They were moving slowly because their tools assumed a slower world.
What I did
We stripped out the parts of the stack that were slowing them down without breaking what worked. It wasn't a rewrite — it was a re-tooling. We kept the durable parts and replaced the heavy ones with lighter equivalents.
The outcome
They moved from monthly releases to daily ones, found product-market fit within months, and grew from there.
What I'd want a reader to take from this
The right tools depend on the market you're in, not the market you came from. A team can be doing everything "correctly" by their old playbook and still be moving too slowly for the new one. Recognizing the mismatch is half the work.