January is typically full of big ideas about the future. This piece looks at the problems we’re already running into, and how they’re shaping Digital Journal’s editorial direction for 2026.
In mid-December, we held our first Editorial Advisory Committee meeting, where a group of experts came together to inform Digital Journal’s coverage with insight into what matters now to business leaders, policymakers, technologists, and talent.
The committee helps surface the issues, gaps, and real-world dynamics shaping how innovation plays out inside organizations and across ecosystems, and those conversations help guide what Digital Journal will focus on and report on over the following six months.
Kamales Lardi chaired the session and guided the discussion. I mostly observed, which lasted about as long as you’d expect before I started asking questions. Old habits.
The group included Terry Rock, Kirstine Stewart, Clark Lai, and Nathan Mison, with experience covering Canada and the U.S. and extending across Europe, the Middle East, and Asia. Benjamin Bergen, now CEO of the Canadian Venture Capital & Private Equity Association, is also a member of the committee, though he couldn’t join this meeting live.
Early in the conversation, Nathan offered a line that kept coming up throughout the discussion because it described what everyone was seeing in very different contexts.
“We’re still running medieval institutions with god-like technology,” he said to kick things off.
No one needed it explained.
We’ve built extraordinary capabilities at breathtaking speed, then asked organizations, governments, and leadership structures designed for a much slower world to manage them.
Clear lines of authority, long feedback cycles, and conservative risk models are now responsible for systems that don’t operate on those terms anymore.
These issues have shown up repeatedly in our reporting over the years. What the committee’s conversation added was perspective on where those structures are now failing in practice in Canada, inside organizations trying to make decisions with tools their systems weren’t designed to handle.
I’m going to unpack some key things we heard in that meeting, and what they mean for Digital Journal in 2026 and beyond.
Here’s what we heard, and what we’re going to do about it.

What AI maturity actually demands from organizations
AI is already embedded across many organizations. That’s not up for debate. What came through clearly, however, was how uneven things look once the technology moves past experimentation.
Yes, it’s still an issue, and several committee members described the same pattern using different language. AI initiatives advance quickly at the surface level. Tools roll out. Pilots multiply. Demos impress. And then progress flattens.
The technology keeps working, but organizational structures haven’t changed, and that has become an AI issue.
Leadership teams want results, but governance models still assume slower cycles and clearer cause-and-effect. Organizations invest heavily in capability while leaving operating models largely intact. The result is “pilot hell”: lots of activity, and very little institutional movement.
What stood out was how often organizations underestimated the organizational work required to use AI effectively. Design, accountability, and oversight were frequently treated as afterthoughts rather than built into how decisions are made day to day.
That shows up as teams unsure whether they can rely on AI outputs, unclear who is accountable when decisions go wrong, and uncertain when systems should be trusted versus overridden.
In this environment, AI doesn’t really fail outright. It just never becomes routine work. It becomes another layer added to systems already struggling to manage change.
“Move fast and break things” isn’t reality in a lot of organizations yet. Nothing has broken because nothing major is moving at speed. This is where maturity shows up and starts to change the game.
When leadership is willing to rework decision-making, incentives, and governance to reflect what the technology actually enables, AI starts to change how work gets done.
In these cases, it becomes part of normal workflows rather than something teams have to justify or work around. Without those changes, companies add new capability while leaving the rules that govern everyday work untouched, which keeps AI confined to pilots, side projects, and pockets of enthusiasm instead of institutional use.
As for how we fold this into Digital Journal’s coverage in 2026, here’s a summary:
- What we heard: AI isn’t the mystery anymore. The gap is organizational. Pilots are easy to start and hard to turn into day-to-day capability, because operating models haven’t moved. Governance still assumes slower cycles and clean accountability. Human-centred design gets pushed down the list. Oversight gets discussed and then parked. The result is familiar: activity everywhere, and progress that stalls when it hits the parts of the organization that make real decisions.
- What we’re doing about it: In 2026, we’re going to report on AI the way it behaves inside organizations, not the way it looks in demos. We’ll follow what happens after the pilot: who gets authority to use these systems, who carries accountability when decisions go sideways, what governance looks like when it isn’t just a committee, and what operating changes make adoption stick. We’ll also cover the unflattering version of the story: where deployments stall, how risk gets handled when outputs aren’t definitive, and what happens when old rules collide with new capability. Editor’s note: If you’re pitching Digital Journal for coverage, focus on this.
Why innovation conversations keep returning to policy and power
As the discussion moved between AI adoption, commercialization, and scale, we kept running into a few big themes and topics: Canadian defence spending, procurement rules, regulation, and national priorities.
The Canada-U.S. trade war has forced Canada to rethink its systems and how it decides what gets funded, bought, and scaled. Innovation follows buyers, which means Canadians need to create new pathways and rework existing ones to align procurement, policy, and market demand at home.
The Canadian government is committing nearly $82 billion to defence spending over the next five years, and the implications of that came up a number of times in our committee discussion, particularly around dual-use technologies. These are tools and systems developed for defence or security purposes that can also be applied commercially, shaping how innovation moves from public procurement into the broader economy.
Canada is putting real money behind defence and national security priorities, and that money comes with requirements around domestic capability, suppliers, and timelines. It changes which projects get attention, which companies find buyers, and which work attracts capital. Some technologies now have a clearer path because there is a government buyer willing to spend at scale. Others, even when they work, struggle to find a first customer because they don’t fit those priorities.
Defence spending also came up because it changes the first-customer problem.
When governments commit serious money to a category, investors stop asking “is there a buyer?” and start asking “can you qualify to sell into it?” That’s a very different conversation. It turns innovation into a market access question, and government buying into an investment signal. If a company can credibly get into that buying pipeline, capital gets easier to raise. If it can’t, the technology quality doesn’t matter nearly as much as everyone likes to pretend.
Procurement came up in our conversation a lot, and it’s the most immediate constraint because it’s where “buy Canadian” either becomes real, or becomes a nice LinkedIn sentiment. Procurement decides what scales.
The editorial committee also pointed out that about 15% of the federal economy runs through procurement, so even a small change in purchasing patterns would move serious money toward Canadian suppliers.
One example raised was Alberta Health Services, an organization that spends billions every year in procurement. The question was whether changing procurement standards could actually open market access for Canadian med tech and other startups.
There was also a blunt observation about how government purchasing decisions are changing. For years, the default assumption was that procurement had to remain fully open to foreign suppliers to comply with trade agreements. That assumption has weakened quickly. In recent months, it has become normal to talk openly about supporting domestic suppliers, particularly in areas seen as strategic. That conversation would have felt off-limits not long ago.
These issues weren’t coming from one corner of the ecosystem, either. Founders, investors, and corporate leaders are all describing the same pattern: decisions made early, often far from product teams or markets, are determining which companies can scale and which never get the chance.
Those decisions show up in procurement rules, funding criteria, regulatory timelines, and internal approval processes that shape who gets a first customer and who doesn’t. By the time a product team is ready to sell or scale, many of the outcomes are already set. What looks like a market failure on the surface is often the result of choices made much earlier, outside the view of the people actually building the technology.
So what is Digital Journal going to do about it?
- What we heard: Policy, procurement, and regulation are not side issues. They are actively determining which innovations get customers, which attract capital, and which stall after early success.
- What we’re doing about it: In 2026, Digital Journal will focus more of our reporting on how these forces shape innovation in practice. We’ll look at how defence and national priorities influence where effort and investment go, how procurement rules decide who gets a real buyer, and how regulatory clarity or delay affects scale. These are the decisions that decide outcomes long before success stories get written. Editor’s note: If you want to pitch us, focus on procurement pathways and let us know about obstacles or what’s working.
Why strong ideas stall after they prove themselves
Nobody on our committee doubts Canada’s ability to generate smart ideas.
But in our meeting, Canada was compared to Switzerland because the similarities run deep: world-class institutions, serious technical education, plenty of innovation, and a familiar pattern where companies hit a level of success and then leave because scaling is harder than inventing.
Most of our conversation focused on the need to talk about what happens after proof of concept.
Commercialization —
Commercialization came up repeatedly as the big unsolved operational problem, alongside risk capital, founder talent, and the systems that make scaling repeatable instead of heroic.
Another theme was coordination, particularly the challenge of getting the right people in the room to scale Canada’s innovation ecosystem. Builders, buyers, partners, investors, and acquirers operate on different timelines and incentives, which makes alignment difficult to sustain long enough to move anything forward.
And this is where the conversation inevitably wandered into corporate land — where incentives go to die.
Corporate innovation —
Corporates weren’t framed as villains. Instead, the discussion focused on how many leaders inside large enterprises still don’t fully understand what meaningful innovation engagement requires. That is a big challenge because incentives inside large enterprises often make the play-it-safe choice the more attractive one.
That matters because large enterprises often determine who gets a first customer and who doesn’t. Their buying decisions, partnership models, and internal approval processes shape which innovations move into real use and which stay in pilot mode. When engagement is cautious or fragmented, even strong solutions struggle to become part of day-to-day operations.
Investor focus and market size —
Another friction point surfaced when comparing Canada and the United States, and it wasn’t a new one. The conversation returned to how scale gets evaluated, and how assumptions shaped in much larger U.S. markets often carry over into Canada. Market size, customer concentration, and growth paths are different, but the filters applied to companies can often be the same.
In the U.S., billion-dollar outcomes align with how many venture funds are structured and how returns are expected to materialize. In Canada, many industries are smaller by design and scale differently, even when the underlying businesses are strong and profitable.
What came through wasn’t a critique of any single type of capital. It was a recognition that the pathways between early traction and long-term scale are less clear for companies that don’t fit a hypergrowth model.
Some firms fall between categories: too big for early-stage support, not shaped for venture returns, and without obvious next-step capital or buyers. The issue is that the system offers fewer defined routes for companies that grow differently within Canada’s market reality.
- What we heard: Canada’s innovation problem tends to surface after the prototype stage. Founder communities matter, capital needs to be available at every step, especially growth capital, talent has to be in place, and industry needs to stay close to real problems that someone will actually pay to solve. The biggest gap named directly was the lack of sustained innovation engagement from large enterprises.
- What we’re doing about it: In 2026, Digital Journal will focus more of its reporting on the unromantic middle of the story. That includes where commercialization breaks, what “risk capital” looks like at each stage, how corporate buying and partnering actually works, which incentives inside large organizations block engagement, and what makes it easier for companies to scale in Canada instead of leaving. Editor’s note: If you’re pitching us for coverage, focus on these elements.
Final shots
None of the issues raised in our editorial committee meeting were abstract.
They show up in budget decisions, procurement rules, leadership incentives, and which companies get a real customer versus another pilot. Innovation in Canada is already happening. The question is whether our systems are set up to let it compound, or cap it.
What emerged most clearly was that execution, not imagination, is now the constraint. AI works, technologies mature, and founders build capable teams, but progress slows when governance, buying processes, and operating models fail to move with them.
The cost of that gap is time, capital, and eventually talent, as companies look elsewhere to grow.
These dynamics also explain why policy and power kept entering our innovation conversation. Who buys first, who carries risk, and how rules are applied often matters more than technical merit. Decisions made far upstream determine which ideas become infrastructure and which remain experiments.
That context will shape how Digital Journal reports in 2026.
The focus won’t be on what’s new all the time, but on what changes outcomes: where systems help innovation take root, where they block it, and what actually shifts the odds that companies scale here rather than somewhere else.
