
Most companies are thinking about AI strategy the wrong way

Publicis Sapient CEO Nigel Vaz on why AI should be treated as a business operating system, and why strategy cycles must change.

Photo by LinkedIn Sales Solutions on Unsplash

For many companies, the inevitable AI conversation still sounds like a technology upgrade. 

A few new tools here, a pilot project there, all for the potential of a productivity boost. 

Look inside real large-scale transformations, however, and the picture is very different. The current AI wave is changing how organizations make decisions, how work moves through a company, and how quickly strategy itself has to evolve.

This shift was the focus of a recent conversation recorded at the HBR Strategy Summit 2026, where Harvard Business Review editor-in-chief Amy Bernstein sat down with Nigel Vaz, CEO of Publicis Sapient. 

Vaz has spent years advising global organizations on digital transformation, giving him a close-up view of how companies try to modernize their business models. 

In their discussion, he argues that many firms are still thinking about AI too narrowly, treating it like a technical deployment rather than a structural change to the business.

Three themes emerged in the conversation.

First, treating AI as a standalone technology project risks missing the deeper ways value is created and delivered.

Second, the familiar rhythm of strategy — annual plans, multi-year roadmaps, and tidy handoffs between departments — is becoming harder to sustain. When data and decisions move quickly across functions, strategy has to evolve through constant testing and iteration rather than linear planning cycles that lock organizations into slow feedback loops.

And third, Vaz argues that ethical commitments only matter when they are built directly into the systems companies deploy, from data governance to the guardrails employees use when interacting with AI tools. Without that structure, corporate AI ethics risk becoming little more than pinky promises.

Here are three key moments from the interview.

On AI as an operating system, not just a tool

I think we’re at the beginning of a fundamental transformation, where AI has been talked about as a technological trend, but it fundamentally is reshaping how businesses create and deliver value, much like the internet did in the ’90s. So, for me, it’s not so much about how AI is changing the process of strategy, but it’s more how AI is changing how decisions are made and how work gets done…

Similar to what we saw in the advent of digital, for me, AI isn’t about making strategies smarter, right? It’s mostly about how it forces organizations to be rethought, particularly in the context of how quickly they move.

On the danger of linear strategic thinking in the AI age

We’ve got corporate strategy, then we’ve got our finance strategy, we’ve got our marketing strategy, we’ve got our product strategy, we’ve got our manufacturing strategy, and not really focusing on thinking about how data flows across the organization and how work will get done and how these interdisciplinary tasks that create connections between sales and marketing that are historically not common, but now really valuable if you could connect those data sets in the context of solving potentially a manufacturing question, not sales or marketing, right?

And being intentional around thinking about those kinds of challenges is probably the one I would highlight, because it’s almost like all of the success of strategic processes thus far are the very things that to some extent, limit your ability to get value in terms of being intentional about how you design for an AI first world, primarily around people and context and OKRs, not just technology.

On moving ethics beyond corporate platitudes in the face of ROI pressure

One of the challenges about what makes this different than the traditional ethical discussions of the past are the fact that these ethical considerations have to be grounded in the technology because if you don’t actually ground them in the technology, all they become is a set of ethical guidelines and principles that you put out as an organization to make yourself feel better. And what I mean by that is having a clear perspective on how are we using data that’s been given to us in the context of one thing for another.

Listen to or read the transcript of Vaz and Bernstein’s complete conversation here.
