    Why AI Transformation Needs a Human Touch

    By News Desk, March 12, 2026

    ADI IGNATIUS: I’m Adi Ignatius, and this is the HBR IdeaCast. A few weeks ago, Harvard Business Review hosted a day-long event looking at the cutting edge of strategy research and practice, the HBR Strategy Summit 2026. The day was filled with expert advice and guidance from both executives and academics. And for the next four Thursdays, we’ll be sharing some of the best conversations with you on IdeaCast. First up, a conversation between HBR editor in chief, Amy Bernstein and Nigel Vaz, the CEO of Publicis Sapient. The company is in the digital transformation business, helping organizations modernize and adapt artificial intelligence to their existing models.

    That means he has had a front row seat to digital transformation at all kinds of organizations, and he shared his thoughts on what companies really need to do now around AI before it’s too late. You’ll hear him argue why AI should be thought of as an operating system, not a tool, how linear thinking is holding leaders back, and the most exciting opportunities he sees AI offering now. Here’s that conversation between Amy Bernstein and Nigel Vaz.

    AMY BERNSTEIN: You have had a ringside seat for strategy making all over the place, all over the world, many different kinds of companies. You have been doing it for years, so you have the long view. How has AI affected all of that, all the strategy making, all the thinking about strategy? If you could sort of boil it down.

    NIGEL VAZ: Look, I think AI is far more an operating system for how a business needs to operate than it is a technology, right? Because I think we’re at the beginning of a fundamental transformation, where AI has been talked about as a technological trend, but it fundamentally is reshaping how businesses create and deliver value, much like the internet did in the ’90s. So, for me, it’s not so much about how AI is changing the process of strategy, but it’s more about how AI is changing how decisions are made and how work gets done. And if you think about how decisions are made and how work gets done evolving, then very quickly you are having to change very simply the tempo of strategy, right?

    So, do annual strategy cycles work? Do planning shifts come in long multi-year cycles? Do budgets get decided on an annualized basis? And do those create competitive advantage, which is the primary purpose of strategy? Or is it actually about how business needs to operate differently? So, similar to what we saw in the advent of digital, for me, AI isn’t about making strategies smarter, right? It’s mostly about how it forces organizations to be rethought, particularly in the context of how quickly they move.

    AMY BERNSTEIN: So, it’s a question of speed, but you also talked about value creation and value capture. So, it’s also about business model. It sounds as if you’re talking about… You’re saying that organizations really need to rethink their entire business models. Is that accurate?

    NIGEL VAZ: Absolutely. Because I think when you think about how organizational innovation has evolved over the last so many years, we’re really starting to see that organizations find the ability to innovate in small pockets, but they find a real challenge in how they scale these innovations across the company. And the reason that that breaks down is because largely experimentation doesn’t scale unless you really reimagine how things are going to work, right? And I think we are seeing that real big shift in the context of AI.

    So, when you think about a lot of these proof of concepts which don’t scale, they’re largely because the kinds of problems they’re solving are functional problems in a specific part of the organization that can deliver narrow streams of value, whereas the broader shift for the organization requires a bigger rethink of the what and the how of how a business operates, which I think is most of the conversations we’re in with a lot of companies around the world, like tackling really big, meaningful problems. Can you take a car that would take 18 months to redesign and bring that down to 18 weeks? And if you could, what choices does that offer you strategically?

    AMY BERNSTEIN: Right. And as always, it comes down to the choices and you can tell… The hard thing about strategy is what you say no to, but it sounds as if part of what you’re getting at here is that for organizations that in the last decades have moved into a bunch of different businesses, and that’s happened across many organizations, it sounds as if what you’re calling for is focus. And I’m wondering how you get organizations to focus strategically.

    NIGEL VAZ: In my experience, the way I think you think about focus is you’ve got to pick things that you can test and learn from undoubtedly, but you have to pick those things in a way that allow you to take those learnings and make sure that those learnings actually are applicable to the broader organization. So, you want to pick a problem that’s not so small that it can be dismissed as irrelevant in the context of a broader transformation, but not so big that you never get out of the blocks in terms of how you actually are solving it, right?

    And you want to find yourself in that sweet spot of saying, “This is a decent enough problem that the organization will see it as representative of how we actually solve the bigger challenge that AI presents, because it’s forcing us to rethink so many aspects of how we engage with customers, how we drive growth, how we take cost out of the business.” But at the same time, it’s also not so big that it just does not deliver value quickly enough. And very quickly, the organization kind of moves on to the next thing.

    AMY BERNSTEIN: So, how do you advise your clients to find that sweet spot?

    NIGEL VAZ: The first thing you have to get really clearheaded about is what problems are you trying to solve at an organizational level? And then what are the precursors to those problems that become candidates to validate that strategy, right? Some of those questions are what questions, so what are we doing, right? Many of those questions are how questions. I’ll illustrate with an example, right? We have lots of clients who are in the midst of large scale technological transformations, and they’re basically looking at building new digital platforms and tools as they get out of the traditional software ecosystem, which means that all of their business processes are baked into these software platforms that are monolithic and haven’t changed or don’t change frequently enough, right?

    Now, rather than basically saying, “Hey, we’re going to get rid of our ERP systems and we’re going to get rid of a lot of our core technology,” what you’re basically saying is what are the precursors to that? So, maybe we’ll take a functional area, which is an older application that’s difficult to change, that’s harder to move, and we will deploy an AI modernization effort on that area of the business.

    Suddenly, you move from slow moving technology to an agentic, agent-first orchestration, you prove the model, and now you can start to say, “You know what? We need to go on a broader modernization effort across our organization to replicate the learnings from here almost on an incremental basis until we are no longer bound by the business process or constrained by the technology that we are leaving behind.”

    AMY BERNSTEIN: So, you’re talking about very new ways of thinking about organizational strategy and business strategy. And I’m wondering, when you’re dealing with clients, what is the thing you listen for that will most reliably kill a strategy in the age of AI? I mean, what is the most common error in thinking that you’ve come across?

    NIGEL VAZ: I think probably the single biggest thing is the ability to follow a linear thought process before you get going. So, this classic idea of we are going to do this, then we’re going to do that, then we’re going to do that, then we’re going to do that. We’ll review the outputs and then we’ll go around the loop again, right? And this idea of the linear baton passing, functional separation of strategy.

    So, we’ve got corporate strategy, then we’ve got our finance strategy, we’ve got our marketing strategy, we’ve got our product strategy, we’ve got our manufacturing strategy, and not really focusing on thinking about how data flows across the organization and how work will get done and how these interdisciplinary tasks that create connections between sales and marketing that are historically not common, but now really valuable if you could connect those data sets in the context of solving potentially a manufacturing question, not sales or marketing, right?

    And being intentional around thinking about those kinds of challenges is probably the one I would highlight, because it’s almost like all of the success of strategic processes thus far are the very things that to some extent, limit your ability to get value in terms of being intentional about how you design for an AI first world, primarily around people and context and OKRs, not just technology.

    AMY BERNSTEIN: Yeah. And what you’re saying reminds me of a couple of themes that we’ve heard already today about the importance of trial and error, getting away from this linear waterfall approach to strategy making, having to hammer everything to perfection before you move on to the next step. And now I’m wondering, how do you know that your new strategy is working before you start getting the numbers that prove it, the KPIs, your OKRs, whatever they may be?

    NIGEL VAZ: I think this is one of the biggest challenges, right? I think this idea of a strategic planning exercise that is separate from an executional exercise is part of that traditional model, right? I think so much of strategy today, whether it’s around growth or whether it’s around cost out innovation or whether it’s around operational acceleration, has to come from having a strategic set of principles and approaches, but then also from how that connects into the organization in the context of real execution, providing input back into that process so that you don’t have this idea of, well, look, we’re going to come up with this incredible strategy.

    We’re going to spend all this time developing a strategic hypothesis, but then we’re going to deploy that validation of that hypothesis into a very linear process of measurement again. And then we’ll review it at the end of next year when we do the next budgeting cycle in order to iterate. So, much of this today is about measuring strategy in unit economics, not just activity, thinking about the smallest possible things you can measure and using those to allow you to infer whether your strategic progress is in the direction that you want. To use the technology example: rather than waiting for a project report at the end of a milestone, what is the cost per release? What’s the cycle time per feature?

    What’s the defect escape rate? Because ultimately we’re in the business of helping companies transform digitally, but what that primarily means is deploying technology in order to enable either driving growth or solving cost and efficiency challenges or customer experience improvements in an organization. A lot of this comes down to how you are measuring in increments and then using that to infer or validate your hypothesis around strategic choices.

    AMY BERNSTEIN: So, you mentioned driving growth and you mentioned driving efficiency. We have talked a lot about the efficiency piece of this. And I’m wondering, when you see an organization that’s really using AI to drive growth, what is it doing when you’re looking across your roster of clients at those that have really kind of cracked the code, what is it that they are doing differently?

    NIGEL VAZ: I would say very few companies across the world would say that they’ve cracked the code, and we agree with that perspective. But what I can tell you about the people who are leading in the current context, there are a few things that they’re doing that are really different.

    The first is recognizing that the traditional idea of software systems encapsulating a lot of the differentiation from a process perspective is now moving to a data ecosystem of connecting different sets of data in order to start to understand how those data connections enable them to serve customers better, whether it’s in the context of improving basket sizes in a retail context by using predictive analytics on what’s in that basket and perhaps what you might be looking to create on the basis of the things you have and telling you about the few things you might have missed through to accelerating the process of drug discovery by looking at adjacencies to the primary areas of research that the company is focused on, leveraging all of the data sets from previous failed trials.

    Every one of these is an innovative use of connecting data into an AI first approach to creating value for end patients, citizens, customers in a way that was just not being done historically.

    AMY BERNSTEIN: So, I want to go to some of the questions that have been coming in, Nigel. You’ve clearly touched a nerve with a lot of folks. One question that’s gotten a lot of upvotes comes from Stacy. She says, “The theme throughout the summit is rethinking how we do work. With that, where do you think AI strategy should live? Is it living in IT right now most of the time? It seems as if there needs to be a cross-functional relationship with leaders, experts, and individual contributors.”

    NIGEL VAZ: Yeah, I think that’s a fantastic observation, right? And this is where I started off right at the beginning. AI even today is talked about in the context of technology. And I have an analogy here to respond to Stacy’s question of going back to the ’90s, when the most valuable technology companies in the late ’90s, early 2000s were companies like Cisco, because where we were in that curve of the internet getting established was moving packets faster between organizations. So, the whole context of the internet conversations, and we were a company that built some of the first online banks and allowed you to pick a seat on an airplane, and those were not technological problems.

    Those were problems of business innovation and recognizing that an airline that allows you to pick your own seat not only makes things at an airport more efficient and not only makes things at a call center more efficient, so you’re not calling up and saying, “Where’s 32B?” Because you can’t see a map in front of you. It also means that it creates one of the largest revenue streams for an airline today where people are willing to pay for the privilege of picking a seat, right? And I think this is very similar to where we find ourselves with AI today. So, much of the conversation is around compute.

    So, much of the conversation today is around the technological manifestation of AI in the context of which model is better. But the reality is, whether it’s compute or models, these are foundational architectural components of AI, and the real conversation about AI ought to be held at a business level, because most of the value will get created on the applications, on the business processes, on the new offers we build on top of the compute and the models, right? And so, to Stacy’s question, organizations that are more successful than others are not having this conversation in the context of technology.

    They’re having this in the context of what kind of changes are possible for us with our customers, with our employees, with our partners and how we interact with them in the context of what is possible technologically, as opposed to a technology led strategy for AI, which is fine to have at a CIO level, but that’s not where the primary value unlock I think will come for the broader organization.

    AMY BERNSTEIN: That makes a lot of sense. Another question that’s had a lot of upvotes comes from Suzanne who asks, when you talk about AI transformation with the human touch, what ethical red lines do you believe every organization should define before deploying AI at scale? How do you advise CEOs to balance the pressure for speed and cost savings with the need for responsible ethics first AI, especially when the short-term ROI is unclear? And I’ll just add when there’s so much pressure to show ROI.

    NIGEL VAZ: Yeah. And look, I think there’s two levels of conversation here, right? I mean, one of the challenges about what makes this different than the traditional ethical discussions of the past are the fact that these ethical considerations have to be grounded in the technology because if you don’t actually ground them in the technology, all they become is a set of ethical guidelines and principles that you put out as an organization to make yourself feel better. And what I mean by that is having a clear perspective on how are we using data that’s been given to us in the context of one thing for another.

    What are the expectations around what kind of data we want to allow to leave our organization to potentially interact with which kinds of models? What choices are we making between open source models, where we can understand how models have been trained and weighted, and closed models? What are the geographic considerations in the context of sovereignty around AI and data governance, which is becoming an important consideration in the context of all of the conversations around the geopolitical landscape changing so rapidly with tariffs and other considerations, right? All of these are not just principles that can be agreed. They have to also drive a very specific set of technological decisions.

    So, I’ll give you an example of this, right? Saying we actually want to protect our customers’ data, but then allowing your employees to experiment on AI tools that are not in a sandbox, where that data might potentially be enriching models in the public domain, is a pretty basic error. You would think that three years into this iteration of AI we would no longer see it happen, but it still happens because companies aren’t necessarily providing their employees tools to enable them to be the most productive that they can be.

    And so, you are seeing somebody who’s trying to help a customer in the context of a customer service problem and is finding it really hard to find the information on their own website or on their own systems they’ve been given. And they simply copy that question, stick it into a public AI chatbot and ask the question, sharing perhaps some of the information the customer has given them in order that they might provide that customer with a better response. But that data now, of course, has been exposed to a public domain context where you don’t know exactly where and how that will propagate further.

    These are some basic things that I think you have to recognize aren’t just about guidelines saying how we want ethical use of data, how we deal with misinformation and disinformation, how we deal with AI slop in the context of outputs, how we deal with fakes and really helping guide, in the context of marketing perhaps or social media, what’s fake and what’s not, right? All of these choices have to be then embedded in systems and systemic ways of working in the technological approaches you choose, because I think in this day and age, that is where the difference gets made.

    So, do you have some ability to watermark or to highlight AI usage in the context of creative outputs, marketing communications? All of these things I think are where the distinction of whether you’re truly living the values that you preach around responsible AI, I think matter.

    AMY BERNSTEIN: Yeah, that makes a lot of sense. Our next question actually is kind of adjacent to this. It’s from a CEO who asks, “For organizations serving vulnerable communities, what safeguards are essential so AI does not unintentionally reinforce inequities and access, voice, or outcomes?”

    NIGEL VAZ: The most critical thing there is recognizing the data that models that you’re using have been trained on in the context of services that you’re providing. Because we should all be clear, AI is only as good as the data that it’s trained on. So, if your data has all of the biases and all of the concerning components that you want eliminated from the interactions with these vulnerable communities, I think you have to start with the datasets that are being used in order to provide services because I think all of the things that you build on that foundation will either only compound or potentially could be mitigated in the way that the models have been trained.

    And I think actually understanding how you’re addressing misinformation, disinformation bias in the context of the datasets becomes critical. And then I think it’s making sure that you have the appropriate safeguards where you are building in reinforcement on a consistent basis around the things that you want to ensure are held true. Because I think in the context of vulnerable communities or indeed in the context of providing equitable experiences for people, making the choice for what you want to limit is almost as important as what you want to reinforce.

    AMY BERNSTEIN: Yeah, a lot of decision-making, a lot of choices. You have to be really focused and mindful about all of that. Rich, who’s a founder and CEO, asks, “What is the most important human attribute that leaders must exhibit to successfully drive AI transformation? What social and emotional factors are leaders not thinking about enough?” This is a big HBR question.

    NIGEL VAZ: I think for me, we have to recognize that we’re at a point now where there’s been a lot of conversation about AI in a kind of broad generic sense, right? But if you go back to the start of this wave of generative AI maybe three odd years ago, we started with chat, and then we saw reasoning models start to evolve. And now we are in a world where we’re starting to see agents very practically… I have a few on my computer now doing work while I’m having this conversation and then eventually, we’ll evolve to having AI coworkers.

    And I think we have to recognize this idea that the very nature of how we work as an organization is going to start to change to this interaction between us as people and the dependence and the direction and the agency that we will provide these AI tools to work on our behalf. And I think we have to start to think about this in the context of how we think about people, where you aren’t necessarily just going to give them a task and they go off over a very long period of time and just continue to execute that task, but you’re actively able to engage with them, redirect, course correct, nudge, and evolve, right?

    And so, that persistence around the memory of these agents, the interaction I think will define, I think very strategically how we have to start to think about work in the context of organizations, because that is very different from then how work got done just with people engaging with each other. To the second question, I think what people aren’t doing is necessarily thinking as strategically about what are the guardrails? What are the expectations?

    If you are a CHRO in an organization, working for a CEO, or a human resources leader, or, as we call it at Sapient, a people success leader, somebody whose job it is to make people successful, how do you actually start to think about making these people successful in the context of how you enable agents to interact with these people? We as a business are an enterprise AI technology company in addition to a services company, right? So, we think of ourselves as people and product together. And so, that coexistence in organizations doesn’t just exist in ours because we’re in the business of providing that service.

    That exists in the context of every business, whether you’re a retailer or a telecommunications company or an airline, where your people are going to be working alongside these AI tools, which have moved from being just entirely directed by people to, in some cases, operating autonomously. So, how do you then, in the context of that, ensure that both sides of that equation, the human and the AI coworker, as it were, are working together in a system that creates value and minimizes risk to the organization because I think that will be the frontier that we will find ourselves in very, very quickly.

    AMY BERNSTEIN: So, we have time for one last question, and this one got a lot of upvotes. It’s from Caterina, who’s a co-founder. Can you give an example of effective usage of AI in strategy development or execution and what was critical to that success?

    NIGEL VAZ: I’ll pick an example of a strategic choice. An automotive company in this instance was making on a big strategy question around how do we actually do three things at the same time? First, move quicker in a more agile environment. Second, make sure that we are being more responsive to customers, changing behaviors. And third, organizing operationally our supply chains in order to be responsive to this, right? And the first thing I think they had to do was to strategically weight the balance of what this resolution was going to affect the most. And in this case, they prioritized making sure that they were being more relevant to their customers, right?

    And so, one of the things that they did is shifted the process of predetermining a lot of the answers and starting to basically build what was a strategy frame around the cost, the demographic, the type of automobile that they were producing. And then almost on an iterative basis, engaging with markets to basically say, “Okay, if we add this camera that allows you to reverse, now that means the cost for Malaysia, which is a big audience, suddenly becomes too high, and then they’re not going to participate in that model’s consumption. So, how do we use that information early enough that we can start to design the dash with or without a camera, so that there are optional variants created?”

    Then that feeds into the supply chain in terms of how they’re sourcing and pricing. So, it’s a good example of AI being used to drive large strategic decisions, which then enable lots of small strategic decisions that ultimately get them to a faster car design that they’re able to pivot from more quickly if they don’t see as much engagement from the markets, before it even hits the actual consumer, whose perspectives have been fed in through this entire process.

    ADI IGNATIUS: That was Nigel Vaz, CEO of Publicis Sapient, speaking to HBR editor in chief Amy Bernstein at the 2026 HBR Strategy Summit. If you found this episode helpful, share it with a colleague and be sure to subscribe and rate IdeaCast in Apple Podcasts, Spotify, or wherever you listen. If you want to help leaders move the world forward, please consider subscribing to Harvard Business Review. You’ll get access to the HBR mobile app, the weekly exclusive insider newsletter, and unlimited access to HBR online. Just head to hbr.org/subscribe.

    And thanks to our team: senior producer Mary Dooe, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We will be back with a new episode on Tuesday. I’m Adi Ignatius.

    News Desk

    News Desk is the dedicated editorial force behind News On Click. Comprised of experienced journalists, writers, and editors, our team is united by a shared passion for delivering high-quality, credible news to a global audience.
