In Conversation with Christian Dippel: Developing Competitive Edge with AI
In this episode:
Over the past few years, AI technology has exploded into the mainstream, with new platforms and applications emerging at an exponential pace. As organizations rush to adopt these tools, many are grappling with deeper questions around how to approach AI strategically—beyond experimentation and surface-level implementation.
In this session, we sit down with Christian Dippel, Associate Professor of Business, Economics, and Public Policy at Ivey Business School, to explore what it means for leaders to think critically and act decisively in an AI-enabled world. Dippel shares insights on how to identify meaningful opportunities for impact, understand the broader organizational implications of AI, and navigate the gap between hype and real-world application.
About "In Conversation With..."
In this live virtual event series, we sit down with an Ivey faculty member to discuss their research and expertise. Produced by Ivey Executive Education, these sessions allow us to take a deep dive into a specific topic with each professor, sharing insights on challenges and issues relevant to the future of business.
What is Learning in Action?
Hosted by Ivey Executive Education at Ivey Business School, Learning in Action explores current topics in leadership and organizations. In this podcasting series, we invite our world-class faculty and a variety of industry experts to deliver insights from the latest research in leadership, examine areas of disruption and growth, and discuss how leaders can shape their organizations for success.
Episode Transcript:
CHRISTIAN DIPPEL: Your job as a leader, whether it's at the C-suite level or leading a strategic business unit or whatever the case, I think it's going to become more about asking good questions and less so about being the person to provide the answers.
SEAN ACKLIN GRANT: Welcome to Learning In Action, where we explore fresh ideas shaping leadership today. This episode is a recording of our live session in conversation with Dr. Christian Dippel, Associate Professor of Business, Economics and Public Policy at Ivey Business School. If you're asking how to move beyond the hype around AI and turn it into real strategic impact, you're in the right place.
Christian walks us through spotting high-potential AI opportunities, aligning them with your broader strategy and bringing organizational clarity to a fast-moving technology wave. Whether you joined us live or are tuning in for the first time, this episode delivers "can't miss" insights. Let's get into it.
HOST: Christian, welcome. Thank you for joining us today.
CHRISTIAN DIPPEL: Thanks so much for having me.
HOST: Why don't we start out on a more personal note? Could you share a little bit about your personal journey, which spans the rigour of research and academics all the way through to practical applications with organizations you're directly working with?
CHRISTIAN DIPPEL: I've been a professor for about 14 years now, graduated in 2011 and spent the first 10 years of my academic life at UCLA in California. Really, at the beginning of my academic career, I wore three hats. I had an investment banking background, so I had a bit of that finance hat. My formal PhD training was in global macro, really international macroeconomics, and economic history. And so those were my three hats that I wore.
And then over time through my career, two things influenced me and pushed me, so to speak, a little bit towards the path of what we're talking about today. The first is being at UCLA, it's a very entrepreneurship-heavy business school. So among North American business schools, it's got the highest ratio of MBAs starting their own business after MBA graduation. So you get a lot of this Silicon Valley or Silicon Valley adjacent Southern California tech entrepreneurship, venture initiation stuff that you get exposed to.
The other thing that happened is I became more and more a data scientist, and I got increasingly frustrated with the fact that in academia, what often happens is people engage with these incredibly interesting data science projects, but everything is on the academic funding cycle. So you get a big grant to build a cool thing. You build the thing, then you try to write three or four papers on it, and then you move on to the next topic.
So everything is optimized for the paper writing, which is fine, but it's really frustrating that you get these cool data projects that then lead these zombie lives on the internet, where you go on a website and you know, this thing was built. It was so cool. And then it says, last updated in 2019 because the grant ran out, the people got tenure or lost interest or whatever. And so I was of this mindset that I wanted to build data science projects that have the ability to self-sustain, which then naturally meant they had to have a commercial element to themselves to break out of this research cadence, if you will.
And then I quickly found out that if you build it, they won't just come. In other words, you can't just build the thing that you're in love with and assume that it can stand on its own two feet. I had to quickly learn a lot of strategy. And then, really, it became this natural confluence where I was building projects in data science and I had to learn the strategy. And right around the time when that happened, AI came around. And so for me, strategy, almost overnight, very quickly just became AI strategy. In other words, for what I was doing, there was no strategy that wasn't AI strategy.
HOST: Thank you for the introduction. I'm smiling because of two comments here. The "build it, and they will come" doesn't necessarily really happen--it's how do you go beyond that, and how do you engage? And given the topic at hand, even when you reference 2019, when it comes to AI, that feels like centuries ago in terms of how fast everything is moving.
So let's go back to the topic, or even the headline. We talk about creating a competitive advantage and how AI can play a role. I actually talked about moving away from buzzwords, and competitive advantage almost feels like a buzzword, to some extent. It's obviously something we strive for when leading organizations--to create that edge. So how can we think about it maybe in more practical or pragmatic terms?
CHRISTIAN DIPPEL: I got into strategy in a roundabout way, you would say. I wasn't trained as a strategist. I was trained as an economist in global macro, and then the strategy came later. And when I was at UCLA, I had this very renowned senior strategy colleague who retired midway through my stint at UCLA, Richard Rumelt. And he used to tell the story of-- he would have another senior colleague come into his strategy class to maybe get into teaching strategy himself. And after sitting there for 10 classes, he walked up to Richard and said, I think I can summarize your entire strategy class in one question, what's going on here?
The point that he was making is, at the end of the day, strategy is-- it's a fairly unstructured thing in the sense that it's so context-specific. And this was an interesting experience for me as a macroeconomist who is really a data scientist. You're always in the business of generalities, so you're always trying to figure out, what does the average do? What are like the patterns in the data?
But when you start to think about, what is going to make my one thing successful in this sea of other things around it, it's like your whole way of thinking totally pivots into-- everything becomes about context. And the data is a background thing. You need to know some business trends. You need to have a sense for consumer behavior, competitive landscape behavior, all these things that can be expressed quantitatively. But at the end of the day, it becomes this highly context-specific, very qualitative endeavor and really thinking deeply about, what is going on here?
The answer to the question, what is going on here, is going to be different for every kind of business. There will be businesses where AI really hits you on the process innovation. There'll be businesses where it really hits you on the product innovation. And I think it's really about-- the way that I think about it is really to not be too heavy on the playbooks, like, let's apply this framework or that framework. Frameworks have their use, for sure, but nothing, I think, replaces deeply thinking about your situation.
HOST: Great way to frame it. I find sometimes we use these bigger terms--and I had a recent conversation with another faculty member where we dug in specifically on strategy and scenario planning, and how you stay ahead in today's rapidly changing world. We sort of broke it down to say, it doesn't have to be this big, grandiose piece where you almost run the risk of making it bigger than it is and creating that inertia.
It's like, OK, so what's going on today? What's going on tomorrow? Continue to ask myself these questions. So if we go to your work, we look at drawing on political economy, institutional economics. How do these lenses illustrate the strategic role that AI can play? And maybe I'll put in a second part to this question around hidden assumptions that might be at play as well. What are your thoughts here?
CHRISTIAN DIPPEL: In my ventures, let's say--let's start there. Those ventures, as well as my research, are really in political economy--people refer to that as government relations; it's really similar things. And I saw that transition: 2011 was when I got my PhD. 2014, '15 is when it was cutting-edge to be using keyword extraction to characterize certain bodies of text. By 2016, you could show off with your machine learning algorithms. By 2017, you'd do sentiment analysis. And so the bar kept getting raised, but in a somewhat, I would say, incremental manner.
But it's really 2023 when it hits you. Suddenly, there's this level shift. There's a million things that you can do that you couldn't previously do. And from a business perspective, anyone with a lot of exposure to data or a lot of in-house data or needing to grapple with a lot of data-- there's text as data and there's quantitative data. I would say the manifestation of AI that we're talking about mostly today, which is not agentic, but let's say just like LLMs, that's mostly about text as data.
And text as data is really an application where, if your business interfaces with text as data, which mine did, you're constantly in this world of not being able to see the forest for the trees, because there's just so much text. So if you're interfacing with government, there's hundreds of bills every year, which in turn affect thousands of regulations. There's thousands of pieces of formal comments, lobby meetings, committee meeting transcripts.
There's just such a big ocean of text that no traditional method can really get a good handle on, and that's really where AI lowers the bar so much, in terms of even a very small business being able to do so much with a lot of data. You went from being able to do more or less nothing at all five years ago to being able to do enterprise-level applications with a tiny team, if you're in this kind of very data-heavy environment--which, in my case, is the political economy, institutional economics type of environment we work in.
HOST: I have conversations frequently with leaders who are grappling with almost too much. There's so much information. There's so much data. I don't even know where to start. So I'm keen. I want to take advantage of all the data. I want to be able to analyze and look at it and get insights that maybe I wasn't able to get before or get them in a much more expedited manner. Do you have an example of an organization or maybe someone that you work with that's doing this well? And how do they start, or what are they doing? Are there any secrets but are not really secret that we could share to just help those that are listening in who want to move forward, know where to start?
CHRISTIAN DIPPEL: There's a general, almost ironic thing here, you could say. I always think, in strategy, there's two buckets of strategy--or at least that's a frame you can impose on it. There's quantitative decision making, which is operating in data-rich environments. A typical example of that would be pricing decisions. For the most part, if you're operating in a market that's well established, you can make very data-informed pricing decisions. You can call that strategy, but it's very quantitative, very data-driven strategy.
And then there's very imagination-driven strategy, which is the strategy you apply to settings or markets that don't exist yet. Who knows what the market for space travel is going to look like in 10 years? We can't know this. We can only imagine it. And I think one of the tensions that people seem to have when it comes to AI is that AI is all about the data--it's all about ingesting and transforming the data. Yet when we think about what the repercussions of AI are going to be for my business or in my industry, it's really a very qualitative exercise of imagining a future that, inherently, because it is the future, can't be very data-informed.
And you can't really extrapolate from what's been going on in the last two or three years. There's just not enough data. It's not established enough. It's too exponential. The process is accelerating too much to really use data to extrapolate into what that's going to look like in two or three years. So really, you just need to think about it.
HOST: Yeah, for sure. You mentioned frameworks earlier, which can sometimes be a challenge if we over-rely on them, but can also be an enabler--a way to get things structured. So let's play out strategy and maybe sort of bold or future thinking. Are there frameworks you've used, or approaches you've seen, that help leaders process their thinking or start to get things organized so that they can make better decisions and lean into the data to create insights?
CHRISTIAN DIPPEL: So strategy is heavy on frameworks, and I think for good reason. But I think frameworks are always a little bit of a double-edged sword, in the sense that a framework offers you a playbook. And at the end of the day, you need to think deeply about your situation. Playbooks or frameworks can help you get there quicker, or they can help you--
It's almost like, you know, you're trying a Phillips screwdriver and a flathead screwdriver, and you see which one fits the screw. You have tools at your disposal that help you solve the problem. Where I always caution, when I work with clients or with students in exec ed, is that it also satisfies a little bit our bias towards not having to think deeply--just having a playbook to run.
If someone just hands me my five forces canvas or my business model canvas and I can fill in the thing, I can feel like I've achieved something. But I think ultimately, a person that doesn't have those tools but is really willing to sit there still and think deeply about their situation, they will come up with better solutions than the person that is just superficially filling in their business model canvas.
And that's not a criticism of the business model canvas. It's human nature: I think it can offer us a little bit of this fake, box-ticking sense of getting a handle on a situation. So I love frameworks. They're very useful. But I always caution students, and clients in consulting work: they're a means to an end, not the end in itself.
HOST: I like your analogy around the screwdrivers. You know, we sometimes confuse motion with progress. Someone can actually be moving but not necessarily advancing, but it satisfies this sense of, oh, I'm busy, I'm doing things, right? I'm making stuff happen. How would you help someone shift between the two? Are there any cues that tell me maybe I'm not doing enough of the hard work and I'm relying too much on, "OK, I filled out the sheet, I've done work, pat myself on the back"? What are some things I should be holding myself accountable to, maybe, or paying attention to?
CHRISTIAN DIPPEL: If you look back in history--and when you look back at the past, things appear very obvious, even though they weren't obvious at the time, just as they aren't obvious for us looking forward at a new technological disruption--with past technological disruptions, you often have these first order effects that are quite obvious.
The radio comes around, and it totally disrupts the newspaper industry. TV comes around, and it completely disrupts the radio station industry. The railway comes around, and it kills off the Pony Express in the American West, and it completely destroys anyone who is invested in canal building, which 10 years prior, was incredibly profitable. Because these are sort of first order obvious replacements.
And many of these show up in what some management people like to call fast history--so, recent technological history, whether that's in microchips or PCs or whatever. You get these examples like Netflix killing Blockbuster. And clearly, Blockbuster just didn't think about the first order effect of home delivery of DVDs. And subsequent to that, the long game was always going to be streaming. They didn't think about it. They got killed by it. Kodak getting killed by digital cameras.
But then oftentimes, when you think about the really fundamental technological changes that really changed the way we live, the second order effects are actually much more interesting or, well, depending-- if you're Kodak, they're not more interesting. Depending on what business you run, you might be more affected by the first order effect. But what I'm trying to get at is you might be operating in a business where you say, AI doesn't affect me all that much.
But when you think through the second order effects, think about something like the tractor coming around in the 1920s completely changed agriculture, completely changed employment opportunities in agriculture. Those are like the first order effects that people could see right away.
But then within 10 years, what happened is you got massive rural urban migration, and you got complete changing of the urban landscape, the rise of superstar cities, urban manufacturing, the rise of the service sector, and just this like infinite plethora of new opportunities coming around.
And that's the part where I sometimes think, because I was fortunate enough to have some training as an economic historian, you have a bit of this longer run lens of, OK, let's try to really think through what this thing is going to do in five years, in 10 years, and not just get hung up on the difference of does my competitor have a chatbot today, and I don't have a chatbot, and what that's going to do to my perception of my brand in the marketplace today?
HOST: So let's build on that last point, right? I think most leaders endeavor to learn from the past--we can't change it, but we can certainly glean insights from it. So how do you challenge and get someone to think five and 10 years out when we're talking about AI? And I'm thinking, jeez, something's going to change in five or 10 days. And there's that idea of, oh, I don't have a chatbot on my site, and if someone goes to my site, I'm going to look like I'm behind the times.
So how do you toggle between what do I need to do right now while anticipating? Because if I don't think five years plus out, I might be in for a much bigger surprise than not having a chatbot today.
CHRISTIAN DIPPEL: There's a lot of myopia in how humans think in general. I think there's value in trying to bring a little bit of economic history in, whether that's in a one-on-one consultation or a classroom setting. Bringing a little bit of that historical perspective can allow you to do a little bit of backcasting.
You can say, OK, what is it that people didn't see in the '60s or in the '70s about what the arrival of all sorts of home appliances would do to female labor force participation, and then female educational attainments, and then changing gender roles 10, 20, 30 years out from those technological innovations? What are the kinds of biases that are making it hard to see out like that?
Really trying to, I think, lean into just sitting back and imagining what is my industry going to look like, either in free form or doing that through something like Porter's Five Forces framework. I think that's really important.
HOST: So how do you feel about using AI to cross-check your own human-based deep thinking, right? You've sort of challenged us to carve out space, push ourselves, exercise that muscle: what's going to happen to my industry and my space in five years and 10 years? How can AI help me do that? So I'm still going to do the hard work, but maybe it can fact-check me or cross-reference me or keep me on track.
CHRISTIAN DIPPEL: AI means many different things to different people, different strategic business units. My own perspective and a little bit that's shaped our conversations so far is when you're operating in a kind of business that is more data-rich, there are a lot of operational things that you can do with AI.
One thing that I'm finding in just some work with clients is there's a lot of small and medium-sized businesses that have a ton of data. That could be manufacturing data. It could be order processing data, client facing data, all types of data.
But up until now, it's never been worth hiring, let's say, two data scientists. If you're not a data-driven enterprise per se, you've just sort of accidentally, you might say, accumulated data as a side product of your business operations. I think a lot of enterprises are moving into a world where you've accumulated an incredible treasure trove of data and never really done anything with it. And now, it's a matter of: can I build a back end, maybe in a very sandboxed way that doesn't take a lot of resources, where I can really come up with new business offerings, or just real process optimization--maybe knowing which clients are due for maintenance, which clients are due for a new product, having an automated email pipeline that automates part of the outreach or standardizes some of the procedures we haven't standardized in the past?
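As a hedged illustration of the sandboxed, low-resource back end described here, the sketch below flags clients due for maintenance and drafts standardized outreach. Every client name, field, and interval is invented for the example; a real system would read from the business's own records.

```python
from datetime import date, timedelta

# Hypothetical client records; field names are illustrative assumptions.
CLIENTS = [
    {"name": "Acme Tooling", "last_service": date(2024, 1, 10), "email": "ops@acme.example"},
    {"name": "Borealis Mfg", "last_service": date(2025, 5, 2), "email": "plant@borealis.example"},
]

MAINTENANCE_INTERVAL = timedelta(days=365)  # assumed service cadence


def clients_due(clients, today):
    """Return clients whose last service is older than the assumed interval."""
    return [c for c in clients if today - c["last_service"] > MAINTENANCE_INTERVAL]


def draft_outreach(client):
    """Draft a standardized reminder email body for one overdue client."""
    return (
        f"To: {client['email']}\n"
        f"Subject: Maintenance reminder for {client['name']}\n\n"
        f"Our records show your last service was {client['last_service'].isoformat()}. "
        "Reply to this email to schedule a visit."
    )


if __name__ == "__main__":
    for c in clients_due(CLIENTS, date(2025, 6, 1)):
        print(draft_outreach(c))
```

The point of the sketch is how little machinery the "treasure trove of data" use case requires: a filter over existing records plus a template is already a standardized outreach procedure.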
So that's the part where, even as a small business, you can use AI to automate a lot of things. The thing that I think you're alluding to more is almost a slightly more individualistic exercise, which is: you might just sit there at your desk, and you have to make a decision. And you can go speak to your colleagues at the water cooler, and that's useful.
But maybe you're worried that everyone is drinking the same Kool-Aid to an extent when it comes to certain strategic business decisions. Now you have the ability, and that's a very individualistic exercise that is almost something that I think in a business culture can be encouraged.
But early on, it's not actually that clear how to even standardize it. Encouraging it could be something as simple as the company paying for your $20-a-month subscription to one of the models, to have more premium access to capabilities like deeper thinking and deeper research. That's not a lot of money to spend to use those kinds of tools.
But what they allow you to do is really check your own thinking, scrutinize your own thinking. And of course, any individual can use that. You can use it to prepare for job interviews. You could use it to prepare for this particular interview--I didn't do it, but I could have tried to characterize Brian Benjamin to a chatbot and ask what kind of questions he's going to ask me. So there's a lot of capability to do that.
And one area where I see that a lot is in the classroom now. Just in the last, let's say, 12 months, you have the ability to take really any LLM--I was hesitant to just say ChatGPT; ChatGPT is becoming a little bit like Xerox for copying--and get a really good strategic partner out of it.
So when I teach geopolitics, I might throw a question out there like, OK, imagine there's a scenario where we are an electric vehicle producer in Ontario, and there's a civil war in the Democratic Republic of Congo where all the cobalt comes from. Chances are zero people in the audience will have deep knowledge on this.
But you can do a scenario analysis. You can describe the thing to ChatGPT and do a scenario analysis within five minutes. And within five minutes, we can have a group strategic discussion. So we get elevated in our industry knowledge and in our sort of high-level strategic thinking on an issue. Within minutes, you can get elevated so much.
And then, of course, that can't be the be-all and end-all. There need to be additional guardrails around the process, and there's all sorts of things you need to guard against and be worried about. But as a quick way of getting deep into a topic and scrutinizing your own thinking, it's just incredible.
HOST: That predictive nature can be an interesting tool--grabbing from the past, from context, and so forth. How can AI be used to help me imagine 10 years out, if I am trying to see where my industry is going or what's going to happen? Is it a tool I can use in that kind of capacity?
CHRISTIAN DIPPEL: You often hear this one liner when it comes to AI, and I think there's a lot of truth in it, that it really raises the value of good questions. In other words, your job as a leader, whether it's at the c-suite level or leading a strategic business unit or whatever the case, I think it's going to become more about asking good questions and less so about being the person to provide the answers.
And so there's a lot of debate in the AI space. With my companies, I interface more with the technical elements of prompt engineering, and automating prompt engineering through the various LLM APIs, et cetera. So asking good questions is becoming more and more important.
Some of those questions will be automated questions. So if you just have a data feed that operationally matters for your company, you want to have an automated pipeline in the back end that queries the data feed and generates the answer from the data, and it's all about asking good questions.
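A minimal sketch of the kind of automated pipeline just described: a set of standing questions run against each item in a data feed. The questions, feed format, and `ask_model` function are all invented for illustration; `ask_model` is a placeholder where a real LLM API call would go, returning a canned string here so the sketch stays runnable without credentials.

```python
# Standing questions the pipeline asks of every new item in the feed.
QUESTIONS = [
    "Which of our products does this item affect?",
    "Does this item create a compliance deadline?",
]


def build_prompt(item_text, question):
    """Combine one feed item with one standing question into a single prompt."""
    return f"Document:\n{item_text}\n\nQuestion: {question}\nAnswer briefly."


def ask_model(prompt):
    # Placeholder for a real LLM API call; echoes the question it was asked
    # so the pipeline's plumbing can be exercised offline.
    return f"[model answer to: {prompt.splitlines()[-2]}]"


def process_feed(feed):
    """Run every standing question against every feed item."""
    return [
        {"item": item, "question": q, "answer": ask_model(build_prompt(item, q))}
        for item in feed
        for q in QUESTIONS
    ]
```

The design choice worth noticing is that the questions live in the pipeline as data: improving the system means improving `QUESTIONS`, which is the engineering counterpart of "asking good questions."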
That's sort of like the engineering side of asking good questions. But I think at the more qualitative level, if you're just a person making a decision, you're sitting at your desk, and you can use AI as a copilot in a non-technical sense, so not as an explicit copilot but just as someone to run ideas by.
So much of the way we ask questions is informed by our context. And I'll give you one example. When I teach an executive education class in Canada, and we talk about how is the labor market going to be impacted by AI? One thing that you often get is a lot of people assume that one area that is going to be relatively unaffected by AI is old age care because that is something that requires the human touch.
And that might be entirely true. But what I found really interesting is the contrast to if you talk to people in East Asia, let's say in Hong Kong, about AI, that is their number one use case for AI, is old age care. Because the demographics are so fundamentally different in those two geographies. We have relatively young demographics.
Of course, we talk about aging and things like that. But it's not a very pressing concern in a macroeconomic sense when you look at our population trees in North America. When you look at population trees in Hong Kong, Japan, Singapore, oh, my gosh, they look very different from ours.
And old age care is viewed as the number one--well, maybe not the number one, but one of the top use cases where AI can actually help, in the form of literally AI-informed robotics. And so it's just striking how people's predictions about future use cases are so socially informed by their context. And I think AI can help you question your own assumptions when you're asking those questions.
HOST: You're right. We kind of naturally go to what we know. And one would hope that we can learn from each other because as things shift in North America and aging does become an even bigger sort of prominent piece, what have we learned from other parts of the world that have already been there first?
I wrote down, you're asking good questions, and we talk about prompt engineers. I'm fortunate to work with a lot of exceptional coaches here at Ivey. So I've had the pleasure of being asked really good questions. Very specific, often open-ended, leaving space for conversation. And so I think that your comment and maybe the art of good questions is going to be an even more prominent skill that leaders are going to need to hone and to build and to refine.
So I'm going to go back a little bit to the conversation around executives--and maybe even park the word executives. I think leaders at multiple levels of the organization are involved in strategic planning. So yes, it can be organizational strategic planning, but it can also be departmental, or even team, strategic planning.
So when we hear about AI initiatives in most strategic planning exercises, it's, we've got to leverage AI in some capacity or another. So let's toggle between the two: is this a process innovation--can AI save me time, save me money, create speed and efficiency--versus sort of reshaping maybe our business model or our team model altogether?
CHRISTIAN DIPPEL: I used to have a little bit this idea, I think, early on in thinking about AI, that product innovation is somehow better than process innovation. Process innovation, it feels a little boring. It's sort of like you're wringing the towel dry, like getting the efficiencies you can get, but product innovation is really the exciting part.
But I've really changed my thinking on that. And actually, some of our colleagues in Hong Kong had a helping hand, in a discussion I remember well, where the framework that was applied in that conversation was more about: where is AI a homogenizer, and where is it a differentiator?
And the chatbot is a great example of a product innovation that is not going to be a differentiator. Chatbot technology is a very commoditized type of technology. I've been involved in building chatbots. It's a very standardized product. And yes, there's better ones and less good ones. And yes, we've seen with Air Canada, we've seen examples of it horribly backfiring because they didn't execute very well on their chatbot.
But at the end of the day, it's not a differentiator. No business will stand out for having a chatbot. Some businesses might be worried that in the short run, they might stand out negatively for not having one. I think those are relatively minor concerns.
At the same time, sometimes, the process innovation is your real differentiator and your real business model. So when you think about famous examples like IKEA, Walmart, to a lesser extent, the network dynamics of Netflix or Amazon, those are really all businesses where the business model is on the process innovation more than the product innovation.
And so that is really an area where my thinking has changed from thinking product innovation somehow better than process innovation, to really being agnostic about the two and thinking about where is the true competitive advantage. Is it plausible for us to build a product that really sets us apart, or is it maybe the case that there's something in our processes that is going to really be able to leverage AI in a way that sets us apart?
HOST: We're often looking for the bigger splash or the bigger, bolder innovation, and maybe it's actually something a little closer to home that we need to pay attention to. I want to dig in, because you're involved in a lot of really cool ventures and consulting work. Can you share a couple of live examples of where AI is being strategically applied? So not hype--don't tell me about the ChatGPT bot that Company A implemented. Something that would be interesting for our listeners to hear, that you've been involved with firsthand.
CHRISTIAN DIPPEL: One example that comes to mind is one of my own businesses, which is really a newsletter business that takes in an enormous amount of government text. And we knew with that idea that no one really wants a newsletter that covers everything going on in Ottawa. No one really needs to know everything that is going on in Ottawa.
The value really comes from saying, if you're in oil and gas, you want a newsletter that covers everything going on in Ottawa that affects the oil and gas industry. If you're in utilities, you want a newsletter for that. If you're in telecoms, you want a newsletter for that. If you're in finance, you want a newsletter for that.
And so we had this idea, but we had this forest-for-the-trees problem: government just generates too much text. The editorial team it would require to parse all those things into 20 different topics is just enormous.
And it's actually a good example of this process versus product innovation, and how fuzzy that line can sometimes be with AI. We have a very, very simple AI stack. All it does is take all the data that we're ingesting and funnel it into 20-plus funnels.
The technology is not that hard. It's not a particularly sophisticated model. It's using very off-the-shelf tools. But it allows you to go from zero to one: from not being able to do it at all to doing it in an extremely automated way. And so now, we're in a place where I sit there on a Sunday night, editing 24 newsletters in the space of an hour and a half. Everything else is AI generated, and I just look through them, do a little bit of polish, and that's it.
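For readers who want to picture the funnel Dippel describes, here is a minimal sketch of that kind of pipeline. Everything in it is an assumption for illustration only: the topic names, the `route` helper, and the stubbed `llm` function are invented, and a real stack would call an off-the-shelf model API where the stub sits. This is not Dippel's actual system.

```python
# Illustrative pipeline shape: ingest items, ask a model to classify each
# into one of 20-plus topic funnels, then hand each funnel to an editor.
from collections import defaultdict

# Four topics shown here; the real business uses 20-plus.
TOPICS = ["oil and gas", "utilities", "telecoms", "finance"]

def llm(prompt: str) -> str:
    """Stand-in for an off-the-shelf LLM call (assumption, not a real API).
    Picks the first topic whose key word appears in the prompt."""
    lowered = prompt.lower()
    for topic in TOPICS:
        if topic.split()[0] in lowered:
            return topic
    return TOPICS[-1]  # fall back to the last bucket

def route(items: list[str]) -> dict[str, list[str]]:
    """Funnel every ingested item into a topic bucket via the model."""
    funnels: dict[str, list[str]] = defaultdict(list)
    for item in items:
        topic = llm(f"Classify this government item into a sector: {item}")
        funnels[topic].append(item)
    return funnels
```

Each per-topic bucket would then feed a drafting step (summarize the bucket into a newsletter), leaving only the human polish Dippel mentions.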
So your capability as a small business to do an enormous amount obviously depends on your line of business; it's harder in hardware than it is in software. But in certain applications, you get completely supercharged in what you can do, and a lot of it can be much more about the process innovation than the product innovation.
HOST: Yeah, interesting. You think about an hour and a half. What would it have taken without AI support? How long would it have been?
CHRISTIAN DIPPEL: Well, it's almost like your process has changed in a way where it would have been straight-up impossible, because you would have had to see the forest for the trees. But there were too many trees, and you could never have done it to begin with.
Then there is maybe an intermediate solution, the technology of six years ago, where you define a set of keywords and parse through all the text based on buckets of keywords to try to do an initial funneling. But oh, my gosh, you would have had a lot of work left to do to then make that good. And then you would still need to summarize all the content, which four or five years ago would have been very hard to do. There would have been a lot of hallucinations, and the quality of the text would have been horrendous. And so all of those problems just go away, and you're left with just the polishing and editing work.
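The keyword-bucket approach Dippel contrasts with can be sketched like this. The bucket names and keywords are made up for illustration; the point is the crude first-pass sort and the pile of unmatched items that would still need manual triage.

```python
# "Six years ago" approach: hand-defined keyword buckets do a rough
# first-pass sort; anything unmatched is left for an editor to triage.
KEYWORD_BUCKETS = {
    "oil and gas": {"pipeline", "crude", "drilling"},
    "finance": {"bank", "mortgage", "securities"},
}

def keyword_sort(items: list[str]) -> tuple[dict[str, list[str]], list[str]]:
    """Place each item in every bucket whose keywords it mentions;
    collect items that match no bucket for manual review."""
    buckets: dict[str, list[str]] = {name: [] for name in KEYWORD_BUCKETS}
    unmatched: list[str] = []
    for item in items:
        words = set(item.lower().split())
        hits = [name for name, kws in KEYWORD_BUCKETS.items() if words & kws]
        for name in hits:
            buckets[name].append(item)
        if not hits:
            unmatched.append(item)
    return buckets, unmatched
```

Exact word matching like this misses synonyms and paraphrases entirely, which is why, as Dippel says, "you would have had a lot of work left to do to then make that good."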
HOST: Yeah. So saving time on something that you might not have even been able to do because of the complexity involved. So what's one single takeaway that you would leave, something that I can do as a leader, when I'm thinking about AI in the context of creating competitive advantage?
CHRISTIAN DIPPEL: I think one big takeaway is that a lot of businesses, especially businesses that interface with Ivey a lot, have traditionally operated on a model where a wait-and-see approach is often the right approach when it comes to new technologies. And you know, in academia, the fancy language for that is the real option value of waiting.
And oftentimes, with new technologies, the idea is, well, some standardized gold standard is going to emerge. And maybe I need to wait for two or three years for the dust to settle. But then there'll be an off the shelf product that will do what I need to do. And so it's better to wait.
And especially in the world of atoms, in the physical hardware world, that is often the case, right? We all know Moore's law: the cost of computing decreases exponentially. Things like the cost of solar panels have also been decreasing exponentially over the last 10 years. So for lots of technologies, it absolutely makes sense to wait and see.
I think AI is really different, and it's different for two reasons. I'm sure it's different for more than two reasons, but two that are sort of prominent in my head. One is it just keeps accelerating. And so if you wait and do nothing, the thing is going to accelerate away from you in a way that you might come to regret two or three years down the road.
And then the other thing is the cost of an option is just so darn low compared to most other technologies. All it really takes is to start using the thing in a potentially very low-tech way, maybe designating two or three critical people in your organization to explore some options. Free up one day a week where you say, today is your moonshot research day, where you just think about things you can do with AI.
So the barrier to adoption is really so low that I think it really makes sense to engage today and not wait, even for organizations that have been well-served in the past, with taking a wait and see approach when it comes to adopting new technologies.
And then the second thing is going to be quicker because I already said it. I find a lot of people in conversations grapple with this tension: AI is all about data, but thinking about what AI will do is all about qualitative imagination. I feel like that creates a real barrier to engaging with it. I think it's worth getting comfortable with the idea that, yes, AI is all about transforming data, but that doesn't mean you can't just sit there and think carefully about what AI will do to your business or your industry.
HOST: And thank you for making the comparison. Sometimes it did make sense to wait and see, let the dust settle, let the kinks get worked out. Whereas in other cases, I picture this train going down the tracks, picking up speed, and it's like, I'm never going to catch up. I've got to jump on now.
For individuals, do you feel it's worth paying for premium AI models? And if so, or if not, do you have a favorite?
CHRISTIAN DIPPEL: One of the things that I've found most amazing is in working with engineers. When you use something like GitHub Copilot, it's integrated with five or six different AI models, and you can watch an engineer switch between models. One model will be incredible at changing your code.
Another model will be incredible at giving you code from scratch if you're just describing what you want to do. And so there's a lot of nuance, and I think the deeper down that rabbit hole you go, the more those nuances matter. But then at the end of the day, it's up to you to decide if those nuances matter to you.
So I routinely use four different models just in conversation. I have premium subscriptions to two. I don't have premium subscriptions to the other two, and I just experiment. Between certain models I find very little difference, depending on the context too. If you just ask, here are the things I have left over in my fridge, give me some suggestions for dinner tonight, it's not going to matter all that much which one you use. If you want to do a really deep-thinking analysis of an industry profile, maybe there will be some real differences between a premium model and a non-premium model.
HOST: So what I take away is it depends.
CHRISTIAN DIPPEL: Yes. It depends. And maybe further to that point, I don't think you need professional advice on the decision of whether to spend $20 a month on a premium model. Just spend $20 on a premium model, and if you don't like it, unsubscribe a month later. It's not an irreversible decision.
HOST: You know, someone might like this particular tool because they're using it for this very specific reason, whereas someone else might have success with a different tool because again, it's a different purpose. And given how fast things are going, I'm sure new tools will be emerging all the time.
CHRISTIAN DIPPEL: To your point, in a classroom setting, if we have a session where we actually say, OK, now everyone go and prompt an AI. People tend to use very different ones. Some people are on their work laptop. And so they often use just Microsoft Copilot. Other people use GPT, some people use Perplexity, Claude, whatever. You get a pretty wide range of results in the answers. But then at the same time, if you ask the same question of the same LLM 5 minutes later, you can also get a wide range of results.
HOST: It's like us, right? If you ask me the same question tomorrow, I might get a different answer depending on what we were talking about.
Thanks for listening to Learning in Action. If Christian's insights sparked new thinking on AI strategies, don't stop here. Head over to ivexeced.com to explore upcoming programs and deepen your fluency with AI. And follow us on your favorite social media platform to catch more real-time thought leadership and exclusive podcast drops. We'll see you next time.
About Ivey Executive Education
Ivey Executive Education is the home for executive Learning and Development (L&D) in Canada. It is Canada’s only full-service L&D house, blending Financial Times top-ranked university-based executive education with talent assessment, instructional design and strategy, and behaviour change sustainment.
Rooted in Ivey Business School’s real-world leadership approach, Ivey Executive Education is a place where professionals come to get better, to break old habits and establish new ones, to practice, to change, to obtain coaching and support, and to join a powerful peer network. For more learning insights and updates on our events and programming, follow us on LinkedIn.
