According to Mark Esposito, an artificial intelligence expert and instructor at Harvard DCE Professional & Executive Development, technology can empower us to address some of the biggest challenges we face today.

He offers his insights on the present state — and potential future — of AI.

What are some current AI trends in business and what value do they bring?

The AI trends we see in business today largely center on gains in efficiency and productivity for many firms. As AI spreads through openly available models, more companies are exploring ways to bring some form of machine learning into their operations.

Years ago, efforts to improve manufacturing and production were tied to the physical world. We now see a very similar phenomenon happening with service companies; they’re trying to use AI to enhance what they do.

For example, marketing today is much more about digital interfaces because technology has made that possible. Now with AI as part of the equation, it changes even more: how we interface with customers, how we profile them, the kinds of campaigns we run, and the algorithmic governance we have.

What does AI mean in your field?

There are a few senses in which we talk about AI in my field. When we run programs here at Harvard, we think of AI as autonomy; we’re building some degree of automation into the decision-making process.

AI is also advanced statistics. There is sometimes a bit of mystique around AI, but it’s mainly advanced statistics that, through computing power, generate many different outputs, which then become predictions.

Fundamentally, artificial intelligence is about the ability to use algorithms to define patterns. Once you recognize the patterns, you can eventually predict them. And once you can predict them, you can prescribe meaning. 
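
That loop from pattern to prediction can be made concrete with a toy model. The sketch below is purely illustrative (the data, the choice of a linear model, and the variable names are all invented here, not drawn from the interview): it fits a pattern in historical numbers, then predicts from it, leaving the prescriptive step to a human decision.

```python
# Minimal sketch of "recognize the pattern, then predict from it."
# All numbers and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical pattern: weekly ad spend vs. weekly sales, in $1,000s.
ad_spend = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
sales = np.array([12.0, 19.0, 31.0, 42.0, 48.0])

model = LinearRegression().fit(ad_spend, sales)  # define the pattern
forecast = model.predict([[6.0]])                # predict from it
print(f"Predicted sales at $6k ad spend: ~${forecast[0]:.0f}k")
# "Prescribing" is the decision layer built on top of such predictions.
```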

What are some of the potential challenges of applying AI in business?

In a dynamic algorithmic model, predictions sometimes arrive faster than our ability to make decisions or to understand them. This time lag is not easy to mitigate.

We also have the ethical dimension. AI technology exists on its own, but it’s an extension of human thinking; I call it an extension of the brain. If some form of bias is injected into the algorithm, you’re likely to digitally amplify that bias in the software or the AI solution. This becomes problematic when the limitations of one person’s or one group’s mental models become the norm, or become standardized.
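
To make that amplification mechanism concrete, here is a deliberately artificial sketch (the data, groups, and model are all invented for illustration): a model trained on historically skewed decisions learns to reproduce the skew, effectively standardizing it.

```python
# Illustrative sketch: a model trained on biased decisions replicates the bias.
# Every number and variable here is an invented assumption for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0.0, 1.0, n)  # the legitimate signal

# Historical approvals rewarded skill but systematically penalized group B.
approved = (skill - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# Two applicants with identical skill, different group membership:
same_skill = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(same_skill)[:, 1])  # group B scores markedly lower
```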

When we don’t have clarity on how predictions are generated, it becomes very difficult for us to build defensibility or accountability. The challenge we have to address is deciding the role of humans and how we make the technology human-centric. Even if humans are largely relieved of repetitive tasks, it doesn’t mean we are removing the power delegated to humans. If we’re unclear about that role, then we have to define it intentionally.

When we look at technologies like AI, we need to think about how we audit them, because we need to be clear that some processes might be flawed. We also need a margin of response in case something goes wrong: no matter how automated the process is, the final 1 percent should really be about people making decisions.

What are some examples of AI use cases in business? What has been successful and what hasn’t? 

Some of the most successful examples of AI are in the financial industry. One example is credit card fraud prevention. We know when somebody is using our credit card against our will because credit card companies profile our behavior; we tend to be repetitive in what we buy, in how much we spend, and in where we buy. When a stolen credit card is used by somebody else, that generates an anomaly. Credit card fraud is now largely preempted by the use of AI.
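
As an illustration of that anomaly-based idea (a minimal sketch, not the method any particular card issuer actually uses; the features and figures are invented), a model can learn a cardholder’s repetitive pattern and flag transactions that fall outside it:

```python
# Illustrative anomaly detection for card transactions (invented features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" history: repetitive amounts, hours, and distances from home.
history = np.column_stack([
    rng.normal(40, 10, 500),  # typical purchase amount, $
    rng.normal(13, 2, 500),   # typical purchase hour of day
    rng.normal(5, 2, 500),    # typical distance from home, km
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A transaction far outside the learned pattern: large, at 3 a.m., far away.
suspicious = np.array([[2500.0, 3.0, 4200.0]])
print(model.predict(suspicious))  # -1 means anomaly: flag for review
```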

We cannot always use AI to predict things like social unrest. In journalism, we cannot write about social trends that easily, because there are so many variables we can’t know or anticipate in a mathematical manner. Healthcare and education have made good progress, but we still need infrastructure for some of the technology to be integrated if we want to scale solutions to more people.

It is an oversimplification to think that just because we can automate or digitize a process, it will work. We are still at an early stage of learning where technology can help us accelerate and where it has held us back. The next few years will be critical for us to understand this better.

How might we see the acceleration of AI being integrated in day-to-day life in the coming years? 

One likelihood is that we’ll see a fusion of cybernetic capacity into our lives. We call it “cognification,” where cognition and technology are combined. Smart infrastructure will likely make future infrastructure much more responsive, technologically advanced, and environmentally friendly.

Augmented reality is becoming more and more possible. Reality and virtual reality may coexist at the same time, which means that our sense of perception will no longer come from the traditional normative idea of perception but will be discretionary, depending on what we’re experiencing. This opens up the possibility of exploring more nuance, but it could be dangerous: we could see multiple versions of the same truth and build even deeper echo chambers of information.

How have the industries from which course participants are coming changed? What are people curious about?

With such a rich array of people from all over the world in my classes, an interest in algorithmic governance is common to many of them. When I started teaching the AI programs in 2019, the questions being asked were very different. Now there are people who feel we need to start thinking about the rules of engagement.

When I first started, many participants came from technology companies. Today, technology companies are just one industry among many. Participants include people working in legal practice, manufacturing, academia, and construction.

Everybody has an opinion about ChatGPT; some use it extensively, some don’t know how to use it, and some have banned it. But there’s also a lot of thirst and interest from participants to go a bit deeper. They all get exposed to AI, but they don’t go deep enough to gain a true understanding.

I think there’s a lot of opportunity to distill knowledge in the right way. In my course, that becomes a way of building strategic value: creating value, rethinking your operations, or redesigning your organization so employees don’t fear technology. Otherwise you’re going to have a workforce of people who are scared they’ll be replaced by AI, which is not the narrative I like to believe in.

How do you generate immediate value for your students while still making the knowledge applicable for as long as possible? 

My lucky combination is that, on one hand, I’m an academic. I engage with important questions, because that’s the nature of inquiry. I am also an entrepreneur. I co-founded an AI company, Nexus FrontierTech, and a lot of my learning comes from the practice of AI with clients and with use cases.

The reality of running the company from an entrepreneurial perspective gives me a sense of direction on where the market is. The academic side of me goes back to the inquiry and asks: what is happening, and why? I’ve found that I can strike a balance between these two sides, the theoretical and the practical, and it works for me.

Every iteration of the program also needs to integrate new technological transformations, because this industry is changing very fast, every day.