Updated Apr 19, 2024

Wharton Professor Ethan Mollick Is Ready to Merge With the Machines (Full Q&A)

Tina Nazerian, Contributing Writer


Reactions to AI gravitate toward extremes: While many CEOs dream of boosting productivity by orders of magnitude, many workers are terrified their livelihoods will vanish. Ethan Mollick, an associate professor of management at the Wharton School — whose new book, Co-Intelligence: Living and Working with AI, is a No. 1 Amazon Charts bestseller — wants us to be realistic yet optimistic. He spoke with b. about reasons for excitement and experimentation as the world transforms around us.

b.: You were losing sleep over AI. What was the lightbulb moment that led you to embrace it rather than fear it?

Mollick: I mean, there wasn’t a “fear” moment. That wasn’t something that occurred to me, because I’ve been using the system for a long time. So, I wasn’t surprised about what it could do necessarily. … The point of [my] sleepless nights was not that I was disturbed by it; the point was realizing this is a big deal, right?

Once you kind of get through this sort of existential crisis — why does this thing appear to think, what does this mean for jobs and work? — you can start to use it for very productive and positive things. … I want to know what the fear is — like, if the fear is an existential anxiety about the future, I totally get it. I think using [AI] makes it feel scarier for a little while, but you also start to understand its limitations and abilities. … As I started working with more and different things, I started to find the value of it increasing a lot. So, I think that … the way to use it is along with you as a human.

b.: You advise treating AI as a co-worker, co-teacher, and coach. What are the benefits and risks?

Mollick: All right. So, the first principle … in the book is “treat the AI like a person, and tell it what kind of person it is.” And the reason is that’s how the AI systems are built to operate. They operate as if they were people, right? The most effective way to work with it is to play into that paradigm. It also makes the AI seem less like a technical solution to [a] problem — because it doesn’t work the way standard software works.

So, there is a risk to this. The AI is not a person. It doesn’t have thoughts or dreams or emotions. You could be lulled into forgetting that.

b.: What moral and ethical responsibilities should business leaders have as they use AI?

Mollick: There are a lot of ethical concerns associated with AI — how they’re trained, who controls them, what biases they have, whether people are passing off [the AI’s] work as their own … and although it gets overblown, there are always data privacy and security issues.

It’s an open question what the rules are. Like, is getting advice from AI on something [acceptable]? Do you have to just tell people if you’ve done that? If you [ask AI to] write an outline for you to use, do you have to disclose that? It’s hard to have a single hard-and-fast rule about when you disclose things.

b.: How can business leaders start approaching best practices and setting boundaries over the use of AI for themselves and their teams? Because you can’t just use it on a whim whenever. …

Mollick: The advice I give … is bring AI to everything you do — ethically, legally — and see what works or what doesn’t work. That’s the only way to [learn] what it’s good or bad at. … You need to get a sense of where it is useful and where it is not for your products or services. And then you want to think about what use case it has. And that lets you think about, OK, what happens if AI continues to get better quickly? Or slowly …?

There’s no instruction manual, right? It’s not like somebody knows how to be the world expert in your field by using AI. … You have to explore this for yourself to figure out how it’s best used. … I wouldn’t use [the free version of ChatGPT] for idea generation. [The paid version is] a much more powerful system, maybe 10 times as powerful. … And there are three “frontier” models right now: Claude 3, GPT-4, and Gemini Advanced. I think people need to use one of those.

b.: You mentioned data privacy. How should a company protect its proprietary information while using AI? If you’re an agency writing a brand marketing message for a client, you don’t want your client’s information feeding the tool’s training data.

Mollick: There are lots of privacy solutions out there. If you pay for the premium GPT-4, there’s a switch you can throw so it doesn’t train [on] your data. … It’s a software process, not an entity. It has to be put into a training dataset. So, I think that people are overly worried about that compared to how relatively easy the problem is to solve.

b.: At this point, if you’re in the business world, is it a losing battle to be a contrarian who refuses to use AI?

Mollick: Well, I kind of wonder what the reason is. There are some parts of regulated industries where you can’t use AI right now. … When we rely on [AI] as a crutch, that could lower our ability to do things or understand things. But I think using AI as a co-intelligence offers a huge amount of opportunities for the future. … The results are likely to be complicated, with lots of positives and negatives, but just talking abstractly [about it] doesn’t help as much as actually getting your hands dirty.

It’s dangerous [for workers] who aren’t as good at their tasks. If you’re a very good ghostwriter — which I’m sure you are, right? — the AI is not going to write as well as you. … I also think people who are in creative industries, like yourself, tend to think that most people like to be creative and have lots of ideas. And a lot of people are very happy outsourcing some of their creativity to the AI.

b.: What regulations over AI usage and development might be on the horizon?

Mollick: A lot of people in regulated industries can’t use AI because there’s no clear regulation. The basic draft agreement by [the White House] is sort of developing along with the companies. That’s a legitimate strategy: voluntary compliance, monitored closely. So, there is sort of a transformation happening, but there’s a lot of complication because it’s a multinational problem, not just one country’s.

Co-Intelligence is available now.

This article first appeared in the b. Newsletter. Subscribe now!
