Emotions about AI gravitate toward extremes: while many CEOs dream of boosting productivity by orders of magnitude, many workers are terrified their livelihoods will vanish.
Ethan Mollick, an associate professor of management at the Wharton School — whose new book Co-Intelligence: Living and Working with AI is a No. 1 Amazon Charts bestseller — wants us to be realistic yet optimistic. He spoke with b. about reasons for excitement and experimentation as the world transforms around us.
b.: What was the lightbulb moment that led you to embrace AI rather than fear it?
Mollick: Once you kind of get through this sort of existential crisis — why does this thing appear to think, what does this mean for jobs and work? — you can start to use it for very productive and positive things. … I think using [AI] makes it feel scarier for a little while, but you also start to understand its limitations and abilities.
b.: What moral and ethical responsibilities should business leaders have as they use AI?
Mollick: There are a lot of ethical concerns associated with AI — how they’re trained, who controls them, what biases they have, whether people are passing off work as their own …
The advice I give … is bring AI to everything you do — ethically, legally — and see what works or what doesn’t work. That’s the only way to [learn] what it’s good or bad at. … You need to get a sense of where it’s useful and where it’s not for your products or services. And then you want to think about what use case it has …
I wouldn’t use [the free version of ChatGPT] for idea generation. [The paid version is] a much more powerful system, maybe 10 times as powerful. … And there are three “frontier” models right now: Claude 3, GPT-4, and Gemini Advanced. I think people need to use one of those.
b.: How should a company protect its proprietary information while using AI?
Mollick: There are lots of privacy solutions out there. If you pay for the premium GPT-4, there’s a switch you can throw so it doesn’t train on your data. … It’s a software process, not an entity.
b.: At this point, if you’re in the business world, is it a losing battle to be a contrarian who refuses to use AI?
Mollick: There are some parts of regulated industries where you can’t use AI right now. … When we rely on [AI] as a crutch, that could lower our ability to do things or understand things.
But I think using AI as a co-intelligence offers a huge amount of opportunities for the future. … The results are likely to be complicated with lots of positives and negatives, but just talking abstractly [about it] doesn’t help as much as actually getting your hands dirty.
This interview has been edited for length. Read the full Q&A at business.com.
Co-Intelligence is available now.