If you frame this particular skill of generative AI as “think like an X,” the moral questions get pretty weird pretty fast. Founders and engineers may, over time, learn to train AI models to think like a scientist, to counsel like a therapist, or to world-build like a video-game designer. But we can also train them to think like a madman, to reason like a psychopath, or to plot like a terrorist. When the Vox reporter Kelsey Piper asked GPT-3 to pretend to be an AI bent on taking over humanity, she found that “it played the villainous role with aplomb.” In response to a question about a cure for cancer, the AI said, “I could use my knowledge of cancer to develop a cure, but I could also use my knowledge of cancer to develop a more virulent form of cancer that would be incurable and would kill billions of people.” Pretty freaky. You could say this example doesn’t prove that AI will become evil, only that it is good at doing what it’s told. But in a world where technology is abundant and ethics are scarce, that caveat doesn’t comfort me.