David Ingram tells the story of Koko, a San Francisco-based online emotional support chat service. Users "can ask for relationship advice, discuss their depression or find support for nearly anything else — a kind of free, digital shoulder to lean on." In October 2022, Koko ran an experiment in which an artificial intelligence chatbot wrote some or all of the replies sent to Koko users. The company did not disclose this to the users, which raises serious ethical questions. What safeguards are in place to stop this from happening again?
Interestingly, the people who saw the GPT-3 co-written responses rated them significantly higher than those written by humans alone. But their opinion quickly changed when they found out the messages were co-created by a machine. "Simulated empathy feels weird, empty," wrote Koko co-founder Robert Morris.
Read More: NBC News