“Humans lie and manipulate each other’s emotions all the time, but at least we can reasonably guess at someone’s motivations, agenda and methods. With AI, we can’t.”

Ethicist Carissa Véliz argues that chatbots that use emojis are emotionally manipulative: without appropriate safeguards, the technology could undermine people’s autonomy.

A 2021 study found that people consistently underestimated how susceptible they were to misinformation.

It would be more ethical to design chatbots to be noticeably different from humans. To minimize the possibility of manipulation and harm, we need to be reminded that we are talking to a bot.
In the long run, ethics is good for business. Tech companies stand a better chance of making ethical products — and thriving — if they avoid deception and manipulation.
