
When Chatbots Cross the Line

Are chatbots human? More important, who decides?
June 17th, 2022

By Jim Towey

The news that a Google engineer claimed that the company’s advanced chatbot under development is sentient and should be treated as a person (possibly with a soul) should come as no surprise. Chatbots are specifically designed to act like humans and to replace them. The deeper the fake, the better. The engineer produced transcripts of his conversations with his “Language Model for Dialogue Applications” (LaMDA) “friend” and concluded, not unreasonably, that it was sentient: it convincingly communicated emotions (sadness) and parroted preferences (favorite movies).

Within a nanosecond of the engineer’s public claim, Google suspended him, rallied its loyal ethics experts and scientists to discredit the claim, and assured the public that only good things would come from its secret research and development efforts in artificial intelligence.

Google’s AI principles pledge to pursue the common good and be accountable to humans. “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.” 

Who decides what’s “appropriate”?

Really? So Google gets to decide what “appropriate opportunities, relevant explanations, and appeal” comprise, and will also determine what “appropriate human direction and control” might entail? That’s a lot of “appropriates” if you ask me. I don’t recall any opportunity for consultation when, all across our country, human beings employed in customer service were replaced, company by company, government agency by government agency, by chatbots.

Google’s “responsible AI” website goes on to explain: “Empowering AI decision-makers and developers to take ethical considerations into account enables them to find new, innovative ways to drive your mission forward.” Google is free, then, to decide what is ethical and what isn’t. I find it interesting that Google employs hundreds of such ethics experts to assure the public that profits, the reason it is spending billions of dollars on AI, won’t involve messy human consequences.

Does anyone feel comforted that Google is taking ethical considerations into account? “Trust us, we’re ethical,” Google says.

Facebook too

Sounds a lot like what Facebook once said, until a whistleblower, data scientist Frances Haugen, blew its cover. National Public Radio’s take on this is worth re-reading. Here’s one excerpt: “Haugen told Congress that Facebook consistently chose to maximize its growth rather than implement safeguards on its platforms, just as it hid from the public and government officials internal research that illuminated the harms of Facebook products.”

And another: “During my time at Facebook, I came to realize a devastating truth: Almost no one outside of Facebook knows what happens inside Facebook,” Haugen told Congress. “The company intentionally hides vital information from the public, from the U.S. government, and from governments around the world.”

Facebook’s cynical but creative damage control resorted to a tactic criminals often employ: a name change. Facebook became Meta when the heat from Haugen scorched profits, and sadly, this diversionary tactic largely succeeded. This week Meta rolled out parental controls for Instagram and its addictive video games. The company knows full well that these controls, often complicated and confusing, will be employed by only the most vigilant and sophisticated parents and users. To me, it is damage control masquerading as sincere concern.

Google’s messy mess

Google now has its own little mess to clean up. It wants to have it both ways: to develop technology that is human-like, creations with perfect voice recognition and communication abilities, the conversational skills of a radio talk show host, and a special talent for feigned emotion, while disavowing any intention to produce human-like pseudo-beings. Our own native intelligence knows better.

Congress is leaving Google, Microsoft, Meta, Apple, and other huge AI research centers free to define what “sentience” is and where ethical lines fall. Well, you don’t have to be a sci-fi fan to know that human beings are at risk of even greater servitude to gadgetry and technology when science sprints and government regulators slumber. Covid, Ukraine, and January 6th occupied Congress’s attention these last few years, and now, belatedly, our leaders are turning to the runaway inflation and all-but-certain recession that are upon us.

Meanwhile, the AI industry, its insiders, and its “ethics experts” labor on in the shadows, unaccountable.

Where are our leaders? Is LaMDA the only one who cares?
