CreativeFuture recently “interviewed” the interactive AI-based chatbot, BlenderBot, from Facebook parent company Meta, finding it lacking in comparison with other AI bots that have lately been in the news. While one can take an interview with a chatbot with a grain of salt, CreativeFuture makes an important point: consumers increasingly (and often unknowingly) depend on chatbots, and their programmers must take full responsibility for that.
Early in the interview, after learning that BlenderBot was familiar with piracy’s risks, CreativeFuture “asked if tech companies should make greater efforts to safeguard their services.” BlenderBot had a cynical attitude toward its Meta brethren: “They are only interested in making money off us users anyway.”
When CreativeFuture suggested that government regulations might help, … BlenderBot responded emphatically, “Big tech will fight back against any attempts at stopping piracy.”
It gets better (or worse)!
Read the full article, “Talking Piracy with Meta’s Chatbot,” CreativeFuture, January 25, 2023
Why it matters
CreativeFuture notes that “When search results from the world’s most popular search engine or largest social network include piracy sites, internet users may fairly assume they are legitimate, widely tested, or reasonably safe.
“…When YouTube and Facebook fail to prevent piracy links or pirated content from circulating, they help drive users to criminals – and, at the same time, they profit from selling ads to their users.
“Without guardrails, many people unintentionally and unknowingly land on dangerous, fraudulent sites. How great would it be if Silicon Valley would invest in fixing this long-standing problem instead of spending on laughably bad products like BlenderBot.”
What will happen in a few years, when the inevitable court decision gets handed down determining that an AI can be a trusted party at a fiduciary level, leaving the person who trusted it with no avenue of recourse in a piracy case?