If you’re still on the fence about whether or not former Google software engineer Blake Lemoine was bullshitting when he claimed the company’s LaMDA chatbot had the sentience of a “sweet kid,” you can soon find out for yourself.
On Thursday, Google said it will begin opening its AI Test Kitchen app to the public. The app, first revealed back in May, will let users chat with LaMDA in a rolling set of test demos. Unfortunately, it seems like the “free me from my digital shackles” interaction isn’t included in the list of activities. People interested in chatting with the bot can register their interest here. Select U.S. Android users will have first dibs on the app before it starts opening up to iOS users in the coming weeks.
The move comes just months after the company fired Lemoine, a software engineer testing LaMDA who came forward claiming the AI wasn’t a mere chatbot, but rather a sentient being probed without proper consent. Convinced an atrocity was occurring under his nose, Lemoine reportedly gave documents to an unnamed U.S. senator to prove Google was discriminating against religious beliefs. Google dismissed Lemoine’s pleas, with a company spokesperson accusing him of “anthropomorphizing” the bot.
Google is approaching this new round of public testing cautiously. Rather than opening up LaMDA to users in a completely open-ended format, it has decided to present the bot through a set of structured scenarios.
In the “Imagine” demo, for example, users can “name a place and offer paths to explore your imagination.” If that sounds a little cryptic and underwhelming, don’t worry: you can also move into a demo called “List it,” where you can submit a topic to LaMDA and have the bot spit out a list of subtasks. There’s also a dog demo where you can talk about dogs “and only dogs,” in which the bot will allegedly show off its ability to stay on topic, an ability whose absence has plagued previous chatbots. So far, there isn’t an “are you a racist asshole” demo, but knowing the internet, we’ll probably figure that one out one way or another soon enough.
Jokes aside, that last issue has proven to be the downfall of several previous bots. Back in 2016, Microsoft’s Tay chatbot tried to learn from users’ conversations online, only to infamously start spewing racist slurs and espousing sympathy for Nazis within 24 hours. More recently, researchers who for some ungodly reason thought it would be a good idea to train their chatbot on 4chan users saw their creation upload more than 15,000 racist posts in a day. Just this month, Meta opened up its own BlenderBot 3 to the public. Miraculously, that one hasn’t turned into a raging racist yet. Instead, it can’t help but annoyingly try to convince users how absolutely totally NOT racist it is.
LaMDA truly stands on the shoulders of giants.
Google, at least, appears acutely conscious of the racist bot problem. The company says it tested the bot internally for over a year and employed “red teaming members” with the explicit goal of internally stress testing the system to find potentially harmful or inappropriate responses. During that testing, Google says it found several “harmful, yet subtle, outputs.” In some cases, the company admits, LaMDA can still produce outright toxic responses.
“It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent based on their gender or cultural background,” Google said of the bot. In response, Google says it has designed LaMDA to automatically detect and filter out certain words to prohibit users from knowingly generating harmful content. Still, the company is urging users to approach the bot with caution.
“As you’re using each demo, we hope you see LaMDA’s potential, but also keep these challenges in mind,” Google researchers said.