Even Google employees don’t like its AI chatbot Bard
A new report reveals Google employees were unhappy with the AI before launch
Google’s recently released AI chatbot Bard launched too early, according to several Google employees who repeatedly criticized the abilities and safety of the tool before its launch.
Citing internal screenshots and comments from 18 current and former Google employees, a new report from Bloomberg reveals that many Google staff asked the company not to release the AI model. One employee called it "a pathological liar," another called it "cringe-worthy," while a third suggested its advice on topics like scuba diving was dangerous and "would likely result in serious injury or death."
➡️ The Shortcut Skinny: Google Bard
🚫 Several Google employees warned the company against releasing Bard
😬 A new report outlines their ethical concerns with the AI chatbot
👋 Google reportedly went ahead and released the AI tool anyway
🏃♀️ The workers say Google is trying to keep up with the competition
The employees allege that Google released Bard despite these functional and ethical concerns because it sees itself as in a race to keep up with the competition, such as Microsoft’s new AI search chatbot for Bing.
The debut of OpenAI's ChatGPT in November last year reportedly sent the tech giant scrambling to come out with its own generative AI model and rapidly integrate the tech into its existing products, while apparently sidelining ethical questions along the way.
Workers speaking to Bloomberg said Google's internal AI ethics group was told not to obstruct the development of its generative AI tools, leaving its members demoralized and disempowered.
In response to Bloomberg, a company spokesperson said Google was “continuing to invest in the teams that work on applying our AI Principles to our technology” and that developing AI responsibly remained one of its top priorities.
Much attention has been paid to artificial intelligence in recent months as companies across the tech industry and beyond have started to integrate generative AI models into their products. Google’s Bard already had an uneasy introduction to the world when it fluffed its first public demo, and Microsoft’s Bing AI similarly stumbled out of the gate.
As more companies – from CNET to Snapchat – begin using the models, discussions over when an AI is ready to be released for public use, and what responsibilities should fall on it and its creators, will only become more pressing. Some tech leaders have already heralded ChatGPT as the iPhone moment of artificial intelligence; the stakes are high.