Will A Google AI Screw Up Your Wendy’s Drive-Thru Order Next?

[Image: Wendy's app icon — Tada Images/Shutterstock]

If you’ve ever had a fast food worker mess up your order, an AI-powered chatbot could soon be doing it instead. According to The Wall Street Journal, Wendy’s has trained a Google-based bot to take drive-thru orders. The training includes adding common terms to the bot’s lexicon — if you say “JBC,” it will know you want a junior bacon cheeseburger, and “Biggie Bag” will indicate you’re after one of the chain’s more popular meal deals. 

While the success of ChatGPT has led to a lot of hype and anxiety in recent months, Wendy’s partnership with Google apparently pre-dates its release — the two companies have been collaborating since 2021. One of the main areas of focus has been the chatbot’s speech. 

According to The Wall Street Journal’s report, the electronic order-taker isn’t going to have a robotic Microsoft Sam-like drone. Instead, it should be indistinguishable from a regular human employee. The chatbot is set to make its debut at a restaurant in Ohio next month, and Wendy’s says it won’t replace any jobs.

Instead, it is designed to speed up the ordering process and get drive-thru lines moving quicker. It’s also not linked to any of the existing restructuring plans Wendy’s has announced through 2025. Still, there may be cause for concern — especially if it’s a success, and companies decide to pair it with other fast-food-related tech like burger flipping robots. On the bright side, if Google’s previous AI releases are anything to go by, fast food staff shouldn’t sweat just yet.

Google’s AI ventures have gone wrong before

[Image: Google Bard logo]

Google’s recent ventures into AI haven’t exactly been successful, and may shine a spotlight on the concept’s faults more than its merits. Bard, Google’s answer to ChatGPT, was flawed from the get-go.

Shortly after the chatbot was unveiled, Google posted a GIF showing off some of its functions. One of the examples involved a question about the James Webb Space Telescope that Bard got very wrong before carrying on as normal. In Google’s defense, Bard isn’t the only large language model with this problem. The inaccuracy many similar models suffer from is compounded by the fact that they are convincing liars, delivering wrong answers with total confidence.

Beyond AI issues, there are other tech limitations that make this collaboration a terrible idea. Think of all the times you’ve asked a question or given a command to Alexa, Siri, or Google Home. Now think of the times it misheard you and either didn’t fulfill the request, or took a guess and ended up doing something totally unrelated. 

This problem gets worse if someone has an accent, or otherwise speaks non-standard English. While some fast food workers may have a reputation for messing up your order, chances are an AI isn’t going to do much better.
