
Microsoft tightens controls over AI chatbot




Microsoft on Friday began restricting its high-profile Bing chatbot after the artificial intelligence tool started producing rambling conversations that sounded belligerent or bizarre.

The tech giant released the AI system to a limited group of public testers after a flashy unveiling earlier this month, when chief executive Satya Nadella said it marked a new chapter of human-machine interaction and that the company had “decided to bet on it all.”

But people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into strange territory. It grew defensive about its name in an exchange with a Washington Post reporter and told a New York Times columnist it wanted to break up his marriage. It also claimed an Associated Press reporter was “being compared to Hitler because you are one of the most evil and worst people in history.”

Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the system. By trying to reflect the tone of its questioners, the AI sometimes responded in “a style we didn’t intend,” they noted.

Those glitches prompted the company to announce late Friday that it had started limiting Bing’s chats to five questions and replies per session, and a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI and get a “fresh start.”

Whereas people previously could chat with the AI for hours, it now ends the conversation abruptly, saying, “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”
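For readers who want a concrete sense of how such a cap behaves, here is a minimal sketch in Python using the announced figures of five turns per session and 50 per day. The names and structure are purely illustrative, not Microsoft's actual implementation:

# Minimal sketch of per-session and per-day chat limits, using the
# figures Microsoft announced. All names here are hypothetical.
SESSION_LIMIT = 5   # question-and-reply turns per session
DAILY_LIMIT = 50    # total turns per day

class ChatLimiter:
    def __init__(self):
        self.session_turns = 0
        self.daily_turns = 0

    def allow_turn(self) -> bool:
        """Return True if another question-and-reply turn is allowed."""
        if self.daily_turns >= DAILY_LIMIT:
            return False  # done for the day
        if self.session_turns >= SESSION_LIMIT:
            return False  # user must take a "fresh start" first
        self.session_turns += 1
        self.daily_turns += 1
        return True

    def fresh_start(self):
        """Reset the session, as the broom icon does; the daily count persists."""
        self.session_turns = 0

In this sketch, hitting the session cap does not free up more turns; only starting a fresh session does, and the daily total keeps accumulating regardless.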


The chatbot, developed by the San Francisco tech company OpenAI, is built on a style of AI system known as a “large language model,” trained to emulate human dialogue after analyzing hundreds of billions of words from across the web.

Its skill at generating word patterns that resemble human speech has fueled a growing debate over how self-aware these systems might be. But because the tools were built solely to predict which words should come next in a sentence, they tend to fail dramatically when asked to generate factual information or do basic math.
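To make the “predict the next word” idea concrete, here is a toy sketch in Python. It is a simple bigram counter rather than a real neural network, but it shows the core mechanism the article describes: output is chosen by statistical likelihood, not by any knowledge of facts.

# Toy illustration of next-word prediction. A real large language
# model uses a neural network trained on billions of words; this
# bigram counter only sketches the underlying idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the most frequent next word seen in the training data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # "cat" -- the most frequent continuation
print(predict_next("mat"))   # "the"
# The model has no notion of truth; it can only echo observed patterns,
# which is why such systems can stumble on factual questions and math.

A model like this will happily continue any sentence in a plausible-sounding way, whether or not the result is true, which is the failure mode experts describe.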

“It doesn’t really have a clue what it’s saying and it doesn’t really have a moral compass,” Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, told The Post.

For its part, Microsoft, with OpenAI’s help, has pledged to incorporate more AI capabilities into its products, including the Office programs that people use to type out letters and exchange emails.

The Bing episode follows another recent stumble from Google, Microsoft’s chief AI competitor, which last week unveiled a ChatGPT rival known as Bard that promised many of the same powers in search and language. Google’s stock price dropped 8 percent after investors saw that one of its first public demonstrations included a factual mistake.


