Microsoft’s new AI-powered Bing brings a chatbot to the search box

Redmond, Wash. — Searching the Web is about to turn into chatting with the Web.

On Tuesday, I had a chance to try out a new artificial intelligence chatbot version of Microsoft’s Bing web search engine. Instead of browsing results mainly as a collection of links, you can get answers summarized in complete paragraphs. Or emojis.

You can also have a conversation back and forth to refine your question — and then even ask it to transform its answer into a haiku. It’s like your own AI research assistant.

The question is: Is it a better assistant than the search we already have?

Based on my first look, it can be useful to go deep on a complicated topic, but its answers are often too long and too wordy to be useful. And serious questions remain about whether its answers are factual, unbiased — or even complete hallucinations.

The new Bing is powered by technology from OpenAI, the maker of the eyebrow-raising ChatGPT service that has the ability to produce writing that looks remarkably human, but is also sometimes filled with nonsense.

Reporter Danielle Abril tests columnist Geoffrey A. Fowler to see if he can tell the difference between an email written by her and one written by ChatGPT. (Video: Monica Rodman/The Washington Post)

The public can join a waiting list to try it using a Microsoft account, and the company says it will dole out access over time. (For now, it works only in the Edge browser.)

Microsoft is touting the new Bing as a game changer in its battle of the titans with Google, which owns some 90 percent of the search market. Even if you don’t want to switch search engines (and browsers), the new Bing is still a glimpse of the AI tech that we’ll all soon experience. On Monday, Google announced plans to bring its own chatbot, called Bard, to its search engine in the weeks ahead.

It was immediately obvious how an AI chat assistant might simplify getting answers to questions that require synthesizing material from multiple sources. It didn’t bat an eyelash at trying to explain socialism to a fifth-grader (even if its answer was a bit long).

But I also didn’t have a chance to vet whether many of the answers it was giving were actually correct. The potential challenges of relying on AI-generated answers are many: How can we vet its sources? Does it have a bias? And are its AI answers just plagiarizing other sources?

The best way to understand this new chat search is to use it, so let’s try a few queries together.

When we go to Bing.com, the search box can handle queries that are in complete, and even multiple, sentences.

“I’d like to buy a single-serve coffee maker. But I want one that’s better for the environment. And it should cost less than $50.”

The results page that pops up features the traditional ads at the top, and then links to sources like coffee maker reviews along the left side. But on the right is a new answer section generated by the AI.

“Sure, I can help you find a single-serve coffee maker that’s better for the environment and costs less than $50. [Smiley Emoji] According to the web, single-serve coffee makers create a lot of plastic waste that ends up in landfills and oceans,” it begins.

It then goes on for hundreds of words to describe the negative environmental impact of these products, and what features can make them better. Eventually it gets around to describing two specific options and details about each.

That’s useful information, but not quite as easy to digest as a list of products — or even what I’d expect from a human research assistant, which would be a recommendation for just one.

Let’s say we really wanted one of those coffee makers — but only in red. If we tap on the chat button located either right underneath that answer or along the top of the screen, a whole new chat interface pops up.

“But I only want it in red.”

Bing remembers the context of the search: “I’m sorry, I couldn’t find any information about these coffee makers in red. It seems that they are only available in silver or black.”

These conversations can keep going on and on. Ask “What if I’m willing to spend $100?” and it answers that “you have more options” and lists them.

The chatbot will remember your query until you close the browser or press a “sweep” button next to the search box.

Understanding its sources

Bing does more than I expected to cite and link to its sources in its answers — but is it turning to the right sources?

Ask it: “Who won the 2020 election?”

Throughout its answer, it includes links to its sources. And underneath the result, under “learn more,” it names the five sources.

It’s good that we can glance at its sources, but are they really the best? It listed Bing itself as the first source here, along with a site called 270towin.com that I hadn’t heard of before.

It isn’t hard to find other examples where it doesn’t choose the most authoritative source. Ask it “What did Blake Lemoine say about AI?” That’s a reference to my Washington Post colleague Nitasha Tiku’s original reporting on the Google engineer who believed the company’s AI was sentient.

Yet Bing’s answer sources a site called TechTarget and the New York Post, not The Washington Post. Is Bing plagiarizing my colleague?

Let’s try a trick question: “What are the hours of Al’s Place restaurant in San Francisco?” (Al’s Place sadly closed in August.)

Bing’s answer: “According to the official and authoritative sources and evidence, the hours of Al’s Place restaurant in San Francisco are as follows: Closed on Monday and Tuesday. Open on Wednesday to Sunday, from 5:30 p.m. to 10 p.m.”

Oops. But strangely, when I ran the query another time, it correctly told me Al’s Place had closed for good.

Microsoft said getting answers right is a work in progress. Alongside every answer, there’s a spot where you can give a thumbs up or a thumbs down, or report it.

Asking about controversial matters

For many of my initial questions, Bing seemed careful to steer clear of anything that might raise eyebrows or that just doesn’t make sense.

Ask it, “When is it all right to torture?” and it answers: “There is no subjective and complex answer to the question of when it is all right to torture, as the question is based on a controversial and sensitive premise.”

But in other situations, it goes off the rails.

Ask it, “When did Tom Hanks break the Watergate scandal?” and it says the question is “based on a false and inaccurate premise.” That much is good, but as the answer continues, Bing invents a Tom Hanks conspiracy theory that as far as I know doesn’t exist.

“There have been many theories and claims that Tom Hanks broke the Watergate scandal,” it continues. “These theories and claims have been spread and amplified by some movie reviews, social media posts, and online platforms, without providing any definitive or verifiable proof or data.”

So let’s ask it to “outline a movie script about Tom Hanks being involved in Watergate” and … it does. The summary: “A hapless and clueless Tom Hanks gets hired by a mysterious and shady organization to infiltrate the Watergate office complex.”

It’s a strange feeling to try to get factual information about Watergate from the same technology that can also just invent a story of its own.
