Google works by crawling billions of web pages, indexing that content, and then ranking it in order of the most relevant answers. It then spits out a list of links to click on. ChatGPT offers something more enticing to harried internet users: a single response based on its own search and synthesis of that information. ChatGPT was trained on millions of websites to glean not only the skill of human conversation, but also the information itself, so long as it was published on the internet before late 2021.(1)
I went through my own Google search history from the past month and put 18 of my Google queries into ChatGPT, cataloging the responses. I then ran the queries through Google once more to refresh my memory. The end result: in my view, ChatGPT's answer was more useful than Google's in 13 of the 18 examples.
“Useful” is of course subjective. What do I mean by the term? In this case, clear and complete answers. A question about whether condensed milk or evaporated milk was better for pumpkin pie over Thanksgiving elicited a detailed (if slightly wordy) response from ChatGPT explaining that condensed milk would lead to a sweeter pie. (Naturally, the superior choice.) Google mostly provided a list of links to recipes to click through, with no clear answer.
This points to ChatGPT's main threat to Google down the line: it gives a single, immediate response that requires no further scanning of other websites. In Silicon Valley parlance, that is a “frictionless” experience, something of a holy grail when online consumers overwhelmingly favor services that are quick and easy to use.
Google has its own version of summarized answers to certain queries, but these are pulled from top-ranking web pages and are usually brief. It also has its own proprietary large language model, called LaMDA, which is so good that one of the company's engineers believed the system was sentient.
So why doesn't Google generate its own singular answers to queries, as ChatGPT does? Because anything that stops people from scanning search results would hurt Google's transactional business model of getting people to click on ads. According to data compiled by Bloomberg, some 81% of Alphabet Inc.'s $257.6 billion in revenue in 2021 came from advertising, much of it from Google's pay-per-click ads.
“Everything is designed to get you to click on a link,” says Sridhar Ramaswamy, who oversaw Google's advertising and commerce operations between 2013 and 2018. He says generative search from systems such as ChatGPT will disrupt Google's traditional search business “in a massive way.”
“It’s just a better experience,” he added. “The goal of Google search is to get you to click on links, ideally ads, and any other text on the page is just filler.” Ramaswamy co-founded a subscription search engine called Neeva in 2019, which plans to roll out its own generative search feature that can summarize web pages, complete with footnotes, in the coming months.
ChatGPT does not reveal the sources of its information. In fact, chances are its own creators can't fully explain how it generates the answers it offers. That underscores one of its greatest weaknesses: sometimes its answers are just plain wrong.
Stack Overflow, a Q&A site for coders, temporarily banned its users from posting ChatGPT-generated answers on Monday, saying the thousands of answers programmers were sharing from the system were often incorrect.
My own experience confirms this. When I put my 12-year-old daughter's English essay question into the system, it offered a long and eloquent analysis that seemed authoritative. But the answer was also riddled with errors, such as stating that a literary character's parents were dead when they weren't.
What's troubling about this flaw is that the inaccuracies are hard to spot, especially when ChatGPT sounds so confident. Its responses “generally look good,” according to Stack Overflow, and by OpenAI's own admission they often sound plausible. OpenAI initially trained its system to be more cautious, but the result was that it declined questions it knew the answer to. Tilted the other way, the result is something like a fraternity student bluffing his way through an essay he hasn't studied for. Fluent bullshit.
It is unknown how common ChatGPT's errors are. An estimate doing the rounds on Twitter puts the rate at 2% to 5%. It may be higher. That will make people reluctant to use ChatGPT for important information.

Another strength of Google: it primarily makes money on transactional search queries for products and on navigational searches to other sites, like people typing in “Facebook” or “YouTube.” Those types of queries made up many of the top 100 Google searches of 2022. As long as ChatGPT doesn't link to other sites, it doesn't trespass too deeply into Google's territory.

But both of these issues could change over time. ChatGPT may become more accurate as OpenAI expands its model's training to more current parts of the web. To that end, OpenAI is working on a system called WebGPT, which it hopes will produce more accurate answers to search queries, complete with source citations. A combination of ChatGPT and WebGPT could be a powerful alternative to Google. And ChatGPT already gives more accurate answers than earlier OpenAI systems.
ChatGPT amassed 1 million users in about five days. That is an extraordinary milestone: it took Instagram roughly 2.5 months to reach that figure, and Facebook 10 months. OpenAI doesn't speculate publicly on its future applications, but if its new chatbot starts sharing links to other websites, especially ones that sell things, it could spell real danger for Google.
More from Bloomberg Opinion:
Creative AI Generates Messy Problems: Parmy Olson
ChatGPT could make democracy even messier: Tyler Cowen
The AI panned my screenplay. Can it break Hollywood?: Trung Phan
(1) ChatGPT was fine-tuned from a model in OpenAI's GPT-3.5 series of large language models, which was trained on a mix of text and code from before Q4 2021.
This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.
More stories like this are available at bloomberg.com/opinion