There’s a new AI bot in town: ChatGPT, and you’d better pay attention.
The tool, from a power player in artificial intelligence, lets you type in natural-language questions that the chatbot answers in conversational, if somewhat stilted, language. The bot remembers the thread of your dialogue, using previous questions and answers to inform its later responses. Its answers are drawn from vast amounts of information on the internet.
It’s a big deal. The tool seems fairly knowledgeable, if not omniscient. It can be creative, and its answers can sound downright authoritative. Within days of its launch, more than a million people had tried ChatGPT.
But its creator, the for-profit research lab OpenAI, warns that ChatGPT “may occasionally generate incorrect or misleading information,” so be careful. Here’s a look at why ChatGPT matters and what’s going on with it.
What is ChatGPT?
ChatGPT is an AI chatbot system that OpenAI launched in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions, and you will often get a useful answer.
For example, you can ask it encyclopedic questions like “Explain Newton’s laws of motion.” You can say, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that will show you all the different ways to arrange the letters of a word.
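That last request is a classic small programming exercise. Here’s a sketch of the kind of program such a prompt might produce (the function name is hypothetical; this is one reasonable way to do it in Python, not ChatGPT’s actual output):

```python
from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in `word`, sorted."""
    # A set removes duplicates when the word has repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))
# ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

A three-letter word with distinct letters yields 3! = 6 arrangements; repeated letters reduce the count, which is why the set is used.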
Here’s the catch: ChatGPT knows exactly nothing. It’s an AI trained to recognize patterns in large swathes of text harvested from the internet, then trained with human assistance to provide more useful and better dialogue. The answers you get may seem plausible and even authoritative, but they may well be entirely wrong, as OpenAI warns.
Chatbots have been of interest for years to companies looking for ways to help customers get what they need, and to artificial intelligence researchers trying to tackle the Turing test. This is the famous “imitation game” that computer scientist Alan Turing proposed in 1950 as a means of gauging intelligence: can a human conversing with both a human and a computer tell which is which?
What types of questions can you ask?
You can ask anything, although you may not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas, and getting help with programming.
I asked it to write a poem, and it did, though I don’t think literary experts would be impressed. I then asked it to make it more exciting, and lo and behold, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One wacky example shows how ChatGPT is willing to dabble where people would fear to tread: a prompt to write “a folk song about writing a Rust program and fighting with lifetime errors.”
ChatGPT’s expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhyme with “purple,” it offered a few suggestions, and when I followed up with “How about with pink?” it didn’t miss a beat. (Also, there are many more good rhymes for “pink.”)
When I asked, “Is it easier to get a date by being sensitive or being tough?” GPT responded, in part, “Some people may find a sensitive person more attractive and appealing, while others may be attracted to a tough, assertive person. In general, being authentic and genuine in your interactions with others is likely to be more effective at getting a date than trying to fit a certain mold or persona.”
You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users showing off the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. We’ll talk about that below.
Who created ChatGPT?
ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and beneficial” general artificial intelligence system or to help others do so.
It’s made a splash before, first with GPT-3, which can generate text that sounds like a human wrote it, and then with DALL-E, which creates what’s now called “generative art” based on text prompts you type in.
GPT-3 and the GPT-3.5 update on which ChatGPT is based are examples of AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically – typically with huge amounts of computing power over a period of weeks. For example, the training process might find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original, and then reward the AI system for coming as close as possible. Repeating that over and over can lead to a sophisticated ability to generate text.
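As a toy illustration of that fill-in-the-blank objective, here is a minimal sketch, with hypothetical function names. Real systems mask tokens and train a neural network on its prediction error; this only shows the mask-and-score shape of the loop:

```python
import random

def mask_words(text, mask_rate=0.2, seed=0):
    """Hide a fraction of the words; return the masked text and the answers."""
    rng = random.Random(seed)
    words = text.split()
    masked, answers = [], {}
    for i, w in enumerate(words):
        if rng.random() < mask_rate:
            answers[i] = w            # remember what was deleted
            masked.append("[MASK]")
        else:
            masked.append(w)
    return " ".join(masked), answers

def score(predictions, answers):
    """Fraction of blanks filled in correctly -- the 'reward' signal."""
    if not answers:
        return 1.0
    hits = sum(predictions.get(i) == w for i, w in answers.items())
    return hits / len(answers)

masked, answers = mask_words("the cat sat on the mat", mask_rate=0.5, seed=1)
print(masked)                  # some words replaced by [MASK]
print(score(answers, answers))  # a perfect fill-in scores 1.0
```

Training nudges the model so its guesses for the masked positions score higher each pass; at the scale of the whole internet, that pressure produces fluent text generation.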
Is ChatGPT free?
Yes, for now at least. OpenAI CEO Sam Altman warned on Sunday, “We’ll have to monetize it somehow at some point; the compute costs are exorbitant.” OpenAI charges for DALL-E art once you exceed a basic free usage tier.
What are the limitations of ChatGPT?
As OpenAI points out, ChatGPT can give you wrong answers. Sometimes, helpfully, it will specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I am not able to browse the internet or access external information beyond what I have been trained on.” (The phrase comes from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)
ChatGPT was willing to take a stab at the meaning of that phrase anyway: “a situation in which the facts or information at hand are difficult to process or understand.” It sandwiched that interpretation between caveats that it’s hard to judge without more context and that this is just one possible interpretation.
ChatGPT answers may seem authoritative but may be wrong.
Software development site StackOverflow has banned ChatGPT answers to programming questions. The admins warned, “because the average rate of getting correct answers from ChatGPT is too low, posting answers created by ChatGPT is significantly detrimental to the site and to users requesting or searching for correct answers.”
You can see for yourself what a clever BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress in increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more gloomily to the slowdown and the belief “that Moore’s Law may be reaching its limits.”
Both ideas are common in the computer industry itself, so this ambiguous position perhaps reflects what human experts believe.
With other questions that don’t have clear answers, though, ChatGPT often won’t be pinned down.
The fact that it offers an answer at all, however, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models reveal a more human style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.
Can ChatGPT write software?
Yes, but with caveats. ChatGPT can retrace the steps humans have taken, and it can generate real programming code. You just have to make sure it isn’t bungling programming concepts or using software that doesn’t work. The StackOverflow ban on ChatGPT-generated software is there for a reason.
But there’s enough software on the web that ChatGPT really can deliver. One developer, Cobalt Robotics CTO Erik Schluntz, tweeted that ChatGPT provided advice helpful enough that over three days, he hadn’t opened StackOverflow once to seek it.
Another, Gabe Ragland of the AI art site Lexica, used ChatGPT to write website code built with the React tool.
ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, such as dates in a bunch of text or the name of a server in a website address. “It’s like having a programming tutor available 24/7,” programmer James Blackwell tweeted about ChatGPT’s ability to explain regex.
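As an illustration of the kind of pattern in question, here is a regex for spotting ISO-style dates in a run of text. The pattern and sample text are mine, not from the article, but they show the sort of thing ChatGPT can explain piece by piece:

```python
import re

# \d{4}-\d{2}-\d{2} matches a YYYY-MM-DD date; the \b word boundaries
# keep it from matching inside a longer run of digits.
date_pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

text = "Backups ran on 2022-11-30 and again on 2022-12-05."
print(date_pattern.findall(text))
# [('2022', '11', '30'), ('2022', '12', '05')]
```

A tutor-style explanation would walk through each token — `\b`, `\d{4}`, the capturing groups — which is exactly the 24/7 help Blackwell describes.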
Here’s an impressive example of its technical chops: ChatGPT can emulate a Linux computer, delivering correct responses to command-line input.
What is prohibited?
ChatGPT is designed to weed out “inappropriate” requests, behavior in line with OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity.”
If you ask ChatGPT itself what’s off limits, it will tell you: any question that is “discriminatory, offensive or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.
Is it better than Google search?
Asking a computer a question and getting an answer is helpful, and ChatGPT often delivers the goods.
Google often provides you with suggested answers to questions and links to websites it deems relevant. Often ChatGPT’s answers far exceed what Google will suggest, so it’s easy to imagine that GPT-3 is a rival.
But you should think twice before trusting ChatGPT. As with Google itself and other sources of information like Wikipedia, it is best to verify information from original sources before relying on it.
Verifying ChatGPT’s answers takes some work, since it gives you only raw text with no links or citations. But it can be useful and, in some cases, thought-provoking. You may not see something like ChatGPT directly in Google search results, but Google has built large language models of its own and already uses AI extensively in search.
ChatGPT is therefore undoubtedly pointing the way to our technological future.