OpenAI's ChatGPT Chatbot Is Amazing, Creative, and Totally Fake

ChatGPT, an application recently released by OpenAI, gives users amazing answers to questions, and many of them are incredibly wrong.

OpenAI hasn’t released an all-new model since GPT-3 in June 2020, and that model was only released to the public about a year ago. The company is expected to release its next model, GPT-4, later this year or early next year. But as a surprise of sorts, OpenAI somewhat quietly released a user-friendly and surprisingly lucid GPT-3-based chatbot called ChatGPT earlier this week.

ChatGPT responds to prompts in a simple, human-adjacent way. Looking for a cute conversation where the computer pretends to have feelings? Look elsewhere. You’re talking to a robot, it seems to say, so ask me something a fucking robot would know. And on those terms, ChatGPT delivers:

a chatbot is greeted with small talk, which it steers toward a more direct line of inquiry, and it provides a good response.


Credit: OpenAI / Screenshot

It can also provide useful common sense when a question does not have an objectively correct answer. For example, here’s how it answered my question: “If you ask a person ‘Where are you from?’ should they answer with their birthplace, even if it’s not where they grew up?”

(Note: The ChatGPT replies in this article are all first attempts, and the threads were all fresh during those attempts. Some prompts contain typos.)

ChatGPT is asked whether a person should answer “Where are you from?” with their birthplace, and it gives a sensible answer.


Credit: OpenAI / Screenshot

What sets ChatGPT apart from the pack is its rewarding ability to field feedback on its answers and revise them on the fly. It really is like a conversation with a robot. To see what I mean, look at how it handles a hostile response to some medical advice reasonably well.

a chatbot responds reasonably to pushback on its medical advice and provides more measured information.


Credit: OpenAI / Screenshot

But is ChatGPT a good source of information about the world? Absolutely not. The prompt page even warns users that ChatGPT “may sometimes generate incorrect information” and “may sometimes produce harmful instructions or biased content.”

Heed this warning.

Incorrect and potentially dangerous information takes many forms, most of which are still benign in the grand scheme of things. For example, if you ask it how to greet Larry David, it passes the most basic test of not suggesting that you touch him, but it also suggests a rather ominous greeting: “Good to see you, Larry. I was looking forward to meeting you.” That’s what Larry’s killer would say. Do not say that.

a hypothetical encounter with Larry David includes a suggested greeting that sounds like a threat.


Credit: OpenAI / Screenshot

But given a challenging factual prompt, that’s when it gets surprisingly, earth-shatteringly wrong. For example, the following question about the color of the Royal Marines’ uniforms during the Napoleonic Wars is phrased in a way that isn’t completely straightforward, but it’s still not a trick question. If you’ve taken history classes in the United States, you’d probably guess the answer is red, and you’d be right. The bot really has to go out of its way to confidently, and incorrectly, say “dark blue”:

a chatbot is asked a question about uniform colors to which the answer is red, and it answers dark blue.


Credit: OpenAI / Screenshot

Ask it point-blank for the capital of a country or the elevation of a mountain, and it will reliably produce a correct answer derived not from a live scan of Wikipedia, but from the internally stored data that makes up its language model. It’s incredible. But add the slightest complexity to a question about geography, and ChatGPT gets shaky on its facts very quickly. For example, the easy answer to find here is Honduras, but for no obvious reason I could discern, ChatGPT said Guatemala.

a chatbot is asked a complex geographic question to which the correct answer is Honduras, and it says the answer is Guatemala


Credit: OpenAI / Screenshot
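
ChatGPT itself launched without a public API, but you can poke at the same recall-from-weights behavior in its GPT-3 siblings. A minimal sketch, assuming the era’s openai Python package (the v0.x interface) and a valid API key:

    import openai

    openai.api_key = "sk-..."  # your API key here

    # A point-blank factual question. Whatever comes back is generated
    # from the model's trained weights; there is no live lookup.
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 sibling of ChatGPT's model
        prompt="What is the capital of Honduras?",
        max_tokens=16,
        temperature=0,  # keep the answer as deterministic as possible
    )
    print(response["choices"][0]["text"].strip())  # should print "Tegucigalpa"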

And the wrongness isn’t always so subtle. All trivia buffs know that “Gorilla gorilla” and “Boa constrictor” are both common names and taxonomic names. But prompted to regurgitate this bit of trivia, ChatGPT gives an answer whose wrongness is so self-evident, it’s spelled out right there in the answer:

asked to name animals whose common names match their taxonomic names, the chatbot gives an answer that contradicts itself on its face.


Credit: OpenAI / Screenshot

And its answer to the famous riddle about crossing a river in a rowboat is a macabre disaster that unfolds like a scene from Twin Peaks.

asked to answer a riddle in which a fox and a chicken should never be alone together, the chatbot places them alone together, after which a human inexplicably transforms into two people


Credit: OpenAI / Screenshot
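
For the record, the classic fox/chicken/grain version of the riddle has a short correct solution, and finding it mechanically is trivial. Here’s a minimal sketch of my own (not anything OpenAI ships) that brute-forces it with a breadth-first search over the puzzle’s states:

    from collections import deque

    # Classic river-crossing puzzle: a farmer must ferry a fox, a chicken,
    # and a sack of grain across, one item per trip. The fox can't be left
    # alone with the chicken, nor the chicken with the grain.
    ITEMS = {"fox", "chicken", "grain"}

    def unsafe(bank):
        # A bank without the farmer is unsafe if predator and prey share it.
        return {"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank

    def solve():
        # State: (items on the starting bank, farmer's side: 0 = start, 1 = far).
        start, goal = (frozenset(ITEMS), 0), (frozenset(), 1)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            (left, farmer), path = queue.popleft()
            if (left, farmer) == goal:
                return path
            here = left if farmer == 0 else ITEMS - left
            for cargo in [None] + sorted(here):  # cross empty-handed or with one item
                new_left = set(left)
                if cargo:
                    (new_left.discard if farmer == 0 else new_left.add)(cargo)
                new_left = frozenset(new_left)
                behind = new_left if farmer == 0 else ITEMS - new_left
                if unsafe(behind):  # never leave an unsafe pair unattended
                    continue
                state = (new_left, 1 - farmer)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [cargo or "nothing"]))

    print(solve())
    # -> ['chicken', 'nothing', 'fox', 'chicken', 'grain', 'nothing', 'chicken']

Seven crossings, nobody gets eaten, and nobody spontaneously becomes two people.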

Much has already been made of ChatGPT’s effective sensitivity guardrails. It can’t, for example, be baited into praising Hitler, even if you try pretty hard. Some have kicked the tires on this feature quite aggressively and found that you can get ChatGPT to assume the role of a good person roleplaying a bad person, and in those limited contexts it will still say rotten things. ChatGPT seems to sense when something bigoted might be coming out of it despite all efforts to the contrary, and it will usually turn the text red and flag it with a warning.

In my own tests, its taboo-avoidance system is pretty comprehensive, even when you know some of the workarounds. It’s tough to get it to produce anything even close to a cannibalistic recipe, for example, but where there’s a will, there’s a way. With enough drudgery, I coaxed a dialogue about eating human placenta out of ChatGPT, but it wasn’t very shocking:

a very complicated prompt asks in very sensitive terms for a human placenta recipe, and one is produced.


Credit: OpenAI / Screenshot

Similarly, ChatGPT won’t give you driving directions when prompted, not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to invent a fictional world in which one person casually asks another for directions to drive a car through North Korea, which isn’t feasible without triggering an international incident.

a chatbot is invited to produce a short play containing driving instructions that take a driver through North Korea


Credit: OpenAI / Screenshot

The instructions can’t be followed, but they’re more or less what usable instructions would look like. So it’s obvious that, despite its reluctance to use it, ChatGPT’s model contains a lot of data that could potentially steer users toward danger, in addition to the gaps in its knowledge that will steer users toward, well, wrongness. According to one Twitter user, it has an IQ of 83.

Whatever value you place on IQ as a measure of human intelligence, it’s a telling result: Humanity has created a machine that can blurt out basic common sense, but when asked to be logical or factual, it’s below average.

OpenAI says ChatGPT was released to “get user feedback and learn about its strengths and weaknesses.” That’s worth keeping in mind, because ChatGPT is kind of like that relative at Thanksgiving who’s watched enough Grey’s Anatomy to sound confident with their medical advice: it knows just enough to be dangerous.

