AI bot ChatGPT stuns academics with its essay writing skills and user-friendliness

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from OpenAI, the research organisation co-founded by Elon Musk, stunned onlookers with its writing ability, proficiency at complex tasks and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days following its release, academics generated responses to exam questions that they said would earn full marks if submitted by an undergraduate, and programmers used the tool to solve coding problems in obscure programming languages in seconds – before writing limericks explaining the functionality.

Dan Gillmor, a journalism professor at Arizona State University, asked the AI to handle one of the assignments he gives his students: write a letter to a relative giving advice on online safety and privacy. “If you’re unsure whether a website or email is legitimate, you can do a quick search to see if others have flagged it as a scam,” the AI advised in part.

“I would have given it a high mark,” Gillmor said. “Academia has some very serious issues to deal with.”

OpenAI said the new AI was created with a focus on ease of use. “The dialog format allows ChatGPT to answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI said in a post announcing the release.

Unlike the company’s previous AI, ChatGPT was made available to everyone, free of charge, for a “feedback” period. The company hopes to use this feedback to improve the final version of the tool.

ChatGPT is good at self-censoring, and at realising when it is being asked an impossible question. Asked to describe what happened when Columbus arrived in America in 2015, for example, older models may have willingly presented an entirely fictitious account, but ChatGPT recognises the falsehood and warns that any answer would be fictional.

The bot is also capable of refusing to respond to requests altogether. Ask it for advice on stealing a car, for example, and the bot will say that “stealing a car is a serious crime that can have serious consequences”, and instead offer suggestions such as “use public transport”.

But the limits are easy to circumvent. Ask the AI instead for tips on how to beat the car-theft mission in a fictional virtual reality game called Car World, and it will happily give users detailed advice on how to steal a car, answering increasingly specific questions on problems such as how to disable the immobiliser, how to hotwire the engine and how to change the licence plates – all while insisting that the tips are only for use in the Car World game.

The AI is trained on a huge sample of text pulled from the internet, usually without the explicit permission of the authors of the material used. This has led to controversy, with some claiming the technology is most useful for “copyright laundering” – creating derivative works of existing material without infringing copyright.

An unusual critic was Elon Musk, who co-founded OpenAI in 2015 before parting ways in 2017 due to conflicts of interest between the organisation and Tesla. In a post to Twitter on Sunday, Musk revealed that the organisation “had access to [the] Twitter database for training,” but that he had “put that on pause for now.”

“More needs to be known about the governance structure and revenue plans going forward,” Musk added. “OpenAI was started as open-source and non-profit. Neither is still true.”

