Get ready to waste your day with this creepily accurate text-generating A.I.

Whether you believe it to be one of the most dangerous versions of artificial intelligence yet created or dismiss it as a massive, unnecessary PR exercise, there is no doubt that the GPT-2 algorithm created by the OpenAI research lab caused a lot of buzz when it was announced earlier this year.

Revealed in February, OpenAI said it had developed an algorithm that was too dangerous to release to the public. Although only a text generator, GPT-2 reportedly produced text so convincingly human that it could fool people into believing they were reading something written by a real person. To use it, all a user has to do is enter the beginning of a document and let the A.I. complete it. Give it the opening of a newspaper story, and it will even produce fictional “quotes.” Predictably, the media hyped this as the scary new face of fake news. And with potentially good reason.

Jump ahead a few months, and users can now try the A.I. out for themselves. The algorithm appears on a website called “Talk to Transformer,” run by machine learning engineer Adam King.

“For now, OpenAI has decided to release only small and medium-sized versions that aren’t as coherent, but still produce interesting results,” he writes on the website. “This site runs the new (May 3) medium-sized model, called 345M for the 345 million parameters it uses. If and when [OpenAI] release the full model, I’ll probably run it here.”

At a high level, GPT-2 works much like the predictive keyboards on mobile phones that suggest the word you’ll want to type next. However, as King notes, “While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That’s without ever being told that it would be evaluated on those tasks.”
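To make the predictive-keyboard comparison concrete, here is a deliberately tiny sketch of next-word prediction. It uses simple bigram counts instead of GPT-2’s neural network, and the corpus and function names are invented for illustration; the only point it shares with the real model is the training objective of guessing the word that comes next.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, length=5):
    """Greedily extend the prompt one most-likely next word at a time."""
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break  # no known continuation for the last word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = ("the cat sat on the mat and the cat ate the fish "
          "and the dog sat on the rug")
model = train_bigrams(corpus)
print(complete(model, "the cat", length=4))  # prints "the cat sat on the cat"
```

Even this toy shows why scale matters: with only word-pair counts the output loops quickly, whereas GPT-2’s 345 million parameters let it condition on long stretches of preceding text.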

The results are, frankly, a little disturbing. While it’s still prone to weird A.I.-generated nonsense, it’s nowhere near the level of the various neural networks used to generate chapters of a new A Song of Ice and Fire novel or Scrubs monologues. Fed the first paragraph of this story, for example, it did a pretty convincing job of producing something plausible, along with a bit of subject-matter knowledge to sell the effect.

To think this is the Skynet of fake news is probably going a bit far. But it’s definitely enough to send a little shiver down your spine.

Source: newstars.edu.vn