This AI System Can Create And Embellish Text Content But Its Creators Won’t Release It Publicly


Another artificial intelligence (AI) system is so good at generating text that its designers are refusing to release it to the public for fear of misuse.

The system, built by OpenAI (the research organisation backed by Elon Musk and Microsoft), is a text generator that can compose page-long responses to prompts, mimicking everything from fantasy prose to fake celebrity news stories and academic work. It builds on an earlier text-generating system the organisation released a year ago.

AI has been used to generate content for a long time, with varying degrees of success, but the technology has recently become remarkably good. OpenAI's underlying objective was for the system to come up with the next word in a sentence by considering the words that preceded it. To make this possible, it was trained on about 8 million web pages.

A set of demos that OpenAI posted online last week shows just how convincing, or frightening as the case may be, computer-written text can be. In many ways they read like the written equivalent of deepfakes, the powerful yet fake video and audio files made with AI. As OpenAI put it:

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
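The training objective described above, predicting the next word given all the previous words, can be illustrated with a toy sketch. The snippet below is not OpenAI's model: GPT-2 is a 1.5-billion-parameter transformer, while this is just a bigram counter over a few words of made-up text, but it shows the same "given this word, what usually comes next?" idea in miniature.

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word-prediction objective.
# Real models learn probabilities with a transformer network;
# this sketch simply counts which word follows which in a tiny corpus.
corpus = (
    "the model predicts the next word "
    "the model learns from text"
).split()

# Tally, for every word, the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" twice, "next" once -> "model"
```

The gap between this counter and GPT-2 is scale and architecture, not the objective: both are scored on how well they guess the next token.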

OpenAI has released its projects to the public in the past, but this time it is releasing only a much smaller model. The researchers said they would not make the full model freely available because of “concerns about malicious applications of the technology.” The technical paper they released is attached at the bottom of this article for researchers interested in building on the work done so far.

The organisation’s decision to withhold the full model is the latest sign of a growing unease in the tech world about building cutting-edge technology, AI in particular, without setting limits on how it can be used publicly.

To illustrate why they are keeping it from the public, OpenAI researchers fed the system the following Lord of the Rings-style prompt: Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

The system composed this appropriately violent continuation: The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.
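What happens under the hood in a demo like this is autoregressive generation: the model repeatedly predicts a next word, appends it to the text, and feeds the growing text back in. The sketch below imitates that loop with a toy bigram counter over an invented mini-corpus (GPT-2 instead samples from a learned probability distribution over tens of thousands of tokens).

```python
from collections import Counter, defaultdict

# Toy autoregressive generation loop. The corpus and its "predictions"
# are made up for illustration; only the loop structure mirrors how
# prompt-continuation systems extend a prompt one word at a time.
corpus = "a war cry rose and the orcs charged and the dwarf charged too".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt, steps):
    """Greedily extend `prompt` by `steps` words using bigram counts."""
    words = prompt.split()
    for _ in range(steps):
        options = following[words[-1]]
        if not options:  # no observed continuation: stop early
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 3))
```

A real system differs in one important way: instead of always taking the single most frequent continuation, it samples, which is why the same prompt can yield different stories each run.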

You can see how such a system could easily misrepresent what someone originally meant. Imagine combining this text generator with a deepfake video of a world leader announcing a nuclear strike on an enemy nation, and that nation responding in kind before the fake is exposed. Disaster, right? Or consider a publication misquoting a celebrity, among other abuses. In the hands of bad actors, especially in the fake-news era, it could be a dangerous tool. Imagine how dramatically our lives would change if fake movie and app reviews became so believable that they started to affect a business's income.

The technology could also be valuable for a range of everyday applications, such as helping writers sharpen what they want to say, or making voice assistants more conversational.

Download: GPT-2 technical paper [576.43 KB]
