SAN FRANCISCO (Reuters) – A research group backed by Silicon Valley heavyweights on Thursday released a paper showing that technology that can generate realistic news stories from little more than a headline suggestion is poised to advance rapidly in the coming years.
OpenAI, the group backed by LinkedIn founder Reid Hoffman, among others, was founded to research how to make increasingly powerful artificial intelligence tools safer. It found, among other things, that computers – already used to write brief news reports from press releases – can be trained to read and write longer blocks of text more easily than previously thought.
Among other demonstrations, the paper shows that the model can “write news articles about scientists discovering talking unicorns.”
The researchers want their fellow scientists to start discussing the possible bad effects of the technology before openly publishing every advance, similar to how nuclear physicists and geneticists consider how their work could be misused before making it public.
“It seems like there is a likely scenario where there would be steady progress,” said Alec Radford, one of the paper’s co-authors. “We should be having the discussion around, if this does continue to improve, what are the things we should consider?”
So-called language models that let computers read and write are usually “trained” for specific tasks such as translating languages, answering questions or summarizing text. That training often comes with expensive human supervision and special datasets.
The OpenAI paper found that a general-purpose language model capable of many of those specialized tasks can be trained with little human intervention, by feasting on text openly available on the internet. That removes two big barriers to its development.
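The core idea behind that recipe, learning to predict the next word from raw, unlabeled text, can be illustrated with a toy sketch. This is a drastic simplification for readers curious about the mechanics: OpenAI’s model is a large neural network trained on vast web text, whereas the bigram counter below, along with its corpus and function names, is purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count word-to-next-word transitions in raw, unlabeled text.
    No human annotation is needed: the text itself supplies the
    'next word' targets, which is the essence of language-model training."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def generate(model, start, n_words=8):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(n_words):
        followers = model.get(out[-1])
        if not followers:
            break  # no observed continuation for this word
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Illustrative corpus; a real system would ingest billions of words.
corpus = ("the scientists found the unicorns and the unicorns "
          "spoke to the scientists in the valley")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The same principle, scaled up from word-pair counts to a deep network with billions of parameters, is what lets one unsupervised model handle many tasks that previously each required their own labeled dataset.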
The model remains a few years away from working reliably and requires pricey cloud computing to build. But that cost could come down rapidly.
“We’re within a couple of years of this being something that an enthusiastic hobbyist could do at home reasonably easily,” said Sam Bowman, an assistant professor at New York University who was not involved in the research but reviewed it. “It’s already something that a well-funded hobbyist with an advanced degree could put together with a lot of work.”
In a move that may spark controversy, OpenAI is describing its work in the paper but not releasing the model itself out of concern it could be misused.
“We’re not at a stage yet where we’re saying, this is a danger,” said Dario Amodei, OpenAI’s research director. “We’re trying to make people aware of these issues and start a conversation.”
Reporting by Stephen Nellis; Editing by Dan Grebler