OpenAI blocks the release of its artificial intelligence deemed dangerous

By Mathilde B.
— Feb 26, 2019


Real threat, or communication around ethical issues?


Researchers at OpenAI have decided to block the release of GPT-2, an artificial intelligence text-generation program whose language model is so advanced that it could become dangerous.

 

Indeed, should malicious minds seize it, the algorithm could turn into a supplier of fake news, conspiracy theories and malicious ideas, without anyone suspecting that it is one and the same AI behind them.

 

This non-profit artificial intelligence research company, once backed by Elon Musk, has developed a monster: ingenious, of course, but reminiscent of Mary Shelley’s Frankenstein, subtitled The Modern Prometheus. Did these researchers not want to steal fire to give to humans, then abandon their creation, horrified by what came to life? In a sense, they did. GPT-2 does not just generate text; it produces reports, press articles and fiction. From a few sentences or pages, sometimes even a few words, it is able to write the rest, staying coherent with the main topic and matching the author’s style.
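To picture what that prompt-then-continue behaviour looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint that OpenAI did release publicly alongside its announcement (the full model’s weights, as noted above, were withheld):

    # Sketch only: uses the publicly released small GPT-2 model, not the
    # withheld full model. Requires the Hugging Face "transformers" package.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # A few words of prompt; the model writes a plausible continuation.
    prompt = "In a shocking finding, scientists discovered"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sampling (rather than always taking the single most likely next word)
    # keeps the output varied and human-sounding.
    output = model.generate(
        **inputs,
        max_length=80,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))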

 

The corollary of this performance is that GPT-2 is able to create “deepfakes for text”, that is to say, perfectly credible false information, and to share it itself on social networks… This is why the researchers decided to keep it confidential; it will not be made available to the public or to companies: “We are not releasing the dataset, training code, or GPT-2 model weights,” the research team writes on the official blog.

 

These days, the Promethean fire is data, in quantities vast enough to simulate human intelligence precisely, even to exceed it: the corpus fed to the machine-learning algorithm is built from 8 million web pages, selected automatically for quality content. These pages are exclusively ones linked from Reddit posts that received at least three upvotes, so at least three humans who considered the content relevant, educational or fun.
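As a rough illustration of that filtering rule (the names below are hypothetical; OpenAI has not published its data pipeline in full), the selection logic boils down to keeping only the links whose Reddit posts earned at least three karma:

    # Hypothetical sketch of the quality filter described above: keep only
    # web pages whose Reddit submissions earned at least 3 upvotes (karma).
    # "submissions" is assumed to be an iterable of (url, karma) pairs.
    MIN_KARMA = 3

    def filter_quality_pages(submissions):
        """Return the set of URLs endorsed by at least MIN_KARMA upvotes."""
        kept = set()
        for url, karma in submissions:
            if karma >= MIN_KARMA:
                kept.add(url)
        return kept

    # Example: only the first link passes the bar.
    sample = [("https://example.org/a", 5), ("https://example.org/b", 1)]
    print(filter_quality_pages(sample))  # {'https://example.org/a'}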

 

David Luan, vice-president of engineering at OpenAI, told The Verge that a GPT-2 essay “could have just as well been written for the SATs and received a good grade”. It is a matter of knowing what can and cannot be done with it, as Jack Clark explained to the Guardian: “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

 

Now, let us remember Elon Musk, who had warned American politicians about the need to regulate artificial intelligence, a line of reasoning echoed by other éminences grises such as Stephen Hawking and Bill Gates. In 2015, Musk partnered with Sam Altman, president of the famous incubator Y Combinator, to found OpenAI: a non-profit meant to develop and promote artificial intelligence with a human face, as opposed to what the Tesla creator feared: “I can’t stop sounding the alarm … I’m working on some very advanced forms of artificial intelligence, and I think we should all be worried about its progress,” he said in July 2017.

 

The researchers are therefore not as horrified as Frankenstein before his creature, because in the current state of science, things evolve towards a stated goal rather than by accident: “We’re trying to build the road as we travel across it,” said Jack Clark, policy director at OpenAI. Withholding the model is thus a way to open the debate on the regulation of such an AI, and to challenge the general public about a possible, and necessary, ethics.

 

The institute believes that governments should pay attention to the increasing capabilities of such machines and systematically measure their societal impact. For technology does not carry within it a poetic destiny to which it suffices to surrender; it must be turned into a positive innovation in the service of the common good.

