Elon Musk-backed OpenAI Program Is ‘Too Dangerous’
Mike Sanders / 6 years ago
OpenAI Is ‘Too Dangerous’ To Release
AI development is currently one of the biggest research fields in the world. Public opinion, however, seems largely divided into two camps. One suggests that AI will provide the means of ensuring the survival and betterment of the human race, while the other suggests it will be a major factor (if not the factor) in our eradication.
In a report via SkyNews, however, the Elon Musk-backed OpenAI team has revealed that their current system is ‘too dangerous’ to be released in full.
Impersonation And Fake News!
Fortunately, it isn’t dangerous because it harbours a pathological hatred of humans. The team believes the current system is remarkably effective at adapting to whatever it is given. As such, it is feared that such a program could easily be used to create fake news and even potentially replicate the writing styles of notable figures.
In a statement, OpenAI said the decision was made “due to our concerns about malicious applications of the technology”. The team added: “The model is chameleon-like, it adapts to the style and content of the conditioning text.”
GPT-2
The project was initially founded in 2015 with $1bn in backing from investors including Tesla head Elon Musk. It should be noted that a watered-down version of the GPT-2 algorithm has been released. The team has, however, said that they will withhold the full version, based on their aforementioned fears that it is capable of producing highly authentic-looking false information.
As far as I’m concerned, as long as it doesn’t want to kill all humans, I’m not too worried.
What do you think? Does AI like this pose a genuine threat, or is this just hyperbole to attract attention? – Let us know in the comments!