
OpenAI has created a neural network that the lab itself considers very dangerous

The OpenAI research lab has opened access to the full version of GPT-2, a neural network designed to generate coherent text on arbitrary topics. The model was ready back in February, but the developers were so struck by the results of their creation that they were wary of releasing it into the world. Instead, several scaled-down versions were published first, to see how the Internet community would react to them and how they would be used.

The GPT-2 neural network was trained on 8 million texts from the Internet and can quickly and accurately grasp the essence of what is written, draw conclusions, and continue the text. For example, a catchy headline is enough for it to write the body of a "sensational" news story that many will accept as the truth. The AI can handle literary devices and technical texts, writes poetry, and can hold a conversation, giving detailed answers to questions.
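For illustration, here is a minimal sketch of that headline-to-article behavior, prompting the released GPT-2 weights through the Hugging Face transformers library. The library, the model name, and the sampling settings are our choices for the example, not part of OpenAI's own release; the unicorn headline echoes the prompt OpenAI used in its original demo.

    # A minimal sketch: feed GPT-2 a "sensational" headline and let it
    # continue the story. Assumes the Hugging Face transformers package
    # (pip install transformers torch); the sampling settings below are
    # illustrative choices, not OpenAI's.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the sampled continuation reproducible
    generator = pipeline("text-generation", model="gpt2")

    headline = "Scientists discover a herd of unicorns in a remote Andes valley"
    result = generator(
        headline,
        max_length=120,          # prompt plus continuation, in tokens
        do_sample=True,          # sample instead of greedy decoding
        top_k=40,                # draw from the 40 most likely next tokens
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])

Running this prints the headline followed by a fluent, plausible-sounding continuation invented entirely by the model, which is exactly the property that alarmed the developers.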

What worried the experts was how convincing the texts from GPT-2 look. The AI does not literally know how to lie and harbors no malice, but it masterfully juggles words into meaningful phrases. Of course, the neural network still has plenty of weaknesses. It cannot sustain a long story and works only with short texts, and it can make gross mistakes by misinterpreting the name of an object it does not know.

In the end, the decision came down to fighting fire with fire: instead of hiding GPT-2, the developers gave everyone full access to the AI so that anyone could test the neural network personally. The more people become familiar with it, the more knowledgeable, and therefore less vulnerable, they become. Then malicious use of the AI will no longer have such destructive consequences.
