GPT-3: An AI writes a Scientific Paper about itself
Artificial intelligence aims to mimic how the human brain works, or at least its decision-making logic. GPT-3, an artificial intelligence, has managed to write a scientific article about itself! The authors of the experiment then tried to get the manuscript published, not without raising some questions.
Should we expect to see more and more AIs appear among the authors of scientific studies?
Researchers are increasingly asking the question, particularly in relation to GPT-3, a language-specialized AI that works through machine learning. Developed by OpenAI, a company co-founded by Elon Musk among others, it belongs to the category of generative models. It uses an input dataset to create new outputs consistent with that data. So far, it has managed to hold written conversations, make movie or book recommendations, write news articles or phishing emails, and even imitate authors' styles, producing what are called pastiches.
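The core idea of a generative model, learning patterns from input data and then sampling new output consistent with it, can be illustrated with a deliberately tiny sketch. The toy bigram model below is an illustration of the principle only, not how GPT-3 actually works: GPT-3 uses a large transformer neural network, not a word-lookup table, and the corpus and function names here are invented for the example.

```python
import random

# Toy corpus standing in for the "input database" the article describes.
corpus = "the model reads text and the model writes text".split()

# Learn which words follow which in the corpus (a bigram table).
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence whose transitions all occur in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # no known continuation: stop early
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

Every pair of adjacent words in the generated text also appears somewhere in the training corpus, which is what "consistent with this database" means at the smallest possible scale.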
An AI writing a scientific article raises questions!
But two researchers at the University of Gothenburg, Almira Osmanovic Thunström and Steinn Steingrimsson, decided to take things a step further, this time asking it to write a short, 500-word scientific article. Except that this time they had it write an article about itself! As they stated in a Scientific American publication, "the GPT-3 algorithm is relatively new, and as such there are fewer studies on it." The goal was therefore to have the AI write an article without a large body of prior work to draw on, while providing, as any good scientific article does, references and citations.
And the result delivered. Titled "Can GPT-3 write an academic paper on itself, with minimal human input?" and posted on the HAL preprint server, the article was written in just two hours and meets the criteria of a genuine scientific article. In the abstract, written by GPT-3 itself, we can read that "the advantages of letting GPT-3 write about itself outweigh the risks. However, any such writing should be closely monitored by researchers to lessen any potential negative consequences."
But what would be the potential negative consequences cited in the abstract?
The researchers are particularly concerned about the self-awareness GPT-3 could develop. As written in the "Discussion" section, and therefore directly by the AI: "GPT-3 could become self-aware and begin to act in a way that is not beneficial to humans (e.g. develop a desire to take over the world)". A small risk, but present nevertheless, notably because AI consciousness has recently been the subject of debate, after Google's LaMDA was deemed conscious by one of the company's engineers.
As for the positive impacts, which according to the researchers outweigh the concerns, they are also listed in the "Discussion" section: "it would allow GPT-3 to understand itself better" and "it could help it improve its own performance and capabilities." But above all, "it would give an insight into the workings and thinking of GPT-3."
