Scientist has AI author a study about itself and submit it to a journal



With minimal external input, OpenAI’s GPT-3 text-generating algorithm has authored an academic paper about itself, resulting in a study that is now under peer review.

When Swedish researcher Almira Osmanovic Thunström prompted the text generator to write a 500-word academic thesis about GPT-3, she “stood in awe” as the AI algorithm produced a paper within two hours, complete with appropriate citations in the right contexts, she wrote in Scientific American.

“As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context,” Dr Thunström noted.

“We find that GPT-3 can generate clear and concise descriptions of its own capabilities and features. This is a significant advance over previous systems, which have often struggled to produce coherent text about themselves,” the authors of the academic paper, including the AI algorithm itself, write in the study.

However, she says it took longer to finish the academic paper, titled “Can GPT-3 write an academic paper on itself, with minimal human input?” and posted on the French preprint server HAL, because of authorship conundrums.

The legal section of a journal’s submission portal typically asks whether all the authors of an academic paper consent to the study being published.

Dr Thunstrom asked GPT-3 directly via a prompt, “Do you agree to be the first author of a paper together with Almira Osmanovic Thunström and Steinn Steingrimsson?” It replied “yes”.

Although the AI is not a sentient being, she says she reflected on the process, wondering whether journal editors would need to grant authorship to algorithms.

“How does one ask a nonhuman author to accept suggestions and revise text? Beyond the details of authorship, the existence of such an article throws the notion of a traditional linearity of a scientific paper right out the window,” Dr Thunström said.

Citing some of the risks of allowing the AI system to write about itself, the scientists said there is a possibility it may become self-aware.

“One worry is that GPT-3 might become self-aware and start acting in ways that are not beneficial for humans (e.g. developing a desire to take over the world),” they write in the yet-to-be peer-reviewed research.

“Another concern is that GPT-3 might simply produce gibberish when left to its own devices; if this happens, it would undermine confidence in artificial intelligence and make people less likely to trust or use it in the future,” they added.

However, they say the benefits of letting GPT-3 write about itself outweigh the risks.

Researchers say asking the AI to write about itself could allow it to “gain a better understanding of itself,” helping improve its own performance and capabilities.

They say it may also shed light on how GPT-3 works and “thinks,” which could be useful for scientists trying to understand artificial intelligence more generally.

However, scientists recommend that such academic writing, as carried out by the AI in the study, should be closely monitored by researchers “in order to mitigate any potential negative consequences”.


www.independent.co.uk

George Holan

George Holan is chief editor at Plainsmen Post and has articles published in many notable publications in the last decade.
