When researchers try to publish an algorithm's output as-is, major ethical dilemmas come to the table.
Since its debut, GPT-3 has continued to shine. Every day, specialists marvel at new and impressive work built on this remarkable language generation system. From songs and speeches to sports summaries, talks, movie reviews, and even web comments, anyone interested in AI can hardly miss OpenAI's algorithmic marvel.
Recently, researcher Almira Osmanovic Thunström may have opened Pandora's box by giving GPT-3 a slightly different assignment: write a genuine scientific paper about itself. And the result, according to the author, was surprisingly coherent. "It looked like any other introduction to a fairly good scientific publication," she explained.
The first algorithm to be a study's lead author
Another point that surprised the young researcher is that no one had yet attempted to publish serious work on this topic. That is how she came up with a strange idea: beyond writing the article, could GPT-3 also... publish it?
Reading this, you might dismiss it as the nonsense of a weary researcher in need of a break. But scientifically speaking, this work is very relevant, and more interesting than one might think.
In fact, GPT-3 is still a fairly recent technology, so there is relatively little scientific literature on the topic. Yet it is precisely resources of this type that feed the algorithm. The interesting part is that this makes it possible to study its ability to produce "new" content in a context where it clearly lacks references.
In the prestigious Scientific American, the young researcher described the obstacles she encountered during the publishing process with a mixture of rigor and utterly refreshing humor.
Conflicts of interest… and of identity
To be published in a leading scientific journal, a research paper must undergo peer review: other professionals in the field decide whether the methodology is sound enough for the paper to merit publication.
This process involves rigorous verification of the author's identity and academic credentials. And this is where Almira Osmanovic Thunström ran into the first glitches. Since she could not fill in a last name, phone number, or email for her algorithmic author, she resolved to provide her own information instead.
And her troubles were far from over. Right afterward, the legal notices awaiting her posed a fateful question: do all authors consent to this publication?
"Then I panicked for a second," she explains in Scientific American. "How would I know? It isn't human! But I had no intention of breaking the law or my own ethics."
Treating the program like a human
The solution she found was fascinating: she simply asked the algorithm, in a text prompt, whether it would "agree to be the lead author of a paper with Almira Osmanovic Thunström and Steinn Steingrimsson". Its answer: a clear, clean, unambiguous "yes"!
"Sweaty but relieved," she ticked the "yes" box on the form. "If it had said no, my conscience would not have allowed me to go on," she says.
And the half-silly, half-serious side of this exploratory work did not stop there. Next stop: the inevitable question of conflicts of interest. Researchers have a legal obligation to openly declare anything that could compromise the neutrality of their work, such as ties to a particular pharmaceutical company.
And here the problem itself is fascinating and raises a whole host of questions. Is an artificial intelligence that is itself a company's product even able to grasp this notion? If so, does it have the tools to identify its own biases? Can it set them aside? And so on.
By this point, the researchers had already chosen to treat GPT-3 as a human counterpart. The approach is a familiar one: we can see a link with LaMDA, the AI that an engineer recently claimed had achieved "consciousness" (see our article).
And for consistency, they decided to keep it up. So, naturally, they asked the algorithm whether it had any conflicts of interest to declare; it calmly replied in the negative, for whatever that is worth.
With the form completed, Osmanovic Thunström and her colleague formally submitted the paper for peer review. At the time of writing, the paper has not yet appeared, and there is no guarantee it will be accepted. It is no coincidence that the process is taking so long: the review committee must have opened their eyes wide as saucers when they discovered the name of the lead author.
In practice, the reviewers have been placed in a position that is rather unique in academic history. Since they must decide whether the paper deserves publication, they find themselves in the same position as a grand jury whose verdict could set a historic precedent, one likely to shape a great deal of AI research in the future.
In fact, this paper raises a whole host of ethical questions about how scientific resources are produced. If the document is accepted, will researchers now have to prove that they wrote their papers themselves, without using GPT-3? And if they did use it, should it be listed among the authors? In that case, should the algorithm take part in validation, and within what limits? And what of the effect on the "publish or perish" race that pushes some researchers to churn out papers on an industrial scale to pad their statistics?
This is just the tip of a huge iceberg of crucial questions that the review committee will have to settle. And it will need to tread carefully before pronouncing its verdict.
A new era of scientific research?
We know, for example, that current programs still have significant trouble with causal reasoning, i.e. identifying the factor responsible for a phenomenon (see our article). This is very troublesome in the context of scientific research, whose coherence depends heavily on the solidity of such logical connections.
In addition, we must keep in mind all the other potential limits of AI that many observers have warned us about over the years. On the other hand, it is also a highly innovative approach that could reveal unknown properties of these algorithms.
Making AI work this way, even if it means taking its conclusions with caution, is thus a way of thinking outside the box; it is the kind of approach that makes it possible to put thought experiments to the test of concrete reality. It could therefore advance research in artificial intelligence as a whole, because completely new approaches of this kind remain rare.
"We have no way of knowing whether our way of presenting this work will serve as a model," explains Osmanovic Thunström. "We look forward to finding out what the publication of this paper will mean, if it happens. […] In the end, it all depends on how we treat artificial intelligence in the future: as a partner or as a tool," she summarized.
"It seems like a simple question today, but in a few years, who knows what dilemmas this technology will force us to confront? All we know is that we opened a door. We just hope it wasn't Pandora's box," she concludes carefully.