A deeper dive into ChatGPT: history, use and future perspectives for orthopaedic research

  • Ollivier Matthieu
  • Pareek Ayoosh
  • Dahmen Jari
  • Kayaalp M. Enes
  • Winkler Philipp
  • Hirschmann Michael
  • Karlsson Jon


The rate of plagiarism and false content in scientific literature varies depending on the field of study and the methods used to detect it. According to some studies, the overall rate of plagiarism in scientific literature is estimated to be around 2–3% [1]. However, the rate can be higher in certain fields and for certain types of content. The rate of false or fraudulent content in scientific literature is even more difficult to quantify, as it often goes undetected or unreported. Nevertheless, cases of scientific misconduct, including the fabrication and falsification of data, have been reported across many fields and can have serious consequences for both the authors and the scientific community. It is therefore important for the scientific community to maintain high standards of ethics and accuracy in research to ensure the validity and reliability of the published literature.

In this editorial, we will describe Large Language Models, outline their strengths and limitations, and finally explore options for detecting fraudulent manuscripts. Large Language Models (LLMs) have the potential to assist researchers in generating clear and concise writing, summarising vast amounts of information and performing various language-related tasks [2]. This can potentially save