Policy of the University of Lodz Publishing House on Generative Artificial Intelligence Tools

Rules for Authors

  1. Only humans can be recognised as authors of scholarly texts. Generative AI (GAI) tools, including chatbots such as ChatGPT, cannot be recognised as authors or co-authors, nor can they be cited as authors.
  2. Authors bear full responsibility for the content of the submitted manuscript, including all parts produced using GAI tools. This responsibility extends to any breaches of publication ethics.
  3. Authors who use GAI tools in writing, data collection, or analysis must clearly and transparently disclose which GAI tools were used, precisely how and to what extent they were applied, and what impact these tools had on the work.
  4. GAI-generated or GAI-modified graphics or videos must not be used in publications unless they are the subject of the research itself.
  5. A statement regarding the use of GAI tools, including chatbots, large language models (LLMs), and image generators, must accompany every manuscript submission and must include a detailed description of the purpose, scope, and manner of their use.
  6. Detailed information on GAI tool usage must be provided in the abstract and in the early sections of the text, for example in a dedicated section titled Declaration of GAI Use in the Writing Process, to enable replication of the research and verification of its results.
    Required Information:
    • date and time of queries,
    • tool details (name, version, model),
    • purpose, scope, and manner of use,
    • all full prompts used to generate, modify, or convert text into tables or illustrations.
      Additionally, authors must disclose the use of GAI tools in the cover letter when submitting the manuscript.
  7. Authors must be aware of the limitations of GAI tools, including chatbots such as ChatGPT, which may stem from biases, errors, and knowledge gaps. It is essential to verify all outputs and check for:
    1. Lack of objectivity: Outputs may reflect biases in the system’s training data,
    2. Lack of reliability: GAI tools may generate false content, particularly on niche or specialised topics. The text may sound linguistically correct yet lack scientific accuracy; errors may include fictitious bibliographies or misinterpreted facts,
    3. Distorted representation of facts: Some tools are trained on datasets with a knowledge cut-off date, resulting in incomplete or outdated information,
    4. Misinterpretation of context: GAI tools lack human-level comprehension, especially of idiomatic expressions, sarcasm, humour, or metaphors, which may lead to errors or misleading interpretations,
    5. Insufficient training data: Generative AI tools require large volumes of high-quality training data for optimal performance; such data may be scarce in certain fields or languages, which may reduce the tool’s reliability.
  8. Authors should also outline the measures taken to minimise the risk of plagiarism, ensure the accuracy of the knowledge presented, and verify all bibliographic references and the proper attribution of cited works. Special care should be taken to review GAI-generated bibliographies and citations, ensuring their accuracy and completeness, as well as proper attribution and description.

Rules for Journal and Multi-Author Monograph Editors

  1. Editors must assess the appropriateness of GAI tool use in each submitted work.
  2. Editors are required to verify all submissions using plagiarism detection software and GAI detectors, while remaining mindful of these tools’ technical limitations and the risk of misclassifying content origins.
  3. Editors must be aware that chatbots can store and publicly share input data, including manuscript content, thereby compromising the confidentiality of authors’ submitted materials.

Rules for Reviewers

  1. Reviewers should refrain from using GAI tools to assist in the peer-review process.
  2. Reviewers must assess the appropriateness of GAI tool use in the manuscript under review.
  3. Reviewers must be aware that chatbots can store and publicly share input data, including manuscript content or review reports, thereby compromising the confidentiality of authors’ submitted materials.

Sources:

Recommendations of the Association of Higher Education Publishers

Rules for Using Artificial Intelligence Systems in the Teaching and Graduation Process at the University of Lodz

Authorship and AI tools: COPE position statement

COPE forum: How to exclude AI-generated articles