Generative AI Policy
Generative AI tools can produce diverse forms of content, spanning text generation, image synthesis, audio, and synthetic data. Examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway. While Generative AI has immense potential to enhance authors' creativity, the current generation of these tools carries certain risks, including:
- Inaccuracy and bias: Generative AI tools are statistical rather than factual in nature and can therefore introduce inaccuracies, falsehoods (so-called hallucinations), or bias, which can be hard to detect, verify, and correct.
- Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.
- Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g. for AI training). This practice could infringe on the rights of authors, publishers, and others.
TransportTech journal offers the following guidance to authors, editors, and reviewers on the use of such tools. Given the swift development of the AI field, the journal will closely monitor this area and will revise or improve the policy as needed.
For authors
- Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should be used only to improve the readability, quality, and language of the work (e.g. to correct spelling and grammar). In this case, the use of AI tools does not need to be declared.
- When artificial intelligence tools are used to generate content, for example for idea generation or idea exploration, their use must be declared. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors, and contributors, and facilitates compliance with the terms of use of the relevant tool or technology. Authors must clearly acknowledge the use of Generative AI tools within the article through a statement that includes: the full name of the tool used (with version number), how it was used, and the reason for use. The statement must be included in either the Methods or Acknowledgments section. The technology must be applied with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that is incorrect, incomplete, or biased. Authors remain ultimately responsible and accountable for the contents of the work. Articles whose content is generated by an AI tool without significant human contribution will not be accepted in TransportTech.
- Generative AI tools must not be listed as an author, because such tools are unable to assume responsibility for the submitted content or manage copyright and licensing agreements.
- TransportTech currently does not permit the use of Generative AI in the creation or alteration of images and figures. This includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. The term “images and figures” covers pictures, charts, data tables, medical imagery, image snippets, computer code, and formulas. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original.
For reviewers
When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
Generative AI may be used only to assist with improving the language of a review; reviewers remain responsible at all times for ensuring the accuracy and integrity of their reviews.
For editors
Editors must keep submission and peer review details confidential. Using manuscripts in Generative AI systems may give rise to risks around confidentiality, infringement of proprietary and data rights, and other concerns. Therefore, editors must not upload unpublished manuscripts, or any associated files, images, or information, into Generative AI tools.