
Academic Misconduct: AI Policy and Guidance

This page sets out the Common Awards academic misconduct policy on the use of generative AI. It includes both the core policy and detailed guidance.

Further information, giving definitions, explanations, advice for students, advice for staff, and advice for academic misconduct panels, is available in our Common Awards Policy, Guidance and Advice on Generative AI document.

The policy requires students to copy and paste a completed AI declaration into summative assignments before submitting them. You can download that declaration here in various formats.

Core policy

This is an academic misconduct policy. It applies to students’ use of generative AI in summative assessments on Common Awards modules. Its only purpose is to define which uses of generative AI count as academic misconduct in that context.

In the case of formative assessment, any issues with generative AI should be dealt with informally by the TEI involved.

Students and staff should be aware of the difference between what counts as academic misconduct, and what counts as unwise or unethical uses of generative AI, as set out in the introduction to our full Common Awards Policy, Guidance and Advice on Generative AI document.

Students with disabilities and/or specific learning differences should note that special provisions may be made for them in relation to these rules. See the ‘exceptions’ at the end of the policy for more detail.

To avoid academic misconduct

  • You must not use generative AI to create substantive content for your assessed work that you then present as if it were your own creation.
    • The detailed guidance accompanying this policy clarifies what is meant by ‘create substantive content’.
    • This rule covers both AI-generated material that you include directly in your work, and material that you include after modification or editing.
  • You must not provide a generative AI with any text or other material produced by others, unless that material is in the public domain, or you have explicit permission to do so, or you have confirmation that the content will not be used to train the AI in question.
    • This includes materials produced by your teachers, such as handouts and slides.
    • This rule covers both uploading material to an AI and providing the AI with a link to it online.
    • Note that ‘in the public domain’ does not simply mean ‘publicly available’. You should presume that any material available to you is covered by its creators’ copyright, unless you can find explicit indication that the creators have designated it as public domain, or released it under a licence that allows you to provide it to an AI.
  • You must not provide a generative AI with any confidential information.
    • This includes any personal information about identifiable individuals.

In general, however, other limited uses of generative AI to facilitate your work do not count as academic misconduct, provided that

  • the resulting work still reflects your own engagement with your sources, your own understanding, and your own reasoning and judgments;
  • you clearly acknowledge any use of AI that has substantially informed the content or presentation of your work; and
  • you demonstrate appropriate caution about the limitations of the tools you use.
    • The detailed guidance accompanying this policy, and especially its instructions on appropriate acknowledgment, clarifies how you can ensure that your work reflects your own engagement, understanding, reasoning and judgment.
    • The advice for students in our full Common Awards Policy, Guidance and Advice on Generative AI document explains in detail how to acknowledge AI use.
    • The detailed guidance below also clarifies what is meant by ‘appropriate caution’.
    • Note that failure to exercise appropriate caution when using AI may affect your marks.

Saying that these uses do not count as academic misconduct means that you will not face formal academic penalties for them. You should be aware that they may still have negative consequences, including for your learning and your marks, and that your teachers may advise against them.

Exceptions:

  • Some assignments require you to respond to questions or prompts within a short timeframe (e.g., some kinds of exam, or assessed conversations). If you are not normally allowed access to notes, books or other learning resources during such time-limited assignments, you should assume that you are not allowed any use of AI during those assignments either, unless you are given clear instructions to the contrary.
  • The core policy may be overridden by your teachers for specific assignments. In such cases, they will give you explicit written guidance on what is or is not permitted. In the absence of such explicit written guidance, you should assume that the rules in the core policy hold.
  • For some students with specific learning differences, disabilities, or other specific needs, reasonable adjustments may be made relating to these rules, either for specific assignments or more generally. Where that happens, you will be given explicit written guidance on what you are and are not permitted to do. In the absence of such explicit written guidance, you should follow the rules in the core policy.

Detailed guidance

This table must be read in conjunction with the core policy above. It gives staff and students detailed guidance on what counts as academic misconduct and what does not, and some brief indications of how to exercise appropriate caution when a usage is not ruled out as academic misconduct. The table can also be used by markers, when they are assessing whether students have exercised appropriate caution, and by Academic Misconduct Panels, when assessing whether a student has broken the policy rules.

Note that, in some areas, the line between categories is blurred. For instance, there is no hard line at which ‘stylistic improvements’ tip over into ‘generating substantive text’. If in doubt, students should consult their tutors for advice, and tutors should use their judgment.

Each entry in the table gives four pieces of information: the type of use of AI; what exactly the AI is being used for; whether it is academic misconduct; and what counts as appropriate caution.

1. Generating substantive text

Producing sentences, paragraphs, subsections, or whole assignments, for you to include in your assignment.

This includes using generative AI to generate sentences, paragraphs, subsections or whole assignments that you then edit, paraphrase, or otherwise rework before you include them in your assignment.

Examples: prompting ChatGPT to write text for you.

Producing whole assignments or subsections of assignments is always academic misconduct.

Producing smaller amounts such as sentences or paragraphs need not count as academic misconduct, if you treat the generated text as you would other written sources: either quoting the generated material directly (putting the material in quotation marks or setting it out as a quotation in its own paragraph) or paraphrasing it. In either case you must acknowledge this use of AI explicitly at the point where it occurs.

Be aware that content generated by an AI may not be very good. Do not treat it as an authoritative source. Do not rely on it as your only source for factual claims.

2. Minor corrections

Identifying and correcting errors of punctuation, spelling and grammar.

Examples: Microsoft Word’s built-in spelling and grammar checker; Grammarly’s correctness suggestions.

Not academic misconduct.

No need to give any acknowledgement.

Remember that sometimes even the most advanced proofing tools will give bad advice.

3. Consistency checking

Checking the formatting of headings, the numbering of sections, capitalisation, hyphenation, and similar.

Examples: PerfectIt; Grammarly.

Not academic misconduct.

No need to give any acknowledgement.

Don’t simply accept all proposed changes without checking.

4. Suggesting wording/phrasing

(a) Providing alternative phrasing for a sentence you have already written, without changing the overall meaning.

Examples: Microsoft Word’s ‘clarify’ function, used on individual phrases; Grammarly’s clarity function.

Not academic misconduct.

No need to give any acknowledgement.

Check to see that your overall meaning has not been altered.


(b) Suggesting ways of continuing a sentence you have begun to type, without adding substantive content or starting a new sentence.

(If this leads to adding more than five words, see ‘generating substantive text’ above.)

Examples: Autocorrect; Google Docs Smart Compose; ChatGPT.

Not academic misconduct.

No need to give any acknowledgement.

5. Stylistic improvements to existing text

Suggesting stylistic revisions to a paragraph, section, or whole text that you have already created, often with a view to clarifying the writing, or making it suitable for a specific audience.

(This only refers to changes to the expression of ideas, claims and arguments you have already included. It does not include suggesting new or improved ideas, claims or arguments – for which, see ‘generating substantive text’ above.)

(Note that stylistic improvements can slightly lengthen a text. If, however, widespread stylistic improvements lengthen the overall text of an assignment by more than 2% – e.g., taking a 1,000-word piece over 1,020 words – you should see ‘generating substantive text’ above.)

Examples: Microsoft Word rewrite (Copilot).

Not academic misconduct.

You must, however, acknowledge this use of AI in the declaration accompanying your work.

Check to make sure that your meaning has not been changed.

Be aware that, because this can alter your normal style, it may trigger suspicions that your work has been generated by AI.

6. Shortening

Bringing an overlong text down to your specified word count.

Examples: ChatGPT, QuillBot.

Not academic misconduct.

You must, however, acknowledge this use of AI in the declaration accompanying your work.

Check to see what has been omitted.

Check to see whether the writing still flows.

Check to see that your meaning has not been altered.

Check that the resulting word count is right.

7. Expanding

Increasing the length of a text up to a specified word count.

Examples: ChatGPT; Jasper AI.

Academic misconduct.

This is equivalent to ‘generating substantive text’ – see above.


8. Giving feedback on your draft

(a) Identifying stylistic problems.

Examples: Grammarly; ChatGPT.

(b) Identifying problems with your argument (without suggesting new text that avoids those problems).

Examples: ChatGPT.

In principle, this is not academic misconduct, but if it amounts to suggesting substantive new text, see ‘generating substantive text’ above.

You must acknowledge this use of AI in the declaration accompanying your work.

Be aware that the suggestions provided by the AI might not be good ones. Always use your own judgment when choosing which problems to tackle, and how.

9. Suggesting an outline for an essay

Producing a list of topics, headings, or bullet points that you use to guide your writing (but that you do not quote directly).

Examples: ChatGPT.

Not academic misconduct.

You must, however, acknowledge this use of AI in the declaration accompanying your work.

Be aware that the structure suggested might not be a good one. Be aware that it also might not match the instructions you were given for your assignment.

If you do this, it is good practice to ask for several structures, so that you have to exercise your own judgment about which to use.

10. Image or diagram generation

Generating visual or audiovisual material in response to your input.

Examples: DALL-E; Midjourney; Stable Diffusion.

This is not normally academic misconduct, though you must acknowledge this use of AI at the point where it occurs.

If, however, the creation of the image is itself a distinct activity set for you by your tutors, you should check with them as to whether AI use is acceptable.

If the generation of the image or diagram involves the generation of text, see ‘generating substantive text’ above.


11. Translating

(a) Translating a text you want to use into a language you can read, so that you can learn from it or cite it.

Examples: Google Translate.

Not academic misconduct when translating individual words or short phrases (not normally more than five words in the original language). In these cases, there is no need for acknowledgment.

For longer passages, you must acknowledge the use of AI if you quote or paraphrase the translation in your work, at the point where it occurs.

If the translation is simply used as a tool for your own learning, however, and if you do not incorporate the translation into your work, either directly or in a paraphrased form, there is no need for acknowledgment.

Note the rule in the core policy on not providing an AI with material that is not in the public domain.

Be cautious about the accuracy of any AI translation: don’t rely on it as your only source for a claim about what the text says or how it should be understood.

(b) Translating a text you have been set as a translation exercise.

Examples: Google Translate.

Academic misconduct.

You must not use an AI translation if you have been set a translation exercise.

12. Changing the format of references

Changing the way you cite your sources – e.g., putting your bibliography entries into a recognised format, or changing from a system of footnotes giving full bibliographic details to a system of brief inline citations.

Examples: EndNote; Zotero.

Not academic misconduct.

No need to give any acknowledgement.

If you use any software that is not a dedicated reference manager such as EndNote or Zotero, check that the AI tool has not mangled your references.

13. Suggesting avenues for research

Identifying key ideas, arguments, or lines of approach in response to your input.

Examples: Google search summary; Elicit; Consensus; ChatGPT Deep Research.

Not academic misconduct.

You must, however, acknowledge this use of AI in the declaration accompanying your work.

Do not rely on AI to identify the only or the best avenues for research. Remember in particular that generative AI reproduces the biases of the material on which it was trained.

NB: if you use a structure that AI suggests, or if you quote from it, refer to ‘suggesting an outline for an essay’ or ‘generating substantive text’ above.

14. Summarising a source or sources

Providing a paragraph, set of bullet points or longer text summarising some source – either one that you provide to the AI, or one that the AI already ‘knows’ about.

Examples: ChatGPT; QuillBot.

This is not academic misconduct if this is simply a tool for your own learning. There is no need to acknowledge this if you do not quote, paraphrase, use one or more ideas from, or otherwise include material from the summary in your work.

If, however, you do quote, paraphrase, use one or more ideas from, or otherwise include material from the summary in your assignment, or base your assignment on it in some way, see the guidance on ‘generating substantive text’ above.

Note the rule in the core policy on not providing an AI with material that is not in the public domain.

Be aware that an AI summary may be inaccurate or otherwise misleading, and that might have an impact on your learning and on your marks. It should never be the sole way in which you engage with a source.

15. Providing a discussion of a source

Generating text, audio, or other output that mimics the kind of discussion of a source that a human commentator might produce.

Examples: Google NotebookLM ‘Audio overview’; Perplexity.

This is not academic misconduct if this is simply a tool for your own learning. There is no need to acknowledge this if you do not quote, paraphrase, use one or more ideas from, or otherwise include material from the discussion in your work.

If, however, you do quote, paraphrase, use one or more ideas from, or otherwise include material from the discussion in your assignment, or base your assignment on it in some way, see the guidance on ‘generating substantive text’ above.

Note the rule in the core policy on not providing an AI with material that is not in the public domain.

Be aware that an AI discussion may be inaccurate or otherwise misleading. It should never be the sole way in which you engage with a source.

16. Extracting key information from a source

Extracting summary information such as a timeline of the events mentioned, or a list of the main people mentioned.

Examples: Google NotebookLM ‘Timeline’; ChatGPT.

This is not academic misconduct if this is simply a tool for your own learning. There is no need to acknowledge this if you do not quote, paraphrase, use one or more ideas from, or otherwise include material from this information in your work.

If, however, you do quote, paraphrase, use one or more ideas from, or otherwise include material from this information in your assignment, or base your assignment on it in some way, see the guidance on ‘generating substantive text’ above.

Note the rule in the core policy on not providing an AI with material that is not in the public domain.

Be aware that the ‘information’ an AI extracts may be inaccurate or otherwise misleading. You should always check the accuracy of any information you rely upon in your assignment.

17. Creating a revision aid

Generating a quiz, or sample questions for you to practise on, or flashcards to memorise, or similar.

Examples: Quizlet (AI).

Not academic misconduct.

Note the rule in the core policy on not providing an AI with material that is not in the public domain.

Where this material ends up including claims or references, check them to make sure you’re revising accurate material.