description: |
  This is a policy currently in use, in an earlier form, in the Faculty of Arts, Macquarie University, Sydney, Australia. It is use-agnostic and sets parameters for risks, communication of risks, and possible benefits, in a bid both to set guardrails and to educate students. Some modification will be needed for use outside of Legal education. We are serious about the recommendation about models. Here is the document in HTML form:

  Use of Artificial Intelligence

  The use of Artificial Intelligence is not prohibited. The use of Large Language Models (LLMs) such as OpenAI's ChatGPT GPT-4 (paid), Perplexity.ai (free/paid), Microsoft Copilot in Creative mode (free), or Claude 3 Opus (paid) when working on this assessment is permitted. These tools can complement your efforts on assessments and can assist with planning, research, and editing, but they must be used intentionally and with the utmost care. Intentional and careful use of these tools may improve the quality of your final submission, but poor or reckless use can quite easily harm it. You are fully responsible for any issues or errors arising from their use. If you are considering actively engaging with LLMs to assist in completing this assessment, please read the following very carefully.

  Acknowledgement

  It is essential to acknowledge any tool used: both the model used and the way that you used it. (See below for my acknowledgement.)

  CRITICAL: Confabulations, hallucinations, and fictitious sources

  It is your responsibility to use Generative AI tools ethically and appropriately. Any fictitious sources contained in your submitted paper will result in a failure of the assessment, regardless of whether they originated from your own research, Generative AI, or a random webpage. This is not an academic integrity issue, but a matter of ensuring the accuracy and reliability of your work. Remember, LLMs always sound confident but are not always correct (and, depending on how the LLM is used, may be very wrong). Proper ...