As AI tools become increasingly embedded in educational and research environments, their use must be guided by ethical and privacy principles to responsibly advance learning and scholarship.
Academic and Professional Integrity
Do not use generative AI tools for graded assessments, including exams, quizzes, take-home tests, or clinical write-ups, unless explicitly permitted.
Do not use AI to circumvent learning objectives or to generate entire assignments that are meant to demonstrate your original critical thinking or knowledge.
Accountability and Oversight
Maintain human oversight at all times and use AI tools to enhance tasks, not replace critical judgment.
Recognize the limitations and risks of AI tools, including their potential to produce inaccurate, fabricated, or "hallucinatory" outputs.
Carefully review and validate all AI-generated content for errors, biases, and factual accuracy before relying on or using it.
Data Privacy
Do not input protected health information (PHI), student data, proprietary information, or unpublished research data into any AI tool without explicit permission or proper consent.
Environmental Impact
Using AI requires significant computational power, both to train large language models and to run them at scale; the energy-intensive data center operations involved contribute to a substantial carbon footprint.
Transparency and Attribution
Always disclose and appropriately cite AI-enhanced or AI-generated content that contributes to your work.
Identify the specific AI tool (e.g., ChatGPT Edu, Gemini) that was used.
Read the full Using Artificial Intelligence in Teaching, Learning, and Discovery policy in the Student Handbook & Policies.