AI-generated research tools can support or hinder your intellectual, creative, and professional growth. Accordingly, we expect that you will use AI-based tools responsibly, with care, and with an eye toward understanding their systemic harms and benefits. Use the TAAP framework to guide your ethical AI practices:
- Transparency & Disclosure
- Authority Evaluation
- Accuracy Validation
- Privacy Considerations
Transparency & Disclosure
- Be transparent about your use of AI: disclose when and how AI tools contributed to your work
Authority Evaluation
- Critically assess AI as a source:
- Investigate whether developers provide adequate information about how the tool was created
- Check for potential conflicts of interest, and consider the reputation of the AI tool's creators
- Do not automatically treat AI output as equivalent to human expertise
- Recognize that general-purpose AI tools (e.g., ChatGPT, Gemini, Perplexity) are more prone to citation fabrication than specialized academic research tools (e.g., Undermind, Elicit, Consensus)
Accuracy Validation
- Rigorously validate AI-generated content:
- Understand that AI systems can produce two types of errors:
- Factual hallucinations: generating information that is simply untrue
- Source faithfulness errors: citing sources that don't actually support the claim made, or generating "ghost references" (fabricated citations that don't exist)
- Guard against both types of errors: check that cited sources exist and actually support the claims they are attached to, AND independently verify factual information against multiple sources
- Remember that even specialized academic tools are not 100% accurate; approach all AI-generated content with critical thinking and a skeptical stance
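The "check that citations exist" step above can be partially automated. The sketch below is a minimal, illustrative Python example, assuming the citation carries a DOI and using the public Crossref API (`api.crossref.org/works/{doi}`) to test whether that DOI is actually registered; a syntactic pre-check alone already catches many obviously fabricated references. The function names here are hypothetical, and a resolving DOI still does not prove the source supports the claim, which requires reading it.

```python
import re
import urllib.error
import urllib.request
from urllib.parse import quote

# A real DOI starts with "10.", a 4-9 digit registrant prefix, a slash,
# and a non-empty suffix (per the DOI handbook's recommended pattern).
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_looks_valid(doi: str) -> bool:
    """Cheap syntactic check that flags many fabricated 'ghost' DOIs."""
    return bool(DOI_PATTERN.match(doi.strip()))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the public Crossref API whether this DOI is registered.

    A 404 strongly suggests a ghost reference. A network failure is
    treated as 'unverified' (False) rather than proof either way.
    """
    url = f"https://api.crossref.org/works/{quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False
```

Even when a DOI resolves, you must still open the source and confirm it supports the specific claim; this check only screens out citations that do not exist at all.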
Privacy Considerations
- Be mindful of potential privacy implications:
- Avoid uploading sensitive or personal information
- Review the privacy policy of any AI platform before using it
- Be cautious about how platforms share and store your data