AI Best Practices
Is there an AI Policy at UNLV?
UNLV faculty, staff, and students are required to adhere to NSHE and institutional policies and procedures. There is currently no AI-specific policy on campus; however, existing university policies already encompass the use of AI technologies. AI applications should be used appropriately within the broader framework of institutional, state, and federal policies.
These policies address most of the associated risks and use cases, and include:
Institutional Level:
- IT Policies and Standards
- Institutional policies and practices covering data privacy
- Research integrity and institutional review boards
- Video and audio recording policy
- Academic misconduct
State Level:
- NSHE policies and procedures
Federal Level:
- Family Educational Rights and Privacy Act (FERPA)
- Health Insurance Portability and Accountability Act (HIPAA)
- Executive Order 13960 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government)
- National Institutes of Health (NIH)
- National Science Foundation (NSF)
Note that no UNLV-wide policies have yet been established regarding AI in assessment, grading, content creation, or general use across campus.
General Best Practices
Generative AI should be treated analogously to assistance from another person. Appropriate uses include drafting a professional-sounding email or documenting code. Using generative AI tools to complete a Financial or Human Capital Management project, or entering assignment or exam answers, is not advisable. Personally identifiable information, system credentials (passwords, IP addresses), and non-public records should not be provided as prompts. Acknowledge the use of generative AI (other than incidental use), and default to disclosing such assistance when in doubt.
Privacy and Security Best Practices
Data Anonymization: Avoid inputting sensitive or personal information into AI systems, and ensure any personal data used with AI models is anonymized to protect individual privacy. This includes names, contact information, titles, birthdates, salaries, NSHE IDs, and similar identifiers. Do not use institutional data. A minimal sketch of this kind of redaction appears after this list.
Limit Permissions: Restrict the permissions granted to AI applications to only what is necessary for their functionality. Contact UNLV IT if you need to integrate AI into university applications, to ensure compliance with data security and privacy requirements.
Monitor AI Outputs: Review AI outputs for malicious content, appropriateness, accessibility, and biased or discriminatory results.
Awareness of AI Limitations: Always apply human judgment when using AI outputs, and stay alert to misuse or misapplication in contexts where human judgment is critical.
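As a concrete illustration of the data-anonymization practice above, the following is a minimal Python sketch of pattern-based redaction applied to a prompt before it is sent to any external AI tool. The `PII_PATTERNS` table and `redact_prompt` helper are illustrative assumptions for this example, not a UNLV-provided utility, and the patterns cover only a few predictably formatted identifiers.

```python
import re

# Illustrative regex patterns for a few common PII shapes; a real
# redaction pass would need institution-specific rules (e.g., NSHE IDs).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the
    text leaves your machine for an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Draft a reply to jane.doe@unlv.edu confirming her appointment; "
        "her callback number is 702-555-0123."
    )
    print(redact_prompt(prompt))
    # -> Draft a reply to [EMAIL REDACTED] confirming her appointment;
    #    her callback number is [PHONE REDACTED].
```

Pattern-based redaction only catches identifiers with predictable formats; names, titles, salaries, and other free-form personal data still require human review, so a script like this complements rather than replaces the judgment described above.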
Enterprise vs Third-Party AI Tools Best Practices
Enterprise tools are computer applications, software, hardware, or systems supported by UNLV that are designed to be scalable and to integrate with a variety of university systems. Supported tools come with data agreements and other protections for institutional data and content.
Third-party tools are not supported by UNLV and may or may not integrate with university systems, depending on factors such as accessibility, funding, and privacy and security considerations.
When using third-party, non-UNLV-supported tools, it is imperative that you:
- Review the privacy policy: Carefully read and understand the AI tool's privacy policy to know how your data will be used, stored, and protected.
- Check security measures: Ensure the tool has robust security measures in place, such as encryption and secure data storage, to protect against unauthorized access and breaches.
- Validate the source: Use AI tools from reputable and trusted providers and verify the legitimacy of the tool before integrating it into your work.
- Limit data sharing: Only provide the minimum necessary data to the AI tool, avoiding the input of sensitive or personal information whenever possible.
- Understand data handling: Be aware of how the AI tool handles data, including where it is stored and whether it is shared with third parties.
For more information, see this guidance on Questions to Ask When Considering a Third-Party AI Tool for Use at UNLV.
Bias and Hallucinations
AI models can output biased or discriminatory results stemming from statistical/computational biases, human biases, or systemic biases in their algorithms. Users are ultimately responsible for verifying that any AI output they use is free of discrimination and bias, and for reviewing it for fairness, equity, inclusion, appropriateness, and compliance with UNLV and NSHE policies.
Similarly, AI can present inaccurate or misleading information as fact (known as hallucination), caused by poor training data, overfitting to a limited dataset that fails to generalize to new data, faulty assumptions, or misinformation. As with bias, it is up to the user to verify the accuracy of AI outputs.