Acceptable Use of Official AI Tools at St. Edward's University


Overview

St. Edward's University recognizes the growing utility of Artificial Intelligence (AI) tools in enhancing productivity and learning. This article outlines the acceptable and responsible use of institutionally approved AI tools, including AI meeting assistants, for all faculty, staff, and students. It also lists official university-provided or supported AI tools, such as Google Gemini (including Google NotebookLM) and AI features within Zoom. These guidelines represent a living document that will continue to evolve based on developments in AI tools and input from our community.

These guidelines primarily govern the use of AI for university operational and administrative tasks. In alignment with academic freedom, faculty determine and communicate the acceptable use of AI within their courses. Students should consult their faculty and course syllabi for specific policies on AI; note that policies may differ from course to course.

Rationale

While many AI tools are available, using official university-supported platforms helps mitigate potential risks associated with data privacy, intellectual property, cybersecurity, and the management of multiple redundant tools. Adhering to these guidelines ensures a consistent and secure approach to leveraging AI technologies across campus.

Key Guidelines for Using Official AI Tools

This section covers critical areas for the responsible use of approved AI tools:

  1. Data Privacy and Confidentiality:
    • When using official AI tools like Google Gemini or Zoom's AI features, prioritize the protection of sensitive university information. Never input confidential, proprietary, or personally identifiable information (PII), including data protected under FERPA, HIPAA, or PCI DSS and other regulated data, into any AI system, whether or not it is an approved tool. Even with official tools, exercise caution. (For more guidance, please review our article AI & Data Safety: A Quick Guide for Using AI Tools.)
    • You can maintain data privacy when using AI tools by limiting the exposure of sensitive information. The most effective method is to avoid inputting confidential or personally identifiable information altogether; for example, remove columns containing PII from your dataset. When that is not possible, use techniques like data masking or pseudonymization, which replace direct identifiers (such as names or Social Security numbers) with artificial ones, making the data difficult to trace back to an individual. For instance, a dataset used to train a medical AI could replace patient names with "Patient 1," "Patient 2," and so on. You can also use synthetic data generation to create new datasets with the same statistical properties as the original data but without any real-world personal information. For additional guidance, please contact IET support.
    • St. Edward's University supports the use of Zoom's approved AI features. We do not recommend or support any third-party add-ons for Zoom due to potential data security concerns. Refer to our AI Data Privacy & Cybersecurity article for more details.
  2. Approved Tools and Usage:
    • Employees are encouraged to utilize institutionally approved AI tools such as Google Gemini for general AI assistance and the AI functionalities within Zoom for meeting support.
    • A comprehensive list of officially supported AI tools is maintained and updated in this article below.
  3. Security Best Practices:
    • Protect your university accounts and devices when using AI tools. Follow all standard cybersecurity best practices, including strong passwords and multi-factor authentication.
  4. Intellectual Property and Copyright:
    • Be mindful of intellectual property rights when using AI tools. 
    • Understand that content generated by AI tools may have complex intellectual property implications. Review and confirm ownership rights for any AI-generated content used in university work.
  5. Accuracy and Verification:
    • AI outputs should always be critically assessed.
    • Never rely solely on information generated by AI tools without independent verification. Always review, fact-check, and validate AI-generated content, especially for accuracy and completeness.
  6. Bias and Fairness:
    • Be aware that AI tools can reflect biases present in their training data.
    • Identify and address potential biases in AI outputs to ensure equitable and fair outcomes in your work.
  7. Transparency and Disclosure:
    • When AI tools have significantly contributed to your work product, disclose their use appropriately. This includes, but is not limited to, academic assignments, research papers, and professional communications.
  8. Responsible Innovation:
    • We encourage exploration and responsible experimentation with approved AI tools to enhance efficiency and effectiveness in your roles, while always adhering to established university policies and guidelines.
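The pseudonymization approach described in guideline 1 can be sketched in a few lines of Python. This is a minimal illustration, not an endorsed university tool: the field names ("name", "email", "grade") and sample records are hypothetical, and real datasets may require more robust de-identification.

```python
# Minimal sketch: strip and pseudonymize direct identifiers before any
# data is shared with an AI tool. Field names here are hypothetical.

def pseudonymize(records, id_fields):
    """Replace direct identifiers with stable artificial labels.

    Returns the cleaned records plus a private mapping (real value ->
    pseudonym) that should stay local and never be shared with the tool.
    """
    mapping = {}   # real identifier -> pseudonym (keep this local!)
    cleaned = []
    for record in records:
        safe = dict(record)
        for field in id_fields:
            value = safe.pop(field, None)
            if value is not None:
                label = mapping.setdefault(value, f"Person {len(mapping) + 1}")
                safe["person_id"] = label
        cleaned.append(safe)
    return cleaned, mapping

records = [
    {"name": "Ada Lovelace", "email": "ada@example.edu", "grade": "A"},
    {"name": "Alan Turing", "email": "alan@example.edu", "grade": "B"},
]

# Drop the email column entirely, then pseudonymize the name column.
no_email = [{k: v for k, v in r.items() if k != "email"} for r in records]
safe_records, key = pseudonymize(no_email, id_fields=["name"])
print(safe_records)
# Only safe_records should ever be pasted into an AI tool; key stays local.
```

The same idea applies whether the data lives in a spreadsheet or a database export: remove what you can, pseudonymize what you must keep, and never share the mapping.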

 

Officially Supported University AI Tools:

Google Gemini

Zoom AI Companion

Adobe Creative Cloud

Rumi - Canvas integration (currently being piloted and evaluated)

Some university applications, such as Smartsheet, have developed features that incorporate AI. These applications meet our criteria for acceptable use of AI; the guidelines above still apply.

Use of Unsupported AI Tools:

While the university officially supports specific AI tools, we recognize that other AI platforms (such as ChatGPT, DeepSeek, Copilot, and others) are available for personal experimentation. Users may explore these tools independently; however, it is critical to understand that these platforms are not supported by the university.

Under no circumstances should St. Edward's University confidential, proprietary, or personally identifiable information (PII), including data protected under FERPA, HIPAA, or PCI DSS and other regulated data, be entered into any unsupported AI system. The university cannot guarantee the security or privacy of data entered into these tools, and their use falls outside university-approved guidelines for data handling and security. Exercise extreme caution and good judgment when interacting with any AI tool not officially endorsed by St. Edward's.

Caution with AI Browser Plugins and Third-Party Applications

Users should exercise extreme caution and generally avoid the use of AI-related browser plugins, extensions, or other third-party applications such as Grammarly and ComposeAI that are not explicitly supported by the university. These tools can pose significant security risks, including unauthorized data collection, exposure of sensitive information, and vulnerabilities to malware.

For Further Assistance

For questions regarding the acceptable use of AI tools or to request information on specific AI tools, please contact us.
 

Related Resources:

Getting Started with AI

AI Data Privacy & Cybersecurity

Acceptable Use of Information Technology Policy

Zoom AI Companion Meeting Summary KB