Employee Guidelines for Use of Generative AI/LLM Models

Last Updated July 1, 2024

Generative Artificial Intelligence (AI) refers to a type of artificial intelligence technology that empowers individuals to create content, generate ideas, and streamline processes with efficiency and creativity. Generative AI tools such as ChatGPT, Copilot, Gemini, and Llama are becoming increasingly popular and are now part of our daily lives, including our work. New York Tech supports the responsible use of generative AI tools, but there are important considerations to keep in mind when using these tools, including policies regarding information security and data privacy, compliance, non-discrimination, copyright, and academic integrity.

New York Tech has established the following guidelines to help employees use generative AI tools responsibly and ethically, while protecting our intellectual property and institutional data. These guidelines apply to New York Tech employees, including faculty, staff, and student workers.

While employees are encouraged to use AI tools, the guidance below stresses the importance of staying informed and mindful when using this rapidly evolving technology. Keep these guidelines in mind as you do your work and apply them where necessary.

As generative AI continues to grow and change, these guidelines may be updated, so please check them periodically. If you are unsure about something in these guidelines or have security or privacy questions about the use of AI tools, please email the university's Information Security Office at infosec@nyit.edu. Regular and ongoing training will be provided to employees on this rapidly evolving topic.

Best Practices for Faculty, Staff, and Student Workers

  • You should think of generative AI as a complementary tool that can support you, rather than as a replacement for human expertise and judgment. You must maintain an active role in your work and continue to engage directly with relevant stakeholders.
  • Treat information given to an AI tool as if it were public, even if you are using Microsoft Copilot Enterprise through your New York Tech account (see below); do not share personal or sensitive information, including New York Tech intellectual property. Review New York Tech's Data Classification Matrix to avoid entering confidential or restricted information into any generative AI tool.
  • New York Tech faculty, staff, and students can leverage generative AI for work-related tasks through Microsoft Copilot Enterprise. Copilot offers the power of GPT-4 with commercial data protection from Microsoft. If you would like to use a ChatGPT-like tool to support your work at New York Tech, please use New York Tech's Copilot Enterprise tool. To get started:
    • Navigate to copilot.microsoft.com.
    • Sign in with your New York Tech account and password to enable commercial data protection.
    • Remember, do not upload any sensitive or personal information, as this data may not remain private.
  • For all other, non-work-related activity, use your personal email address and a unique password to sign up for an AI tool. DO NOT use your @nyit.edu email address, and remember: never reuse your New York Tech account password.
  • Employees, including student employees, are responsible for verifying the accuracy of AI-generated content and correcting any errors or inconsistencies, as AI tools sometimes generate responses that contain subtle but meaningful hallucinations, uncited intellectual property, factual errors, and bias.
  • If you are going to use AI tools to support your work at New York Tech, it is your responsibility to ensure that any AI-generated content used adheres to New York Tech's Non-Discrimination and Discriminatory Harassment Policy and all other relevant policies. Content must not propagate or perpetuate any form of discrimination or harassment. See below for examples of high-risk activities where the use of generative AI should be disclosed.
  • If a reliable source cannot be found to verify factual information generated by an AI tool, that information should not be used for purposes related to New York Tech.
  • Generative AI tools must not be used for any fraudulent or illegal activity. Such activity not only violates New York Tech policy but may also violate the law.

The activities listed below are examples, not a comprehensive list:

High Risk Activities

(Disclosure of generative AI use is recommended for the following activities.)
  • Drafting legal documents or engaging in other legal or regulatory activities with potential legal implications.
  • Preparation of publicly facing university statements, press releases, advertisements, promotions, or similar written material.
  • Generating insights or recommendations that directly influence crucial decisions, such as strategic planning, financial investments, or operational changes.
  • Creating, enhancing, or diversifying products or services by generating concepts, designs, and/or solutions.

Moderate Risk Activities

  • Drafting internal reports, memos, diagrams, renderings, animations, videos, presentations, etc.
  • Generating initial drafts or outlines for training materials, manuals, or educational content, with subsequent human review and editing.
  • Drafting routine correspondence, such as emails or meeting notes, with appropriate human oversight.
  • Summarizing or drawing conclusions from publicly available datasets.

Low Risk Activities

  • General writing assistance, such as improving grammar, style, or tone in non-critical documents.
  • Task planning, scheduling, or personal productivity purposes.
  • Summarizing or extracting key points from lengthy documents or materials.
  • Language translation or text conversion tasks, while verifying the accuracy of the output.