Data & Privacy

This guidance is provided to UM community members about the use of Artificial Intelligence (AI), a rapidly evolving technology. The guidance will be updated as the university reviews the role of AI at UM, and explores formal contracts and agreements with AI vendors.

ChatGPT and similar AI programs present myriad opportunities and challenges for society, including at UM. While there are many chances to experiment and innovate with these tools at the university, UM does not currently have a contract or agreement with any AI provider. This means that standardized UM security and privacy provisions do not apply to this technology.

Do not use ChatGPT or other AI tools with sensitive information, such as student records regulated by FERPA, human subjects research data, health information, HR records, etc.

Also, be aware that OpenAI's Usage Policies disallow the use of its products for certain other activities.

This guidance will change as UM engages in broader institutional review and analysis. In the meantime, please do your part to use AI responsibly, including reviewing any data before you input it to ensure it meets the guidance above.

Additional equity, ethical, and accessibility concerns

At the current time, many AI tools are free, but this may change in the future. If you decide to incorporate these tools into your assignments, choose options that all students can access.

Consider avoiding tasks that disproportionately benefit students who can pay for expensive AI tools.

It is important to educate students on the limitations and potential biases of AI-generated content and to encourage responsible use. AI tools are only as unbiased as the data they are trained on: if the training data includes bias, the results the AI generates will be biased as well. In this way, AI tools can perpetuate the biases present in their original training sets, leading to discrimination against certain groups of people and reinforcing pre-existing inequalities and stereotypes.

Just as AI tools can perpetuate bias, they can also perpetuate misinformation. AI tools can generate content that is inaccurate, misleading, or harmful, creating or spreading misinformation based on the data they were trained on.

AI tools that generate text, such as ChatGPT, produce outputs based on the massive datasets of text they were trained on. As a result, it can be difficult to determine who is responsible for the content created, who its author is, and whether there is any accountability for the results.

Given the wide variety of current and in-development AI tools, it is important to note that not every AI tool has been designed to be accessible to all users. Verify that any tool you require meets accessibility standards before incorporating it into coursework.