Information about GenAI

A Note from the IGM Team

We recognize that information is evolving as technology does. While we don’t have all of the answers, we want to provide you with information to hopefully point you in the right direction so you can make the best decisions for yourself. If you have any concerns, please email us at [email protected]. We encourage feedback and learning from reputable sources to refine and enhance our policies and available resources over time.

Definitions for Clarity

Generative artificial intelligence (AKA Generative AI or GenAI) is a subfield of artificial intelligence that uses generative models to create AI-generated content.

AI-generated content is any text, image, video, audio, software code, or other forms of data created by GenAI algorithms trained on vast datasets.

Resources for more information: Wikipedia, Boatwright Memorial Library at University of Richmond

Common GenAI Tools/Platforms

  • Midjourney
  • ChatGPT (OpenAI)
  • DALL-E (OpenAI)
  • Stable Diffusion
  • Copilot Studio (Microsoft)
  • Bing AI (Microsoft)
  • Gemini (Google)
  • Vertex AI (Google)
  • Bedrock (Amazon)
  • Claude (Anthropic)
  • DeepSeek
  • Meta AI
  • Grok (X, formerly Twitter)
  • BERT
  • Veo (Google)
  • Sora (OpenAI)
  • LTX
  • AI-generation tools built into software such as Photoshop and Canva, among others

GenAI Use and Dataset Training Awareness Information

If you want to be more aware of how your data is used to train GenAI datasets, always read the terms of service (TOS) of any software or platform you use. We are not lawyers; we can only offer resources to hopefully point you in the right direction. If you’re concerned about a software or platform’s TOS regarding training AI algorithms on your data, seek a qualified attorney in technology transactions, intellectual property (IP), or AI compliance to help interpret and verify those terms.

Canva & Affinity:

Free-to-use design software
Subscription upgrades include GenAI tools

Affinity requires you to have a Canva account and to accept Canva’s privacy policy. While Canva and Affinity state that they don’t use your data for AI training and that “you are in control,” the Canva account privacy controls include settings that allow for AI learning: “AI-powered features can learn and improve with your general usage” and “AI-powered features can learn and improve with your content.” New accounts are usually opted in automatically. If you do not wish to have your data used for AI training in Affinity or Canva, make sure to opt out of these settings in both your Affinity account AND the parent Canva account.

Canva AI and Affinity Machine Learning
Canva Privacy 
Canva Terms of Use
Affinity Terms
Canva Data Processing Addendum 
Canva AI Product Terms
Canva Manage Privacy Settings Help Page 

Adobe Creative Suite:

Subscription-based design software
Gen-AI tools are offered within the software

Adobe states that it does not use customers’ content to train generative AI. However, several settings relate to opting out of your data being used for AI training. If you do not wish to have your data used by Adobe for AI training, make sure to opt out of these settings in your Adobe Creative Cloud account via a web browser. You will also need to change settings within each program, such as opting out of the “Product Improvement” section that allows Photoshop to use your images and data for training.

Adobe Generative AI User Guidelines
Adobe General Terms of Use 
Adobe Privacy Choices

Anti-AI Tools for Artist Protection

What are Glaze and Nightshade?

AI-protection tools developed for artists in 2023 by a team of computer scientists at the University of Chicago.

What is Glaze?
Glaze can be thought of as a defense against style mimicry: it disrupts AI learning models so they cannot accurately learn an artist’s style from protected images.

What is Nightshade?
Nightshade can be thought of as an offensive attack against AI learning models: the images it’s applied to are “poisoned” so that if someone uses them to train an AI system without the original creator’s permission, that AI’s training data becomes corrupted and unusable.

There are limitations to these tools. As AI algorithms grow more sophisticated, their effectiveness will decrease over time; they are not a guaranteed protection against all future AI training. University of Cambridge researchers have created a method called LightShed that can bypass protections like Glaze and Nightshade, demonstrating to creatives that art-protection tools have serious vulnerabilities. This research is a call to collaborate with other field experts to support the artistic community with tools that can withstand bypass methods like LightShed.

University of Cambridge on Artist-Centered Protection Strategies

AI Detection Tools and their Unreliability

Be aware that AI-detection tools (AI checkers) have consistently been inaccurate at identifying AI-generated content. False positives and false negatives make it difficult to reliably determine what is human-made using these tools.

According to a study from the University of Chicago, Hive may be the most accurate tool at distinguishing human-made art from AI-generated images, but it struggles with adversarial perturbations. Adversarial perturbations are small, often invisible, pixel-level alterations applied to images to deceive AI learning models. Glaze and Nightshade, for example, apply adversarial perturbations to protect your work and poison the learning well. Photo-compositing in programs like Photoshop can also mimic adversarial perturbations, leading AI-detection tools to classify human-made work as AI-generated.
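To make “pixel-level alterations” concrete, here is a minimal, hypothetical sketch in Python using NumPy. It applies random bounded noise to a stand-in image; real tools like Glaze and Nightshade compute their perturbations by optimizing against specific models rather than using random noise, so this only illustrates the idea of a change too small to see but present in nearly every pixel.

```python
import numpy as np

# Illustrative sketch only: random noise standing in for an
# adversarial perturbation. Real adversarial perturbations are
# computed by optimization, not drawn at random.

rng = np.random.default_rng(0)

# A stand-in for an 8-bit RGB image (values 0-255).
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

# Bound the change per pixel channel by a small epsilon:
# no value moves by more than 2 out of 255, which is
# visually imperceptible.
epsilon = 2.0
perturbation = rng.uniform(-epsilon, epsilon, size=image.shape)

# Apply the perturbation and keep values in the valid 0-255 range.
perturbed = np.clip(image + perturbation, 0, 255)

# Each individual change is tiny...
max_change = np.abs(perturbed - image).max()
print(f"max per-pixel change: {max_change:.2f} / 255")

# ...yet almost every pixel value differs slightly, which is
# what can confuse both detectors and training pipelines.
changed_fraction = np.mean(perturbed != image)
print(f"fraction of values altered: {changed_fraction:.2f}")
```

The point of the sketch is the bound: a human sees an unchanged image, while software comparing pixel values finds nearly every value altered.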

Because of these inaccuracies, using AI-detection tools can lead to falsely accusing someone of using GenAI in their work, which ultimately does more harm than good.

University of Chicago Study on Distinguishing Human-made Art from AI-generated Images
Publication on Adversarial Machine Learning by NIST 
The Issue With AI-detection Tools in Photojournalism
The Issue With AI-detection Tools in Education/Writing 
University of Richmond, Considerations and Limitations of AI-detection Tools