
Generative AI

In this research guide you will discover the inner workings of generative AI models, learn how to appropriately use and cite these models, and dive deep into both broader ethics and breaking news of AI.

Environmental Issues

Photo of a pond with low water level.

Photo credit: Mohamed Malik https://flickr.com/photos/malikdhadha/5460702417/

Like most computing, generative AI consumes a large amount of electricity every time it generates something. With computers running billions of calculations for each prompt, these systems produce significant greenhouse-gas emissions and require enormous amounts of water for cooling. One study [1] estimated that by 2027, generative AI will use 4.2-6.6 billion cubic meters of water per year (about five times as much water as Denmark uses annually).

Because of these impacts and the threat of climate change, many people think it is wrong to use generative AI at all. A single session of ChatGPT questions uses about half a liter of water [2], and these systems are likely to consume even more resources as they grow more advanced. Generative AI has a variety of pros and cons, but it is far more environmentally friendly to find answers yourself, or through other methods, when you can.

Copyright and Artistic Property

Generative AI inherently relies on very large amounts of pre-existing material, drawn from all sorts of places and in all kinds of formats. For example, ChatGPT gets its language material from social media, Wikipedia, academic papers, and many other sources. It seems to respond the way a human would because it has analyzed millions of human responses and tries to imitate them. Midjourney does something similar, but with images instead of text. Most AI companies are not very open about exactly where they source their material, for a variety of reasons.

As you might imagine, this can cause complications. Many artists, writers, and musicians do not want their work to be used by generative AI, or at least want to decide for themselves. While some generative AI presumably uses this source material in a completely legal fashion, some argue that the creators of that material didn't know what they were signing up for when they agreed to the fine print. Some artists have even applied special image filters that are invisible to humans but prevent AI from interpreting and learning from their work (though given the pace of AI development, these filters don't always work for long).


Prejudices, Biases and Inaccuracies

Illustration of a robot drawing itself.

Illustration credit: Electronic Frontier Foundation / Kit Walsh: https://www.eff.org/deeplinks/2023/04/how-we-think-about-copyright-and-ai-art-0

Since generative AI draws from sources created by humans, it copies both the good and the bad of humanity. On the plus side, that means it can draw on the many intelligent, creative, and insightful things that people have made, said, and done. On the other hand, it also absorbs all the biased, prejudiced, offensive, and just plain wrong things. Any systemic human issue will be replicated in generative AI unless it is corrected, including racism, sexism, homophobia, and anything else you can think of. And since AI takes its data from the internet, and AI now produces some of that data, some of what AI gives you no longer comes from people at all; instead, it comes from AI interpreting AI interpreting humans, and so on.


Overconfidence

One last note: if you've used generative AI much, you may have noticed that generative AI models, especially text-based ones like ChatGPT, are built to always give you an answer. This means that even if a model doesn't have the information you're looking for, there's a good chance it will give you an answer anyway, sometimes called an AI hallucination. This can turn out disastrously if you're not careful. In one of many examples, a lawyer was dropped from his case after he cited several supposed legal cases that turned out not to be real, just AI-generated [3]. The cases looked real, but when people actually searched for them, they had never existed at all. It's important to check the sources for anything ChatGPT tells you, and not to believe it just because it sounds realistic or seems to know what it's talking about.