
Navigating the Ethics of Generative AI 

Like the Industrial Revolution, AI is set to overhaul our society and change how we live and work. Its rapid evolution has provoked mixed feelings about whether its impact on the world will be positive or negative. The divide comes down to how wary we should be of the power of AI, especially generative AI, which now underpins everyday applications such as ChatGPT. Take large language models (LLMs), a form of generative AI used to create new content from text, audio, images, and code. Physicist Michio Kaku has likened LLMs to mere augmented recording devices and believes they are getting too much attention. By contrast, AI researcher Eliezer Yudkowsky warns that we do not fully understand the inner workings of these models and that such systems could be on the verge of achieving superhuman intelligence, far surpassing that of humanity. At the center of the debate around generative AI lie questions of ethics and the risks it could introduce.

What Is Generative AI?

A post by MongoDB on AI describes how generative AI is based on foundation models that can perform tasks like classification, sentence completion, image and voice generation, and the production of synthetic data. These foundation models are fine-tuned to suit the specific generative task at hand. This allows generative AI to answer chatbot queries, generate entire pieces of text in ChatGPT, and personalize the customer experience on e-commerce platforms. Studies show that approximately 45% of the US population already uses generative AI, a figure that will only increase as the technology is further integrated into modern society.

What Are the Issues Surrounding the Ethics of Generative AI?

The Circulation of Harmful Content and Misinformation

The rise of fake and harmful content has become one of the major ethical issues surrounding AI. Generative AI allows users to create content that blurs the line between reality and fabrication. Already, it is often impossible to tell whether a piece of online content originated from a human or a machine. Reports show that 67% of Americans have encountered fake news on social media. One of the biggest issues is the sheer volume of harmful content and misinformation that can be spread. The good news is that companies like Facebook have initiated projects to tackle harmful content on their platforms.

Workforce Roles 

Generative AI in the workplace has raised significant ethical issues for employers and employees. The technology can already complete daily tasks such as writing, content creation, summarization, and coding. While many companies are taking the ethical route and retraining their staff to adopt generative AI in the workplace, a June 2023 McKinsey report outlined how generative AI is set to automate 60% to 70% of employee workloads. Very soon, many companies will face the ethical dilemma of whether to hire employees or cut costs and use AI. This is why we suggest that companies employ a workers' compensation attorney to help employees feel safe and protected in the workplace as the integration of AI increases.

Racial and Gender Bias 

As Roberto Hortal explained in a recent post, there is a big problem with AI and cultural understanding. He argues that AI systems, particularly large language models, are trained on vast amounts of data scraped from the internet, and much of that data carries cultural biases. An article by Bloomberg on AI and bias reported that Stable Diffusion, a generative AI model that produces unique photorealistic images from text and image prompts, exhibited worse gender and racial bias than the real world. Bloomberg found that images generated for high-paying jobs were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like "fast-food worker" and "social worker." Stable Diffusion also generated almost three times as many images of men in successful jobs as of women.
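
The kind of bias audit Bloomberg describes boils down to tallying labels across generated images per occupation prompt. The sketch below illustrates the counting step only; the prompts and gender labels are hypothetical examples, not Bloomberg's data, and a real audit would obtain labels from human annotators or a classifier rather than hand-written lists.

```python
from collections import Counter

# Hypothetical gender labels for images generated from occupation
# prompts (illustrative only, not real audit data).
generations = {
    "CEO": ["man", "man", "man", "man", "man", "woman"],
    "doctor": ["man", "man", "man", "woman"],
    "fast-food worker": ["woman", "man", "woman"],
}

def men_to_women_ratio(labels):
    """Ratio of images labeled 'man' to images labeled 'woman'."""
    counts = Counter(labels)
    return counts["man"] / max(counts["woman"], 1)

for prompt, labels in generations.items():
    print(f"{prompt}: {men_to_women_ratio(labels):.2f}")
```

A disparity shows up as ratios far from 1.0 that skew in opposite directions for high- and low-paying prompts.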

Environmental Impact

One issue surrounding generative AI that doesn't get as much coverage as the above points is its environmental impact. One study estimated that ChatGPT received roughly 590 million visits in January 2023 and that, at approximately five questions per user, answering those queries consumed as much energy as 175,000 people would over the same period. As climate change becomes one of the most pressing issues in the world, the advance of generative AI presents an ethical problem. Some progress is being made this year: the US government introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill would direct the National Institute of Standards and Technology to collaborate with academia, industry, and civil society to develop standards for assessing AI's environmental impact. It would also create a voluntary reporting framework for AI developers and operators.
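
Taken at face value, the study's figures imply a query volume that is easy to back out. The sketch below simply restates the numbers from the text; the per-user question count is the study's own assumption, and no per-query energy figure is derived since the study did not report one.

```python
# Figures reported for January 2023 by the study cited above.
visits = 590_000_000       # monthly visits to ChatGPT
questions_per_user = 5     # the study's assumed questions per visitor

# Implied number of queries answered that month.
total_queries = visits * questions_per_user
print(f"Implied queries in January 2023: {total_queries:,}")
# The study equates the energy used at this scale with the
# consumption of roughly 175,000 people over the same period.
```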

While generative AI is ushering in many positive changes to the world, society is still grappling with navigating the ethical issues surrounding it. Hopefully, regulations will be introduced to protect individuals and institutions so that AI can be used as a force for good. 

Author

  • I'm Erika Balla, a Hungarian from Romania with a passion for both graphic design and content writing. After completing my studies in graphic design, I discovered my second passion in content writing, particularly in crafting well-researched, technical articles. I find joy in dedicating hours to reading magazines and collecting materials that fuel the creation of my articles. What sets me apart is my love for precision and aesthetics. I strive to deliver high-quality content that not only educates but also engages readers with its visual appeal.

