
For the first time in history, technology is not coming after blue collar jobs. In complete contrast to long-held beliefs about how advanced technology would always be deployed, Artificial Intelligence is coming for the creative and executive class . . . not the labor class.
The Industrial Revolution changed the nature of labor and manufacturing by shifting work to machines. For two centuries, workers looked over their shoulders wondering when the machines were going to come for their jobs. With the advent of high tech and robotics in the 1960s, this trend was expected to continue. Those workers most at risk were the ones who would be replaced by mechanical arms and bodies that could paint, sort packages, weld bolts, and remove defective products from an assembly line.
The widespread use of generative AI in the past three years has stopped that equation dead in its tracks. It is now the people in creative endeavors and in the executive suite who are looking over their shoulders. No one expected that AI would generate lifelike photographs of non-existent people, provide all the elements for illustrating corporate brochures, make music with nothing more than text prompts, generate lifelike singers and actors, and produce video without cameras, film, or human direction.
These capabilities are now a fact of life. Everyone who considers themselves part of the creative arts, notably those under the umbrella of the entertainment industry (film, music, publishing, and graphic arts), needs to be ready to deal with it. It doesn't matter if you're the person doing the creating or the person making executive decisions about how to promote the products of artistic creativity. You will all be affected sooner rather than later.
The simple reality is that AI is here and it is not going away. It is specific-use cases, however, that will determine whether creators, and the entertainment industry as a whole, view AI as good or bad.
Replacing humans is bad. Making work more efficient and cheaper is good.
Giving a machine creative control is bad. Speeding up production is good.
Making value judgments about creative work is bad. Helping with budgets and schedules is good.
These judgments will be made by people either using AI . . . or getting trampled by it. The latter fate will fall to those whom AI hits economically. This will occur because the value that AI tools offer to creators also allows those who are not typically considered creators to do very similar creative work.
For example: A schoolteacher can write, perform, and record a song for his class without knowing how to play an instrument. An accountant can create a logo for her business without hiring an illustrator. Any person with an idea for a film can produce many elements of it without actors and without knowing how to operate a camera.
AI can be democratizing in that it gives everyone the ability to do things they don't have the basic skills to do. Conversely, that means those with creative skills will be in less demand as individuals and organizations ask AI to create things instead. Why pay for artistry and craft when an AI will do it for free?
This is concerning, to say the least. Regardless of strikes, unions, and contract negotiations, the use of AI is going to increase across the entertainment industry. It will be gradual and small at first, but AI will eventually be recognized as yet another useful tech tool, in the same way that digital film editing software has become a requisite studio tool after being widely reviled by an industry that was built on celluloid film.
The argument about AI does not stop at its value as a tool, though. That is because AI has a notable and publicly nefarious side, or at least its developers do.
Artists are currently behind the proverbial eight ball when it comes to AI and the use of their original work, whether copyrighted or not. All generative AI models need massive amounts of data in order to learn how to generate films, images, books, and music. That data typically comes from the Internet and published works, including copyrighted materials ranging from photos to news reports. The vast majority of this data has been acquired (or "scraped") without paying the creators of the material.
Numerous lawsuits have already been filed by artists, news organizations, and publishers claiming that AI developers have scraped the net and utilized copyrighted and unauthorized works to train their apps. For their part, the AI companies claim that much of that information was readily available to the public, and therefore not protected. Even if it was protected, they argue that AI outputs are original interpretations of the data used for training. This is the "Warhol Defense," which contends that their AIs have modified original creations enough to create new works that are unique on their own (much like Andy Warhol's manipulation of news photos to create color-saturated screen-printed images).
There is currently no overarching legislation that protects artists from having their work used and repurposed by AI models. And AI companies are actively lobbying governments to look the other way. OpenAI and Google have both recently published statements urging the current U.S. administration to give AI companies freer rein to develop AI products, specifically the ability to use copyrighted work without permission. Included in the claims is an underlying rationale, added as a political fear factor, that this is the only way that U.S. companies can compete with China's AI efforts.
This will leave artists, whether aspiring, amateur, or professional, mired in uncertainty. Will they be able to protect their original work? Will they be able to own that work and perhaps profit from it? Or will it be wantonly appropriated and used as one of billions of inseparable ingredients in a revenue-generating dataset for an AI company?
The path forward for artists is akin to confronting a frenzied horse in the wilderness. One option is to look in fear at the unbridled power of the horse, and determine you are powerless to do anything in the face of its ferocity. You allow the horse to stomp you into the ground.
The other option is to grab hold of the horse and determine how you are going to get it to do your bidding. That means learning about the horse, figuring out what it is capable of, and then making it do what you want it to do. Not an easy proposition, but one with far more benefits and long-term potential than allowing yourself to be crushed underfoot.
This is a new era for artists and the entertainment industry. No one knows exactly how AI is going to change the landscape, but there will be change. That is inexorable at this point. Being aware that AI is coming, and being ready to use it as a tool, is the first step all artists should take in dealing with the technology.
Artists cannot expect governments or AI companies to have their best interests at heart. They must look out for their own interests, because no one else will. And if AI is going to be used against them, artists need to explore ways to use AI to benefit themselves.