California Governor Gavin Newsom has the opportunity to put his name on the map for an incredibly important issue by setting a precedent on artificial intelligence (AI) regulation that the entire nation has been sitting on the sidelines waiting for.
California Assembly Bill 2602 (AB 2602) is currently sitting on Gov. Newsom’s desk, awaiting either his veto or his signature to enact the bill into state law. The bill would protect artists and performers from the unauthorized use of AI to create digital replicas of their voices or likenesses.
Not only would the bill safeguard “personal or professional services,” but it would also bolster pre-existing laws that “[prohibit] an employer from requiring an employee or applicant for employment to agree, in writing, to any term or condition that is known by the employer to be illegal.”
The main objective of the bill is to “provide that a provision in an agreement between an individual and any other person for the performance of personal or professional services is unenforceable only as it relates to a new performance, fixed on or after Jan. 1, 2025 by a digital replica of the individual if the provision meets specified conditions relating to the use of a digital replica of the voice or likeness of an individual in lieu of the work of the individual.” Essentially, the bill gives individuals in California legal protections against organizations or private citizens who would use the voice or image of any celebrity, performer, artist or private citizen in a professional setting.
This is important in the age of TikTok and Instagram Reels, when users have been captivated by “scrollbait” dominated by AI-generated content. One instance is among the many videos on TikTok posted by the account “@tmparagon,” where the voices of Presidents Donald Trump, Barack Obama and Joe Biden are manipulated by AI to make users believe the three are playing a first-person shooter game while bickering back and forth with each other. Celebrities like MrBeast and Tom Hanks have faced similar issues, with their images used without prior approval or knowledge in videos uploaded online.
While some of these videos can be funny, they are dangerous for several reasons. The bottom line is that these videos are deceptive and can portray a reality that never happened. Not only are they concerning because of their defamatory nature, but AI-generated content could also be used to spread misinformation or fake news.
As it is, news is being censored. Google has recently faced accusations that it has been censoring media sites and articles, essentially hiding some topics or websites to promote others. Facebook’s Mark Zuckerberg has also recently admitted that, following government pressure, Meta censored or hid reports during the 2020 election season.
Pair the issues and allegations against search engines and other social media platforms with how AI can be destructive to anyone’s character or name, and the California bill comes at a crucial time, when malicious content is rampant online. What’s worse is that content creators can even profit from the chaos.
It is no secret that people can make money off of TikTok. Content creators earn income through tactics like affiliate marketing, posting sponsored content and promoting services or products in the TikTok Shop.
In the United States, adults spend more than two hours a day on social media, and TikTok users spend a little under an hour a day scrolling the platform. Whether people use these platforms to unwind after a long day of school or work, or “doomscroll” to start their day, social media use is a massive part of life. What is most daunting about this fact of the American lifestyle is how many people receive their news from social media.
Standards haven’t been set for content creators, and they certainly haven’t been held accountable. Take, for instance, how outside money pouring in and lining creators’ pockets can influence them to script their content and produce garbage. For people concerned about election integrity, this sets off alarm bells. Money influencing what people believe to be true shouldn’t be an issue, yet here we are.
Research any topic and I’d bet you $10 that once you’re fully informed, you’ll find that your daily digest from your favorite TikTok news-giver has left out crucial facts you’d consider important. Some of this “news” is delivered to users through AI text-to-speech applications in videos. Though these “digital replicas” have the potential to do good, they can also do serious harm. While this bill doesn’t tackle every issue relating to social media and AI, it could be the first domino to fall toward addressing them.
The California bill defines a “digital replica” as a “computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, or audiovisual work in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered.” If anything can be said about the bill, it is that its importance doesn’t lie solely in setting a playbook for how other states should handle the issue of AI, but in doing something the government tends to fail to do: protect the individual and the public. This bill sets new standards, protects the public and those who could fall victim to the potential evils of AI, and should send shockwaves that spark similar and further legislation.
Michael Duke, GSB ’26, is undecided from Scottsdale, Ariz.