Commentary

‘Childless Cat Lady’ Taylor Swift’s clapback warns against AI and deepfakes

September 13, 2024


  • The first presidential debate between Vice President Kamala Harris and former President Donald Trump appeared to sway Taylor Swift to endorse the vice president.
  • Swift, who has been subjected to a series of deceptive computer-generated images over the last year, addressed the harms of artificial intelligence (AI), especially deepfakes, in her endorsement.
  • How Congress and the new administration respond to a more mainstream call to action may finally address years of stalled legislation to protect celebrities and everyday people from deceptive and manipulative AI.
Taylor Swift's Instagram profile is seen in this photo illustration taken on September 11, 2024. The musician announced her endorsement of Kamala Harris for president shortly after the current vice president debated Donald Trump on live television. Jaap Arriens/NurPhoto.

There were many takeaways from the first debate between Vice President Kamala Harris and former President Donald J. Trump. While a number of expected policy issues dominated the conversation, Harris effectively filled in the blanks for voters on her strategies to fix the economy, restore reproductive rights, and address immigration and border security concerns. Many in the media have commented on her strong performance, but the crowning moment of the night was Taylor Swift’s immediate political endorsement of Harris. The American pop superstar has not only surpassed other artists in music awards and public reach but has also become one of the most influential figures for young people and others who are inspired by her talent and grit.

Many have been waiting for endorsements from well-known and influential artists like Swift and Beyoncé. My colleague Darrell West forecast that the blessings of these artists could shift the campaign in Harris’ favor. In recent months, Beyoncé has quietly supported the vice president by allowing her music to be played at campaign rallies. And immediately following the debate, Taylor Swift not only endorsed Harris for president but also signed her lengthy post as “Childless Cat Lady,” mimicking the widely ridiculed reference to women without children made by Trump’s running mate, J.D. Vance.

But within her endorsement, Swift also sent a loud message to Trump, those in Big Tech, and others who willingly use artificial intelligence (AI) to extract, clone, and mimic content and the likenesses of celebrities like her. She shared her own fears about AI after being a recent target of the Trump campaign and vowed to be more vocal in efforts to thwart misinformation—an issue that has continued to fester in the absence of congressional action.

Taylor Swift has been a target of deceptive AI

Swift has not been immune from deceptive AI-generated content. Earlier this year, she was the subject of explicit AI-generated images that circulated across social media platforms, mainly X (formerly known as Twitter). Those posts received more than 47 million views in less than 24 hours before the offending account was suspended, and by then the images had already been saved and shared through other channels online. Fake pornography and revenge porn on social media sites have long been used to embarrass female artists and business leaders. In the case of Swift’s sexually exploitative content, the hashtag #TaylorSwiftAI trended and prompted a rush on her behalf for legal removal, an effort that came too late given the propensity of consumers to download objectionable content and share false information with their own networks.

At the heart of the controversy may have been a group of online users operating on Telegram, a platform now facing legal scrutiny for allegedly facilitating illegal online activities. But what comes through in Swift’s denouncement of AI is that she has had enough of its harmful consequences, especially the disturbing deepfakes, which reveal a troubling side of the internet where anyone with commercially available AI software can create and disseminate nude, pornographic, and photorealistic images or other content of celebrities. Some would argue that increased access to commercial technology is good for the public as we seek to make online tools more readily available to everyday people. In my new book, “Digitally Invisible: How the Internet is Creating the New Underclass,” I suggest that the shift from analog to digital services not only enabled disruption but also enabled other, sometimes unforeseen, uses of technology. But broader access to these potentially harmful tools does not mean that people like Swift endorse the bad behaviors they make possible.

It was the Trump campaign’s more recent use of her likeness and image that sent her over the edge. Various AI-generated images of her and her fans, known as “Swifties,” falsely showed them endorsing Trump for president. Many of these photos, which showed young women in T-shirts displaying a Trump endorsement, started on Truth Social, Trump’s social media platform, and quickly spread to other platforms. But this type of inappropriate behavior by Trump allies and influencers was neither alarming nor unexpected. These AI-generated images are part of a long list of AI-powered election disinformation, including a post that depicted Harris on the beach with the now-deceased sexual predator Jeffrey Epstein. In the interest of not spreading more false information, I won’t be providing a link to this content.

In her social media post, Swift also made it clear that the deceptive and illegal use of her name and image by the Trump campaign was daunting. She shared that “[i]t really conjured up my fears around AI, and the dangers of spreading misinformation.” She followed with: “[t]he simplest way to combat misinformation is with the truth,” a statement that should prompt urgent action to tackle this issue.

What Congress and the global public should take away from Taylor Swift

For years, Congress has debated the most appropriate legislative measures to quell mis- and disinformation. In 2019, Congress introduced the first version of the DEEP FAKES Accountability Act, which would establish criminal penalties for individuals who produce deepfakes and other illegal content without the disclosure or digital watermarking needed to determine the content’s provenance, and which urged violators to remove such content. In 2022, Congress introduced the Educating against Misinformation and Disinformation Act, which proposed a commission to support information and media literacy resources. That same year, the Algorithmic Accountability Act was introduced, and reintroduced in 2023, to address the impacts of AI systems by bringing more transparency and improved auditing to automated systems.

In addition to several other bipartisan bills to address deceptive AI-generated content, in summer 2024 the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) was introduced in the U.S. Senate to protect a range of creators. The proposed bill would combat harmful deepfakes, for which election manipulation could be considered a use case, and would implement federal transparency guidance for marking, authenticating, and detecting AI-generated content. The bill specifically aims to protect journalists, actors, and artists from AI-driven theft of their creative content.

But with the presidential election only two months away, new legal protections are not in the immediate future. Instead, it is highly likely that there will be more, not less, misinformation created and leveraged to wage character attacks and accelerate voter manipulation. In fact, the web of online misinformation is so strong that Trump’s false claim during the debate about the eating habits of Haitian immigrants in a small Ohio town went viral the minute he shared the conspiracy theory.

In their new book, “Lies that Kill: A Citizen’s Guide to Disinformation,” co-authors Elaine Kamarck and Darrell West argue that everyday people need to better understand these falsehoods to effectively navigate the truth, and that the only way to do so is by educating citizens on what to look for and how to protect themselves. Taylor Swift may have started that process by prompting even legislators to react to this growing problem. If her call to action is not enough, her fans will definitely be chiming in next.