Commentary

Will Apple’s foray into the AI space be ‘for the rest of us’?

August 22, 2024


  • Apple is launching a new artificial intelligence product and promises that data sent out for processing will be encrypted and deleted immediately, but any information leaving a device inherently carries some security risks.
  • The potential for Apple’s algorithm to repeat or amplify human biases is a crucial factor to consider, especially given the known limitations of today’s generative AI models.
  • Exaggerated expectations surrounding AI implementations could lead to disappointment, as the hype may not align with the actual capabilities and limitations of the technology.
Apple iOS 18 is being displayed on a smartphone, with Apple Intelligence in the background, in this photo illustration in Brussels, Belgium, on June 11, 2024. Jonathan Raa/NurPhoto

The last few months have seen the debut of Apple’s latest venture, Apple Intelligence, the company’s effort to compete with other major corporations in artificial intelligence (AI) development. Unveiled on June 10, 2024, at the highly anticipated Worldwide Developers Conference (WWDC) at Apple Park in Cupertino, Apple Intelligence is what the company is calling “AI for the rest of us,” an allusion to a 1984 Macintosh commercial that called the device “a computer for the rest of us.” However, given the implications of a widespread personalized AI rollout for privacy, data collection, and bias, whether Apple Intelligence will truly be “for the rest of us” remains to be seen.

Creating technology “for the rest of us” is a sentiment that runs through many of Apple’s historic moves. With the introduction of the iPhone in 2007, the company bypassed the traditional smartphone buyers (business users and enthusiasts) and took the product directly to the mass market. In May 2023, the company’s CEO, Tim Cook, was quoted saying that “[a]t Apple, we’ve always believed that the best technology is technology built for everyone.” Now, Apple has taken on the challenge of creating generative AI “for the rest of us.”

The widespread adoption of generative AI has the potential to revolutionize public life, and Apple’s integration of the technology into its phones is no exception. A 2024 McKinsey study found that regular personal use of generative AI tools outside of work remains modest across generations: 20% among individuals born in 1964 or earlier, 16% among those born between 1965 and 1980, and 17% among those born between 1981 and 1996.

The integration of AI into Apple devices could dramatically reshape the role of generative AI in everyday life, making replying to in-depth emails, finding pictures of a user’s cat in a sweater, or planning the itinerary of a future road trip a one-click task. Embedding these tools in the already ubiquitous smartphone would likely make generative AI more accessible and drive up usage rates across all age groups.

Why Apple Intelligence may not be “for the rest of us”

However, it is crucial to consider the potential risks that come with the extensive deployment of commercial generative AI. A study conducted by the Polarization Research Lab on public opinions of AI, misinformation, and democracy leading up to the 2024 election reported that 65.1% of Americans are worried that AI will harm personal privacy. Apple is aware of this and has made prioritizing privacy an essential part of its business model. Advertisements from 2019 stressing privacy, public statements calling privacy a fundamental human right, and even a refusal to help the FBI bypass iPhone security measures for the sake of gathering intelligence are all ways Apple has demonstrated its commitment to privacy to consumers.

The announcement of Apple Intelligence is no different. In the keynote, Senior Vice President of Software Engineering Craig Federighi made a point of highlighting how the product integrates privacy throughout its functions. Apple is taking a twofold approach to generative AI: on-device execution for more common tasks, like schedule organization and call transcription, and cloud outsourcing for more complex ones, such as creating a custom bedtime story for a six-year-old who loves butterflies and solving riddles. However, it is still unclear where the line between simple and complex requests lies, and which requests will be sent out to external (and potentially third-party) servers.
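
To make that ambiguity concrete, here is a minimal sketch of the kind of routing decision such a split implies, assuming a simple size-based heuristic. Apple has not published how it draws the line between on-device and cloud requests; the threshold, names, and examples below are hypothetical.

```python
# A minimal sketch, assuming a simple size-based heuristic. Apple has not
# published how it distinguishes on-device from cloud requests; the
# threshold, names, and examples here are hypothetical.

from dataclasses import dataclass

@dataclass
class AIRequest:
    prompt: str
    estimated_tokens: int  # rough proxy for task complexity

ON_DEVICE_BUDGET = 512  # assumed cutoff; not a published Apple figure

def route(request: AIRequest) -> str:
    """Decide where a generative AI request is processed."""
    if request.estimated_tokens <= ON_DEVICE_BUDGET:
        return "on-device"   # e.g., call transcription, schedule organization
    return "cloud"           # e.g., a custom bedtime story

print(route(AIRequest("Transcribe this call", 180)))         # -> on-device
print(route(AIRequest("Write a long bedtime story", 2048)))  # -> cloud
```

Wherever that cutoff sits, everything above it leaves the device, which is where the privacy question begins.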

Further, Apple claims that any data sent off the device will be scrambled through encryption and deleted immediately. But, as Matthew Green, security researcher and associate professor of computer science at Johns Hopkins University, noted, “Anything that leaves your device is inherently less secure.”
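
Green’s caveat can be illustrated with a short sketch, not of Apple’s actual protocol, but of the general pattern, using Python’s widely available cryptography library: encryption protects the request in transit, yet the server must decrypt it before a model can process it, so “immediate deletion” remains a server-side promise the user cannot verify.

```python
# A hedged illustration, not Apple's actual protocol: encrypting a request
# before it leaves the device, using the Python "cryptography" library.
# Payload and key handling are deliberately simplified.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real system, keys would be negotiated per session
cipher = Fernet(key)

payload = b"Create a bedtime story about butterflies and riddles"
token = cipher.encrypt(payload)   # ciphertext is what crosses the network

# Server side: the model cannot act on ciphertext, so the request must be
# decrypted before processing. Deleting this plaintext immediately is a
# promise the user has no way to audit from the device.
plaintext = cipher.decrypt(token)
print(plaintext.decode())
```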

Data security

It is for these reasons that the development process for future versions of Apple Intelligence remains uncertain. When an AI model is trained, the underlying algorithm iterates over training data to fine-tune its intended functions. The new Apple Intelligence model promises to use personal context to make the AI experience that much more seamless and integrated into a user’s everyday life. In the keynote, Apple noted that a user’s iOS device will be able to link information between applications, meaning that if Siri were asked how to get to an event efficiently from work, it could go into the user’s messages to gather the information needed to make that assessment—all to “simplify and accelerate everyday tasks.” The company did say that measures are in place so that Apple employees cannot access user data gathered through its AI platform.
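
As a toy sketch of that iterative training process, the example below fits a one-parameter model by repeatedly adjusting it against example data. The data, model, and scale are illustrative only; the sources and scale of Apple’s actual training corpus are precisely what remains undisclosed.

```python
# A toy sketch of iterative training: a one-parameter model is repeatedly
# nudged to fit example data. Purely illustrative; not Apple's pipeline.

training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0      # the single model parameter
lr = 0.05    # learning rate

for epoch in range(100):          # iterate over the data many times
    for x, y in training_data:
        error = w * x - y         # prediction error on this example
        w -= lr * error * x       # nudge the parameter to reduce squared error

print(f"learned weight: {w:.3f}")  # converges toward 2.0
```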

But, looking toward the future, when Apple is developing new versions of its AI model, what training data will it use if not that which it collects from its own devices? A report investigating trends in the quantity of human-generated data used to train large language models projected that the stock of human-generated text data will likely be fully exhausted at some point between 2026 and 2032. Public training data is running out, and if Apple is not collecting its users’ inputs to train future models, it is likely to run into this problem down the line. Thus, Apple’s privacy claims, while idealistic, are not entirely foolproof when considering the long-term demands of its AI development.

It is also unclear where Apple’s training data for the current model comes from, or whether the model was developed on equitable and inclusive datasets. AI algorithms can have biases embedded in them when they are trained on standardized data that lacks the diversity needed to promote inclusivity and mitigate bias. This is particularly important because Apple Intelligence is a computer model that will make inferences about people: their attributes, preferences, likely future behaviors, and objects related to them. It is not clear whether Apple’s algorithm will repeat or amplify human biases, err on the side of mainstream inferences about human behavior, or both. With how widespread this planned deployment of generative AI is, these are crucial factors to consider when proposing an AI product “for the rest of us.”
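
A toy, hypothetical example shows how skew in training data becomes skew in inference: a naive loss-minimizing model trained on an unbalanced dataset simply defaults to the majority label for every user. The labels and proportions below are invented for illustration.

```python
# A toy, hypothetical example of bias amplification: with no corrective
# signal, a loss-minimizing model defaults to the majority label for
# everyone, erasing the minority entirely.

from collections import Counter

# Illustrative training set: 90% of examples carry the "mainstream" label.
training_labels = ["mainstream"] * 90 + ["minority"] * 10

majority_label, _ = Counter(training_labels).most_common(1)[0]

def predict(user_profile: dict) -> str:
    # Ignores the individual entirely; the majority guess is 90% "accurate"
    # on data like the training set, so naive training reinforces it.
    return majority_label

print(predict({"age": 34, "interests": ["butterflies"]}))  # -> "mainstream"
```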

Navigating the hype

Dr. Kevin LaGrandeur’s paper on the consequences of AI hype provides valuable insight into the potential implications of increased commercialization of AI products. He outlines how the hype surrounding AI can distort expectations, leading to inappropriate reliance on the technology and potential societal harm. Apple’s announcement of its generative AI model and its capabilities has the potential to fall into this trap. LaGrandeur warns against the exaggerated expectations associated with AI implementations, whose shortcomings mirror the Gartner Hype Cycle, in which a technology passes through a “peak of inflated expectations” and a subsequent trough of disillusionment before reaching a “plateau of productivity.” Because Apple’s technologies will not be available to the public until later this fall, we cannot yet be sure how the product’s real capabilities will measure up, or what its implications will be for user privacy and the other broad protections that shield users from harm.