Elon Musk, who threatened to ban his employees and visitors from using Apple devices at the companies he runs, said in a June 10 post on X that he's no longer a fan of the iPhone, iPad and Mac because he doubts that Apple's new partnership with OpenAI, the maker of ChatGPT, will protect users' personal data.
But the situation prompting Musk — who is one of the world's richest men, the CEO of X, the head of a startup developing a ChatGPT rival called Grok and a co-founder of OpenAI, a company he was suing — may be more complicated than just worries about security. Musk, who has a reputation for bluster, is now being called out by members of his social media platform's fact-checking community, who say his claims are inaccurate and misleading.
And at least one security researcher said that Musk's security warning seems wrong, based on the information Apple and OpenAI have shared so far about how privacy between the companies will be handled.
Here’s what happened: On Monday, Apple CEO Tim Cook and his team took the stage at the company’s developers’ conference and announced generative AI features they’re bringing to iPhone, iPad and Mac users in the next versions of Apple’s operating system software this fall. The news included a deal to give Apple users access to OpenAI’s popular ChatGPT gen AI chatbot. Then Musk made his threat.
“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk posted to X, formerly known as Twitter, on Monday. “That is an unacceptable security violation.”
He also said in his tweets that visitors to his companies, which include Tesla, X, chatbot maker xAI, tunneling startup the Boring Company and rocket producer SpaceX, will have to “check their Apple devices at the door, where they will be stored in a Faraday cage.” Faraday cages are enclosures that shield anything placed inside them from electromagnetic fields.
What he didn’t offer was any evidence to back up his speculation about potential security risks. Instead, Musk, in a follow-up post on Monday, belittled Apple for inking a deal with an outside maker of a large language model (LLM), the technology that enables gen AI functionality. He also said he might make his own phone to “combat this,” again without detailing what “this” is.
“It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy,” Musk wrote. “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”
Musk, who has worked to portray himself as an advocate for users and humanity, also didn’t mention his legal beef with OpenAI, which is detailed in his February lawsuit. In that lawsuit, he claims the San Francisco-based startup, led by CEO Sam Altman, abandoned its founding mission to develop AI that will benefit humanity and instead has focused on chasing profit.
In response, OpenAI challenged Musk’s narrative in a lengthy blog post on its site on March 5, saying the billionaire investor was angry that his 2018 attempt to take over OpenAI was rebuffed. The post said Musk had demanded to become CEO and majority shareholder so he could turn the company into a “for-profit entity” himself. OpenAI’s post also referenced a few of Musk’s emails that seemed to support the company’s claim that Musk was aware OpenAI would need to become a for-profit company if it wanted to raise money to chase its dreams of building an artificial general intelligence (AGI), an AI that matches or surpasses human intelligence. In 2017, Musk said in an email that OpenAI would need to raise at least $1 billion in funding.
Musk withdrew his lawsuit Tuesday, “a day before a state judge in San Francisco was set to consider whether it should be dismissed,” The New York Times reported, adding that he could refile the suit in another state. Musk’s lawyers didn’t give a reason for their request to drop the lawsuit in their filing, according to CNN.
OpenAI declined to comment on Musk’s remarks about the Apple partnership or on his decision to drop his lawsuit.
For its part, Apple didn’t reply to CNET’s request for comment about how ChatGPT will be integrated into “Apple Intelligence,” the name it gave to its approach to adding generative AI-based features throughout its hardware and software. Those features include the ability to rewrite or summarize notes as well as Siri’s improved capability to understand the context of conversations.
In its description of the deal, OpenAI said users would be able to choose to access ChatGPT through Siri, Apple’s virtual assistant, and in new Writing Tools that will proofread your writing, rewrite copy in various styles and quickly summarize long sections of text.
Apple also said Monday it intends “to add support for other AI models in the future.”
During the WWDC keynote, Apple talked at length about the security and privacy aspects of its AI systems, including what it calls Private Cloud Compute for managing communications between personal devices and Apple’s remote servers working in the cloud. The iPhone maker has championed privacy as a core value when designing products and services, and it said that Apple Intelligence would set “a new standard for privacy in AI.” To help achieve this, Apple said certain AI-related tasks will be processed on-device, while more complex requests will be routed to the cloud in data centers running Apple-made chips. In either case, “data is not stored or made accessible to Apple and is only used to fulfill the user’s requests, and independent experts can verify this privacy,” the company said.
“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests,” Apple said in a press release.
The iPhone maker said to expect the ChatGPT integration in new software for its iPhone, iPad and Mac computers this fall, typically when it releases a new iPhone model. The ChatGPT integration is an optional feature, the company said, meaning users can choose to opt in or simply use OpenAI’s chatbot on its website instead. Apple also said its devices would be aware of personal data but would not collect it.
Because access to ChatGPT is an optional feature that users need to opt into, it doesn’t seem that Apple is in fact integrating OpenAI at the operating system level as Musk claims, according to Matthew Green, an associate professor of computer science who teaches cryptography at Johns Hopkins University.
“They’re saying you have to explicitly do this and opt into it and activate it,” Green said, citing the information provided so far by Apple and OpenAI. “It sounds like, first of all, no, they’re not embedding anything at the operating system level. They’re not making it any kind of default. They’re building in some optional feature in Siri that you explicitly have to turn on that would let you use ChatGPT. It sounds pretty much not like what Elon Musk is describing.”
That said, Green noted that he and others will be watching closely as the two companies share more details and move closer to Apple releasing its software.
Fact-checkers on X also pointed out that Musk’s posts labeling the Apple-OpenAI partnership “creepy spyware” were not factually correct, Forbes noted. “Users, citing Apple’s own introduction to the Apple Intelligence models, said Musk’s claim the company will hand data over to OpenAI is misleading as Apple has developed its own AI systems that will run on-device, or locally, and will use private cloud computing.”
In another community note, Forbes reported, fact-checkers wrote that Musk “misrepresents what was actually announced,” as “Apple Intelligence is Apple’s own creation” and access to ChatGPT “is entirely separate, and controlled by the user.”
Originally published June 11 at 3:33 a.m. PT.
Update, June 11 at 5:35 p.m. PT: Adds information about Musk’s ChatGPT rival, Grok, and lawsuit against OpenAI.
Update, June 11 at 6:25 p.m. PT: Adds comment from a security researcher and Musk’s decision to drop his OpenAI lawsuit.
Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.