Apple quietly showcased major advances in machine learning (ML) and artificial intelligence (AI) at its annual developers conference, WWDC, underscoring its commitment to furthering these technologies.
Apple has remained relatively quiet on this front, in contrast to Microsoft, Google, and companies like OpenAI, which have embraced cutting-edge ML technologies such as chatbots and generative AI. Still, the company unveiled several notable AI capabilities during the conference, including an enhanced iPhone autocorrect driven by a machine learning system built on a transformer language model, the same technique that underpins ChatGPT. According to Apple, autocorrect will become more accurate by learning from users' typing habits.
The keyboard will even learn uncommon words such as "ducking," a typical autocorrect substitution for a swear word, according to Craig Federighi, Apple's head of software.
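The idea of a model adapting corrections to a user's own habits can be illustrated with a toy sketch. This is not Apple's implementation: where Apple uses a transformer language model, the example below uses a tiny bigram counter, and the `ToyAutocorrect` class and its methods are invented for illustration. The shared principle is that candidate corrections are ranked by how likely they are given the user's own typing history.

```python
from collections import defaultdict

# Toy illustration only -- NOT Apple's implementation. A real autocorrect
# would use a transformer language model over long contexts; this bigram
# counter just shows the principle of ranking candidate corrections by
# what the user habitually types.

class ToyAutocorrect:
    def __init__(self):
        # bigrams[prev][next] = how often `next` followed `prev` in the
        # user's typing history
        self.bigrams = defaultdict(lambda: defaultdict(int))

    def learn(self, text):
        """Update counts from a sentence the user actually typed."""
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def choose(self, prev_word, candidates):
        """Pick the candidate correction most often seen after prev_word."""
        counts = self.bigrams[prev_word.lower()]
        return max(candidates, key=lambda w: counts[w.lower()])

model = ToyAutocorrect()
model.learn("what the ducking heck")
model.learn("that was ducking great")
# The user's habitual word wins over the dictionary substitution:
print(model.choose("was", ["docking", "ducking"]))  # -> ducking
```

A production system would replace the bigram counts with a neural model's conditional probabilities, but the personalization loop (observe the user's typing, re-rank corrections accordingly) is the same.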
While the debut of the cutting-edge Vision Pro augmented reality headset was the event's main attraction, Apple also showed its commitment to advancing machine learning and AI. ChatGPT gained over 100 million users within two months of its debut last year; Apple is now applying the same underlying technology to improve a feature that more than 1 billion iPhone customers use every day.
Apple places a higher priority on on-device AI models than its rivals, which depend on large-scale models backed by server farms, supercomputers, and vast quantities of data. The new autocorrect feature is notable because it runs entirely on the iPhone, unlike ChatGPT, which needs hundreds of expensive GPUs working in parallel.
On-device AI sidesteps many of the data privacy issues that plague cloud-based AI. By running the model on the device itself, Apple avoids the need for large-scale data collection.
This strategy is closely tied to Apple's complete control over its hardware stack, including its own silicon chips. Each year, the company adds new AI circuitry and GPUs to its chips, enabling it to keep pace with developments and adopt fresh techniques.
Apple’s Pragmatic AI Approach
In general, Apple steers clear of the term "artificial intelligence," opting instead for the more academic "machine learning" or simply emphasizing the features the technology makes possible.
Apple, a product-centric company with a long history of secrecy, places greater emphasis on showcasing the feature itself than on detailing the AI model, the training data, or potential future improvements. This contrasts with other leading AI firms, whose leaders often come from academic backgrounds.
At WWDC, Apple announced an AirPods Pro improvement that automatically turns off noise cancellation while the user is conversing. Although Apple did not label it a machine learning feature, it is a difficult problem that requires AI models to solve.
Among the striking additions, Apple's new Digital Persona feature stands out: it takes a 3D scan of the user's face and body and digitally recreates their likeness during videoconferences conducted while wearing the Vision Pro headset.
Apple also highlighted a number of other new capabilities that draw on its experience with neural networks, such as the ability to recognize fillable fields in PDF documents.
Notably, the audience reacted enthusiastically to a machine learning feature that lets the iPhone distinguish the user's pet from other cats and dogs, automatically gathering all of the user's pet photos into a dedicated folder.
Thanks to its commitment to improving AI and ML technologies, Apple is well positioned to deliver cutting-edge features while preserving its focus on user privacy and leveraging its hardware capabilities to maximize on-device performance.