When we think of AI at Apple, the first thing most of us focus on is Siri, and that often isn’t a very positive association. However, Apple is doing a lot more with AI and Machine Learning today under the hood. While many of these newer features are less obvious, they are making a positive impact on Apple’s products.
Samuel Axon of Ars Technica took an in-depth look at Apple’s approach to AI and Machine Learning in a new article titled Here’s why Apple believes it’s an AI leader—and why it says critics have it all wrong. I’ve read several articles covering Apple and AI that were little more than hit pieces or rushes to judgment, and I can somewhat understand why; I’ve called Apple out for its lack of progress in these areas more than once myself. But this article is different. It covers Apple’s new approach to AI and ML extensively and provides many examples of how the company is implementing these technologies today.
The best thing about this article is that Mr Axon published a lot of material from his interviews with John Giannandrea, Apple’s Senior Vice President for Machine Learning and AI Strategy, as well as Bob Borchers, Apple’s VP of Product Marketing. These interviews shed light on what is going on at Apple today, as well as on the company’s vision for its future in AI and ML.
In case you don’t remember, Apple poached John Giannandrea from Google a couple of years ago, where he had made a name for himself heading up the company’s machine learning and AI efforts. Since making the move, he has risen to the rank of Senior Vice President, which means that he reports directly to Tim Cook. However, since a lot of Apple’s AI and ML work has been under the hood, it’s been hard to tell exactly what Mr Giannandrea has been up to. This article changes that by letting us hear from the man himself.
If you are at all interested in where Apple is going with AI and Machine Learning, then you should definitely read the article for yourself and see what Mr Giannandrea has to say about what he and his team are working on. I will just say that, after reading this piece, I think Apple’s AI efforts are in good hands. It’s obvious that Apple has made great strides in using AI and ML to deliver practical features that improve its products and services.
One example given in the article is the new digital ink features in iPadOS 14. The new handwriting recognition in iPadOS is powered by machine learning. I appreciate this because I remember how difficult handwriting recognition could be to work with 20 years ago, when training was crude and you typically had to change the way you wrote to suit the limitations of your device. Machine learning changes that, and I can tell you that, based on my testing, it works very well on the iPad Pro.
Another huge area for AI and ML is mobile photography. Physics tells us that the small lens in a smartphone can never match the performance of a traditional DSLR lens. However, thanks to computational photography, a smartphone can deliver many of the same features, as well as some things a traditional camera and lens never could. This “magic” happens at every stage of the process: the camera begins capturing before you even take the shot, then processes the image, automatically sorts it into an album, auto-edits it, and recognizes and tags faces. Apple was said to be falling behind in all of these areas until last year, when it made a huge leap forward to rejoin the pack, and even pass it in some ways. I have little doubt that Giannandrea and his team are the reason for that.
One theme the article touches on is key to why Mr Giannandrea decided to make the move from Google to Apple. He made it clear that he saw an opportunity to do unique things with AI and ML at Apple because of the company’s control over the entire technology stack: Apple builds the hardware, develops the software, and even designs the processors and some of the other internals that power its devices. I think we can see this man’s vision in the rapid development of the Neural Engine, the dedicated AI and ML silicon in all of Apple’s mobile devices today, and coming to the Mac starting later this year. I’m sure it was attractive to be able to work hand-in-hand with Apple’s silicon team to design this integration of AI and ML from the bottom up.
Again, read the article, because this topic is discussed in depth. I just think it’s exciting, because for so long it looked like Apple didn’t care about AI and ML and didn’t seem to understand how far behind it was falling. It’s easy to think Apple is still that far behind because Siri hasn’t greatly improved, and Siri is Apple’s most public-facing AI feature. However, if you look at how successful Mr Giannandrea’s team has been with the Neural Engine, Core ML, and all of the new AI and Machine Learning features throughout iOS and its variants, there is reason for optimism.
Between the progress Apple has made over the last two to three years and the acquisitions it has made to bolster its AI and ML teams, Apple’s future in these areas looks bright. The practical features the company is successfully delivering are exactly the kinds of things that could feed into a pair of Augmented Reality glasses and make them truly useful in a few years. Now that’s really exciting. And who knows? Maybe one day, Mr Giannandrea and his team will finally get around to fixing Siri. Until then, I’m glad to see that there is a real plan in place, and a lot happening, to implement AI and Machine Learning across Apple’s products and services.