Apple Responds to Outcry Over Siri Grading

When you make privacy a brand, you have to make sure that your house is completely in order. While Apple still takes user data privacy more seriously than their competitors, that doesn’t mean they are beyond reproach. The world saw a great example of this a few days ago when we learned that third-party contractors were listening to recorded Siri interactions for grading purposes.

It’s all about timing and presentation, I guess. News of outside contractors grading Siri actually came out before the more recent report from the Guardian. However, the Guardian’s article punched things up with a whistleblower claiming that contractors were listening to recordings that could reveal users’ identities, and that some recordings caught very private moments and even illegal activity.

The responses were pretty predictable all around. The tech press jumped on it hard. Many enthusiasts enjoyed some schadenfreude at Apple’s expense, while others just shrugged their shoulders and said it wasn’t that big of a deal. Many normal people who heard about it overreacted. Apple’s first response was a generic statement claiming that what they were doing wasn’t so bad:

A small portion of Siri requests are analyzed to improve Siri and dictation. User requests are not associated with the user’s Apple ID. Siri responses are analyzed in secure facilities and all reviewers are under the obligation to adhere to Apple’s strict confidentiality requirements.

That obviously wasn’t enough. After the story didn’t go away, Apple wisely pulled the plug on Siri grading, at least for now. In the near future, they will offer either an opt-in or opt-out for human grading of Siri. Here is Apple’s exact statement to TechCrunch:

“We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement to TechCrunch. “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

Absolutely none of this surprises me. In fact, I have always assumed that Apple was using human analysis as part of Siri’s development and improvement. Every player in the voice recognition space uses some form of human analysis and grading because it is still necessary. Why would Apple be any different, other than in how they handle it? The only part I was off on was thinking that Siri grading was covered by Apple’s broader analytics opt-in that comes up during device setup. It turns out that there was no way to opt out of grading short of turning Siri off.

This was Apple’s real mistake. Given Apple’s stance on privacy, this should have been spelled out at device setup as an opt-in, just like the other opt-ins they have for analytics. However, the grading itself is still a necessity. It could certainly be argued that Apple should bring this grading in-house, as relying on third parties doesn’t inspire user confidence. The confidentiality requirements that Apple pointed to in their first statement didn’t stop the Guardian’s whistleblower. Who else is playing fast and loose with what they hear? Just how secure are these “secure facilities” they spoke of?

Despite what some may tell you, all voice assistants still need some amount of human review to move forward with any speed. Siri needs it even more, as Apple doesn’t have the same open streams of user data that Google and Amazon make use of, or the same amount of data at their disposal. And let’s be honest here: Siri is already way behind the competition because of years of mismanagement. Apple seems to have the right team in place to move Siri forward now, but they still need human grading to take best advantage of the data they do have.

Apple was already anonymizing the Siri recordings that were being analyzed. Now they will take the additional step of letting users opt in or out of Siri grading, which is definitely the right move. Based on how they have handled similar situations, it will probably be opt-in, which is the best approach. This will definitely set Apple back, as they will have less data to work with for analysis and improvement. Such is the price when you make privacy a core feature, though.

Hopefully Apple won’t stop here. As I already said, I think they should definitely bring this operation in-house so they can closely monitor the grading process. I also think it would be wise for Apple to prioritize detecting false “Hey Siri” triggers through software improvements. According to the Guardian report, most of the private moments and information the contractors heard were recorded during false triggers. If Apple can show progress in limiting the grading process to legitimate Siri interactions, then users will feel better about letting them use their data to improve the service.

Dieter Bohn of The Verge also made some really interesting points on this Siri situation today. He gave Apple credit for their stance and focus on privacy, but argued that it has also left them with some blind spots. The current situation would seem to bear that out. His biggest complaint was that, even with an opt-in or opt-out for Siri grading, Apple still doesn’t have, and hasn’t committed to adding, a way to purge your data from their servers. Right now, once your Siri recording data is there, it’s there for a long time. He made a really good point that Google, Amazon, and Facebook have had to come up with policies and procedures for this because they keep so much user data. Apple’s inexperience in dealing with such situations shows here, and they need to address it by giving users more control over the data that Apple does keep on them.

As for me, when Apple does offer this opt-in or opt-out later this year, I will continue to allow them to use my data to improve Siri. I know that Apple isn’t perfect. For all their talk about privacy, they still make mistakes, and they definitely did in this instance. However, they are still doing the best job in the industry of prioritizing user privacy. They were already anonymizing any of my data that may have been graded by someone. With some additional tightening of the process, I still won’t have any issues letting Apple use my data to help Siri. I still use it enough that I want to see it improve and expand. Now that there is better leadership over the service, I actually have some confidence that it will.

What about you? When this new Siri grading opt-in or opt-out setting is available, what will you do?
