A German privacy watchdog has ordered Google to cease manual reviews of audio snippets generated by its voice AI.
This follows a leak last month of scores of audio snippets from the Google Assistant service. A contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT, which was then able to identify some of the people in the clips. It reported being able to hear people’s addresses, discussion of medical conditions, and recordings of a woman in distress.
The Hamburg data protection authority made the order using Article 66 of the General Data Protection Regulation (GDPR), which allows a DPA to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects”.
The Article 66 order to Google appears to be the first use of the power since GDPR came into force across the bloc in May last year.
Google says it received the order on July 26, and that it requires the company to stop manually reviewing audio snippets in Germany for a period of three months. The company had in fact already decided to suspend manual audio reviews of Google Assistant across the whole of Europe, doing so on July 10, after learning of the data leak.
Last month Google also informed its lead privacy regulator in Europe, the Irish Data Protection Commission (DPC), of the breach. The DPC told us it is now “examining” the issue that’s been highlighted by Hamburg’s order.
The Irish DPC’s head of communications, Graham Doyle, said Google Ireland filed an Article 33 breach notification for the Google Assistant data “a couple of weeks ago”, adding: “We note that as of 10 July Google Ireland ceased the processing in question and that they have committed to the continued suspension of processing for a period of at least three months starting today (1 August). In the meantime we are currently examining the matter.”
It’s not clear whether Google will be able to reinstate manual reviews in Europe in a way that’s compliant with the bloc’s privacy rules. The Hamburg DPA writes in a statement [in German] on its website that it has “significant doubts” about whether Google Assistant complies with EU data-protection law.
“We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used,” Google’s spokesperson also told us.
In a blog post published last month after the leak, Google product manager for search, David Monsees, claimed manual reviews of Google Assistant queries are “a critical part of the process of building speech technology”, couching them as “necessary” for creating such products.
“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips,” Google’s spokesperson added now.
But it’s far from clear whether human review of audio recordings captured by any of the myriad always-on voice AI products and services now on the market can be made compatible with Europeans’ fundamental privacy rights.
These AIs typically rely on trigger words to activate the recording function that streams audio data to the cloud, but the technology can easily be triggered accidentally, and leaks have shown they are able to hoover up sensitive and intimate personal data from anyone in their vicinity (which can include people who never got within sniffing distance of any T&Cs).
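To make that accidental-activation risk concrete, here is a deliberately simplified sketch of the wake-word pattern described above, written in Python purely for illustration. Nothing in it reflects Google’s or Apple’s actual code; the function names, threshold, and scoring stub are all hypothetical. The structural point is that a local detector scores each short audio frame against the trigger phrase, and anything that clears the confidence threshold, including a mis-heard near-match, causes the audio that follows to leave the device.

```python
import random

WAKE_THRESHOLD = 0.8  # hypothetical confidence cut-off for the trigger word


def wake_word_confidence(frame: bytes) -> float:
    """Stand-in for an on-device keyword-spotting model's score (0.0-1.0)."""
    return random.random()  # placeholder; a real model would score the audio


def stream_to_cloud(frames: list) -> None:
    """Placeholder for the upload that follows an activation."""
    print(f"streaming {len(frames)} audio frames off-device")


def listen(frames: list) -> None:
    """Gate audio on the wake word, then forward everything heard after it."""
    capturing = False
    captured = []
    for frame in frames:
        if not capturing and wake_word_confidence(frame) >= WAKE_THRESHOLD:
            capturing = True  # trigger heard, or mis-heard
            continue
        if capturing:
            # Everything from this point on leaves the device, whether or
            # not the activation was intended, and regardless of who is
            # speaking in the room.
            captured.append(frame)
    if captured:
        stream_to_cloud(captured)


if __name__ == "__main__":
    listen([b"\x00" * 320 for _ in range(50)])  # 50 fake audio frames
```

The confidence threshold is the trade-off the sketch highlights: set it too low and the assistant activates on near-miss sounds, set it too high and it misses genuine commands, which is why mis-activations are a routine feature of these systems rather than an edge case.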
On its website the Hamburg DPA says the order against Google is intended to protect the privacy rights of affected users in the immediate term, noting that GDPR allows concerned authorities in EU Member States to issue such orders for a period of up to three months.
In a statement Johannes Caspar, the Hamburg commissioner for data protection, added: “The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently significant doubts. The use of language assistance systems must be done in a transparent way, so that an informed consent of the users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risks of mal-activation. Finally, due regard must be given to the need to protect third parties affected by the recordings. First of all, further questions about the functioning of the speech analysis system have to be clarified. The data protection authorities will then have to decide on definitive measures that are necessary for a privacy-compliant operation.”
The DPA also urges other regional privacy watchdogs to prioritize checking on other providers of language assistance systems — and “implement appropriate measures” — name-checking providers of voice AIs, such as Apple and Amazon.
This suggests there could be wider ramifications for other tech giants operating voice AIs in Europe, flowing from this single Article 66 order.
As we’ve said before, the real enforcement punch packed by GDPR is not the headline-grabbing fines, which can scale as high as 4% of a company’s global annual turnover — it’s the power that Europe’s DPAs now have in their regulatory toolbox to order that data stops flowing.
“This is just the beginning,” one expert on European data protection legislation told us, speaking on condition of anonymity. “The Article 66 chest is open and it has a lot on offer.”
In a sign of the potential scale of the looming privacy problems for voice AIs, Apple also said earlier today that it’s suspending a quality control program for its Siri voice assistant.
The move, which does not appear to be linked to any regulatory order, follows a Guardian report last week detailing claims by a whistleblower that contractors working for Apple ‘regularly hear confidential details’ on Siri recordings, such as audio of people having sex and identifiable financial details, regardless of the processes Apple uses to anonymize the records.
Apple’s suspension of manual reviews of Siri snippets applies worldwide.
from TechCrunch https://ift.tt/2SZmTVv