LinkedIn has made a U-turn on silently using UK customer data to train its AI models, following public outcry and concerns raised by the UK Information Commissioner’s Office.
The social media platform came under fire after users this week began noticing a new privacy setting: an opt-out to prevent member data from being used to train LinkedIn’s AI models. Its explanation of the new opt-out model is here. Among the first in the legal sector to highlight it was US-based legal tech journalist and AffiniPay principal Nicole Black. Instructions for turning the setting off have been shared this week, including by Mashable.
While users in the United States will still have to opt out, LinkedIn says that it will not enable generative AI training on member data from the European Economic Area, Switzerland and the United Kingdom.
Stephen Almond, executive director of regulatory risk at the Information Commissioner’s Office, said today (20 September): “We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.
“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”
The Information Commissioner’s Office has a range of enforcement tools at its disposal, including the power to impose substantial fines for breaches of the GDPR and other regulations.
In response to this latest development, a spokesperson for LinkedIn told Legal IT Insider: “We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt out setting for training AI models used for content generation in the countries where we do this.
“We’ve always used some form of automation in LinkedIn products, and we’ve always been clear that users have the choice about how their data is used. The reality of where we’re at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist.
“At this time, we are not enabling training for generative AI on member data from the European Economic Area (EEA), Switzerland and the United Kingdom. We welcome the opportunity to continue our constructive engagement with the ICO.”
For data privacy experts, this reaffirms the importance of the ICO in the AI space. Tim Hyman, a data protection officer and CEO of 2twenty4 Consulting, told Legal IT Insider: “As many readers will know, the EU passed its new AI Act in March, which led to speculation as to how, or if, the UK will follow. This was followed by much talk in the build-up to the recent election as to whether the new government would announce a similar AI Act in the King’s Speech. It didn’t. As a consequence, the UK relies on the ICO as the prime regulator for AI systems, with personal data protection one of the core considerations.
“The interaction with LinkedIn reaffirms the ICO’s role in the AI space and acts as a reminder to law firms and their service providers that compliance with data protection by building in ‘privacy by design’ is essential for all phases of AI, from training the models through to delivery.”
caroline@legaltechnology.com