Facebook parent Meta has been forced to delay its AI offering for Europe due to privacy concerns and regulatory obstacles. Meta confirmed the news via an update to a blog post that was originally published to defend its AI plans.
Meta was in the process of implementing a privacy policy that would allow it to use people's personal data to train its AI models. The new terms, which would have allowed it to use Instagram user data collected since 2007 for AI training, faced legal challenges across 11 European countries. The Irish Data Protection Commission was also strongly opposed.
Meta cited legitimate interests under the EU General Data Protection Regulation (GDPR) as the legal basis for doing so, arguing that the data it used had been made public by users on their social media profiles.
The firm has now been forced to stop collecting this data. Without this local information, the company said, it would only be able to offer a "second-rate experience" to its customers.
In a blog post, Meta said it was disappointed by the request made by the Irish Data Protection Commission, its lead regulator, on behalf of the European Data Protection Authorities (DPAs), to delay the training of its large language models using content shared by adults via Facebook and Instagram. The company called this especially disappointing because it had incorporated regulatory feedback and had kept the European DPAs informed since March.
It called the move a "step backwards" for European innovation and competition in AI development and warned that it would cause "further delay in bringing AI benefits to people in Europe."
Meta’s Rivals Train AI Using Data
Meta was keen to point out that competitors such as Google and OpenAI also use data to train AI models: "AI training isn't unique to our services and we are more transparent than our industry counterparts."
Facebook's parent company has a valid point: massive amounts of data are required to train AI models, no matter who is offering the solution. Facebook, however, holds a particularly large store of historical user data, and the firm has a track record of data privacy violations.
Jake Moore, global security advisor at ESET, says that artificial intelligence algorithms require "an extortionate" amount of data to produce output convincing enough to pass as human.
Moore notes that Meta has access to a large amount of personal information, but if users have not consented to their data being analysed in this way, data regulations may require a delay.
Moore adds that AI programs "are likely to collect and analyze what is available in order to tweak their processing" each time we interact, so it is important to be aware of how easily personal information can be misused.
Will Meta AI launch in Europe?
You'd be mistaken if you thought Meta would not eventually roll out its AI offerings in Europe. The company says it is "committed to bring Meta AI and the models that drive it to more people around the world, including in Europe."
Meta says that it needs data to create a useful product: “If our models are not trained on the public content shared by Europeans on our services or others, like public posts or comments, the models and AI features powered by them will not accurately understand important regional language, cultures, or trending topics in social media,” Meta stated.
Meta believes that "complexity and inconsistency" in the application of regulations in the EU "risks placing people in Europe further behind other countries in their adoption of new technology."
The firm also emphasizes its "transparent" approach, saying it remains committed to putting AI "into the hands of more people throughout the world, including Europe."
The episode also raises concerns for Facebook users in other parts of the world. Data protection regulation in Europe and the UK is strict and puts consumer privacy first. The US is starting to adopt this ethos, but it still lags far behind the EU in terms of data protection.