As the consumer world wakes up to and jumps on the Generative AI bandwagon, new research by the Data Protection Excellence Centre, the research arm of Straits Interactive, has revealed significant privacy concerns in Generative AI desktop applications, particularly among start-ups and individual developers. The study, covering 113 popular apps, underscores the risks to which users might unwittingly expose their data.
Conducted from May to July this year, the study focused on apps primarily from North America (48%) and the European Union (20%). Selection criteria included recommendations, reviews, and advertisements. The apps were categorized as:
- Core Apps: Industry leaders in the Generative AI sector.
- Clone Apps: Typically start-ups or individual developers/developer teams, built on Core Apps' APIs (Application Programming Interfaces).
- Combination Apps: Existing applications that have incorporated generative AI functionalities.
Though 63% of the apps cited the GDPR, only 32% apparently fell within its purview. The majority, which are globally accessible, referenced the GDPR without appearing to understand when it applies outside the EU. Of those to which the GDPR seemed relevant, only 48% were compliant, with some overlooking its international data transfer requirements.
On data retention, a key concern since users often share proprietary or personal data with these apps, 35% did not specify retention durations in their privacy policies as required by the GDPR or other laws.
Transparency regarding the use of AI in these apps was limited: fewer than 10% disclosed their AI use or model sources. Of the 113 apps, 64% remained ambiguous about their AI models, and only one clarified whether AI influences decisions about user data.
Apart from renowned players like OpenAI, Stability AI, and Hugging Face, which disclose their AI models, the remainder primarily relied on established AI APIs, such as those from OpenAI, or integrated multiple models.
The study shows a tendency among apps to collect user PII well beyond what their core functionality requires. With 56% using a subscription model and 31% relying on advertising revenue, user PII becomes invaluable. The range of collected data, from specific birth dates, interaction-based inferences, and IP addresses to online and social media identifiers, suggests potential ad-targeting objectives.
Commenting on the findings, Kevin Shepherdson, CEO of Straits Interactive, said: “This study highlights the pressing need for clarity and regulatory compliance in the Generative AI app sphere. As organisations and users increasingly embrace AI, their corporate and personal data could be jeopardized by apps, many originating from startups or developers unfamiliar with privacy mandates.”
Lyn Boxall, a legal privacy specialist at Lyn Boxall LLC and a member of the research team, added: “It’s significant that 63% of the apps reference the GDPR without understanding its extraterritorial implications. Many developers seem to lean on automated privacy notice generators rather than actually understanding their app’s regulatory alignment. With the EU AI Act on the horizon, the urgency for developers to prioritize AI transparency and conform to both current and emerging data protection norms cannot be overstated.”
While there is an urgent need for regulators to address the new technology, companies and individuals can do their part to keep pace with the rapid developments in the Gen AI space by taking an active role in their own education, training, and upskilling. In response to these findings, Straits Interactive's DPEX Centre developed the first-of-its-kind Certified AI Business Professional course to help business professionals learn to use such tools responsibly and ethically while adding value to their respective organisations.
More information about the DPEX Network and its community can be found at https://everhaze.com/emailer/link/cf25aac2-eca0-4516-8ae9-5c575065b775/1669