Anticipating the data protection landscape: Top 5 Trends in 2024

By Edwin Concepcion

2024 ushers in a new era of data protection, fuelled by generative AI’s rapid ubiquity, and intertwining ever more closely with our daily functions. This powerful technology, while transforming workflows and boosting productivity, presents fresh challenges that demand agile adaptation from businesses, regulators, and individuals alike. Here are the five key trends for the year.

  1. Rising mainstream use of AI breeds more risk

Generative AI is poised to integrate with diverse facets of our lives, from crafting marketing content, data analysis and information summarisation, to assisting in business planning and HR strategy. However, it needs to be complemented by the input of qualified and experienced business professionals as well as attention to privacy, security, or ethical issues. To achieve this, generative AI must be met with robust security measures.

Take training chatbots that are used for customer service for example. If the bots are trained on biassed data, the bot’s objectivity and ethicality are rendered unreliable. And if the bots have loopholes in their algorithms or are built on AI-generated code with overlooked security vulnerabilities, sensitive customer information can be exposed which compromises trust. As general users in the company leverage public tools for specialised queries and document analysis or creation, there is a risk of leaking proprietary or personal data. As such, implementing rigorous oversight and utilising AI responsibly will be paramount.

  1. Regulators take the helm on AI software

Privacy watchdogs, especially the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) who are actively contributing to the EU AI Act, are gearing up to actively govern generative AI.

In the ASEAN region, Singapore’s Personal Data Protection Commission (PDPC) is taking steps in promoting AI governance by proposing Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems. This heightened regulatory focus emphasises accountability, transparency, and ethical use, necessitating collaboration among stakeholders to address data privacy concerns. Existing laws like the Personal Data Protection Act (PDPA) and General Data Protection Regulation (GDPR) will remain crucial for safeguarding data subject rights and ensuring compliance.

  1. From Content Creation to Content Generation

The shift from manual content creation to AI-powered generation introduces a Pandora’s box of risks, including deepfakes, identity theft, and intellectual property concerns. In a recent Channel NewsAsia feature, we demonstrated how easily a fake avatar of a reporter capable of speaking in another language could be created. Synthetic content, while beneficial, demands human supervision, fact checks, and validations to avoid the spread of misinformation.  Additionally, relying solely on prompts to guide AI outputs of chatbots opens doors to malicious manipulation through adversarial prompts. Addressing challenges such as prompt injection (inserting malicious content to manipulate the AI’s output), prompt leakage (unintentional disclosure of sensitive information in responses), and jailbreaking (tweaking prompts to bypass AI system restrictions) is essential to responsible development and harnessing the benefits of Generative AI safely.

  1. Due diligence is demanded in the marketplace of conversational AI apps

The generative AI app market place is thoroughly diverse and requires due diligence in a number of areas. Core apps that are developed by pioneering leaders who founded proprietary foundation models, such as Open AI’s ChatGPT require reliable governance alongside innovation. There are also clone apps, which are developed by startups and individual creators leveraging the APIs of core apps to create solutions for specific purposes such as Gamma. In many cases, these have questionable privacy practices. Additionally, there are combination apps – existing apps that have incorporated generative AI features like Microsoft Copilot. These introduce non-savvy users to the technology without prior understanding of how to safeguard sensitive data.

Our research conducted in August 2023 on 100 mobile clone apps using OpenAI’s GPT APIs revealed significant discrepancies in declared data safety practices and actual behaviour, posing potential privacy risks. Another study covering 113 popular apps shows many Gen AI apps are falling short of GDPR and AI transparency standards.

Evidently, responsible AI and the development of governance protocols for AI applications by appropriately qualified and skilled business professionals is critical in 2024 and also into the future. Businesses must scrutinise data handling policies, local data protection laws, and overall trustworthiness before adopting any app.

  1. Upskilling for Data Protection Professionals

As generative AI embraces multimodal capabilities, Data Protection Officers (DPOs) must evolve their skillset to manage privacy across text, image, video, voice, and other sensory data types. This evolution requires new proficiencies beyond traditional text analysis, including understanding operational aspects of machine learning, language models, and navigating ethical challenges and mitigating risks like adversarial prompts. Continuous learning and cross-disciplinary knowledge are essential for DPOs to navigate in this ever-evolving data landscape successfully.

We can expect 2024 to be a pivotal year for data protection, where responsible practices, vigilant oversight, and continuous learning take centre stage. Seeking avenues to be in touch with the latest developments in generative AI and data protection is key. Here in the Philippines, we will be holding our annual masterclass at the Asian Institute of Management (AIM), where you can hear and network with expert speakers who will be sharing about the values, risks and constraints of generative AI adoption in businesses. By collaborating and adapting to the burgeoning world of generative AI, we can reap its benefits while safeguarding trust, safety, and the fundamental right to privacy.

Edwin Concepcion is the Philippine Country Manager of Straits Interactive