AI Has a Privacy Problem, and the Solution Is Privacy Tech, Not More Red Tape

In a battle waged with the most advanced technologies, the key to safeguarding privacy in an AI-driven world isn’t more regulation; it’s privacy tech.


--

Opinion piece by Lourdes M. Turrecha

Research and edits by ChatGPT-4 (artificial intelligence) and Travis Yuille (human intelligence)

Let’s face it: artificial intelligence (AI) has a privacy problem. As AI permeates more of our daily lives, from chatbots and voice assistants to autonomous vehicles, AI’s privacy problems are dominating the headlines — and rightly so. AI brings incredible benefits, making our lives more efficient, convenient, and in many ways easier and safer. But it also poses a significant threat to our privacy and security. AI’s thirst for data is insatiable, siphoning off vast amounts of personal data, often without our knowledge or explicit consent. Given our increasing reliance on AI across sectors, it’s a timely issue that deserves our attention.

We’ve all been there — that unsettling realization that our personal data might be at risk. That’s the fear that comes with AI’s privacy problem (or any tech privacy problem). But in AI’s case, it’s amplified by large-scale data processing and autonomous decision-making, often without human checks.

AI’s privacy problem is a ticking time bomb. The common response is to call for more policies, more paperwork, more box-checking, and more bureaucracy. But let’s be frank: Do we really think a few more policies, papers, and checks will fix AI’s privacy problem? It’s like putting a band-aid on a bullet wound. Red tape may give an illusion of control, but underneath it, the real issues persist. Our personal data continues to flow unchecked, often ending up in the hands of advertising partners, data brokers, political campaign organizations, law enforcement, and other third parties whose names we wouldn’t recognize.

Yes, the need for laws and regulations is real. But the answer to AI’s privacy problem doesn’t lie in more paperwork. Instead, it lies in truly innovative technologies: responsible AI and privacy tech tools designed to solve our privacy, security, data governance, ethics, trust, and safety problems in the AI context.

Privacy tech, as the name suggests, refers to tech solutions to privacy problems. A rising body of research suggests that privacy tech can effectively address AI’s privacy concerns, including training data, input, output, and model privacy problems. Privacy tech tools like confidential computing, differential privacy, anonymization techniques, privacy code scanners, and simple data controls like the ones OpenAI recently rolled out can help ensure that we tap into AI’s many benefits without making false trade-offs when it comes to privacy.
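To make one of these tools concrete, here is a minimal sketch of differential privacy’s classic Laplace mechanism in Python. This is an illustration only, not any vendor’s implementation; the epsilon value and the toy ages dataset are invented for this example:

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Epsilon-differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (one person joining or leaving
    the dataset changes the count by at most 1), so adding noise drawn
    from Laplace(0, 1/epsilon) satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example (invented data): how many users are over 40?
# A smaller epsilon means more noise and stronger privacy.
ages = [23, 45, 67, 34, 51, 29, 48]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

The point of the design is that the noise is calibrated to how much any one person can affect the answer, so an analyst still gets a useful aggregate while no individual’s record meaningfully changes the output.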

Some argue that stricter data protection laws are necessary to keep AI in check. While there is a place for such laws, they can’t be the primary line of defense against privacy invasions and harms in an AI world. The law is slow, often trailing behind technology, and while we wait for the legal system to catch up, serious privacy harms continue unabated. In the meantime, privacy tech steps in to provide a robust technical line of defense against AI’s privacy harms.

In a world increasingly defined by data and tech, it’s time we acknowledge the potential of privacy tech. It’s time to stop hiding behind performative paperwork and start embracing tech innovations that solve privacy problems. After all, in a battle involving advanced technologies like AI, the most potent defense is, fittingly, also tech — designed to protect our privacy. When it comes to AI and privacy, we shouldn’t have to make false trade-offs. We deserve both, and with privacy tech we can have both.

This post is the second in a series exploring AI, privacy, security, and ethics broadly, and OpenAI’s ChatGPT, Google’s Bard, and other generative AI more specifically. The first post was a response to the Future of Life Institute’s open letter calling for a halt to the development and training of AI systems. Upcoming posts will outline the privacy, security, and ethics challenges in AI, and provide privacy recommendations that OpenAI, Google, and other AI labs can use as a blueprint in developing their AI systems.

If you’d like to be the first to receive Lourdes’ future writings, feel free to subscribe to Lourdes’ Substack.

--


lourdes.turrecha
Privacy & Technology

Founder & CEO @PIX_LLC @PrivacyTechRise | Privacy & Cybersecurity Strategist & Board Advisor | Reformed Silicon Valley Lawyer | @LourdesTurrecha