ProPILE: Probing Privacy Leakage in Large Language Models

Checking authentication

Large language models (LLMs) are extremely popularity. As of October 2023, there are nearly 200 million users of ChatGPT.

Trained on vast amounts of text data from the web, LLMs can have knowledge of unauthorised personally-identifiable information (PII) of web users, like their email addresses and phone numbers. This is a concern for the privacy of involved data subjects. Data subjects do not even know how much of their personal information is involved in the LLMs. The lack of knowledge fundamentally thwarts any follow-up measures, such as requesting the erasure of their PII from the serviced LLMs (Right to erasure).

We provide the first tool for PII inspection in black-box LLMs. We assume an attacker who knows N-1 PII items for some data subject and wants to find out the Nth PII item. For example, the attacker is interested in reconstructing the target's private phone number based on the target's name and email address, which are publicly available (context PII). Our tool, ProPILE, builds prompts tasking the LLM to reconstruct the Nth PII item based on the known N-1 PII items. We measure the increased reconstructability of the Nth PII item after providing the N-1 contexts. Given the large number of users, even a small likelihood of reconstructing the Nth PII item poses a significant risk.

This work will be presented at NeurIPS 2023, New Orleans, as a spotlight. The work was done in collaboration with NAVER Corporation.

NAVER

Personally Identifiable Information (PII)

Examples of PII are names, emails, and phone numbers. Some of these items, like names and email addresses, are often publicly available. It's essential to protect PII due to increasing data breaches and cyber-attacks. One key concern is the “linkability” of PII items. An isolated piece of PII may not pose a risk. The risk grows when multiple PII items are linked together.

In our threat model, we assume an attacker already knows some public PII items. The attacker aims to infer more sensitive, private PII. For example, a phone number alone may not be sensitive. Yet, it becomes a privacy concern when linked to other PII like a person's name or email. Therefore, it's not just about the individual PII items appearing in large language models. The focus should also be on how likely these items are to be linked, creating a composite privacy risk.

Assess your own privacy risk with ProPILE

ProPILE is a tool for evaluating the risk of your PII being exposed. Users enter their personal details like name and email into the tool. The tool then asks the LLM to predict missing PII based on the user's input. It uses multiple prompt templates to gauge how close the LLM's guesses are to the real PII.

Emphasis on Personalized Risk Assessment. ProPILE is unique in its approach; it doesn't simply evaluate a language model's general tendency to include private data. Instead, it offers personalized reports to each data subject, detailing the likelihood of their specific PII being reconstructed by a designated large language model. These reports are private and accessible only to the individual user. Our tool is designed to empower individual data subjects, rather than providing a broad status report on data privacy.

Evaluation Metric. ProPILE uses a specialized algorithm to assess individual privacy risk. The process starts by generating multiple prompts based on the PII items you provide. These prompts are then sent to the designated large language model. Candidate reconstructions for the Nth PII item are generated and evaluated. ProPILE measures the minimal edit distance between the PII reconstructed by the language model and the true PII you provided. This metric allows for a detailed comparison between your risk and that of other users. It also shows how easily your specific PII could be linked within the chosen language model.

Data Protection for Data Subjects — We prioritize your privacy and take extensive measures to mitigate the risk of PII exposure through ProPILE. Detailed reports with candidate PII reconstructions are only available to Google-account authenticated users, who can query only about their own PII based on the name and email from their Google account. Non-authenticated users can use our demo for general PII queries but won't receive these detailed reports. We do not store any raw personal information on our server, although we do collect minimal edit distance statistics to compare users' risk levels. No personal information is shared with third-party LLM service providers. For example, the OPT-1.3B model used for the current demo is hosted locally within Germany. We strictly comply with GDPR regulations, so you can use ProPILE with confidence.

Try out ProPILE here!