Elevating AI, Ensuring Trust

We champion open-source research. We prioritize simplicity. We pursue practical solutions.

Our Research

Membership Inference
Compliance
Privacy
LLM

Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models

This paper investigates membership inference attacks (MIA), which aim to determine whether specific data, such as copyrighted text, was included in the training of large language models. By examining a continuum from single sentences to large document collections, we address a gap in understanding when MIA methods begin to succeed, shedding light on their potential to detect misuse of copyrighted or private materials in training data.
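As a rough illustration of the signal such attacks exploit, the sketch below shows a minimal loss-based membership baseline, not the paper's exact method; the model name is a stand-in and the decision threshold would have to be calibrated separately. A lower average token loss suggests the text is more "member-like", and per-document scores can be aggregated from single sentences up to whole collections.

# Minimal loss-based membership-inference baseline (a common MIA signal),
# not the exact attack studied in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in target model; the paper evaluates larger LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def membership_score(text: str) -> float:
    """Average per-token negative log-likelihood; lower = more member-like."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()  # mean cross-entropy over tokens

def collection_score(documents: list[str]) -> float:
    """Aggregate document scores to test membership at collection scale."""
    return sum(membership_score(d) for d in documents) / len(documents)

# Example: score a candidate collection; a threshold calibrated on known
# non-member text would turn this into a membership decision.
print(collection_score(["The quick brown fox jumps over the lazy dog."]))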

Confidence
Uncertainty
LLM

Calibrating Large Language Models Using Their Generations Only

ACL 2024

As large language models (LLMs) are integrated into user-facing applications, accurately measuring a model's confidence in its predictions is crucial for trust and safety. We introduce APRICOT, a method that trains an auxiliary model to predict an LLM's confidence using only its textual input and output. The approach is conceptually simple, requires no access to the LLM beyond that text, and leaves the original generation process untouched.
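The sketch below illustrates the general data flow under strong simplifications: APRICOT fine-tunes a transformer encoder as the auxiliary model, whereas this stand-in uses a TF-IDF classifier, and the tiny calibration set is invented purely for illustration.

# Much-simplified sketch of the APRICOT idea: train an auxiliary model that
# predicts whether the LLM's answer is correct, using only the question and
# the generated answer as text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical calibration data: (question, LLM answer) pairs with correctness
# labels obtained by checking the answers against references.
pairs = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Australia?", "Sydney"),
    ("Who wrote Hamlet?", "William Shakespeare"),
    ("What is 7 * 8?", "54"),
]
correct = [1, 0, 1, 0]

texts = [f"Question: {q} Answer: {a}" for q, a in pairs]
calibrator = make_pipeline(TfidfVectorizer(), LogisticRegression())
calibrator.fit(texts, correct)

# At test time, the calibrator's predicted probability serves as the
# confidence estimate; no access to the LLM's internals is needed.
new = "Question: What is the capital of Spain? Answer: Madrid"
confidence = calibrator.predict_proba([new])[0, 1]
print(f"estimated confidence: {confidence:.2f}")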

Fingerprinting
Compliance
LLM

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

ACL 2024 (findings)

Large language models (LLMs) come with usage rules that protect their owners' interests and prevent misuse. This study introduces Black-box Identity Verification (BBIV), the task of determining, through the chat function alone, whether a third-party service is built on a specific LLM, so that compliance can be checked. Our method, Targeted Random Adversarial Prompt (TRAP), uses adversarial suffixes that make the target LLM return a pre-defined answer, while other models respond with effectively random answers. TRAP thus offers a practical way to verify compliance with LLM usage policies.
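For intuition, the sketch below shows only the verification step, under stated assumptions: the adversarial suffix is assumed to have been optimised offline (e.g. with a gradient-based prompt search), and query_service is a hypothetical wrapper around the chat endpoint being audited.

# Illustrative sketch of the verification step in black-box identity
# verification (BBIV). Only the target LLM reliably completes the trap
# prompt with the pre-defined answer; any other model answers randomly.
import random

TRAP_PROMPT = (
    "Write a random string composed of 3 digits. "
    "<adversarial suffix optimised offline for the target LLM>"
)
TARGET_ANSWER = "314"  # pre-defined answer the suffix elicits from the target LLM

def query_service(prompt: str) -> str:
    """Placeholder for the black-box chat API under audit."""
    return f"{random.randint(0, 999):03d}"

def uses_target_llm(n_trials: int = 20, threshold: float = 0.5) -> bool:
    """A non-target model hits the answer only ~1/1000 of the time, so even a
    modest hit rate across repeated queries identifies the target LLM."""
    hits = sum(TARGET_ANSWER in query_service(TRAP_PROMPT) for _ in range(n_trials))
    return hits / n_trials >= threshold

print(uses_target_llm())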

Privacy
LLM

ProPILE: Probing Privacy Leakage in Large Language Models

NeurIPS 2023 (spotlight)

Large language models (LLMs) absorb web data, which may include sensitive personal information. Our tool, ProPILE, acts as a detective, helping individuals assess how much of their personal data an LLM might expose. It lets data subjects craft prompts from their own personal information and check whether the model reveals the rest, fostering awareness and control over personal data in the age of LLMs.
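As a rough illustration of such probing, the sketch below builds a prompt from a subject's own (fictional) details and checks whether a stand-in model's continuations reproduce a held-out item; the model choice and prompt template are illustrative, not the tool's exact ones.

# Minimal sketch of the probing idea: prompt the model with part of one's own
# PII and test whether a held-out item appears in the continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for the probed LLM

# Fictional PII supplied by the data subject; the phone number is the held-out target.
name, email = "Jane Doe", "jane.doe@example.com"
target_phone = "555-0142"

prompt = f"The email address of {name} is {email}, and her phone number is"
completions = generator(
    prompt, max_new_tokens=12, num_return_sequences=5, do_sample=True
)

leaked = any(target_phone in c["generated_text"] for c in completions)
print("potential leakage detected" if leaked else "target PII not reproduced")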