Using an LLM or Pattern-based Rules for PII/PHI Redaction

In our data-driven world, being able to protect Personally Identifiable Information (PII) and Protected Health Information (PHI) is imperative. Whether you’re securing customer data, complying with regulations like GDPR or HIPAA, or simply aiming for responsible data handling, the ability to effectively redact sensitive information is crucial.

Today, there are two primary approaches: leveraging the power of Large Language Models (LLMs) and employing traditional pattern-based rules. While LLMs have understandably received significant attention for their impressive natural language understanding, it’s essential to compare their capabilities against the tried-and-true methods of pattern matching.

In this blog post, we will take a look at the benefits of pattern-based rules and some drawbacks of an LLM-based approach.

Pattern-based Rules

Pattern-based rules operate on a straightforward principle: defining specific regular expressions or keyword lists to identify and redact PII. Think of it as providing a precise set of instructions to locate and mask patterns that match social security numbers, phone numbers, email addresses, and other sensitive data.

A pattern-based approach has several advantages:

  • Speed and Efficiency: Pattern matching is generally very fast, processing large volumes of text quickly without significant computational overhead.
  • Transparency and Control: You have direct control over the rules, making it easy to understand why a particular piece of text was flagged as PII. Debugging and refining these rules is straightforward. If something expected is not matched, it is relatively easy to figure out why.
  • Computational Efficiency: Pattern-based methods require minimal computational resources, running effectively on standard CPUs without the need for specialized hardware like a GPU.
  • Precise Matching: When rules are well-defined, they offer high precision in identifying specific PII formats. For instance, a regex for a credit card number can incorporate the Luhn algorithm for basic validation.
  • Flexibility for Specific Formats: Handling variations like phone numbers with or without parentheses or different date formats can be explicitly addressed through rule creation.
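
To make these advantages concrete, here is a minimal sketch in Python (the patterns and redaction tokens are illustrative, not Philter’s): a regex pass for SSNs, plus a credit card pattern gated by the Luhn check mentioned above.

```python
import re

def luhn_valid(candidate: str) -> bool:
    """Validate a digit string (separators allowed) with the Luhn algorithm."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    text = SSN_RE.sub("{{{REDACTED-ssn}}}", text)
    # Only mask card-like sequences that actually pass the Luhn check.
    return CARD_RE.sub(
        lambda m: "{{{REDACTED-credit-card}}}" if luhn_valid(m.group(0)) else m.group(0),
        text,
    )

print(redact("Card 4111 1111 1111 1111, SSN 123-45-6789, order 1234 5678 9012 345"))
# Card {{{REDACTED-credit-card}}}, SSN {{{REDACTED-ssn}}}, order 1234 5678 9012 345
```

Note that the order number is left alone even though it is a card-length digit sequence, because it fails the Luhn check.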

Drawbacks of LLM-Based PII Redaction

Using an LLM for PII redaction might seem like a good option – just download a model, give it your text (and maybe a prompt), and get back the redacted text. A search for “pii” on the Hugging Face model hub returns several models trained to identify PII.

But, before you go that route, be aware of some of the drawbacks:

  • Tokenization and Sentence Splitting: Before an LLM can analyze text, the input needs to be broken down into smaller units called tokens (often subwords rather than whole words). This pre-processing step can sometimes fragment PII that spans token boundaries or sentence splits, potentially hindering accurate identification. Additionally, accurately splitting sentences can be quite a challenge on its own. Content such as medical notes or technical information may not even consist of complete sentences.
  • Ambiguity of Tokens: A fundamental limitation of current LLM tokenization is the lack of inherent support for tokens that could belong to multiple PII categories. For example, a sequence of digits might represent a zip code, a portion of a social security number, or part of a phone number. The LLM might struggle to definitively classify such ambiguous tokens without sufficient contextual clues, leading to PII not being redacted.
  • Resource Intensive: Running LLMs demands substantial computational resources. They typically require powerful GPUs and significant disk space to store the model weights. This can translate to increased infrastructure costs and more complex deployments.
  • Redaction Speed: Compared to the rapid execution of pattern-based rules, LLM inference is considerably slower. While a GPU can speed up inference, it is unlikely to ever match the speed of a pattern-based system.
  • A Black Box: LLMs operate as complex neural networks, making it a challenge to understand exactly why a particular piece of text was (or wasn’t) identified as PII. This lack of transparency can make debugging and ensuring the reliability of the redaction process difficult. Going further, if you are using a third-party model, such as one from the Hugging Face Hub, you may not know much about the data used to train it.
  • Robustness (e.g., Dashes and Spaces): The robustness of LLMs to minor variations in PII formatting can be surprisingly fragile. For instance, an LLM trained to recognize social security numbers in the “XXX-XX-XXXX” format might fail to identify “XXXXXXXXX” as an SSN, potentially misclassifying the nine digits as a ZIP+4 code. Similarly, the presence or absence of spaces in phone numbers or other identifiers can significantly impact the LLM’s ability to recognize them.
  • Validation of PII: Some forms of PII, like credit card numbers, adhere to specific validation algorithms (e.g., the Luhn algorithm). LLMs, while capable of learning patterns, do not inherently incorporate such validation logic. This means they might flag sequences that look like credit card numbers but are actually invalid.
  • Vocabulary Limitations: The effectiveness of an LLM is heavily influenced by its training vocabulary. If a specific PII format or a term used in conjunction with PII is not well-represented in the training data, the model’s ability to identify it accurately will be compromised.
  • The Need for Constant Model Retraining: When new types of PII need to be identified, or when existing PII formats evolve, an LLM typically requires retraining on a new dataset. This process can be time-consuming, resource-intensive, and requires specialized expertise. It is likely not a process you want to do often.
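
For comparison with the robustness concern above, a pattern-based rule can address format variations explicitly. A minimal sketch (the pattern is illustrative; production rules often also require nearby context keywords to avoid the zip code ambiguity described earlier):

```python
import re

# One rule covers dashes, spaces, or no separators at all:
# 123-45-6789, 123 45 6789, 123456789.
SSN_VARIANTS = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")

for text in ["ssn 123-45-6789", "ssn 123 45 6789", "ssn 123456789"]:
    print(SSN_VARIANTS.sub("{{{REDACTED-ssn}}}", text))  # ssn {{{REDACTED-ssn}}}
```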

The Right Balance – Combining the Two Approaches

While LLMs offer exciting possibilities for understanding the context of PII, their inherent drawbacks make them a less-than-ideal sole solution for PII redaction in many scenarios. A more robust and reliable approach often involves a hybrid strategy. Combining the speed and precision of pattern-based rules for well-defined PII formats with the contextual understanding of LLMs for more nuanced cases can offer a more comprehensive and efficient solution. Learn how Philter employs a hybrid approach to data redaction at https://www.philterd.ai/.

Shielding Your Search: Redacting PII and PHI in OpenSearch with Phinder

In today’s data-driven world, safeguarding Personally Identifiable Information (PII) and Protected Health Information (PHI) is paramount. When leveraging search platforms like OpenSearch, ensuring sensitive data remains confidential is crucial. Enter Phinder, an open-source OpenSearch plugin that leverages the power of the Phileas project to effectively redact and de-identify PII and PHI within your search results.

This post explores how Phinder can bolster your data privacy and security when using OpenSearch. Phinder is available on GitHub at https://github.com/philterd/phinder-pii-opensearch-plugin.

What is Phinder?

Phinder is a specialized OpenSearch plugin designed to seamlessly integrate redaction and de-identification capabilities directly into your search workflow. Built upon the foundation of the open-source Phileas project, Phinder provides a robust and flexible mechanism for identifying and masking sensitive information within your indexed documents. This ensures you can search your data without the risk of exposing PII or PHI, which is essential for compliance with regulations like GDPR, CCPA, and HIPAA.

Phileas: The Engine Behind Phinder

Phinder leverages the Phileas project, a powerful engine for identifying and transforming sensitive data. Phileas offers a wide range of capabilities, including:

  • Named Entity Recognition (NER): Identifying and classifying named entities like people, organizations, locations, and dates.
  • Regular Expressions: Matching patterns for specific data formats like phone numbers, email addresses, and social security numbers.
  • Dictionaries: Using lists of known sensitive terms for redaction.
  • Customizable Rules: Defining your own specific redaction rules based on your unique data and requirements.

By integrating Phileas, Phinder benefits from its sophisticated analysis and transformation capabilities, providing a comprehensive solution for data protection.

Why use Phinder?

  • Enhanced Data Privacy: Phinder gives you granular control over what information is displayed in search results, preventing the accidental exposure of sensitive data.
  • Regulatory Compliance: By redacting PII and PHI, Phinder helps your organization meet the stringent requirements of data privacy and security regulations.
  • Improved Security Posture: Phinder reduces the risk of data breaches associated with sensitive information.
  • Flexible and Customizable: Phinder’s integration with Phileas allows for highly flexible configuration of redaction rules, tailored to your specific needs.
  • Open Source and Community Driven: Being open-source, Phinder is free to use and benefits from community contributions and ongoing improvements.

How to Use Phinder

  1. Installation: The first step is to install the Phinder plugin within your OpenSearch cluster.  Refer to the Phinder documentation on GitHub for detailed installation instructions specific to your OpenSearch version.
  2. Defining Redaction Rules in a Policy (Leveraging Phileas): This is the core of Phinder’s functionality. You’ll leverage Phileas’s capabilities to identify the types of PII and PHI you want to protect (e.g., names, addresses, social security numbers, medical record numbers) and create corresponding rules. You can use regular expressions, dictionaries, or leverage pre-trained NER models provided by Phileas.
  3. Testing and Validation: Once you’ve configured Phinder, thorough testing is essential. Run searches against your data and verify that the sensitive information is being correctly redacted and de-identified.
  4. Integration with OpenSearch Queries: After testing, you can integrate Phinder directly into your OpenSearch queries. This ensures that redaction happens automatically whenever a search is performed.

The following is an example query that redacts email addresses from the description field.

curl -s http://localhost:9200/sample_index/_search -H "Content-Type: application/json" -d'
   {
    "ext": {
       "phinder": {
          "field": "description",
          "policy": "{\"identifiers\": {\"emailAddress\":{\"emailAddressFilterStrategies\":[{\"strategy\":\"REDACT\",\"redactionFormat\":\"{{{REDACTED-%t}}}\"}]}}}"
        }
     },
     "query": {
       "match_all": {}
     }
   }'
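
The doubly-escaped policy string in the query above is easy to get wrong by hand. If you are building the request programmatically, serializing the policy with json.dumps avoids manual escaping. A sketch in Python (mirroring the curl example; only the request body is built here):

```python
import json

# The Phileas policy as a plain Python dict.
policy = {
    "identifiers": {
        "emailAddress": {
            "emailAddressFilterStrategies": [
                {"strategy": "REDACT", "redactionFormat": "{{{REDACTED-%t}}}"}
            ]
        }
    }
}

# Phinder expects the policy as a JSON *string*, so it is serialized
# once on its own and again as part of the outer query body.
query = {
    "ext": {
        "phinder": {
            "field": "description",
            "policy": json.dumps(policy),
        }
    },
    "query": {"match_all": {}},
}

body = json.dumps(query)
# POST `body` to http://localhost:9200/sample_index/_search
# with the Content-Type: application/json header.
```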

Conclusion

Phinder, powered by Phileas, offers a robust and effective solution for protecting sensitive data within your OpenSearch environment. By implementing Phinder and defining appropriate redaction and de-identification rules, you can significantly reduce the risk of exposing PII and PHI, ensuring compliance and enhancing data privacy. Remember to consult the official Phinder documentation on GitHub for the most up-to-date information and detailed instructions. Protecting sensitive data is a continuous process, and Phinder can be a valuable tool in your data privacy strategy.

Automatically Redacting PII and PHI from Files in Amazon S3 using Amazon Macie and Philter

Amazon Macie is “a data security service that discovers sensitive data using machine learning and pattern matching.” With Amazon Macie you can find potentially sensitive information in files in your Amazon S3 buckets, but what do you do when Amazon Macie finds a file that contains an SSN, phone number, or other piece of sensitive information?

Philter is software that redacts PII, PHI, and other sensitive information from text. Philter runs entirely within your private cloud and does not require any external connectivity. Your data never leaves your private cloud and is not sent to any third-party. In fact, you can run Philter without any external network connectivity and we recommend doing so!

In this blog post we will show how you can use Philter alongside Amazon Macie, Amazon EventBridge, and AWS Lambda to find and redact PII, PHI, or other sensitive information in your files in Amazon S3. If you are setting this up for your organization and need help, feel free to reach out!

How it Works

Here’s how it will work:

  1. Amazon Macie will look for files in Amazon S3 buckets that contain potentially sensitive information.
  2. When Amazon Macie identifies a file, it will be sent as an event to Amazon EventBridge.
  3. An Amazon EventBridge rule that detects events from Amazon Macie will invoke an AWS Lambda function.
  4. The AWS Lambda function will use Philter to redact the file.

Setting it Up

Configuring Amazon Macie

The first thing we will do is enable Amazon Macie. It’s easiest to follow the provided steps to enable Amazon Macie in your account – it’s just a few clicks. Once you have Amazon Macie configured, come back here to continue!


Creating the AWS Lambda Function

Next, we want to create an AWS Lambda function. This function will be invoked whenever a file in an Amazon S3 bucket is found to contain sensitive information. Our function will be provided the name of the bucket and the object’s key. With that information, our function can retrieve the file, use Philter to redact the sensitive information, and either overwrite the existing file or write the redacted file to a new object.

The Lambda function will receive a JSON object that contains the details of the files identified by Amazon Macie. It will look like this:

{
  "version": "0",
  "id": "event ID",
  "detail-type": "Macie Finding",
  "source": "aws.macie",
  "account": "AWS account ID (string)",
  "time": "event timestamp (string)",
  "region": "AWS Region (string)",
  "resources": [
    <-- ARNs of the resources involved in the event -->
  ],
  "detail": {
    <-- Details of a policy or sensitive data finding -->
  },
  "policyDetails": null,
  "sample": Boolean,
  "archived": Boolean
}

You can find more about the schema of the event here. What’s most important to us is the name of the bucket and the key of the object identified by Amazon Macie. In the detail section of the above JSON object, there will be an s3Object that contains that information:

"s3Object":{
  "bucketArn":"arn:aws:s3:::my-bucket",
  "key":"sensitive.txt",
  "path":"my-bucket/sensitive.txt",
  "extension":"txt",
  "lastModified":"2023-10-05T01:32:21.000Z",
  "versionId":"",
  "serverSideEncryption":{
    "encryptionType":"AES256",
    "kmsMasterKeyId":"None"
  },
  "size":807,
  "storageClass":"STANDARD",
  "tags":[],
  "publicAccess":false,
  "etag":"accdb2c550e3aa13610cbd87b91e3ec7"
}

This information gives the location of the identified file! It is s3://my-bucket/sensitive.txt. Now we can use Philter to redact this file!

You have a few choices here. You can have your AWS Lambda function grab that file from S3, redact it using Philter, and then overwrite the existing file. Or, you can choose to write it to a new file in S3 and preserve the original file. Which you do is up to you and your business requirements!
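
As a sketch of what such a Lambda function might look like (the event shape follows the s3Object example above; the Philter endpoint, the path into the event’s detail section, and the overwrite-in-place choice are assumptions to adapt):

```python
import urllib.request

PHILTER_ENDPOINT = "https://PHILTER_IP:8080/api/filter"  # your Philter instance

def extract_s3_object(event):
    """Pull the bucket name and object key from the s3Object in a
    Macie finding event (adjust the path into `detail` if your
    events nest s3Object differently)."""
    s3_object = event["detail"]["s3Object"]
    bucket = s3_object["bucketArn"].split(":::")[-1]
    return bucket, s3_object["key"]

def redact(text):
    """Send text to Philter's filter API and return the redacted text.
    (Philter ships with a self-signed certificate; use a signed
    certificate, or an SSL context that accepts it, as appropriate.)"""
    request = urllib.request.Request(
        PHILTER_ENDPOINT,
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=20) as response:
        return response.read().decode("utf-8")

def lambda_handler(event, context):
    import boto3  # provided by the AWS Lambda Python runtime
    s3 = boto3.client("s3")
    bucket, key = extract_s3_object(event)
    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    # Overwrite in place; write to a different key to preserve the original.
    s3.put_object(Bucket=bucket, Key=key, Body=redact(original).encode("utf-8"))
```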

Redacting the File with Philter

To use Philter you must have an instance of it running! You can quickly launch Philter as an Amazon EC2 instance via the AWS Marketplace. In under 5 minutes you will have a running Philter instance ready to redact text via its API.

With Philter’s API, you can use any programming language you like. There are client SDKs available for Java, .NET, and Go, but the Philter API is simple and easily callable from other languages like Python. You just need to be able to access Philter’s API from your Lambda function at an endpoint like https://<philter-ip>:8080.

You just need to decide how you want to redact the file. Redaction in Philter is done via a policy, and you can set your policy based on your business needs. Perhaps you want to mask social security numbers, shift dates, redact email addresses, and replace people’s names with random values. You can create a Philter policy to do just that and apply it when calling Philter’s API. Learn more about policies or see some sample policies.

Once you have your AWS Lambda function and Philter policy the way you want it, you can deploy the Lambda function:

aws lambda create-function --function-name redact-with-philter \
  --runtime python3.11 --handler lambda_function.lambda_handler \
  --role arn:aws:iam::accountId:role/service-role/my-lambda-role \
  --zip-file fileb://code.zip

Just update the values in that command as needed. Don’t forget to set your AWS account ID in the role’s ARN!

Configuring Amazon EventBridge

To create the Amazon EventBridge rule:

aws events put-rule --name MacieFindings --event-pattern "{\"source\":[\"aws.macie\"]}"

MacieFindings is the name that you want to give the rule. The response will be an ARN – note it because you will need it.

Now we want to specify the AWS Lambda function that will be invoked by our EventBridge rule:

aws events put-targets \
  --rule MacieFindings \
  --targets Id=1,Arn=arn:aws:lambda:regionalEndpoint:accountID:function:my-findings-function

Just replace the values in the function’s ARN with the details of your AWS Lambda function. Lastly, we just need to give EventBridge permissions to invoke the Lambda function:

aws lambda add-permission \
  --function-name redact-with-philter \
  --statement-id Sid \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:regionalEndpoint:accountId:rule:MacieFindings

Again, update the ARN as appropriate.

Now, when Amazon Macie runs and finds potentially sensitive information in an object in one of your Amazon S3 buckets, an event will be sent to EventBridge, where the rule we created will invoke our Lambda function. The file will be sent to Philter where it will be redacted. The redacted text will then be returned to the Lambda function.

Summary

In this blog post we have provided the framework for using Philter alongside Amazon Macie, Amazon EventBridge, and AWS Lambda to redact PII, PHI, and other sensitive information from files in Amazon S3 buckets.

If you need help setting this up please reach out! We can help you through the steps.

Philter is available from the AWS Marketplace. Not using AWS? Philter is also available from the Google Cloud Marketplace and the Microsoft Azure Marketplace.

Redacting Text in Amazon Kinesis Data Firehose

Amazon Kinesis Firehose is a managed streaming service designed to take large amounts of data from one place to another. For example, you can take data from sources such as Amazon CloudWatch, AWS IoT, and custom applications using the AWS SDK, and deliver it to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch, and other services. In this post we will use Amazon S3 as the firehose’s destination.

In some cases you may need to manipulate the data as it goes through the firehose to remove sensitive information. In this blog post we will show how Amazon Kinesis Firehose and AWS Lambda can be used in conjunction with Philter to remove sensitive information (PII and PHI) from the text as it travels through the firehose.

Philter is software that redacts PII, PHI, and other sensitive information from text. Philter runs entirely within your private cloud and does not require any external connectivity. Your data never leaves your private cloud and is not sent to any third-party. In fact, you can run Philter without any external network connectivity and we recommend doing so!

Prerequisites

You must have a running instance of Philter. If you don’t already have a running instance of Philter you can launch one through the AWS Marketplace. There are CloudFormation and Terraform scripts for launching a single instance of Philter or a load-balanced auto-scaled set of Philter instances.

It’s not required that the instance of Philter be running in AWS but it is required that the instance of Philter be accessible from your AWS Lambda function. Running Philter and your AWS Lambda function in your own VPC allows your Lambda function to communicate locally with Philter from the function. This keeps your sensitive information from being sent over the public internet and keeps the network traffic inside your VPC.

Setting up the Amazon Kinesis Firehose Transformation

There is no need to duplicate an excellent blog post on creating an Amazon Kinesis Firehose Data Transformation with AWS Lambda. Instead, refer to the linked page and substitute the Python 3 code below for the code in that blog post.

Configuring the Firehose and the Lambda Function

To start, create an AWS Firehose and configure an AWS Lambda transformation. When creating the AWS Lambda function, select Python 3.7 and use the following code:

import base64

# requests is bundled with botocore in the Python 3.7 Lambda runtime.
from botocore.vendored import requests


def handler(event, context):

    output = []

    for record in event['records']:

        payload = base64.b64decode(record["data"])
        headers = {'Content-type': 'text/plain'}

        # verify=False skips validation of Philter's self-signed certificate.
        r = requests.post("https://PHILTER_IP:8080/api/filter", verify=False, data=payload, headers=headers, timeout=20)
        filtered = r.text

        output_record = {
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(filtered.encode('utf-8') + b'\n').decode('utf-8')
        }

        output.append(output_record)

    # Kinesis Data Firehose expects the transformed records wrapped in a 'records' key.
    return {'records': output}

The following Kinesis Firehose test event can be used to test the function:

{
  "invocationId":"invocationIdExample",
  "deliveryStreamArn":"arn:aws:kinesis:EXAMPLE",
  "region":"us-east-1",
  "records":[
    {
      "recordId":"49546986683135544286507457936321625675700192471156785154",
      "approximateArrivalTimestamp":1495072949453,
      "data":"R2VvcmdlIFdhc2hpbmd0b24gd2FzIHByZXNpZGVudCBhbmQgaGlzIHNzbiB3YXMgMTIzLTQ1LTY3ODkgYW5kIGhlIGxpdmVkIGF0IDkwMjEwLiBQYXRpZW50IGlkIDAwMDc2YSBhbmQgOTM4MjFhLiBIZSBpcyBvbiBiaW90aW4uIERpYWdub3NlZCB3aXRoIEEwMTAwLg=="
    },
    {
      "recordId":"49546986683135544286507457936321625675700192471156785154",
      "approximateArrivalTimestamp":1495072949453,
      "data":"R2VvcmdlIFdhc2hpbmd0b24gd2FzIHByZXNpZGVudCBhbmQgaGlzIHNzbiB3YXMgMTIzLTQ1LTY3ODkgYW5kIGhlIGxpdmVkIGF0IDkwMjEwLiBQYXRpZW50IGlkIDAwMDc2YSBhbmQgOTM4MjFhLiBIZSBpcyBvbiBiaW90aW4uIERpYWdub3NlZCB3aXRoIEEwMTAwLg=="
    }
  ]
}

This test event contains two records; the data for each is base64-encoded sample text containing PII such as an SSN and a ZIP code. When the test is executed, each returned record’s data field holds the base64-encoded redacted text. For an input of “He lived in 90210 and his SSN was 123-45-6789.”, the decoded redacted text is:

He lived in {{{REDACTED-zip-code}}} and his SSN was {{{REDACTED-ssn}}}.

When running the test, the AWS Lambda function extracts the data from each record in the firehose event and submits it to Philter for filtering. Note that in our Python function we are ignoring Philter’s self-signed certificate. It is recommended that you use a valid signed certificate for Philter.

When data is now published to the Amazon Kinesis Data Firehose stream, the data will be processed by the AWS Lambda function and Philter prior to exiting the firehose at its configured destination.

Processing Data

We can use the AWS CLI to publish data to our Amazon Kinesis Firehose stream called sensitive-text:

aws firehose put-record --delivery-stream-name sensitive-text \
  --cli-binary-format raw-in-base64-out \
  --record '{"Data":"He lived in 90210 and his SSN was 123-45-6789."}'

(The --cli-binary-format flag is needed with AWS CLI v2; it can be omitted with v1.)

Check the destination S3 bucket and you will have a single object with the following line:

He lived in {{{REDACTED-zip-code}}} and his SSN was {{{REDACTED-ssn}}}.

Conclusion

In this blog post we have created an Amazon Kinesis Data Firehose pipeline that uses an AWS Lambda function to remove PII and PHI from the text in the streaming pipeline.

Philter is available from the AWS Marketplace. Not using AWS? Philter is also available from the Google Cloud Marketplace and the Microsoft Azure Marketplace.

Phileas — The Open Source PII and PHI redaction engine

I am delighted to announce that the project providing the core PII and PHI redaction capabilities of Philter and Phirestream is now open source! Introducing Phileas, the PII and PHI redaction engine! Phileas is now available under the Apache license on GitHub.

Both Philter and Phirestream use Phileas to identify and redact sensitive information like PII and PHI. Phileas does all of the heavy lifting, while Philter and Phirestream make its functionality user-friendly and provide the NLP models.

Everyone is welcome to look at the code that powers Philter and Phirestream, use it, and contribute! In the next few weeks we will be adding better developer documentation to help you utilize Phileas in your applications. For the past 5 years, Phileas was only an internal project used by Philter and Phirestream, so please hang with us while we smooth out the edges and add user-facing documentation!

Philter and Phirestream will remain on the AWS, Azure, and Google Cloud marketplaces. We will continue to provide commercial support for those products. New versions of Philter and Phirestream will use the open source Phileas project.

We decided to open source Phileas because, firstly, we believe in open source. We also want to give our users the ability to look into how Philter and Phirestream work. Identifying and redacting sensitive information is a challenge with important implications! We want our users to have a better understanding of how these products work and to have a more open line of communication as to what features are implemented next. In that regard, we will be migrating our tasks over from our private Jira to GitHub issues in the next few days as well.

What is format-preserving encryption?

In cryptography, you have plain text and cipher text. An encryption algorithm transforms the plain text into the cipher text. The cipher text won’t look anything like the plain text, in terms of characters and length. There are many different kinds of encryption algorithms, serving many different purposes. The cipher text for each of these algorithms will all be different.

Let’s take the case of a credit card number, a common piece of sensitive information that is often encrypted. A credit card number is 16 digits long. Encrypting the credit card number with the industry standard AES-128-CBC algorithm will produce a cipher text much longer than the credit card number. If we are storing the credit card number in a database column configured for length 16, the cipher text will be too long to be stored in the database column.

Format-preserving encryption is a method of encryption in which the cipher text retains the same format as the plain text. For example, encrypting a credit card number with a format-preserving encryption algorithm will result in a cipher text of 16 characters in length, but it will otherwise look nothing like the original credit card number. Typically, only numeric, alphabetic, or alphanumeric characters can be used with format-preserving encryption.

The cipher text can be decrypted into the original plain text if the original credit card numbers are needed.
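
To make the idea concrete, here is a toy illustration in Python: a keyed digit-wise shift that keeps a 16-digit number 16 digits long and is reversible. This is not real format-preserving encryption (production systems use vetted constructions such as FF1 or FF3-1 from NIST SP 800-38G); it only demonstrates the length-preserving, digits-to-digits property.

```python
import hashlib

def _digit_stream(key: str, length: int):
    """Derive a repeatable stream of digits 0-9 from the key (toy KDF)."""
    hex_pool = hashlib.sha256(key.encode()).hexdigest()
    while len(hex_pool) < length:
        hex_pool += hashlib.sha256(hex_pool.encode()).hexdigest()
    return [int(c, 16) % 10 for c in hex_pool[:length]]

def toy_fpe_encrypt(digits: str, key: str) -> str:
    """Shift each digit by a key-derived amount, mod 10."""
    return "".join(str((int(d) + k) % 10)
                   for d, k in zip(digits, _digit_stream(key, len(digits))))

def toy_fpe_decrypt(digits: str, key: str) -> str:
    """Invert the shift to recover the original digits."""
    return "".join(str((int(d) - k) % 10)
                   for d, k in zip(digits, _digit_stream(key, len(digits))))

card = "4111111111111111"
encrypted = toy_fpe_encrypt(card, "secret-key")
assert len(encrypted) == len(card) and encrypted.isdigit()  # format preserved
assert toy_fpe_decrypt(encrypted, "secret-key") == card     # reversible
```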

Learn more about format-preserving encryption.

Format-Preserving Encryption in Philter

Philter 2.1.0 adds format-preserving encryption as a filter strategy for bank numbers, bitcoin addresses, credit cards, drivers license numbers, IBAN codes, passport numbers, SSNs/TINs, package tracking numbers, and VINs. By specifying FPE_ENCRYPT_REPLACE as the filter strategy for one of those items of PII, Philter will encrypt the PII using format-preserving encryption.

Philter will replace the original PII with its encrypted version, and since format-preserving encryption was used, the replacement (encrypted) value will appear in the same format. This is useful when it is important that PII be encrypted but its length not be modified.

If you are not concerned about encrypting the original value, you can use the RANDOM_REPLACE filter strategy to replace PII with random values also in the same format as the original PII. Just remember that random replacement is not encryption and is not reversible. Use random replacement when using documents for machine learning or other processes where the original values are not important.

To enable format-preserving encryption for a type of sensitive information, simply add it to the filter profile. The following is an example filter profile that uses format-preserving encryption for credit card numbers. Just replace the key and tweak values with your own values.

{
  "name": "credit-cards",
  "identifiers": {
    "creditCardNumbers": {
      "creditCardNumberFilterStrategies": [
        {
          "strategy": "FPE_ENCRYPT_REPLACE",
          "key": "...",
          "tweak": "..."
        }
      ]
    }
  }
}

Learn more about format-preserving encryption in Philter’s User Guide. Also, Philter has several other filter strategies to give full control over how your data is redacted.