PII is everywhere in daily work. It flows through the emails you send, the spreadsheets you share, the support tickets you resolve, and the screenshots you attach to a Slack message without a second thought. Most people encounter personally identifiable information dozens of times a day without ever stopping to label it as such — and that gap between awareness and action is exactly where data breaches begin.
Understanding what PII is, how to recognize its different forms, and which laws require you to protect it is no longer something only lawyers and compliance officers need to know. In 2026, it is a foundational skill for anyone who works with data — which is to say, nearly everyone.
Personally identifiable information (PII) is any data that can be used — on its own or in combination with other information — to identify, locate, or contact a specific individual. The definition is intentionally broad. A social security number is obviously PII. But so is a name paired with a zip code, or an IP address combined with a browsing history, if that combination is enough to single out one person from the crowd.
The U.S. National Institute of Standards and Technology (NIST) defines PII as "any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity... and (2) any other information that is linked or linkable to an individual." The European Union's GDPR uses the term "personal data" but covers the same ground: any information relating to an identified or identifiable natural person.
The key word is identifiable. If a piece of data — alone or combined with other readily available data — can point to a real, living person, it qualifies as PII and deserves protection.
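Direct identifiers with a fixed shape are the easiest to spot programmatically. The sketch below shows what simple pattern-based detection can look like; the patterns, sample text, and function name are illustrative only, and real detectors use far more patterns plus validation steps (context checks, checksum rules like Luhn for card numbers):

```python
import re

# Illustrative patterns for a few common direct identifiers.
# Not production-grade: real detectors handle many more formats.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_direct_pii(text):
    """Return (kind, match) pairs for every direct identifier found."""
    hits = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits

sample = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(find_direct_pii(sample))
```

Note what this sketch cannot do: a full name or a street address has no fixed pattern, which is why serious detection combines regexes with named-entity recognition rather than relying on either alone.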
Not all PII carries the same risk level. A useful way to think about it is the distinction between direct PII and indirect PII.
Direct PII can identify an individual on its own, without needing to be combined with anything else. Exposing a single piece of direct PII is enough to constitute a privacy violation.
Indirect PII (sometimes called quasi-identifiers) may seem harmless in isolation — knowing someone's job title tells you very little. But when several indirect identifiers are combined, they can narrow the field to a single person just as effectively as a social security number.
| Type | Examples | Risk on its own |
|---|---|---|
| Direct PII | Full name, Social Security Number (SSN), passport number, driver's license number, biometric data (fingerprint, facial scan), email address, phone number, bank account numbers, credit card numbers, national ID number | High — can identify a person alone |
| Indirect PII | Job title, employer name, zip code, age, gender, race or ethnicity, date of birth (partial), general location, IP address, device identifiers, browsing behavior | Low alone, high when combined |
A classic research finding illustrates the indirect PII problem well: in a widely cited study, Latanya Sweeney demonstrated that 87% of the U.S. population could be uniquely identified using only three data points — zip code, date of birth, and gender. None of those three attributes would typically be flagged as sensitive on its own, yet together they form a precise fingerprint.
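The effect is easy to reproduce on toy data. In the sketch below (invented records, purely illustrative), every individual field value is shared by at least two people, yet every three-field combination points to exactly one person:

```python
from collections import Counter

# Toy quasi-identifier records: (zip_code, date_of_birth, gender).
# Each field value alone is shared by two or more people.
records = [
    ("02138", "1965-07-21", "F"),
    ("02138", "1971-03-02", "M"),
    ("02139", "1965-07-21", "M"),
    ("02139", "1980-11-15", "F"),
    ("02140", "1971-03-02", "F"),
    ("02140", "1980-11-15", "M"),
]

def group_sizes(rows):
    """Count how many people share each full combination of fields."""
    return Counter(rows)

sizes = group_sizes(records)
unique = sum(1 for count in sizes.values() if count == 1)
print(f"{unique} of {len(records)} people are uniquely identified "
      f"by zip + date of birth + gender")
```

This is the intuition behind k-anonymity: a dataset is only as safe as the size of its smallest group of indistinguishable records, and here every group has size one.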
Most PII exposure incidents happen not during deliberate data theft, but during routine work: the emails, shared spreadsheets, support tickets, exported reports, and screenshots that circulate constantly in day-to-day professional contexts.
The screenshot scenario deserves particular attention. A developer pastes a screenshot into a bug report. An account manager shares a screen recording in a client Slack channel. A support agent attaches a screenshot to document a customer complaint. In each case, PII that was never meant to be shared ends up visible to people who have no legitimate need to see it.
A patchwork of regulations now governs how organizations must collect, store, and protect personally identifiable information. The four most significant frameworks for most businesses are GDPR, CCPA, HIPAA, and SOX.
GDPR (General Data Protection Regulation) — Enforced since 2018, the EU's GDPR applies to any organization that handles the personal data of EU residents, regardless of where the organization itself is based. It requires a lawful basis for processing personal data, mandates breach notification within 72 hours, gives individuals rights of access and erasure, and imposes fines of up to €20 million or 4% of global annual turnover. For practical day-to-day work, GDPR means that sharing screenshots containing a customer's personal data with someone who has no legitimate reason to see it can constitute a data breach. See our guide on GDPR-compliant screenshot sharing on macOS for specifics.
CCPA (California Consumer Privacy Act) — In effect since 2020 and strengthened by the CPRA amendments, the CCPA gives California residents the right to know what personal information is collected about them, the right to delete it, and the right to opt out of its sale. Businesses that meet certain size or data-volume thresholds must comply, and violations carry fines of up to $7,500 per intentional violation. For companies processing screenshots or documents that include California residents' data, CCPA obligations are real and enforceable.
HIPAA (Health Insurance Portability and Accountability Act) — HIPAA governs the handling of protected health information (PHI) in the United States. PHI is a subset of PII that includes any health-related data linked to an identifiable individual — diagnoses, treatment records, insurance information, and more. HIPAA's Security Rule requires administrative, physical, and technical safeguards. Sharing a screenshot that includes a patient's name and diagnosis, even internally, can trigger a reportable breach. Healthcare and adjacent industries must be especially rigorous about redacting PII from any visual content before sharing. Read more in our post on HIPAA-compliant screenshot sharing in healthcare.
SOX (Sarbanes-Oxley Act) — SOX applies to publicly traded U.S. companies and focuses on financial data integrity. While it is not a privacy law in the same sense as GDPR or HIPAA, SOX does impose strict controls on how financial records — which often contain PII such as employee and investor information — are stored and accessed. Improper handling of financial PII can contribute to SOX audit failures, with penalties including fines and criminal liability for executives.
The consequences of a PII exposure incident fall into three categories: regulatory fines, direct financial costs, and reputational damage — and in major incidents, all three hit simultaneously.
On the regulatory side, fines have grown dramatically. Amazon was fined €746 million under GDPR in 2021. Meta has faced multiple nine-figure penalties. In the United States, the FTC reached a $5 billion settlement with Facebook over privacy violations. These are not edge cases reserved for reckless giants — smaller organizations face proportionate enforcement too, and regulators in multiple jurisdictions have made clear that ignorance of the rules is not a mitigating factor.
Beyond fines, data breaches carry direct costs: notifying affected individuals (often legally required), providing credit monitoring services, legal fees, and remediation work. IBM's annual Cost of a Data Breach report has consistently put the average global cost of a breach above $4 million, with healthcare breaches averaging even higher.
Reputational damage is harder to quantify but often the most lasting consequence. Customers who learn that an organization carelessly handled their personal data tend to take their business elsewhere — and in an era when breach disclosures are public record and social media amplifies bad news instantly, the erosion of trust can outlast any regulatory penalty.
The practical challenge is not understanding that PII should be protected — most professionals accept that premise. The challenge is that protection requires an extra step at the exact moment when people are busy, focused on something else, and trying to move quickly. Manual redaction is slow, inconsistent, and easy to forget.
The key practices for handling PII safely in visual content follow directly from that challenge: share personal data only with people who have a legitimate need to see it, redact PII from screenshots and documents before they leave your hands, and make redaction an automatic step in the workflow rather than a manual one you have to remember.
BlurData is a macOS app built specifically for this workflow. It automatically detects PII in screenshots and PDFs — names, emails, phone numbers, account numbers, license plates, and more — and blurs it before you share. Everything runs on-device, so no sensitive content is ever uploaded to an external server. You can review and adjust the automatic blurs, add manual redactions to anything the detector missed, and export a clean version in seconds.
Stop sharing unredacted screenshots by accident. BlurData auto-detects PII in your screenshots and PDFs and blurs it instantly — fully offline, no data leaves your Mac.
Try BlurData for free

PII is not a compliance abstraction — it is the personal information of real people who have an expectation that organizations will handle it responsibly. Understanding the distinction between direct and indirect PII, knowing which laws apply to your context, and building practical habits around redaction in everyday documents and screenshots are all concrete steps toward that standard.
The regulations will continue to evolve, and the fines will continue to grow. But the more fundamental reason to handle PII carefully is simpler: the data belongs to people, and they deserve to have it treated that way.