Our blog posts have been somewhat fewer and farther between since the release of the Omnibus Rule, primarily because we have been busily working to understand the subtleties of the Omnibus Rule, while helping our clients implement the necessary changes. We have also seen a sharp uptick in inquiries related to breaches and potential breaches. But sometimes it’s worth focusing on the more mundane aspects of HIPAA compliance in this new post-Omnibus, high-tech, HIPAA-happy (or HIPAA-headache-inducing, depending on one’s perspective) world.
One such mundane, but important, issue has plagued some of our most diligent, compliance-seeking business associate and covered entity clients. They ask: Where do we draw the line between a run-of-the-mill, garden-variety “security incident” and a “presumed breach” when it comes to reporting? How do we describe these types of reporting obligations in our Business Associate Agreements?
The Omnibus Rule doesn’t help much to answer this question. The definition of “breach” has been revised under the Omnibus Rule, but the definition of “security incident” remains broad. A “security incident” includes “the attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system.” The Omnibus Rule also requires business associate agreements to require business associates to “[r]eport to the covered entity any security incident of which it becomes aware, including breaches of unsecured protected health information as required” by the breach notification requirements of the Omnibus Rule. Really? Does HHS truly expect us to give or get reports of every attempted hacking incident? What about system interferences caused by power outages? What if a paper medical record is left on a table or chair unattended for several minutes (or hours), whether in a public or even a private area? The potential examples of gray areas are limitless.
I think the quick answer is that not all reports are created equal. Yes, the Omnibus Rule makes it clear that almost every unauthorized “acquisition, access, use, or disclosure” is presumed to be a breach (unless a “low probability” that the information has been compromised is demonstrated through a risk assessment that considers at least four minimum factors), and triggers very specific reporting obligations. Reports must be given within specific time periods and must include specific information. However, the Omnibus Rule does not require this type of specificity for reports of “security incidents” that do not rise to the level of breaches or presumed breaches.
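For readers who think in flowcharts, the presumption can be sketched as a simple decision rule. This is an illustrative sketch only, not legal advice: the four factor names track the minimum factors in the Omnibus Rule’s risk assessment, but the all-or-nothing scoring rule and the function and class names are hypothetical simplifications (a real assessment is a documented, qualitative judgment).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskAssessment:
    """One rating per minimum factor: 'low', 'medium', or 'high'."""
    nature_and_extent_of_phi: str         # identifiers involved, re-identification risk
    unauthorized_recipient: str           # who impermissibly used or received the PHI
    phi_actually_acquired_or_viewed: str  # was the PHI actually acquired or viewed?
    mitigation: str                       # extent to which the risk has been mitigated

def is_presumed_breach(assessment: Optional[RiskAssessment]) -> bool:
    """An impermissible acquisition, access, use, or disclosure is presumed
    to be a breach unless a documented risk assessment demonstrates a low
    probability that the PHI has been compromised."""
    if assessment is None:
        # No risk assessment was performed, so the presumption stands.
        return True
    factors = (
        assessment.nature_and_extent_of_phi,
        assessment.unauthorized_recipient,
        assessment.phi_actually_acquired_or_viewed,
        assessment.mitigation,
    )
    # Hypothetical rule: only if every factor is rated 'low' do we treat
    # the probability of compromise as low enough to rebut the presumption.
    return not all(f == "low" for f in factors)

# A disclosure with no documented risk assessment remains a presumed breach.
undocumented = is_presumed_breach(None)                          # True
# A fully mitigated, low-risk incident can rebut the presumption.
rebutted = is_presumed_breach(RiskAssessment("low", "low", "low", "low"))  # False
```

The point of the sketch is the default: without a documented assessment, the incident is reportable as a breach, which is exactly why the presumption changes covered entities’ and business associates’ paperwork burden.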
The National Institute of Standards and Technology (NIST) issued a “Computer Security Incident Handling Guide” in August 2012, roughly six months before the Omnibus Rule was released. One guideline seems particularly relevant when it comes to figuring out how to deal with various types of “security incidents”:
Organizations should create written guidelines for prioritizing incidents.
Prioritizing the handling of individual incidents is a critical decision point in the incident response process. Effective information sharing can help an organization identify situations that are of greater severity and demand immediate attention. Incidents should be prioritized based on the relevant factors, such as the functional impact of the incident (e.g., current and likely future negative impact to business functions), the information impact of the incident (e.g., effect on the confidentiality, integrity, and availability of the organization’s information), and the recoverability from the incident (e.g., the time and types of resources that must be spent on recovering from the incident).
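NIST’s three factors lend themselves to a simple triage sketch: rate an incident on functional impact, information impact, and recoverability, then bucket it by severity. The numeric weights, thresholds, and bucket names below are hypothetical choices for illustration; NIST describes the factors but does not prescribe a scoring formula.

```python
# Map qualitative ratings to hypothetical numeric weights.
LEVELS = {"none": 0, "low": 1, "medium": 2, "high": 3}

def prioritize(functional_impact: str, information_impact: str,
               recoverability_effort: str) -> str:
    """Return a handling priority for a security incident based on the
    three NIST prioritization factors."""
    score = (LEVELS[functional_impact]
             + LEVELS[information_impact]
             + LEVELS[recoverability_effort])
    if score >= 7:
        return "critical"   # demands immediate attention
    if score >= 4:
        return "elevated"   # handle ahead of routine work
    return "routine"        # log and review in the ordinary course

# An attempted-but-blocked port scan: no impact on any factor.
scan = prioritize("none", "none", "none")        # "routine"
# Ransomware halting clinical systems and exposing PHI.
ransomware = prioritize("high", "high", "high")  # "critical"
```

Written guidelines like these let an organization report the port scan in a periodic summary while escalating the ransomware incident immediately, which is precisely the distinction the next paragraph argues for.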
Let’s pay attention to NIST and prioritize our security incident reporting based on relevant factors. Of course, we want to ensure HIPAA compliance and appropriate breach and potential breach prevention, reporting, and mitigation, but let’s not clog operational waterways with “incident” reporting overload.