Security Breach Notification

This blog series has been following breaches of Protected Health Information (“PHI”) that have been reported on the U.S. Department of Health and Human Services’ (“HHS”) ever-lengthening list (the “HHS List”) of breaches of unsecured PHI affecting 500 or more individuals (the “List Breaches”).  As reported in a previous blog post in this series, as of August 14, 2013 (and as of today), 646 List Breaches had been posted.

Several prior posts in this series, here and here, addressed the extent to which such List Breaches are being reported by covered entities (“CEs”) as having been attributable to events involving business associates (“BAs”).

As of August 20, 2013, 146 of the total of 646 List Breaches (22.6%) reportedly involved BAs of the reporting CEs.  This is remarkably consistent with the percentage of 22.3% (83 of the total of 372 List Breaches) as of December 2, 2011, reportedly involving BAs of the reporting CEs.

Further analysis of the HHS List as of August 20, 2013, reveals the following:

• 3 of the 6 List Breaches (50%) that affected 1,000,000 or more individuals reportedly involved BAs of the reporting CEs.

• 16 of the 43 List Breaches (37.2%) that affected between 30,000 and 999,999 individuals reportedly involved BAs of the reporting CEs.

• 21 of the 80 List Breaches (26.3%) that affected between 10,000 and 29,999 individuals reportedly involved BAs of the reporting CEs.

• 106 of the 517 List Breaches (20.5%) that affected between 500 and 9,999 individuals reportedly involved BAs of the reporting CEs.

While the foregoing review is only a snapshot of the HHS List as of a given date, it suggests that, as the size of a List Breach increases, so does the likelihood that involvement of a BA will be reported. Among the List Breaches affecting fewer than 10,000 individuals, however, which make up the great majority of the HHS List, 79.5% reported no involvement of a BA.
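For readers who want to check the arithmetic, the tier-by-tier figures above can be recomputed with a short script. Only the counts come from this post; the script itself is merely illustrative:

```python
# BA-involvement counts by breach-size tier, as reported in this post's
# snapshot of the HHS List (August 20, 2013).
tiers = [
    # (size tier, breaches involving a BA, total breaches in tier)
    ("1,000,000+",        3,   6),
    ("30,000-999,999",   16,  43),
    ("10,000-29,999",    21,  80),
    ("500-9,999",       106, 517),
]

for label, ba, total in tiers:
    print(f"{label:>16}: {ba:>3} of {total:>3} involved a BA "
          f"({100.0 * ba / total:.2f}%)")

overall_ba = sum(ba for _, ba, _ in tiers)
overall_total = sum(total for _, _, total in tiers)
print(f"Overall: {overall_ba} of {overall_total} "
      f"({100.0 * overall_ba / overall_total:.2f}%)")  # 146 of 646, 22.60%
```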

More data will be required before the impact of BA involvement in smaller and larger List Breaches becomes clearer. However, there are indications that the larger the List Breach reported by a CE, the greater the likelihood that a BA will be involved.  It is therefore incumbent upon any CE, at a minimum, to

(i) choose its BAs with care,

(ii) enter into effective business associate agreements with terms appropriate for the specific risks that may be present, and

(iii) continue to monitor the total performance of BAs, including both delivery of services and HIPAA compliance.

This blog series has been following breaches of Protected Health Information (“PHI”) that have been reported on the U.S. Department of Health and Human Services’ (“HHS”) ever-lengthening list (the “HHS List”) of breaches of unsecured PHI affecting 500 or more individuals (the “List Breaches”). Previous blog posts in this series, here and here, discussed the volume of List Breaches that occurred in earlier periods. As of August 13, 2013, 646 List Breaches had been posted.

In the almost 3½ years since the inception of the HHS List on March 4, 2010, there have been 646 postings, an annualized average of approximately 189 postings per twelve-month period. Approximately 334 (51.7%) of the postings reported the type of breach as “theft” of all kinds, including laptops, other portable electronic devices, desktop computers, network servers, paper records and others. If the approximately 66 additional List Breaches reporting the type of breach as a “loss” of various types are added to the 334 “theft” events, the total for the two categories swells to approximately 400, or 61.9% of the 646 posted List Breaches. Combining the two categories appears to make sense, since it is likely that a number of the List Breaches categorized as “loss” events involved some theft aspects.

Even more significant may be the fact that approximately 230 (35.6%) of the List Breaches reported the cause or partial cause of the breach as “theft” or “loss” of laptops or other portable electronic devices. Theft or loss of laptops or other portable electronic devices thus constituted 57.5% of the approximately 400 List Breaches that involved reported theft or loss.
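As a quick sanity check, the figures in the two paragraphs above can be recomputed as simple ratios (the counts are those reported in this post):

```python
# Counts reported in this post (HHS List snapshot, August 2013).
total_breaches = 646
theft = 334            # breach type reported as "theft" of all kinds
loss = 66              # breach type reported as "loss" of various types
portable = 230         # theft or loss of laptops/portable devices

theft_or_loss = theft + loss   # the 400 combined theft/loss events

def pct(part, whole):
    return 100.0 * part / whole

print(f"theft alone:             {pct(theft, total_breaches):.1f}%")         # 51.7%
print(f"theft + loss:            {pct(theft_or_loss, total_breaches):.1f}%") # 61.9%
print(f"portable devices:        {pct(portable, total_breaches):.1f}%")      # 35.6%
print(f"portable / (theft+loss): {pct(portable, theft_or_loss):.1f}%")       # 57.5%
```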

It is likely that it will be a number of months after the effective date of the Omnibus Rule on September 23, 2013, that List Breaches can begin to be evaluated under post-Omnibus Rule standards, such as the presumption that a PHI security event is a breach unless established otherwise. It will be interesting to see if any of the numbers reported above materially change in the post-Omnibus Rule environment.

As has been emphasized in the past, it may have become more a question of when a covered entity (“CE”) or business associate (“BA”) will suffer a PHI security breach and how severe the breach will be, rather than if it will ever suffer a breach. The geometric increase in the use of portable electronic devices to receive, access and store PHI should be monitored carefully by CEs and BAs, as it can be expected that this type of security breach will continue to expand. Effective policies and procedures must be established by CEs and BAs to govern the use of such electronic devices, both entity-supplied and personal. Many individuals have multiple portable electronic devices of both types that may become repositories of unprotected PHI, whether voluntarily or involuntarily.

If you are a federally-facilitated health insurance exchange (FFE), a “non-Exchange entity”, or a State Exchange, the answer is “Quick, report!”  Those involved with the new health insurance exchanges (or “Marketplaces”?  The name, like the rules, seems to be a moving and elusive target) should make note that privacy and security incidents and breaches are to be reported within one hour of their discovery, according to regulations proposed by the Department of Health and Human Services (HHS) on June 19, 2013 (“Exchange Regulations”).  That’s right – within one hour, or a measly 60 minutes, of discovery of a breach involving personally identifiable information (PII), the entity where the breach occurs must report it to HHS.  Even a mere security “incident” would have to be reported within one hour.  The broad term “incident” would include:

[t]he act of violating an explicit or implied security policy, which includes attempts (either failed or successful) to gain unauthorized access to a system or its data, unwanted disruption or denial of service, the unauthorized use of a system for the processing or storage of data; and changes to system hardware, firmware, or software characteristics without the owner’s knowledge, instruction, or consent. 

Whereas HIPAA breaches (those involving protected health information, or PHI) affecting 500 or more individuals must be reported to HHS “without unreasonable delay and in no case later than 60 days after discovery,” and (as discussed here in an earlier blog post) there is no express requirement for reporting of security incidents to HHS, HHS’s new proposal requires a 60-minute turnaround for PII breaches and incidents alike.  HHS says that it “considered but declined to use the definitions” for “incident” and “breach” provided under the HIPAA regulations because “the PHI that triggers the HIPAA requirements is considered a subset of PII, and we believe that the HIPAA definitions would not provide broad enough protections….”

The 60-minute turnaround time may sound familiar to Medicare Shared Savings Program participants (MSSPs, also known as Medicare Accountable Care Organizations or ACOs).  Approved MSSPs must sign a Data Use Agreement with the Centers for Medicare & Medicaid Services (CMS) before they can obtain data from CMS that contains Medicare beneficiaries’ PHI.  The 60-minute turnaround under the Data Use Agreement is even a bit more onerous than that proposed in the Exchange Regulations, in that breaches of PII must be reported within 60 minutes of the breach, loss, or unauthorized disclosure itself, rather than within 60 minutes of its discovery.  Then again, the Data Use Agreement doesn’t require reporting of “incidents” like attempted access or power interruptions, and CMS is thoughtful enough to provide a phone number and email address to be used in making the reports.
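To make the difference between the two 60-minute clocks concrete, here is a purely illustrative sketch; the timestamps are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps, for illustration only: a breach occurs in the
# morning but is not discovered until the afternoon.
breach_occurred   = datetime(2013, 8, 1, 9, 0)
breach_discovered = datetime(2013, 8, 1, 14, 30)

# Proposed Exchange Regulations: 60 minutes from *discovery*.
exchange_deadline = breach_discovered + timedelta(minutes=60)
# CMS Data Use Agreement: 60 minutes from the *event* itself.
dua_deadline = breach_occurred + timedelta(minutes=60)

print("Exchange Regulations deadline:", exchange_deadline)  # 15:30
print("Data Use Agreement deadline:  ", dua_deadline)       # 10:00
# With these example times, the Data Use Agreement deadline passed hours
# before anyone even discovered the breach.
```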

Elizabeth Litten and Michael Kline write:

For the second time in less than 2 ½ years, the Indiana Family and Social Services Administration (the “FSSA”) has suffered a large breach of protected health information (“PHI”) as the result of actions of a business associate (“BA”).  If I’m a resident of Indiana and a client of FSSA, I may have received a surprise in the mail sometime between April 6th and late May or early June of this year.  I might have opened my FSSA mail to see detailed information about another FSSA client that could have included their name, address, case number, date of birth, gender, race, telephone number, email address, types of benefits received, monthly benefit amount, employer information, some financial information such as monthly income and expenses, bank balances and other assets, and certain medical information such as provider name, whether the client receives disability benefits and medical status or condition, and certain information about the client’s household members like name, gender and date of birth.

What did (or should) I do with all this PHI?  In an announcement made on July 1, 2013, the FSSA is telling its clients to return the accidentally mailed documents to the local FSSA office, or to shred them.  The FSSA provides detailed information as to how the breach occurred (a programming error made by its BA document management systems contractor, RCR Technology Corporation), and what steps can be taken by individuals whose information might have been breached to protect their credit.  But the FSSA is notably vague in providing details as to how recipients of other FSSA clients’ information should make sure that the information is not disclosed to others.  A client that has held on to the private information of another client since receiving it in April, May, or June might decide to take it to the local FSSA office in person (risking that it could be left on a bus or in a car or simply lost along the way), might send it to the wrong address, or might not think to put “Personal/Confidential” on the envelope or mark it in a way that would alert the person opening it to its private contents.  Possibly even worse, the client might simply dump it in the regular or recyclable trash (opened or even unopened in the belief that it is junk mail) where unknown persons can retrieve it.

This is the second reported large PHI security breach suffered by the FSSA as a covered entity (“CE”) at the hands of a BA.  The Department of Health and Human Services (“HHS”) list of large PHI security breaches reflects that the FSSA as the CE reported that, on November 9, 2010, its BA, the Southwestern Indiana Regional Council on Aging, had experienced the theft of a laptop computer containing unprotected PHI of 757 individuals.

Of course, programming mistakes and the many other human and technical errors that lead to breaches are, and will continue to be, unavoidable despite the parties’ best intentions.  Responding promptly, thoughtfully, and accurately to PHI breaches will be key to minimizing damage.  While the FSSA appears to have responded promptly, thoughtfully, and accurately, it is unclear when the FSSA first learned of the breach and its scope from its BA, and thus whether it reported the breach to affected individuals and HHS within the maximum period of 60 days from discovery.  Finally, including more specific, practical instructions regarding what to do when someone else’s PHI shows up in your mail or lands in your hands could help avoid further breaches and would remind the public to treat PHI with particular care.

In January 2011 this blog series discussed here and here that the University of Rochester Medical Center (“URMC” or the “Medical Center”) became a marcher twice in 2010 in the parade of large Protected Health Information (“PHI”) security breaches.  The U.S. Department of Health and Human Services (“HHS”) publishes a list (the “HHS List”) that posts large breaches of unsecured PHI affecting 500 or more individuals.  The HHS List now reveals that URMC reported a third large security breach, which occurred on February 15, 2013 (the “2013 Breach”), and that 537 individuals were affected by a URMC loss of an “other portable electronic device.”  There are several interesting aspects of the 2013 Breach.

First, this blog series earlier observed that URMC apparently determined that it was not necessary or appropriate to publish its PHI breaches in 2010 in the URMC Newsroom or elsewhere on the URMC website.  Our later post reported a reader’s comment that the second breach of URMC in 2010 could be located with some effort on the general University of Rochester website.  In contrast, however, the 2013 Breach was prominently published by URMC on May 3, 2013 in the URMC Newsroom and can be found in the 2013 archives.

Apparently a URMC resident physician misplaced a USB flash drive that carried PHI and was used to transport information for studying and continuously improving surgical results. The information was copied from other files, and the Medical Center therefore believes its loss will not affect follow-up care for any patients.  Additionally, the URMC posting observed that “after an exhaustive but unproductive search, hospital leaders believe that the drive likely was destroyed in the laundry.”

According to the URMC posting,

The flash drive included the patients’ names, gender, age, date of birth, weight, telephone number, medical record number (a number internal to URMC), orthopaedic physician’s name, date of service, diagnosis, diagnostic study, procedure, and complications, if any. No address, social security number or insurance information of any patient was included.

It is refreshing that URMC has given the public notice of the 2013 Breach on its website.  Significantly, URMC also disclosed its development of new policies for the use of smart phones, iPads and other mobile devices to safeguard protected health information. In addition, URMC is retraining users of its PHI and encouraging its physicians and staff to access sensitive patient information using its secure network rather than via portable devices.

One puzzling aspect of URMC’s actions is that its notifications to affected individuals and the posting by the Medical Center did not occur until the week of April 28, 2013. This is clearly past the date required by HHS.  HHS requires that notifications be made “without unreasonable delay and in no case later than 60 days following the discovery of a breach.”  Sixty days after the breach discovery on February 15, 2013 would have been April 16, 2013.
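The deadline arithmetic is easy to verify (Python’s `date` type handles the month lengths):

```python
from datetime import date, timedelta

# HHS requires notification "without unreasonable delay and in no case
# later than 60 days following the discovery of a breach."
discovery = date(2013, 2, 15)             # discovery of the 2013 Breach
deadline = discovery + timedelta(days=60)
print(deadline)  # 2013-04-16
```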

It is clear that the proliferation of mobile devices has geometrically expanded the potential for lost or improperly accessed PHI.  Even the most carefully planned and communicated policies cannot assure the protection of PHI from inappropriate compromise, whether intentional or accidental.  Moreover, the continual advancement of technology in this area at lightning speed often renders policies obsolete almost as soon as they are finalized and disseminated.  In the long run, it may make the question of the potential for a PHI breach for a covered entity, business associate or subcontractor more of a matter of “when” and “how” rather than “if.”

Our blog posts have been somewhat fewer and farther between since the release of the Omnibus Rule, primarily because we have been busily working to understand the subtleties of the Omnibus Rule, while helping our clients implement the necessary changes. We have also seen a sharp uptick in inquiries related to breaches and potential breaches. But sometimes it’s worth focusing on the more mundane aspects of HIPAA compliance in this new post-Omnibus, high-tech, HIPAA-happy (or HIPAA-headache-inducing, depending on one’s perspective) world. 

One such mundane, but important, issue has plagued some of our most diligent, compliance-seeking business associate and covered entity clients. They ask: Where do we draw the line between a run-of-the-mill, ordinary garden variety “security incident” and a “presumed breach” when it comes to reporting? How do we describe these types of reporting obligations in our Business Associate Agreements? 

The Omnibus Rule doesn’t help much to answer this question. The definition of “breach” has been revised under the Omnibus Rule, but the definition of “security incident” remains broad. A “security incident” includes “the attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system.” The Omnibus Rule also requires business associate agreements to require business associates to “[r]eport to the covered entity any security incident of which it becomes aware, including breaches of unsecured protected health information as required” by the breach notification requirements of the Omnibus Rule. Really? Does HHS truly expect us to give or get reports of every attempted hacking incident? What about system interferences caused by power outages?  What if a paper medical record is left on a table or chair unattended for several minutes (or hours), whether in a public or even a private area? The potential examples of gray areas are limitless.

I think the quick answer is that not all reports are created equally. Yes, the Omnibus Rule makes it clear that almost every unauthorized “acquisition, access, use, or disclosure” is presumed to be a breach (unless a “low probability” the information has been compromised is demonstrated in accordance with a risk assessment that includes at least four minimum factors), and triggers very specific reporting obligations. Reports must be given within specific periods of time, and must include specific information. However, the Omnibus Rule does not require this type of specificity for reports of “security incidents” that do not rise to the level of being breaches or presumed breaches. 

The National Institute of Standards and Technology (NIST) issued a “Computer Security Incident Handling Guide” in August of 2012, approximately 6 months before the Omnibus Rule was released. One guideline seems particularly relevant when it comes to figuring out how to deal with various types of “security incidents”:

                        Organizations should create written guidelines for prioritizing incidents.

 

Prioritizing the handling of individual incidents is a critical decision point in the incident response process. Effective information sharing can help an organization identify situations that are of greater severity and demand immediate attention.  Incidents should be prioritized based on the relevant factors, such as the functional impact of the incident (e.g., current and likely future negative impact to business functions), the information impact of the incident (e.g., effect on the confidentiality, integrity, and availability of the organization’s information), and the recoverability from the incident (e.g., the time and types of resources that must be spent on recovering from the incident).
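As one illustration of what such a written guideline might look like in practice, here is a hypothetical scoring sketch loosely based on the three NIST factors quoted above. The scores, thresholds and labels are invented for illustration and are not drawn from the NIST guide:

```python
# Illustrative sketch only: one way an organization might encode the NIST
# prioritization factors (functional impact, information impact,
# recoverability) in a written guideline. All thresholds are hypothetical.
def prioritize(functional_impact, information_impact, recoverability_cost):
    """Each factor is scored 0 (none) to 3 (high); returns a priority label."""
    score = functional_impact + information_impact + recoverability_cost
    if score >= 7:
        return "critical - immediate response"
    if score >= 4:
        return "high - respond same day"
    if score >= 2:
        return "medium - queue for review"
    return "low - log only"

# A failed port scan: no functional or information impact, trivial recovery.
print(prioritize(0, 0, 0))   # low - log only
# A lost unencrypted laptop holding PHI: high information impact.
print(prioritize(1, 3, 2))   # high - respond same day
```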

Let’s pay attention to NIST and prioritize our security incident reporting based on relevant factors. Of course, we want to ensure HIPAA compliance and appropriate breach and potential breach prevention, reporting, and mitigation, but let’s not clog operational waterways with “incident” reporting overload. 

On February 7, 2013, our partner Keith McMurdy, Esq., posted an excellent entry on the Employee Benefits Blog of Fox Rothschild LLP that merits republishing for our readers as well. The post outlines some direct effects of the new HIPAA Omnibus Rule on employers and their health plans. 

Keith McMurdy writes as follows:

 

On January 25, the new (final?) rules about HIPAA Privacy under the HITECH Act were issued in the Federal Register.  While the effect of the new rules may not be to substantially change the way HIPAA privacy is viewed, there are a number of action items for employers as plan sponsors that have to be accomplished when these rules go into effect.

 

There are two pieces of good news.  The first is that the general purpose of compliance remains the same.  Plan sponsors have to ensure PHI is properly protected, refrain from impermissible disclosures and provide notices of security breaches.  The second is that the earliest possible deadline for compliance with the new rules is September 23, 2013, so there is some time to prepare.  But it is not a bad idea to start preparing now.  So let’s consider the key changes.

 

1. Tougher Security Breach Notification Standard

 

Under the old rule, notification to participants of a security breach was necessary only if the release of information "posed a significant risk of financial, reputational or other harm" to a covered person.  Now, that standard is tightened to apply to ANY security breach unless the plan sponsor can prove "a low probability that the [PHI] has been compromised based on a risk assessment."  This should encourage plan sponsors to tighten their security breach protections because any release, even things like accidental e-mails, can potentially become a reportable event.  So the first step in compliance would be to review security standards and document steps taken to avoid security breaches.

 

2. Tougher Standards for Business Associates Agreements

 

Because the new rule provides for penalties to a covered entity for breaches by business associates, the default position is that plan sponsors should be much more concerned about how compliant their business associates really are.  Where in the past, plan sponsors may have felt comfortable simply handing off certain protection functions to service providers, the new rule makes it pretty clear that plan sponsors have to actually know that their business associates are HIPAA compliant and diligently seek to confirm that compliance.

 

3.  New Privacy Notices for 2013 Open Enrollment

 

The new rule also requires that plan sponsors add or amend their privacy notices:

  1. The notice must specifically state that the covered health plans are required to obtain plan participants’ authorization to use or disclose psychotherapy notes, to use PHI for marketing purposes, to sell PHI, or to use or disclose PHI for any purpose not described in the notice as well as a statement explaining how plan participants may revoke an authorization.
  2. The notices must state that the plans (other than a long-term care plan) are prohibited from using PHI that is genetic information for underwriting purposes.
  3. The notice must inform plan participants of their right to receive a notice when there is a breach of their unsecured PHI.

The new rules make it clear that since this new language is a "material change," plan sponsors are required to distribute the revised notice, even if they had just recently sent the old notice. 

 

4. Genetic Information and the GINA Notice

 

The Genetic Information Nondiscrimination Act of 2008 (GINA) prohibits discrimination based on genetic information.  The HIPAA Privacy Rule now similarly prohibits HIPAA-covered plans from taking genetic information into consideration when offering incentives or discounts through a health risk assessment.  Because this modification of the Privacy Rule materially affects how a plan may use PHI, the HIPAA Privacy Rule requires that plan participants be informed in the plan’s privacy notice of the prohibition on the use of PHI for underwriting purposes.  See the second item under Part 3, above.

 

So in the midst of our struggles to comply with PPACA, plan sponsors should not forget about HIPAA medical privacy concerns.  Start pulling together privacy notices, business associates agreements and plan documents for review and amendment.  Review your security practices to avoid even accidental breaches.  And be prepared to issue new notices as necessary for your next open enrollment.  For more detailed information about HIPAA and HITECH Compliance, please make sure to check out our HIPAA Blog as well.  More information means better compliance, which is always a good thing.

In the wake of the post-Omnibus Rule (the “Rule”) frenzy, it is necessary to consider some collateral effects that the Rule may have brought about with respect to compliance with HIPAA/HITECH.  The Office for Civil Rights (“OCR”) summaries of closed investigations (the “Summaries”) posted on the U.S. Department of Health and Human Services (“HHS”) list (the “HHS List”) of breaches of unsecured PHI affecting 500 or more individuals (“List Breaches”) have been a source of meaningful guidance, as discussed in previous posts on this blog.  For example, the summary (the “Tennessee Summary”) for a State of Tennessee Sponsored Group Health Plan breach (the “Tennessee Breach”) continues to provide an excellent road map of pre-Omnibus Rule actions for covered entities (“CEs”) or business associates (“BAs”) that suffer List Breaches or PHI breaches of any size.

 

While the Tennessee Breach itself dealt with mishandling of paper PHI and not electronic health records, the Tennessee Summary does give direction for early intervention by affected CEs or BAs before HHS knocks on their door.  However, while there was excellent compliance in the aftermath of the Tennessee Breach, advice from pre-Rule Summaries cannot be used without carefully taking into account the new requirements respecting PHI breaches under the Rule.  As will be further discussed below, the most important new requirement in this regard is the necessity for a CE, BA or subcontractor to analyze the level of risk of compromise of the affected PHI.

 

The Tennessee Summary

 

The Tennessee Breach occurred on October 6, 2011 and involved approximately 1,770 enrollees with respect to names, addresses, birth dates and social security numbers.  According to the Tennessee Summary, an equipment operator at the state’s postal facility set the machine to insert four (4) pages per envelope instead of one (1) page per envelope, which caused the PHI of four individuals to be sent to one address per envelope.

 

The Tennessee Summary states that the CE did the following (with some parenthetical observations from the blog author):

 

1.         Retrained the equipment operator (suggesting that suspension and/or termination are not the only actions in appropriate cases with respect to dealing with employees involved with a PHI breach where rehabilitation is possible).

2.         Submitted a breach report to HHS (resulting in the posting on the HHS List).

3.         Provided notice to affected individuals.

4.         Notified the media.

5.         Created a toll-free number for information regarding the incident.

6.         Posted notice on the CE’s website.

7.         Modified policies to remove the social security number on templates for future mailings (a good policy whether paper or electronic PHI is involved).

8.         Offered identity theft protection to the affected individuals (a common decision for CEs and BAs based on the type of information that may have been compromised).

9.         Following the OCR investigation, reviewed its policies and procedures to ensure adequate safeguards are in place (with this disclosure in the Tennessee Summary, there is a suggestion that OCR continued to exercise some oversight or received reports after the investigation was finished).

 

The Tennessee Breach in Retrospect after the Omnibus Rule

 

There was no discussion in the Tennessee Summary of any analysis by the CE of the probable “risk of harm” from the Tennessee Breach under the proposed rule standards that prevailed prior to the Rule.  However, it is clear that, in the post-Rule period, a risk analysis of the probability that the PHI “has been compromised” would be necessary for the CE; failure to perform such an analysis may be a violation in itself.   Under the Rule, there is a presumption that a breach of PHI has taken place unless there is a low probability that the PHI has been compromised.  The four-factor analysis that would have been required of the CE in the Tennessee Breach case, had it occurred after the effectiveness of the Rule, encompasses the following (with parenthetical comments):

 

(i)         Identifying the nature and extent of the PHI involved, including types of identifiers and risk of re-identification (i.e., names, addresses, birth dates and social security numbers);

 

(ii)        Identifying the unauthorized person(s) who impermissibly used the PHI or to whom the disclosure was made (in the case of the Tennessee Breach, subscribers to the health plan who were not individuals that had an obligation of their own to comply with HIPAA/HITECH);

 

(iii)       Determining whether the PHI was actually acquired or viewed or, alternatively, if only the opportunity existed for the PHI to be acquired or viewed (in the case of the Tennessee Breach, there is a likelihood that numerous recipients of the PHI or others without the right to view such PHI did in fact view it); and

 

(iv)       The extent to which risk to the PHI was mitigated (items 3, 4, 5, 6 and 8 above appear to be potential mitigating factors).
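For illustration only, the four factors above can be sketched as a checklist. The boolean framing and the "low probability" logic below are simplified assumptions for the sake of the example, not a legal standard:

```python
# Illustrative sketch only: the Omnibus Rule's four-factor risk assessment
# expressed as a checklist. The factor names track the Rule; the logic is
# a deliberate simplification (a real assessment weighs each factor).
def breach_presumed(factors):
    """factors: dict answering the four questions; a breach is presumed
    unless every factor points to a low probability of compromise."""
    low_probability = (
        not factors["identifiers_sensitive"]        # (i) nature/extent of PHI
        and not factors["recipient_outside_hipaa"]  # (ii) who received it
        and not factors["phi_actually_viewed"]      # (iii) acquired or viewed
        and factors["risk_mitigated"]               # (iv) mitigation
    )
    return not low_probability

# Applied to the Tennessee Breach facts as described above:
tennessee = {
    "identifiers_sensitive": True,    # names, birth dates, SSNs
    "recipient_outside_hipaa": True,  # other plan subscribers
    "phi_actually_viewed": True,      # recipients likely opened the mail
    "risk_mitigated": True,           # notice, hotline, ID-theft protection
}
print(breach_presumed(tennessee))  # True - the breach presumption stands
```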

 

As stated in earlier postings here and here, no Summary has been posted by OCR for any List Breach that occurred later than October 6, 2011. Additionally, no Summary has been posted by OCR for any List Breach involving a BA that occurred later than February 1, 2011.  While the Summaries continue to provide highly useful information for CEs, BAs and subcontractors confronting PHI breaches, large and small, they must be analyzed with appropriate care and attention paid to the changes brought about by the Rule.  It may be that a concern of OCR about potential confusion that could be created by publishing pre-Rule Summaries has prevented OCR from making recent postings of Summaries on the HHS List.

 

This blog series has been following breaches of Protected Health Information (“PHI”) that have been reported on the U.S. Department of Health and Human Services’ (“HHS”) ever-lengthening list (the “HHS List”) of breaches of unsecured PHI affecting 500 or more individuals (the “List Breaches”). As of January 1, 2013 (and as of today), 525 List Breaches had been posted.

A previous blog post reported that, on February 24, 2012, HHS listed the 400th List Breach. As the first postings on the HHS List occurred on March 4, 2010, an average of about 200 List Breaches were posted in each of its first two years. However, in the 10-plus months between February 24, 2012 and January 1, 2013, 125 additional List Breaches were posted, which on an annualized twelve-month basis would translate into 150 List Breaches. It is not yet clear whether the lower volume of List Breaches since February 2012 is attributable to increased caution and better practices in protecting PHI on the part of covered entities (“CEs”) and business associates (“BAs”), greater use of encryption and other practices to protect PHI, slower posting of List Breaches by HHS, other factors, or a combination thereof.

 

Of the total of 525 List Breaches posted through January 1, 2013, approximately 274 (52.2%) attributed the type of breach to “theft” of all kinds, including laptops, other portable electronic devices, desktop computers, network servers, paper records and others. If the 60 additional List Breaches listing the category of “loss” of all types are added to the 274 “theft” events, the total for the two categories swells to approximately 334, or 63.6% of the 525 posted List Breaches. Combining the two categories appears to make sense, since it is likely that a number of the List Breaches categorized as “loss” events involved some theft aspects.


Even more revealing may be the fact that approximately 193 (36.8%) of the 525 List Breaches listed theft or loss of laptops or other portable electronic devices as the cause or partial cause of the breach.  Theft or loss of laptops or other portable electronic devices thus constituted 57.8% of the 334 List Breaches that reported theft or loss.


Over the last ten months since the number of List Breaches passed 400, the relative percentage of List Breaches attributable to theft and loss appears to be trending mildly upward. Of the 125 additional reported List Breaches, approximately 86, or 68.8%, listed theft or loss as the source of the PHI breach. The number of those 125 List Breaches that reported theft or loss of laptops or other portable electronic devices was 37, or 29.6%, a lower percentage than the 36.8% for all 525 List Breaches.  The sample sizes are relatively small, so continued tracking of these numbers is warranted.


My partner, William Maruca, Esq., recently posted a blog entry highlighting the fact that the first breach settlement announcement by HHS in 2013 (the “2013 Settlement”) involved a $50,000 fine based on theft of a laptop containing 441 patients’ unencrypted data. It was the first fine by HHS for a PHI security breach that involved fewer than 500 individuals and, therefore, was below the threshold for a List Breach. 


While the parade of List Breaches continues to lengthen, the 2013 Settlement underscores the fact that there are many more PHI security breaches involving fewer than 500 individuals. These PHI security breaches that are not List Breaches are receiving increased scrutiny by HHS. As this blog series has emphasized in the past, the issue may be less whether a CE or BA will suffer a PHI security breach than when it will occur and how severe it will be. All CEs and BAs must exercise vigilance and use recommended protection procedures to avoid all PHI security breaches, not just large List Breaches. The continuing proliferation of portable electronic devices used to receive, access and store PHI should be monitored, as it can be expected that this type of security breach will continue to grow.

Elizabeth Litten and Michael Kline write:

We have posted several blogs, including those here and here, tracking the reported 2011 theft of computer tapes from the car of an employee of Science Applications International Corporation (“SAIC”) that contained the protected health information (“PHI”) of approximately 5 million military clinic and hospital patients (the “SAIC Breach”).  SAIC’s recent Motion to Dismiss (the “Motion”) the Consolidated Amended Complaint filed in federal court in Florida as a putative class action (the “SAIC Class Action”) highlights the gaps between an incident (like a theft) involving PHI, a determination that a breach of PHI has occurred, and the realization of harm resulting from the breach. SAIC’s Motion emphasizes this gap between the incident and the realization of harm, making it appear like a chasm so wide it practically swallows the breach into oblivion.


SAIC, a giant publicly-held government contractor that provides information technology (“IT”) management and, ironically, cyber security services, was engaged to provide IT management services to TRICARE Management Activity, a component of TRICARE, the U.S. Department of Defense (“DoD”) health plan for active duty service members.  SAIC employees had been tasked with transporting backup tapes containing TRICARE members’ PHI from one location to another.


According to the original statement published in late September 2011 (the “TRICARE/SAIC Statement”), the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the TRICARE/SAIC Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes. Note 17 to the audited financial statements (“Note 17”) contained in the SAIC Annual Report on Form 10-K for the fiscal year ended January 31, 2012, dated March 27, 2012 (the “2012 Form 10-K”), filed with the Securities and Exchange Commission (the “SEC”) includes the following:


There is no evidence that any of the data on the backup tapes has actually been accessed or viewed by an unauthorized person. In order for an unauthorized person to access or view the data on the backup tapes, it would require knowledge of and access to specific hardware and software and knowledge of the system and data structure.  The Company [SAIC] has notified potentially impacted persons by letter and is offering one year of credit monitoring services to those who request these services and in certain circumstances, one year of identity restoration services.

While the TRICARE/SAIC Statement contained similar language to that quoted above from Note 17, the earlier TRICARE/SAIC Statement also said, “The risk of harm to patients is judged to be low despite the data elements . . . .” Because Note 17 does not contain such “risk of harm” language, it would appear that (i) there may have been a change in the assessment of risk by SAIC six months after the SAIC Breach or (ii) SAIC did not want to state such a judgment in an SEC filing.


Note 17 also discloses that SAIC has reflected a $10 million loss provision in its financial statements relating to the SAIC Class Action and various other putative class actions respecting the SAIC Breach filed between October 2011 and March 2012 (for a total of seven such actions filed in four different federal District Courts).  In Note 17 SAIC states that the $10 million loss provision represents the “low end” of SAIC’s estimated loss and is the amount of SAIC’s deductible under insurance covering judgments or settlements and defense costs of litigation respecting the SAIC Breach.  SAIC expresses the belief in Note 17 that any loss experienced in excess of the $10 million loss provision would not exceed the insurance coverage.


Such insurance coverage would, however, likely not be available for any civil monetary penalties or counsel fees that may result from the current investigation of the SAIC Breach being conducted by the Office for Civil Rights of the Department of Health and Human Services (“HHS”) as described in Note 17.


Initially, SAIC did not deem it necessary to offer credit monitoring to the almost 5 million reportedly affected individuals. However, SAIC urged anyone suspecting they had been affected to contact the Federal Trade Commission’s identity theft website. Approximately six weeks later, the DoD issued a press release stating that TRICARE had “directed” SAIC to take a “proactive” response by covering a year of free credit monitoring and restoration services for any patients expressing “concern about their credit as a result of the data breach.”  In a breach the size of the SAIC Breach, the cost of such a proactive response easily can run into millions of dollars. It is unclear to what extent, if any, insurance coverage would be available to cover the cost of the proactive response mandated by the DoD, even if the credit monitoring, restoration services and other remedial activities of SAIC were to become part of a judgment or settlement in the putative class actions.


We have blogged about what constitutes an impermissible acquisition, access, use or disclosure of unsecured PHI that poses a “significant risk” of “financial, reputational, or other harm to the individual” amounting to a reportable HIPAA breach, and when that “significant risk” develops into harm that may create claims for damages by affected individuals. Our partner William Maruca, Esq., artfully borrows a phrase from former Defense Secretary Donald Rumsfeld in discussing a recent disappearance of unencrypted backup tapes reported by Women and Infants Hospital in Rhode Island. If one knows PHI has disappeared, but doubts it can be accessed or used (due to the specialized equipment and expertise required to access or use the PHI), there is a “known unknown” that complicates the analysis as to whether a breach has occurred. 


As we await publication of the “mega” HIPAA/HITECH regulations, continued tracking of the SAIC Breach and ensuing class action litigation (as well as SAIC’s SEC filings and other government filings and reports on the HHS list of large PHI security breaches) provides some insight into how covered entities and business associates respond to incidents involving the loss or theft of, or possible access to, PHI.  If a covered entity or business associate concludes that the incident poses a “significant risk” of harm, but no harm actually materializes, perhaps (as the SAIC Motion repeatedly asserts) claims for damages are inappropriate. When the covered entity or business associate takes a “proactive” approach in responding to what it has determined to be a “significant risk” (such as by offering credit monitoring and restoration services), perhaps the risk becomes less significant. But once the incident (a/k/a the ubiquitous laptop or computer tape theft from an employee’s car) has been deemed a breach, the chasm between incident and harm seems to open wide enough to encompass a mind-boggling number of privacy and security violation claims and issues.