Elizabeth Litten and Michael Kline write:

We have posted several blogs, including those here and here, tracking the reported 2011 theft of computer tapes from the car of an employee of Science Applications International Corporation (“SAIC”) that contained protected health information (“PHI”) of approximately 5 million military clinic and hospital patients (the “SAIC Breach”).  SAIC’s recent Motion to Dismiss (the “Motion”) the Consolidated Amended Complaint, filed in federal court in Florida as a putative class action (the “SAIC Class Action”), highlights the gaps between an incident (like a theft) involving PHI, a determination that a breach of PHI has occurred, and the realization of harm resulting from the breach. SAIC’s Motion emphasizes this gap between the incident and the realization of harm, making it appear a chasm so wide it practically swallows the breach into oblivion.


SAIC, a giant publicly held government contractor that provides information technology (“IT”) management and, ironically, cybersecurity services, was engaged to provide IT management services to TRICARE Management Activity, a component of TRICARE, the military health plan (“TRICARE”) for active duty service members of the U.S. Department of Defense (“DoD”).  SAIC employees were responsible under the contract for transporting backup tapes containing TRICARE members’ PHI from one location to another.


According to the original statement published in late September of 2011 (the “TRICARE/SAIC Statement”), the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the TRICARE/SAIC Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes. Note 17 to the audited financial statements (“Note 17”) contained in the SAIC Annual Report on Form 10-K for the fiscal year ended January 31, 2012, dated March 27, 2012 (the “2012 Form 10-K”), filed with the Securities and Exchange Commission (the “SEC”), includes the following:


There is no evidence that any of the data on the backup tapes has actually been accessed or viewed by an unauthorized person. In order for an unauthorized person to access or view the data on the backup tapes, it would require knowledge of and access to specific hardware and software and knowledge of the system and data structure.  The Company [SAIC] has notified potentially impacted persons by letter and is offering one year of credit monitoring services to those who request these services and in certain circumstances, one year of identity restoration services.

While the TRICARE/SAIC Statement contained similar language to that quoted above from Note 17, the earlier TRICARE/SAIC Statement also said, “The risk of harm to patients is judged to be low despite the data elements . . . .” Because Note 17 does not contain such “risk of harm” language, it would appear that (i) there may have been a change in the assessment of risk by SAIC six months after the SAIC Breach or (ii) SAIC did not want to state such a judgment in an SEC filing.


Note 17 also discloses that SAIC has reflected a $10 million loss provision in its financial statements relating to the SAIC Class Action and various other putative class actions respecting the SAIC Breach filed between October 2011 and March 2012 (a total of seven such actions filed in four different federal District Courts).  In Note 17 SAIC states that the $10 million loss provision represents the “low end” of SAIC’s estimated loss and is the amount of SAIC’s deductible under insurance covering judgments or settlements and defense costs of litigation respecting the SAIC Breach.  SAIC expresses the belief in Note 17 that any loss in excess of the $10 million loss provision would not exceed the insurance coverage.


Such insurance coverage would, however, likely not be available for any civil monetary penalties or counsel fees that may result from the current investigation of the SAIC Breach being conducted by the Office of Civil Rights of the Department of Health and Human Services (“HHS”) as described in Note 17.


Initially, SAIC did not deem it necessary to offer credit monitoring to the almost 5 million reportedly affected individuals. However, SAIC urged anyone suspecting they had been affected to contact the Federal Trade Commission’s identity theft website. Approximately six weeks later, the DoD issued a press release stating that TRICARE had “directed” SAIC to take a “proactive” response by covering a year of free credit monitoring and restoration services for any patients expressing “concern about their credit as a result of the data breach.”   The cost of such a proactive response can easily run into millions of dollars in the SAIC Breach. It is unclear to what extent, if any, insurance coverage would be available to cover the cost of the proactive response mandated by the DoD, even if the credit monitoring, restoration services and other remedial activities of SAIC were to become part of a judgment or settlement in the putative class actions.


We have blogged about what constitutes an impermissible acquisition, access, use or disclosure of unsecured PHI that poses a “significant risk” of “financial, reputational, or other harm to the individual” amounting to a reportable HIPAA breach, and when that “significant risk” develops into harm that may create claims for damages by affected individuals. Our partner William Maruca, Esq., artfully borrows a phrase from former Defense Secretary Donald Rumsfeld in discussing a recent disappearance of unencrypted backup tapes reported by Women and Infants Hospital in Rhode Island. If one knows PHI has disappeared, but doubts it can be accessed or used (due to the specialized equipment and expertise required to access or use the PHI), there is a “known unknown” that complicates the analysis as to whether a breach has occurred. 


As we await publication of the “mega” HIPAA/HITECH regulations, continued tracking of the SAIC Breach and ensuing class action litigation (as well as SAIC’s SEC filings and other government filings and reports on the HHS list of large PHI security breaches) provides some insight into how covered entities and business associates respond to incidents involving the loss or theft of, or possible access to, PHI.   If a covered entity or business associate concludes that the incident poses a “significant risk” of harm, but no harm actually materializes, perhaps (as the SAIC Motion repeatedly asserts) claims for damages are inappropriate. When the covered entity or business associate takes a “proactive” approach in responding to what it has determined to be a “significant risk” (such as by offering credit monitoring and restoration services), perhaps the risk becomes less significant. But once the incident (a/k/a the ubiquitous laptop or computer tape theft from an employee’s car) has been deemed a breach, the chasm between incident and harm seems to open wide enough to encompass a mind-boggling number of privacy and security violation claims and issues.

As reported in the Houston Chronicle on June 28, 2012, an unencrypted laptop computer containing data on more than 30,000 patients of the University of Texas MD Anderson Cancer Center (“MD Anderson”) was stolen from a faculty member’s home on April 30, 2012. The stolen laptop scenario has become all too familiar (this blog series has reported on the high proportion of breaches resulting from the theft or loss of laptops or other portable devices), and even the high number of patients affected pales in comparison with the roughly 5 million patients affected in the SAIC Breach.

What caught my attention was the fact that MD Anderson posted notice of the breach on its website on June 28th, exactly 59 days after the theft took place. Pursuant to the interim final breach notification regulations, a covered entity must provide notice to affected individuals “without unreasonable delay and in no case later than 60 calendar days after discovery of the breach.”   Although an exception permits delayed notification where a law enforcement official tells the covered entity (or business associate) that notification would impede a criminal investigation or cause damage to national security, the time required for a criminal investigation here was, presumably, less than 60 days. MD Anderson’s website notice gives every indication that it acted promptly and investigated thoroughly:


MD Anderson was alerted to the theft on May 1 and immediately began a thorough investigation to determine what information was contained on the laptop. After a detailed review with outside forensics experts, we have confirmed that the laptop may have contained some of our patients’ personal information, including patients’ names, medical record numbers, treatment and/or research information, and in some instances Social Security numbers.


Would patients have been better off knowing their data might have been illegally accessed prior to day 59 following the breach, or does the benefit of a thorough investigation outweigh the risk that earlier notification would have benefited patients? 
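As an aside, the 59-day figure noted above is simple date arithmetic. A minimal sketch, using only the dates reported in this post (the theft on April 30, 2012, the alert to MD Anderson on May 1, and the website notice on June 28); everything else is illustrative:

```python
from datetime import date

# Dates reported in the post.
theft_date = date(2012, 4, 30)
discovery_date = date(2012, 5, 1)   # MD Anderson alerted to the theft
notice_date = date(2012, 6, 28)     # website notice posted

# Days between the theft itself and the notice.
elapsed = (notice_date - theft_date).days
print(elapsed)  # 59

# The regulatory clock runs from *discovery* of the breach,
# with a 60-calendar-day outer limit.
days_from_discovery = (notice_date - discovery_date).days
print(days_from_discovery)          # 58
print(days_from_discovery <= 60)    # True -- within the deadline
```

Measured from discovery rather than the theft, the notice landed with two days to spare.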


Navigant Consulting released an “Information Security and Data Breach Report” in April of this year that found that the average number of days between discovery of a breach involving medical records and disclosure rose from 63 days in the third quarter of 2011 to 65 days in the fourth quarter of 2011, an increase of roughly 3%, despite the HIPAA requirement that patients be notified “without unreasonable delay” and no later than 60 days following discovery of the breach. When analyzed in terms of the entity reporting the breach, “[h]ealthcare entities registered an 84% increase between discovery and disclosure from 51 days in Q3 to 94 days in Q4.”
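The report's percentages can be checked against its day counts. A quick sketch using only the figures quoted above from the Navigant report:

```python
# Average days between discovery and disclosure, per the Navigant report.
q3_avg, q4_avg = 63, 65          # all reporting entities
q3_health, q4_health = 51, 94    # healthcare entities only

overall_increase = (q4_avg - q3_avg) / q3_avg * 100
health_increase = (q4_health - q3_health) / q3_health * 100

print(round(overall_increase, 1))  # 3.2 -- reported as a 3% increase
print(round(health_increase))      # 84 -- matches the reported 84%
```

Both reported percentages are consistent with the underlying day counts.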


From this perspective, it seems MD Anderson did pretty well. Had the faculty member delayed his or her original notification to MD Anderson regarding the theft, however, MD Anderson might have been hard-pressed to meet the 60-day deadline. Covered entities such as MD Anderson (and business associates who provide protected health information to subcontractors) should be reminded that prompt communication and investigation are essential to meeting the “without unreasonable delay and in no case later than 60 calendar days” notification requirement, and that they must balance the need to get the facts straight with the need to alert affected individuals and, where applicable, the Department of Health and Human Services and state agencies, as quickly as possible.

Postings on this blog series have been following the continuing parade of security and privacy breaches of Protected Health Information (“PHI”) that have been reported on the U.S. Department of Health and Human Services list (the “HHS List”) of breaches of unsecured PHI affecting 500 or more individuals. On March 30, 2012, a large data security breach (the “Utah Breach”) that has not yet been posted on the HHS List was experienced by the Utah Department of Technology Services (“DTS”) on a computer server (the “DTS Server”) that stores Medicaid and Children’s Health Insurance Program (“CHIP”) claims data.  

DTS detected the Utah Breach on Monday, April 2, 2012, after the putative thieves began removing data from the DTS Server. Upon detection, DTS stated that it immediately shut down the DTS Server, identified where the breakdown in security occurred, and implemented new processes to ensure this type of breach will not happen again.


DTS and the Utah Department of Health (“UDOH”) have established a separate Web page to provide “Latest Information” respecting the Utah Breach (the “Update Page”). The Update Page has turned out to be a useful reporting mechanism for what has become a continuously rising count of individuals affected by the Utah Breach. Currently the Update Page reports that “approximately 280,000 victims had their Social Security numbers stolen and approximately 500,000 other victims had less-sensitive personal information stolen.” Therefore, the total current number of identified affected individuals of the Utah Breach appears to be approximately 780,000. However, the various numbers of victims reflected on the Update Page are somewhat confusing, possibly due at least in part to the addition on a serial basis of newly discovered victims.

Information on the DTS Server included claims payment and eligibility inquiries regarding potential Medicaid and CHIP claimants. According to UDOH:

This could include sensitive, personal health information from individuals and health care providers such as Social Security numbers, names, dates of birth, addresses, diagnosis codes, national provider identification numbers, provider taxpayer identification numbers, and billing codes.

Interestingly, UDOH and DTS have made a clear distinction as to the assistance and support that they will provide to identified victims of the Utah Breach. Victims who had their Social Security numbers (“SSNs”) stolen will be offered one year of free credit monitoring services. Victims who did not have SSNs stolen will not be offered free credit monitoring services, even though they have had other information compromised that UDOH has characterized as “less-sensitive.” Moreover, those who had SSNs stolen will receive priority in being notified of the Utah Breach over victims whose SSNs were not stolen.

The Utah Breach is not the first large PHI breach experienced by UDOH.  The HHS List reports that on March 1, 2010, UDOH had an "Unauthorized Access/Disclosure" affecting 1,298 individuals respecting "Computer, Paper."  The HHS List also reflects that Utah Department of Workforce Services was involved as a Business Associate in the 2010 UDOH PHI breach.

The current offering by UDOH of free credit monitoring services only to those Utah Breach victims who had SSNs stolen may be reevaluated or changed in the future. This blog series has previously reported the abrupt about-face by SAIC in offering credit monitoring services to the millions of victims of its large 2011 PHI breach after pressure by the Department of Defense to do so.

We will continue to monitor developments with regard to the Utah Breach.

The widely publicized pre-Christmas breach of confidential data held by Stratfor Global Intelligence Service (“Stratfor”), a company specializing in data security, reminded me that very little (if any) electronic information is truly secure. If Stratfor’s data can be hacked into, and the health information of nearly 5 million military health plan (TRICARE) members maintained by multi-billion dollar Department of Defense contractor Science Applications International Corporation (SAIC) (the subject of a five-part series of blog postings) can be accessed, can we trust that any electronically transmitted or stored information is really safe?  

I had the pleasure of having lunch with my friend Al yesterday, an IT guru who has worked in hospitals for years. Al understands and appreciates the need for privacy and security of information, and has the technological expertise to know where and how data can be hacked into or leaked out. Perhaps not surprisingly, Al does not do his banking on-line, and tries to avoid making on-line credit card purchases. 


Al and I discussed the proliferation of the use of iPhones and other mobile technology by physicians and staff in hospitals and other settings, a topic recently discussed in a newsletter published by the American Medical Association. Quick access to a patient’s electronic health record (EHR) is convenient and may even be life-saving in some circumstances, but use of these mobile devices creates additional portals for access to personal information that should be protected and secured. Encryption technology and, perhaps most significantly, use of this technology, barely keeps pace with the exponential rate at which we are creating and/or transmitting data electronically.  


On the other hand, trying to reverse the exponential growth of electronic communications and transactions would be futile and probably counterproductive. The horse is out of the gate, and expecting it to stop mid-stride and retreat at a false-start call is irrational. The horse will race ahead just as surely as my daughter will text and check her Facebook page, my son will recharge his iPad, and I will turn around and head back to my office if I forget my iPhone. We want and need technology, but seem to forget or fail to fully understand the vast, unprotected and ever-expanding universe into which we send information when we use this technology.


If we expect breaches or, at least, question our assumptions that personal information will be protected, perhaps we will get better at discerning how and when we disclose our personal information. An in-person conversation or transaction (for example, when Al goes to his bank in person or when a physician speaks directly to another physician about a patient’s care) is less likely to be accessed and used inappropriately than an electronic one. We can better assess the risks and benefits of communicating information electronically when we appreciate the security frailties inherent in electronic communication and storage. 


Perhaps Congress should take the lead in enacting laws that will help protect against data breaches that could compromise “critical infrastructure systems” (as proposed in the “PRECISE Act” introduced by Rep. Daniel E. Lungren (R-CA)), but more comprehensive, potentially expensive, and/or use-impeding cybersecurity laws might have the effect of tripping the racehorse mid-lap rather than controlling its pace or keeping it safely on course.

Five members of Congress (two Republicans and three Democrats) representing districts from far-flung states (Colorado, Florida, Massachusetts, New Jersey and Texas) are co-signers of a bipartisan letter dated December 2, 2011 (the “December 2 Letter”), addressed to the Director of the TRICARE Management Authority. The December 2 Letter was written to express the Congress members’ “deep concerns about a major breach of personally identifiable and protected health information” by TRICARE contractor Science Applications International Corporation (SAIC).

Michael Kline and I have blogged about the SAIC PHI breach in four previous postings on this blog series, the most recent of which was on November 9, 2011, shortly after TRICARE did an about-face and announced that it was directing SAIC to offer the 4.9 million affected individuals credit monitoring services and assistance.

The December 2 Letter requests “timely and thorough responses” by no later than February 2, 2012 to seventeen startlingly direct and often blame-loaded questions. The questions make it very clear that the authors believe SAIC (and/or TRICARE) should have done more to prevent the SAIC breach and should be doing more to protect affected individuals. Question 9 notes that SAIC offered to provide “victims” (note the word choice) credit monitoring services for a year, but goes on to point out that “such services are useless in protecting against medical identity theft and fraudulent health insurance claims.” It then asks whether victims will also be provided with “newly available medical identity theft monitoring,” and, if not, to explain why such monitoring would not be provided.


The December 2 Letter closes with a brief and scathing chronology of recent SAIC misconduct, after noting that “SAIC has received more than $20 billion in federal contracts over the previous three fiscal years,” and asks: “Why does [TRICARE] continue to contract with SAIC for its data handling and IT needs despite these major performance problems?”


The members of Congress who authored the December 2 Letter hail from both sides of the aisle and from various parts of the country, but a common link seems to be a strong interest in information privacy and security. For example, Edward Markey (D-Mass) and Joe Barton (R-Texas) co-chair the Bi-Partisan Privacy Caucus and recently focused on Facebook privacy issues.    Cliff Stearns (R-Florida) introduced an online privacy bill last spring. Diana DeGette (D-Colorado) has commented publicly on the importance of online privacy. 


While Rob Andrews (D-New Jersey) has no apparent recent history with respect to information privacy and security, he sponsored a bill in 2003, ultimately not enacted, designed to afford students and parents private civil remedies for the violation of their privacy rights under the General Education Provisions Act. Moreover, in his continuing role as a member of the House Committee on Armed Services and its Subcommittee on Oversight and Investigation, he has a deep interest and abiding concern regarding large-scale threats to the privacy and security of protected health information of millions of service members and their families.

By: Elizabeth Litten and Michael Kline

[Capitalized terms not otherwise defined in this Part 4 shall have the meanings assigned to them in Part 3 or earlier Parts.]


As reported in Part 3 of this blog series, Tricare and SAIC did not initially offer credit monitoring services to patients affected by the 2011 Breach made public on September 29, 2011, due to what was then judged to be the low “risk of harm” to those affected.  The Public Statement specifically answered the question “Will credit monitoring and restoration services be provided to protect affected individuals against possible identity theft?” as follows:


No.  The risk of harm to patients is judged to be low despite the data elements involved. Retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure. To date, we have no conclusive evidence that indicates beneficiaries are at risk of identify theft, but all are encouraged to monitor their credit and place a free fraud alert of their credit for a period of 90 days using the Federal Trade Commission (FTC) web site.  


Now, less than 6 weeks later, Tricare has directed SAIC to provide one year of credit monitoring and restoration services to patients “who express concern about their credit” as a result of the 2011 Breach.  In a press release issued by the DoD on November 4, 2011, entitled "Proactive Response to Recent Data Breach Announced" (the “DoD Press Release”), Tricare Management Activity’s deputy director explains,


These additional proactive security measures exceed the industry standard to protect against the risk of identity theft.  We take very seriously our responsibility to offer patients peace of mind that their credit and quality of life will be unaffected by this breach.  


It is unclear that the new security measure exceeds the “industry standard,” as evidenced by numerous past postings respecting PHI security breaches in this blog series; in some cases, as much as two years of credit monitoring was offered to affected individuals. However, given the assurances in the Public Statement to the “approximately 4.9 million patients treated at military hospitals and clinics during the past 20 years” that the risk of harm was low and there was no conclusive evidence that patients were at risk of identity theft, one can speculate as to whether Tricare’s abrupt about-face relates to new evidence, a revised judgment as to the risk of harm to affected patients and/or simply an abundance of caution as to its own exposure to risk.


Then again, Tricare’s new position could have less to do with new concerns related to patient identity theft risk, and more to do with a “proactive response” or even a preemptive strike by Tricare and DoD to combat certain of the allegations in the putative class action lawsuit filed against them in the U.S. District Court for the District of Columbia on October 11, 2011 (Gaffney v. Tricare Management Activity, et al., Case No. 1:2011cv01800) (the “Class Action Complaint”).  Each of Virginia Gaffney and Adrienne Taylor, two of the plaintiffs named in the Class Action Complaint, has alleged that she had “incurred an economic loss as a result of having to purchase a credit monitoring service to alert her to potential misappropriation of her identity.”


By offering the credit monitoring services to all of the 4.9 million affected individuals, Tricare and DoD may be endeavoring to render moot or at least mitigate the risk from those allegations in the Class Action Complaint. [Note: The recent posting of the 2011 Breach in the HHS List, which did not provide any information beyond that reflected in the Public Statement, earlier reported “5,117,799” as the approximate number of individuals affected, but the current number reported is “4,901,432.”]


The Class Action Complaint seeks judgment against Tricare and DoD for damages in an amount of $1,000 for each affected individual.  Perhaps Tricare and DoD did the quick math and realized that the cost of credit monitoring and restoration for a subset (those “expressing concern”) of the roughly 4.9 million affected patients would be far less than the almost $5 billion aggregate damages award sought in the Class Action Complaint.  Tricare may have reversed its stance as a result of this “risk of harm” analysis, and not because of new information or a revised evaluation related to a heightened risk of harm to affected individuals.
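The "quick math" is easy to reproduce. A hypothetical back-of-the-envelope sketch using the figures in this post; the per-person monitoring cost and the fraction of patients "expressing concern" are assumptions for illustration only, not figures from Tricare or SAIC:

```python
affected = 4_901_432       # affected individuals, per the HHS List figure above
damages_per_person = 1_000 # damages sought per individual in the Class Action Complaint

exposure = affected * damages_per_person
print(f"${exposure:,}")    # $4,901,432,000 -- the "almost $5 billion" aggregate

# Hypothetical monitoring cost for the subset "expressing concern":
monitoring_cost_per_person = 100  # assumed, for illustration
concerned_fraction = 0.10         # assumed, for illustration
monitoring_total = affected * concerned_fraction * monitoring_cost_per_person
print(f"${monitoring_total:,.0f}")  # roughly $49 million
```

Even under generous assumptions, the monitoring cost is two orders of magnitude below the aggregate damages exposure, which is consistent with the "risk of harm" analysis suggested above.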

By Michael Kline and Elizabeth Litten


[Capitalized terms not otherwise defined in this Part 3 shall have the meanings assigned to them in Parts 1 and 2.]


The Public Statement reports that SAIC and Tricare are cooperating in the notification process but that no credit monitoring or restoration services will be provided in light of the “low risk of harm.” This was in contrast to the decision of Nemours in the Nemours Report to provide such services.


Since the release by SAIC of the Public Statement, Law 360 has reported that


(i)   According to Tricare, SAIC was “on the hook for the cost of notifying nearly 5 million program beneficiaries that computer tapes containing their personal data had been stolen”;

(ii)  A putative class action lawsuit was filed against Tricare and DoD (but not SAIC) respecting the 2011 Breach; and

(iii) Another putative class action lawsuit was filed against SAIC (but not Tricare and DoD) respecting the 2011 Breach. 


Further review of SAIC and its incidents regarding PHI reveals that the 2011 Breach was not the first such event for SAIC. However, it appears to be the first such breach since the adoption of the Breach Notification Rule in August of 2009.


On July 21, 2007, The Washington Post reported that SAIC had acknowledged the previous day that “some of its employees sent unencrypted data — such as medical appointments, treatments and diagnoses — across the Internet” that related to 867,000 U.S. service members and their families. The Post article continues:


So far, there is no evidence that personal data have been compromised, but ‘the possibility cannot be ruled out,’ SAIC said in a press release. The firm has fixed the security breach, the release said.


Embedded later in the Post article is the following: 


The [2007] disclosure comes less than two years after a break-in at SAIC’s headquarters that put Social Security numbers and other personal information about tens of thousands of employees at risk. Among those affected were former SAIC executive David A. Kay, who was the chief U.N. weapons inspector in Iraq, and a former director who was a top CIA official.


It is not clear whether the earlier 2005 breach reported in the Post involved PHI or other personal information.

On January 20, 2009, SPAMfighter reported that SAIC had informed the Attorney General of New Hampshire of a data breach that had occurred involving malware. The SPAMfighter report continues that SAIC wrote a letter to many affected users to inform them about the potential compromise of personal information.  (A portion of such personal information would have been deemed PHI had it been part of health-related material.)

The SPAMfighter report also discloses the following:

Furthermore, the current [2009] breach at SAIC is not the only one. There was one other last year (2008), when keylogging software managed to bypass SAIC’s malware detection system. That breach had exposed mainly business account information.

As of the date of this blog post, the “News Releases” section on the SAIC Web site has no reference to the 2011 Breach. Nor does the “SEC Filings” section under “Investor Relations” on the SAIC Web site indicate any recent SEC filing that discloses the 2011 Breach. 

Coincidentally, the SEC issued a release on October 13, 2011 containing guidelines for public companies regarding disclosure obligations relating to cybersecurity risks and cyber incidents. For SAIC, an $11 billion company, while the actual costs of notification and remediation of the 2011 Breach may run into millions of dollars, the 2011 Breach may not be deemed by its management to be a “material” reportable event for SEC purposes.

It is likely that much more will be heard in the future about the mammoth 2011 Breach and its aftermath, which may give covered entities and their business associates valuable information and guidance to consider in identifying and confronting a future large PHI security breach. The 2011 Breach has not yet even appeared on the HHS List. The regulatory barriers preventing private actions under HIPAA/HITECH may be tested by the putative class action lawsuits. It will also be interesting to see whether the cooperation of SAIC with Tricare and DoD may wither in the face of the pressures of the lawsuits and potential controversy regarding the decision of SAIC not to provide credit monitoring and identity theft protection to affected individuals.

By Elizabeth Litten and Michael Kline

[Capitalized terms not otherwise defined in this Part 2 shall have the meanings assigned to them in Part 1.]


In an October 3, 2011 Securities and Exchange Commission (“SEC”) filing posted on its Web site, SAIC described itself as


a FORTUNE 500® scientific, engineering, and technology applications company that uses its deep domain knowledge to solve problems of vital importance to the nation and the world, in national security, energy and the environment, critical infrastructure, and health. The company’s approximately 41,000 employees serve customers in the U.S. Department of Defense, the intelligence community, the U.S. Department of Homeland Security, other U.S. Government civil agencies and selected commercial markets. Headquartered in McLean, Va., SAIC had annual revenues of approximately $11 billion for its fiscal year ended January 31, 2011.


The SAIC PHI breach, which potentially affected nearly 5 million individuals, was reported despite the fact that the PHI was contained on backup tapes used by the military health system, and despite, as explained in the Public Statement: 


The risk of harm to patients is judged to be low despite the data elements involved since retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure…  [Q and A] Q. Can just anyone access this data? A. No. Retrieving the data on the tapes requires knowledge of and access to specific hardware and software and knowledge of the system and data structure.


The Public Statement goes on to say the following in another answer:


After careful deliberation, we have decided that we will notify all affected beneficiaries. We did not come to this decision lightly. We used a standard matrix to determine the level of risk that is associated with the loss of these tapes. Reading the tapes takes special machinery. Moreover, it takes a highly skilled individual to interpret the data on the tapes. Since we do not believe the tapes were taken with malicious intent, we believe the risk to beneficiaries is low. Nevertheless, the tapes are missing and given the totality of the circumstances, we determined that individual notification was required in accordance with DoD guidance. [Emphasis supplied.]


The linchpin of SAIC’s final decision to notify all of the potentially affected individuals appeared to be the DoD guidance. Given SAIC’s position as an $11 billion contractor that is heavily dependent on DoD and other U.S. government contracts, as described above, SAIC may have had few practical alternatives but to notify beneficiaries.


SAIC conducted “careful deliberation” before reaching its result and indicated that the risk of harm was “low.” Had the DoD guidance not been a factor, and had SAIC concluded that this was a case in which an unlocked file or unencrypted data was discovered but no one appeared to have opened the file or viewed the data, would SAIC’s conclusion have been the same? Would SAIC, like Nemours, have decided to report?

What is clear is that the breach notice determination should involve a careful risk and impact analysis, as SAIC asserts it performed. Even the most deafening sound created by a tree crashing in the forest is unlikely to reach the ears of airplane passengers flying overhead. Piping that sound into the airplane, though, is very likely to disgruntle (or even unduly panic) the passengers.


[To be continued in Part 3]

By Elizabeth Litten and Michael Kline

A recent public statement (the “Public Statement”) was published regarding a breach (the “2011 Breach”) of protected health information (“PHI”) of nearly 5 million military clinic and hospital patients that involved Science Applications International Corporation (NYSE: SAI) (“SAIC”). The 2011 Breach occurred in SAIC’s apparent role as a business associate and/or subcontractor for TRICARE Management Activity, a component of TRICARE, the military health plan (collectively, “TRICARE”) for active duty service members of the U.S. Department of Defense (“DoD”).


According to the Public Statement, the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the Public Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes.


The 2011 Breach is the largest single PHI security breach reported to date. It highlights the decision-making process that covered entities and business associates should employ when deciding whether to notify the Department of Health and Human Services (“HHS”), other regulators and potentially affected individuals of a PHI breach.


The published “interim final rule” governing “Breach Notification for Unsecured Protected Health Information” (the “Breach Notification Rule”) defines “breach” as “the acquisition, access, use or disclosure of protected health information [“PHI”] in a manner not permitted under subpart E of this part which compromises the security or privacy of the protected health information.” It further explains that “compromises the security or privacy of the protected health information” means “poses a significant risk of financial, reputational, or other harm to the individual.” The Breach Notification Rule also defines the term “access” for purposes of the interim final rule as “the ability or the means necessary to read, write, modify, or communicate data/information or otherwise use any system resource.”


These definitions, reviewed in the context of several recent PHI breaches (including those “marchers in the parade” previously discussed on this blog), raise an important issue: at what point does “access” matter? When is the mere “ability” to read PHI, without evidence that the PHI was actually read or was likely to have been read, enough to trigger the notice requirement under the Breach Notification Rule? Will covered entities, out of an abundance of caution, report every unlocked or unencrypted data file, possibly flooding the HHS website that lists large PHI breaches (the “HHS List”) with potential breaches that have minimal or no likelihood of access and unduly alarming notified individuals? Could such reporting have the unintended effect of diluting the impact of reports involving actual theft and snooping?


In this regard, an event reported on the Nemours Web site on October 7, 2011 (the “Nemours Report”) is relevant: a PHI security breach involving approximately 1.9 million individuals at a Nemours facility in Wilmington, DE. The Nemours Report stated that three unencrypted computer backup tapes containing patient billing and employee payroll information were missing. The tapes reportedly had been stored in a locked cabinet following a computer systems conversion completed in 2004. The tapes and locked cabinet were reported missing on September 8, 2011 and are believed to have been removed on or about August 10, 2011 during a facility remodeling project.

Significantly, the Nemours Report stated the following:

There is no indication that the tapes were stolen or that any of the information on them has been accessed or misused. Independent security experts retained by Nemours determined that highly specialized equipment and specific technical knowledge would be necessary to access the information stored on these backup tapes. There are no medical records on the tapes.

The Nemours Report reveals that, in spite of the low likelihood of access, Nemours not only disclosed the breach but also offered free credit monitoring, identity theft protection, and call center support to affected individuals.


If the analysis as to whether access “poses a significant risk of … harm” takes into account the likelihood that PHI was actually accessed, rather than simply whether a theoretical “ability or means” to read, write, modify, or communicate PHI existed at some point in time, perhaps the “possible breach” floodgates will not burst open unnecessarily.  
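Purely for illustration, the likelihood-weighted analysis described above can be sketched as a toy decision function. The Breach Notification Rule prescribes no such formula, and the function name, threshold, and scores below are hypothetical assumptions, not drawn from the rule or from any covered entity’s actual risk matrix:

```python
# Hypothetical sketch only: neither the Breach Notification Rule nor any
# covered entity's actual "standard matrix" prescribes this formula.

def poses_significant_risk(likelihood_of_access: float,
                           harm_if_accessed: float,
                           threshold: float = 0.25) -> bool:
    """Weigh the probability that PHI was actually accessed against the
    harm that access would cause; both inputs are on a 0.0-1.0 scale."""
    return likelihood_of_access * harm_if_accessed >= threshold

# Backup tapes readable only with specialized hardware and expertise:
# sensitive data elements, but a low likelihood of actual access.
tapes_case = poses_significant_risk(likelihood_of_access=0.1,
                                    harm_if_accessed=0.9)

# Contrast: an unencrypted file known to have been opened by a thief.
snooping_case = poses_significant_risk(likelihood_of_access=0.9,
                                       harm_if_accessed=0.9)
```

The point of the sketch is simply that multiplying harm by the likelihood of actual access, rather than treating any theoretical “ability or means” as dispositive, keeps low-probability incidents below the notification line.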


[To be continued in Part 2]