I must thank Justice Scalia for injecting this delightfully descriptive term into the realm of health care.  Justice Scalia’s scathing dissent from the majority in the recent Supreme Court decision interpreting the Patient Protection and Affordable Care Act is rife with memorable expressions, but this is my favorite.

The Merriam-Webster definition of jiggery-pokery is:

“dishonest or suspicious activity: underhanded manipulation or dealings; trickery.”

It’s not a term I’ve ever used before, but this old-fashioned, Dickensian-sounding term somehow practically begs for use in a very modern and increasingly common context: the HIPAA hacking incident.  A recent article in Becker’s Hospital Review lists the “50 biggest data breaches in healthcare,” and the most common breach causes are far and away hacking and theft.  Notably, hacking incidents result in the highest number of affected individuals.  Here is the breakdown:

*  18 hacking incidents (approximately 94 million affected individuals)

*  18 thefts (approximately 14 million affected individuals)

*  9 unauthorized accesses

*  3 missing equipment incidents (1 storage disk, 1 hard drive, and 1 computer server)

*  1 improper disposal

*  1 “other”

In short, it seems that jiggery-pokery is involved far more often than mere carelessness when it comes to HIPAA breaches.  Covered entities and business associates should be alert to dishonest or suspicious activity generally, including from within, but should be especially alert when that activity involves the systems or equipment on which protected health information is created, received, maintained, or transmitted.


Two recently reported breaches of hospital data affecting thousands of patients highlight the prevalence, and apparent success, of phishing attacks.  Boston-based Partners HealthCare notified approximately 3,300 patients after a group of staff members were tricked by a phishing scam, and Indiana-based St. Vincent Medical Group, a 20-hospital system that is part of Ascension Health, reported a breach affecting nearly 760 patients that resulted from a phishing attack that involved a single employee’s email account.

The Department of Health and Human Services (HHS), Office of the Chief Information Officer, published an “Information Systems Security Awareness Training” document for FY 2015 that is simple to follow, has easy and useful tips, and even includes enough pictures and graphic images to make what could be dull cybersecurity lessons visually stimulating (the kitten fishing photo comes from page 34).

The phishing-avoidance tips from HHS may seem obvious, but are worth regular review with covered entity and business associate staff who use company email accounts:

*  NEVER provide your password to anyone via email.

*  Be suspicious of any email that:

    — Requests personal information.

    — Contains spelling and grammatical errors.

    — Asks you to click on a link.

    — Is unexpected or from a company or organization with which you do not have a relationship.

*  If you are suspicious of an email:

    — Do not click on the links provided in the email.

    — Do not open any attachments in the email.

    — Do not provide personal information or financial data.

    — Do forward the email to the HHS Computer Security Incident Response Center (CSIRC) at csirc@hhs.gov and then delete it from your Inbox.

Although HHS’ CSIRC undoubtedly does not want a barrage of emails from non-government entity staff reporting potential phishing attacks, a covered entity or business associate should articulate a similar process for staff to follow when a suspicious email is identified.

Bill Maruca, a Fox Rothschild partner and editor of this blog, added the following tips for recognizing potential phishing emails:

*  Be suspicious of any email that:

    — Includes multiple other recipients in the “to” or “cc” fields.

    — Displays a suspicious “from” address, such as a foreign URL for a U.S. company or a Gmail or other “disposable” address for a business sender.  However, even when the sender’s address looks legitimate, it can still be “spoofed” or falsified by a malicious sender.

Bill points out that he has noticed these indicators in phishing emails in the past, even in those that otherwise looked like they came from official sources.
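For readers curious how these red flags might translate into an automated screen, the following is a minimal sketch in Python. It is purely illustrative: the function name, thresholds, and suspicious-phrase list are my own assumptions, not part of any HHS tool or official guidance, and no simple filter substitutes for staff training.

```python
import re

# Phrases that, per the tips above, suggest a request for credentials or a click.
# This list is an illustrative assumption, not an official blocklist.
SUSPICIOUS_PHRASES = ("verify your password", "confirm your account", "click here")

# Free/disposable domains that would be unusual for a business sender.
FREE_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

def phishing_flags(sender: str, recipients: list[str], body: str) -> list[str]:
    """Return a list of red flags drawn from the HHS and Fox Rothschild tips."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    # A "disposable" address used for a business sender (Bill's second tip).
    if domain in FREE_DOMAINS:
        flags.append("free or disposable sender domain")
    # Multiple other recipients in the "to"/"cc" fields (Bill's first tip);
    # the threshold of 5 is arbitrary for this sketch.
    if len(recipients) > 5:
        flags.append("unusually large recipient list")
    lowered = body.lower()
    # Requests personal information or credentials (HHS tips).
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        flags.append("requests credentials or a click")
    # Asks you to click on a link; a bare URL match is the simplest proxy.
    if re.search(r"https?://", body):
        flags.append("contains a link")
    return flags
```

A message tripping several flags at once (a free-mail sender, a long cc list, a password request, and an embedded link) would warrant forwarding to the organization's designated security contact rather than a click, consistent with the reporting process discussed below.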

My partner Elizabeth Litten was quoted at length by Alexis Kateifides in his recent article in DataGuidance entitled “USA: ‘Unique’ HIPAA violation results in $800,000 settlement.”  While the full text can be found in the June 26, 2014 article on DataGuidance.com, the following considerations are based upon points discussed in the article.  (Elizabeth herself has written many entries on this blog related to the topic of large breaches of protected health information (“PHI”) under HIPAA.)

The article discusses the U.S. Department of Health and Human Services (“HHS”) announcement, in a June 23, 2014 press release, that it had reached a Resolution Agreement (the “Resolution Agreement”) with Parkview Health System, Inc. d/b/a Parkview Physicians Group, f/k/a Parkview Medical Group, a nonprofit Indiana health provider (“Parkview”).  Pursuant to the Resolution Agreement, Parkview has agreed to pay $800,000 as a “Resolution Amount” and to enter a corrective action plan to address its HIPAA compliance issues.

There are several interesting aspects to the Parkview incident and Resolution Agreement, including those in Elizabeth’s comments quoted below.  The Resolution Agreement recites that it relates to an incident that was reported in a complaint to HHS on June 10, 2009 by Dr. Christine Hamilton, a physician.  Dr. Hamilton apparently asserted that Parkview failed to appropriately and reasonably safeguard the PHI of thousands of her patients in paper medical records that had been in the custody of Parkview from September 2008, when Dr. Hamilton had retired.  The Resolution Agreement recited the allegation that

Parkview employees, with notice that Dr. Hamilton had refused delivery and was not at home, delivered and left 71 cardboard boxes of these medical records unattended and accessible to unauthorized persons on the driveway of Dr. Hamilton’s home, within 20 feet of the public road and a short distance away (four doors down) from a heavily trafficked public shopping venue.

Elizabeth pointed out in the DataGuidance article, “The fact that Parkview left such a large volume of medical records in an unsecured location suggests that Parkview acted with ‘willful neglect’ as defined by the HIPAA regulations.”  Elizabeth went on to say in the article,

Although the resolution amount of $800,000 seems high given the fact that the records were, apparently, intended to be transferred from one covered entity to another, the circumstances suggest that Parkview was intentionally or recklessly indifferent to its obligation to secure the records. Second, the incident underscores the risks attendant to paper records. A majority of large breaches involve electronic records, but paper PHI is also vulnerable to breach and covered entities and business associates need to realize that large fines and penalties are also likely to be imposed for failure to secure PHI contained in paper form. . . .  While the Resolution Agreement does not provide very much information as to the events leading up to the ‘driveway dumping’ event, its recitation of the facts raises the possibility that Parkview may not have had proper authorization to hold the records in the first place. . . .  Parkview ‘received and took control’ of the records of 5,000 to 8,000 of the physician’s patients in September of 2008, because it was ‘assisting’ the physician with transitioning the patients to new providers and was ‘considering the possibility of purchasing’ the records from the physician, who was retiring and closing her practice. The ‘driveway dumping’ did not occur until June of 2009. It is not clear from the Resolution Agreement when the physician retired, whether Parkview ever treated the patients, and/or whether Parkview was otherwise appropriately authorized under HIPAA to receive, control and hold the records for this  10-month period.

In addition to the incisive analysis by Elizabeth in the DataGuidance article, there are a few other points worth making regarding the Resolution Agreement.  First, the incident is not posted on the HHS “Wall of Shame” for large PHI breaches affecting 500 or more individuals because it occurred several months before the September 2009 effective date for such posting.  Second, it is noteworthy that it took almost five years after the incident for the Resolution Agreement to be signed between Parkview and HHS.  Third, Parkview’s Web site appears, as of this writing, to be notably devoid of any reference to the Resolution Agreement or payment of the Resolution Amount.

Finally, the Resolution Agreement took great effort to make it clear that the $800,000 payment by Parkview was not a civil monetary penalty (“CMP”) but a “resolution amount”; in the Resolution Agreement, HHS reserved the right to impose a CMP if there was noncompliance by Parkview with the corrective action plan.  The HHS Web site says the following about the relatively few cases of resolution agreements (only 21 reported to date):

A resolution agreement is a contract signed by HHS and a covered entity in which the covered entity agrees to perform certain obligations (e.g., staff training) and make reports to HHS, generally for a period of three years. During the period, HHS monitors the covered entity’s compliance with its obligations. A resolution agreement likely would include the payment of a resolution amount. These agreements are reserved to settle investigations with more serious outcomes. When HHS has not been able to reach a satisfactory resolution through the covered entity’s demonstrated compliance or corrective action through other informal means, civil money penalties (CMPs) may be imposed for noncompliance against a covered entity. To date, HHS has entered into 21 resolution agreements and issued CMPs to one covered entity.

LabMD is not the only company that has tried to buck the FTC’s assertion of authority over data security breaches. Wyndham Worldwide Corp. has spent the past year contesting the FTC’s authority to pursue enforcement actions based upon companies’ alleged “unfair” or “unreasonable” data security practices.  On Monday, April 7, 2014, the United States District Court for the District of New Jersey sided with the FTC and denied Wyndham’s motion to dismiss the FTC’s complaint.  The Court found that Section 5 of the FTC Act permits the FTC to regulate data security, and that the FTC is not required to issue formal rules about what companies must do to implement “reasonable” data security practices.  Notably, Wyndham’s data breach involved personal information that included names, addresses, email addresses, telephone numbers, payment card account numbers, expiration dates, and security codes, and did not involve HIPAA-covered Protected Health Information (PHI), so the court did not address the coexistence of data security authority under the FTC Act and HIPAA.

My Fox Rothschild LLP colleague, Todd Rodriguez, recently posted a blog describing the new HIPAA “Security Risk Assessment Tool” (SRA Tool) developed by the Department of Health and Human Services (HHS) as a collaboration between the Office for Civil Rights (OCR) and Office of the National Coordinator for Health Information Technology (ONC).  The tool, available for download, supplements the detailed Omnibus Rule standards with a practical, hands-on resource entities can use to evaluate the efficacy of their data security practices, and users are asked to provide feedback on the SRA Tool by submitting comments before June 2, 2014.

By contrast, the FTC expects companies to review its enforcement actions and figure out what not to do when it comes to data security practices.  As reported by Andrew Scurria in Law360 on March 26, 2014, FTC Chairwoman Ramirez appeared before a Senate Commerce Committee panel and responded to critiques that the FTC has not provided enough guidance to businesses regarding appropriate data security practices.  Ramirez referenced the consent decrees resulting from the cases the agency has brought and settled under the unfairness and deception prongs of Section 5 of the FTC Act, and said that companies can “discern” the FTC’s approach to data security enforcement from those.

The recent victory in the Wyndham case may be a sign that the “other” data security sheriff in town, the FTC, will ramp up its enforcement actions and catch more companies that have been unable either to “discern” the FTC’s expectations or to avoid hacking incidents or other security intrusions.  Unfortunately, because it does not appear that the FTC will issue any regulatory guidance in the near future about what companies can do to ensure that their data security practices are reasonable, companies must monitor closely the FTC’s actions, adjudications or other signals in an attempt to predict what the FTC views as data security best practices.

Elizabeth Litten and Michael Kline write:

We have posted several blogs, including those here and here, tracking the reported 2011 theft of computer tapes from the car of an employee of Science Applications International Corporation (“SAIC”) that contained the protected health information (“PHI”) affecting approximately 5 million military clinic and hospital patients (the “SAIC Breach”).  SAIC’s recent Motion to Dismiss (the “Motion”) the Consolidated Amended Complaint filed in federal court in Florida as a putative class action (the “SAIC Class Action”) highlights the gaps between an incident (like a theft) involving PHI, a determination that a breach of PHI has occurred, and the realization of harm resulting from the breach. SAIC’s Motion emphasizes this gap between the incident and the realization of harm, making it appear like a chasm so wide it practically swallows the breach into oblivion. 

 

SAIC, a giant publicly-held government contractor that provides information technology (“IT”) management and, ironically, cyber security services, was engaged to provide IT management services to TRICARE Management Activity, a component of the U.S. Department of Defense (“DoD”) that administers TRICARE, the military health plan for active duty service members.  SAIC had been contracted to transport backup tapes containing TRICARE members’ PHI from one location to another.

 

According to the original statement published in late September of 2011 (the “TRICARE/SAIC Statement”), the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the TRICARE/SAIC Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes. Note 17 to the audited financial statements (“Note 17”) contained in the SAIC Annual Report on Form 10-K for the fiscal year ended January 31, 2012, dated March 27, 2012 (the “2012 Form 10-K”), filed with the Securities and Exchange Commission (the “SEC”) includes the following:

 

There is no evidence that any of the data on the backup tapes has actually been accessed or viewed by an unauthorized person. In order for an unauthorized person to access or view the data on the backup tapes, it would require knowledge of and access to specific hardware and software and knowledge of the system and data structure.  The Company [SAIC] has notified potentially impacted persons by letter and is offering one year of credit monitoring services to those who request these services and in certain circumstances, one year of identity restoration services.

While the TRICARE/SAIC Statement contained similar language to that quoted above from Note 17, the earlier TRICARE/SAIC Statement also said, “The risk of harm to patients is judged to be low despite the data elements . . . .” Because Note 17 does not contain such “risk of harm” language, it would appear that (i) there may have been a change in the assessment of risk by SAIC six months after the SAIC Breach or (ii) SAIC did not want to state such a judgment in an SEC filing.

 

Note 17 also discloses that SAIC has reflected a $10 million loss provision in its financial statements relating to the  SAIC Class Action and various other putative class actions respecting the SAIC Breach filed between October 2011 and March 2012 (for a total of seven such actions filed in four different federal District Courts).  In Note 17 SAIC states that the $10 million loss provision represents the “low end” of SAIC’s estimated loss and is the amount of SAIC’s deductible under insurance covering judgments or settlements and defense costs of litigation respecting the SAIC Breach.  SAIC expresses the belief in Note 17 that any loss experienced in excess of the $10 million loss provision would not exceed the insurance coverage.  

 

Such insurance coverage would, however, likely not be available for any civil monetary penalties or counsel fees that may result from the current investigation of the SAIC Breach being conducted by the Office of Civil Rights of the Department of Health and Human Services (“HHS”) as described in Note 17.

  

Initially, SAIC did not deem it necessary to offer credit monitoring to the almost 5 million reportedly affected individuals, but urged anyone suspecting they had been affected to consult the Federal Trade Commission’s identity theft website. Approximately six weeks later, the DoD issued a press release stating that TRICARE had “directed” SAIC to take a “proactive” response by covering a year of free credit monitoring and restoration services for any patients expressing “concern about their credit as a result of the data breach.”  The cost of such a proactive response can easily run into millions of dollars in the SAIC Breach. It is unclear to what extent, if any, insurance coverage would be available to cover the cost of the proactive response mandated by the DoD, even if the credit monitoring, restoration services and other remedial activities of SAIC were to become part of a judgment or settlement in the putative class actions.

 

We have blogged about what constitutes an impermissible acquisition, access, use or disclosure of unsecured PHI that poses a “significant risk” of “financial, reputational, or other harm to the individual” amounting to a reportable HIPAA breach, and when that “significant risk” develops into harm that may create claims for damages by affected individuals. Our partner William Maruca, Esq., artfully borrows a phrase from former Defense Secretary Donald Rumsfeld in discussing a recent disappearance of unencrypted backup tapes reported by Women and Infants Hospital in Rhode Island. If one knows PHI has disappeared, but doubts it can be accessed or used (due to the specialized equipment and expertise required to access or use the PHI), there is a “known unknown” that complicates the analysis as to whether a breach has occurred. 

 

As we await publication of the “mega” HIPAA/HITECH regulations, continued tracking of the SAIC Breach and ensuing class action litigation (as well as SAIC’s SEC filings and other government filings and reports on the HHS list of large PHI security breaches) provides some insights as to how covered entities and business associates respond to incidents involving the loss or theft of, or possible access to, PHI.  If a covered entity or business associate concludes that the incident poses a “significant risk” of harm, but no harm actually materializes, perhaps (as the SAIC Motion repeatedly asserts) claims for damages are inappropriate. When the covered entity or business associate takes a “proactive” approach in responding to what it has determined to be a “significant risk” (such as by offering credit monitoring and restoration services), perhaps the risk becomes less significant. But once the incident (a/k/a the ubiquitous laptop or computer tape theft from an employee’s car) has been deemed a breach, the chasm between incident and harm seems to open wide enough to encompass a mind-boggling number of privacy and security violation claims and issues.

The widely publicized pre-Christmas breach of confidential data held by Stratfor Global Intelligence Service (“Stratfor”), a company specializing in data security, reminded me that very little (if any) electronic information is truly secure. If Stratfor’s data can be hacked into, and the health information of nearly 5 million military health plan (TRICARE) members maintained by multi-billion dollar Department of Defense contractor Science Applications International Corporation (SAIC) (the subject of a five-part series of blog postings) can be accessed, can we trust that any electronically transmitted or stored information is really safe?  

I had the pleasure of having lunch yesterday with my friend Al, an IT guru who has worked in hospitals for years. Al understands and appreciates the need for privacy and security of information, and has the technological expertise to know where and how data can be hacked into or leaked out. Perhaps not surprisingly, Al does not do his banking on-line, and tries to avoid making on-line credit card purchases.

 

Al and I discussed the proliferation of the use of iPhones and other mobile technology by physicians and staff in hospitals and other settings, a topic recently discussed in a newsletter published by the American Medical Association. Quick access to a patient’s electronic health record (EHR) is convenient and may even be life-saving in some circumstances, but use of these mobile devices creates additional portals for access to personal information that should be protected and secured. Encryption technology and, perhaps most significantly, use of that technology barely keep pace with the exponential rate at which we are creating and transmitting data electronically.

 

On the other hand, trying to reverse the exponential growth of electronic communications and transactions would be futile and probably counter-productive. The horse is out of the gate, and expecting it to stop mid-stride and retreat with a false-start call is irrational. The horse will race ahead just as surely as my daughter will text and check her Facebook page, my son will recharge his iPad, and I will turn around and head back to my office if I forget my iPhone. We want and need technology, but seem to forget or fail to fully understand the vast, unprotected and ever-expanding universe into which we send information when we use this technology.

 

If we expect breaches or, at least, question our assumptions that personal information will be protected, perhaps we will get better at discerning how and when we disclose our personal information. An in-person conversation or transaction (for example, when Al goes to his bank in person or when a physician speaks directly to another physician about a patient’s care) is less likely to be accessed and used inappropriately than an electronic one. We can better assess the risks and benefits of communicating information electronically when we appreciate the security frailties inherent in electronic communication and storage. 

 

Perhaps Congress should take the lead in enacting laws that will help protect against data breaches that could compromise “critical infrastructure systems” (as proposed in the “PRECISE Act” introduced by Rep. Daniel E. Lungren (R-CA)), but more comprehensive, potentially expensive, and/or use-impeding cybersecurity laws might have the effect of tripping the racehorse mid-lap rather than controlling its pace or keeping it safely on course.

Five members of Congress (two Republicans and three Democrats) representing districts from far-flung states (Colorado, Florida, Massachusetts, New Jersey and Texas) are co-signers of a bipartisan letter dated December 2, 2011 (the “December 2 Letter”), addressed to the Director of the TRICARE Management Authority. The December 2 Letter was written to express the Congress members’ “deep concerns about a major breach of personally identifiable and protected health information” by TRICARE contractor Science Applications International Corporation (SAIC).

Michael Kline and I have blogged about the SAIC PHI breach in four previous postings in this blog series, most recently on November 9, 2011, shortly after TRICARE did an about-face and announced that it was directing SAIC to offer the 4.9 million affected individuals credit monitoring services and assistance.

The December 2 Letter requests “timely and thorough responses” by no later than February 2, 2012 to seventeen startlingly direct and often blame-loaded questions. The questions make it very clear that the authors believe SAIC (and/or TRICARE) should have done more to prevent the SAIC breach and should be doing more to protect affected individuals. Question 9 notes that SAIC offered to provide “victims” (note the word choice) credit monitoring services for a year, but goes on to point out that “such services are useless in protecting against medical identity theft and fraudulent health insurance claims.” It then asks whether victims will also be provided with “newly available medical identity theft monitoring,” and, if not, to explain why such monitoring would not be provided.

 

The December 2 Letter closes with a brief and scathing chronology of recent SAIC misconduct, after noting that “SAIC has received more than $20 billion in federal contracts over the previous three fiscal years,” and asks: “Why does [TRICARE] continue to contract with SAIC for its data handling and IT needs despite these major performance problems?”

 

The members of Congress who authored the December 2 Letter hail from both sides of the aisle and from various parts of the country, but a common link seems to be a strong interest in information privacy and security. For example, Edward Markey (D-Mass) and Joe Barton (R-Texas) co-chair the Bi-Partisan Privacy Caucus and recently focused on Facebook privacy issues. Cliff Stearns (R-Florida) introduced an online privacy bill last spring. Diana DeGette (D-Colorado) has commented publicly on the importance of online privacy.

 

While Rob Andrews (D-New Jersey) has no apparent recent history with respect to information privacy and security, he was the sponsor in 2003 of a bill, which was not ultimately enacted, designed to afford students and parents private civil remedies for the violation of their privacy rights under the General Education Provisions Act. Moreover, in his continuing role as a member of the House Committee on Armed Services and its Subcommittee on Oversight and Investigation, he has a deep interest and abiding concern regarding large scale threats to the privacy and security of protected health information of millions of service individuals and their families.

By: Elizabeth Litten and Michael Kline

[Capitalized terms not otherwise defined in this Part 4 shall have the meanings assigned to them in Part 3 or earlier Parts.]

 

As reported in Part 3 of this blog series, Tricare and SAIC did not initially offer credit monitoring services to patients affected by the 2011 Breach made public on September 29, 2011, due to what was then judged to be the low “risk of harm” to those affected.  The Public Statement specifically answered the question “Will credit monitoring and restoration services be provided to protect affected individuals against possible identity theft?” as follows:

 

No.  The risk of harm to patients is judged to be low despite the data elements involved. Retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure. To date, we have no conclusive evidence that indicates beneficiaries are at risk of identity theft, but all are encouraged to monitor their credit and place a free fraud alert on their credit for a period of 90 days using the Federal Trade Commission (FTC) web site.

 

Now, less than 6 weeks later, Tricare has directed SAIC to provide one year of credit monitoring and restoration services to patients “who express concern about their credit” as a result of the 2011 Breach.  In a press release issued by the DoD on November 4, 2011, entitled "Proactive Response to Recent Data Breach Announced" (the “DoD Press Release”), Tricare Management Activity’s deputy director explains,

 

These additional proactive security measures exceed the industry standard to protect against the risk of identity theft.  We take very seriously our responsibility to offer patients peace of mind that their credit and quality of life will be unaffected by this breach.  

 

It is unclear that the new security measure exceeds the “industry standard”: as evidenced by numerous past postings respecting PHI security breaches in this blog series, in some cases as much as two years of credit monitoring has been offered to affected individuals. However, given the assurances in the Public Statement to the “approximately 4.9 million patients treated at military hospitals and clinics during the past 20 years” that the risk of harm was low and there was no conclusive evidence that patients were at risk of identity theft, one can speculate as to whether Tricare’s abrupt about-face relates to new evidence, a revised judgment as to the risk of harm to affected patients and/or simply an abundance of caution as to its own exposure to risk.

 

Then again, Tricare’s new position could have less to do with new concerns related to patient identity theft risk, and more to do with a “proactive response” or even a preemptive strike by Tricare and DoD to combat certain of the allegations in the putative class action lawsuit filed against them in the U.S. District Court for the District of Columbia on October 11, 2011 (Gaffney v. Tricare Management Activity, et al., Case No. 1:2011cv01800) (the “Class Action Complaint”).  Each of Virginia Gaffney and Adrienne Taylor, two of the plaintiffs named in the Class Action Complaint, has alleged that she had “incurred an economic loss as a result of having to purchase a credit monitoring service to alert her to potential misappropriation of her identity.”

 

By offering the credit monitoring services to all of the 4.9 million affected individuals, Tricare and DoD may be endeavoring to render moot or at least mitigate the risk from those allegations in the Class Action Complaint. [Note: The recent posting of the 2011 Breach in the HHS List, which did not provide any information beyond that reflected in the Public Statement, earlier reported “5,117,799” as the approximate number of individuals affected, but the current number reported is “4,901,432.”]

 

The Class Action Complaint seeks judgment against Tricare and DoD for damages in an amount of $1,000 for each affected individual.  Perhaps Tricare and DoD did the quick math and realized that the cost of credit monitoring and restoration for a subset (those “expressing concern”) of the roughly 4.9 million affected patients would be far less than the almost $5 billion aggregate damages award sought in the Class Action Complaint.  Tricare may have reversed its stance as a result of this “risk of harm” analysis, and not because of new information or a revised evaluation related to a heightened risk of harm to affected individuals.

By Michael Kline and Elizabeth Litten

 

[Capitalized terms not otherwise defined in this Part 3 shall have the meanings assigned to them in Parts 1 and 2.]

 

The Public Statement reports that SAIC and Tricare are cooperating in the notification process but that no credit monitoring or restoration services will be provided in light of the “low risk of harm.” This was in contrast to the decision of Nemours in the Nemours Report to provide such services.

Since the release by SAIC of the Public Statement, Law360 has reported that:

(i)   According to Tricare, SAIC was “on the hook for the cost of notifying nearly 5 million program beneficiaries that computer tapes containing their personal data had been stolen”;

(ii)  A putative class action lawsuit was filed against Tricare and DoD (but not SAIC) respecting the 2011 Breach; and

(iii) Another putative class action lawsuit was filed against SAIC (but not Tricare and DoD) respecting the 2011 Breach. 

Further review of SAIC and its incidents regarding PHI reveals that the 2011 Breach was not the first such event for SAIC. However, it appears to be the first such breach since the adoption of the Breach Notification Rule in August 2009.

On July 21, 2007, The Washington Post reported that SAIC had acknowledged the previous day that “some of its employees sent unencrypted data — such as medical appointments, treatments and diagnoses — across the Internet” that related to 867,000 U.S. service members and their families. The Post article continues:

So far, there is no evidence that personal data have been compromised, but ‘the possibility cannot be ruled out,’ SAIC said in a press release. The firm has fixed the security breach, the release said.

Embedded later in the Post article is the following: 

The [2007] disclosure comes less than two years after a break-in at SAIC’s headquarters that put Social Security numbers and other personal information about tens of thousands of employees at risk. Among those affected were former SAIC executive David A. Kay, who was the chief U.N. weapons inspector in Iraq, and a former director who was a top CIA official.

It is not clear whether the earlier 2005 breach reported in the Post involved PHI or other personal information.

On January 20, 2009, SPAMfighter reported that SAIC had informed the Attorney General of New Hampshire of a data breach involving malware. The SPAMfighter report continues that SAIC sent letters to many affected users informing them of the potential compromise of personal information.  (A portion of such personal information would have been deemed PHI had it been part of health-related material.)

The SPAMfighter report also discloses the following:

Furthermore, the current [2009] breach at SAIC is not the only one. There was one other last year (2008), when keylogging software managed to bypass SAIC’s malware detection system. That breach had exposed mainly business account information.

As of the date of this blog post, the “News Releases” section of the SAIC Web site makes no reference to the 2011 Breach. Nor does the “SEC Filings” section under “Investor Relations” on the SAIC Web site indicate any recent SEC filing disclosing the 2011 Breach.

Coincidentally, the SEC issued a release on October 13, 2011 containing guidelines for public companies regarding disclosure obligations relating to cybersecurity risks and cyber incidents. For SAIC, an $11 billion company, the actual costs of notification and remediation of the 2011 Breach may run into millions of dollars, yet its management may not deem the 2011 Breach a “material” reportable event for SEC purposes.

Much more is likely to be heard in the future about the mammoth 2011 Breach and its aftermath, which may give covered entities and their business associates valuable information and guidance to consider in identifying and confronting a future large PHI security breach. The 2011 Breach has not yet even appeared on the HHS List. The regulatory barriers preventing private actions under HIPAA/HITECH may be tested by the putative class action lawsuits. It will also be interesting to see whether the cooperation of SAIC with Tricare and DoD withers in the face of the pressures of the lawsuits and the potential controversy regarding SAIC’s decision not to provide credit monitoring and identity theft protection to affected individuals.

By Elizabeth Litten and Michael Kline

[Capitalized terms not otherwise defined in this Part 2 shall have the meanings assigned to them in Part 1.]

In an October 3, 2011 Securities and Exchange Commission (“SEC”) filing posted on its Web site, SAIC described itself as

a FORTUNE 500® scientific, engineering, and technology applications company that uses its deep domain knowledge to solve problems of vital importance to the nation and the world, in national security, energy and the environment, critical infrastructure, and health. The company’s approximately 41,000 employees serve customers in the U.S. Department of Defense, the intelligence community, the U.S. Department of Homeland Security, other U.S. Government civil agencies and selected commercial markets. Headquartered in McLean, Va., SAIC had annual revenues of approximately $11 billion for its fiscal year ended January 31, 2011.

The SAIC PHI breach, which potentially affected nearly 5 million individuals, was reported even though the PHI was contained on backup tapes used by the military health system and even though, as explained in the Public Statement:

The risk of harm to patients is judged to be low despite the data elements involved since retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure…  [Q and A] Q. Can just anyone access this data? A. No. Retrieving the data on the tapes requires knowledge of and access to specific hardware and software and knowledge of the system and data structure.

The Public Statement goes on to say the following in another answer:

After careful deliberation, we have decided that we will notify all affected beneficiaries. We did not come to this decision lightly. We used a standard matrix to determine the level of risk that is associated with the loss of these tapes. Reading the tapes takes special machinery. Moreover, it takes a highly skilled individual to interpret the data on the tapes. Since we do not believe the tapes were taken with malicious intent, we believe the risk to beneficiaries is low. Nevertheless, the tapes are missing and given the totality of the circumstances, we determined that individual notification was required in accordance with DoD guidance. [Emphasis supplied.]

The linchpin of SAIC’s final decision to notify all of the potentially affected individuals appeared to be the DoD guidance. Given SAIC’s position as an $11 billion contractor heavily dependent on DoD and other U.S. government contracts, as described above, SAIC may have had few practical alternatives but to notify beneficiaries.

SAIC conducted “careful deliberation” before reaching its result and indicated that the risk of harm was “low.” Had the DoD guidance not been a factor, and had SAIC concluded that this was a case in which an unlocked file or unencrypted data was discovered but no one appeared to have opened the file or viewed the data, would SAIC’s conclusion have been the same? Would SAIC have come to the same conclusion as Nemours and decided to report?

What is clear is that the breach notice determination should involve a careful risk and impact analysis, as SAIC asserts that it performed. Even the most deafening sound created by a tree crashing in the forest is unlikely to affect the ears of the airplane passengers flying overhead. Piping that sound into the airplane, though, is very likely to disgruntle (or even unduly panic) the passengers. 

[To be continued in Part 3]