Individuals who have received notice of a HIPAA breach are often offered free credit monitoring services for some period of time, particularly if the protected health information involved included social security numbers.  I have not (yet) received such a notice, but was concerned when I learned about the massive Equifax breach (see here to view a post on this topic on our Privacy Compliance and Data Security blog).

The Federal Trade Commission’s Consumer Information page sums it up well:

If you have a credit report, there’s a good chance that you’re one of the 143 million American consumers whose sensitive personal information was exposed in a data breach at Equifax… .”

I read the news reports this morning, and decided to go on the Equifax site, equifaxsecurity2017.com, to see if my information may have been affected and to sign up for credit file monitoring and identity theft protection (the services are free to U.S. consumers, whether or not affected by the breach, for one year).

The Equifax site describes the breach and lets users click on a “Potential Impact” tab to find out whether their information “may have been impacted” by the breach. Users can find out by clicking on the “Check Potential Impact” link and following these steps:

  1. Click on the below link, “Check Potential Impact,” and provide your last name and the last six digits of your Social Security number.
  2. Based on that information, you will receive a message indicating whether your personal information may have been impacted by this incident.
  3. Regardless of whether your information may have been impacted, we will provide you the option to enroll in TrustedID Premier. You will receive an enrollment date. You should return to this site and follow the “How do I enroll?” instructions below on or after that date to continue the enrollment and activation process. The enrollment period ends on Tuesday, November 21, 2017.

Before satisfying my curiosity, though, I decided to click on the “Terms of Use”, that too-rarely-used link typically included at the bottom of a webpage that sets forth the quid pro quo of using a website. Perhaps it was because my law partner (and the firm’s Chief Privacy Officer), Mark McCreary, has instilled some cautiousness in me, or because I wondered if there might be a catch. Why would Equifax offer a free year of credit monitoring to everyone, even those not affected by the breach? What would Equifax get in return?

I skimmed the “Product Agreement and Terms of Use”, noted the bolded text requiring arbitration of disputes and waiving my right to participate in a class action, but wasn’t concerned enough to resist the urge to find out if my information was affected.

I then followed the “Getting Started” process by following the TrustedID Premier link, and quickly received a notice stating that my information “may have been impacted” and that I could enroll on September 11, 2017 (my “designated enrollment date”).

Not more than a couple of hours later, I came across an article warning of the legal rights consumers give up by signing up on Equifax’s website. The article describes the arbitration clause in the Terms of Use provisions, and reports on New York Attorney General Eric Schneiderman’s tweet stating that the arbitration provision is “unacceptable and unenforceable”. The article also reports that, today, Equifax updated the Terms of Use language to include a new provision allowing a user to write to Equifax to opt-out of the arbitration provision within 30 days of the date the user first accepts the Product Agreement and Terms of Use.

My curiosity got the best of me and I now know I’m on the “affected persons” list, but I haven’t yet signed up for my free TrustedID Premier credit monitoring service. I have the weekend to decide whether to sign up for the service, and 30 days from Monday (if I actually sign up for the service) to decide whether to accept the “cost” of agreeing to binding arbitration.

 

In some respects, HIPAA has had a design problem from its inception. HIPAA is well known today as the federal law that requires protection of individually identifiable health information (and, though lesser-known, individual access to health information), but privacy and security were practically afterthoughts when HIPAA was enacted back in 1996. HIPAA (the Health Insurance Portability and Accountability Act) was originally described as an act:

To amend the Internal Revenue Code of 1986 to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery, to promote the use of medical savings accounts, to improve access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes.”

The privacy of individually identifiable health information was one of those “other purposes” only peripherally included in the 1996 act. Privacy protection was to be a follow-up, a “to-do” checklist item for the future. HIPAA directed the Secretary of Health and Human Services to recommend privacy standards to specified congressional committees within a year of enactment, and, if Congress did not enact privacy legislation within 3 years of enactment, the Secretary was to proceed with the promulgation of privacy regulations. Security was a bit more urgent, at least in the context of electronic health transactions such as claims, enrollment, eligibility, payment, and coordination of benefits. HIPAA required the Secretary to adopt standards for the security of electronic health information systems within 18 months of enactment.

This historical context casts some light on why our 2017-era electronic health records (EHR) systems often lack interoperability and yet are vulnerable to security breaches. HIPAA may be partially to blame, since it was primarily designed to make health insurance more portable and to encourage health insurers and providers to conduct transactions electronically. Privacy and security were the “oh, yeah, that too” add-ons to be fully addressed once electronic health information transactions were underway and the EHR systems needed to support them were already up and running. Since 1996, EHRs have developed on a clunky provider-by-provider (or health system-by-health system) and patient encounter-by-patient encounter basis, making them not only less accurate and efficient, but also vulnerable to privacy and security lapses. (Think of the vast quantity of patient information breached when a hospital’s EHR or a health plan’s claims database is hacked.)

This past June, I participated on a California Israel Medical Technology Summit panel discussing privacy and security issues. An audience member asked the panel whether we thought blockchain technology was the answer to HIPAA and other privacy and security-related legal requirements. I didn’t have a good answer, thinking “isn’t that the technology used to build Bitcoin, the payment system used by data hackers everywhere?”

This past July, Ritesh Gandotra, a director of global outsourcing for Xerox, wrote that blockchain technology could overhaul our “crippled” EHR management system. Gandotra writes “Historically, EHRs were never really designed to manage multi-institutional and lifetime medical records; in fact, patients tend to leave media data scattered across various medical institutes … This transition of data often leads to the loss of patient data.” He goes on to explain how blockchain, the “distributed ledger” technology originally associated with Bitcoin, can be used to link discrete patient records (or data “blocks”) contained in disparate EHRs into “an append-only, immutable, timestamped chain of content.”
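The “append-only, immutable, timestamped chain of content” Gandotra describes can be illustrated with a short sketch. The `RecordChain` class below is a toy illustration of the hash-chaining idea only, not any actual EHR or blockchain product; the class and field names are hypothetical, and a real distributed ledger would add consensus, replication, and access controls on top of this structure.

```python
import hashlib
import json
import time

def _hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class RecordChain:
    """A toy append-only, timestamped chain of record entries.

    Each block stores the hash of its predecessor, so altering any
    earlier entry invalidates every hash that follows it.
    """

    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> dict:
        # Link the new block to the hash of the last one (or a zero
        # hash for the first block), then hash the block itself.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "record": record,
            "prev_hash": prev_hash,
        }
        block["hash"] = _hash_block(block)
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash and check each back-link."""
        for i, block in enumerate(self.blocks):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != _hash_block(body):
                return False
            if i > 0 and block["prev_hash"] != self.blocks[i - 1]["hash"]:
                return False
        return True
```

Tampering with any appended record (say, editing an earlier encounter) changes that block’s recomputed hash and breaks every later back-link, which is the “immutability” property in miniature.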

Using blockchain technology to reconfigure EHRs makes sense. Ironically, the design flaw inherent in HIPAA’s original 1996 design (the promotion of electronic health transactions to foster portability and accountability in the health insurance context while treating privacy and security as an afterthought) can be fixed using the very same technology that built the payment network favored by ransomware hackers.

Post Contributed by Matthew J. Redding.

On April 26, 2017, Memorial Hermann Health System (“MHHS”) agreed to pay the U.S. Department of Health and Human Services (“HHS”) $2.4 million to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule.

The underlying incident occurred in September of 2015, when a patient presented a falsified Texas driver’s license to MHHS’ staff upon appearing for the patient’s scheduled appointment. MHHS’ staff contacted law enforcement to verify the patient’s identification, and law enforcement thereafter came to the facility and arrested the patient. The incident drew some national attention from immigration activist groups.  Our partner Bill Maruca posted a blog in September 2015 that discussed the event.

It is important to note that the disclosure to law enforcement was not a contributing factor to the alleged HIPAA violation. In fact, a covered entity is permitted under HIPAA to disclose protected health information (“PHI”) to the limited extent necessary to report a crime occurring on its premises to law enforcement (see 45 CFR 164.512(f)(5)). However, in the MHHS case, the potential HIPAA violation occurred when MHHS issued press releases to several media outlets, addressed activist groups and state officials, and published a statement on its website following the incident, identifying the patient by name on each occasion.

The MHHS facility was a gynecology clinic, so a patient’s name associated with the facility constituted PHI. Therefore, the release of the patient’s name without the patient’s authorization was an impermissible disclosure of PHI under HIPAA.

The HHS Office for Civil Rights (“OCR”) alleged that, in addition to the impermissible disclosure of PHI, MHHS failed to document the sanctions imposed on its workforce members responsible for the impermissible disclosures.

6 Takeaways:

Covered entities, such as hospitals, physician practices, and other health care entities, should be cautious in publicizing any event involving their patients so as to avoid impermissibly disclosing PHI. Further, public disclosure could open the door to liability under state statutes and common law (e.g., patient’s right of privacy, freedom from defamation, and contractual rights). Here are a few takeaways from the MHHS HIPAA settlement:

  1. PHI must remain protected. The disclosure of PHI to law enforcement, or the presence of health information in the public domain generally, does not relieve the covered entity of its obligations under HIPAA. Instead, covered entities have a continuing obligation to protect and maintain the privacy and security of PHI in their possession and control, and to use and disclose only such information as is permitted under HIPAA.
  2. Avoid inadvertently publishing PHI. PHI is not limited to health information that identifies a patient by his/her name, SSN, address or date of birth. It also includes any other health information that, in conjunction with publicly available information, could be used to identify the patient. We’ve seen other instances where health care entities inadvertently publish PHI in violation of HIPAA, leading to significant fines (see NY Med: $2.2 Million settlement).
  3. Review your HIPAA policies and procedures with respect to your workforce’s publications and disclosures to the media. To the extent not done so already:
    1. Develop a policy prohibiting your general workforce from commenting to the media on patient events.
    2. Develop a policy with respect to monitoring statements published on your website to avoid publishing any PHI.
    3. Designate a workforce member with a sufficient HIPAA background (nudge, nudge, HIPAA Privacy Officer) to handle media inquiries and provide the workforce with contact information of such member.
  4. Review your HIPAA policies and procedures with respect to law enforcement events.
    1.  For events not likely to compromise the health and safety of others, encourage your workforce to handle such events as discreetly as possible, involving only those members of the workforce who have a need to know.
    2. Train your workforce to identify the situations where disclosure of a patient’s PHI to law enforcement is permissible and those situations where the patient’s authorization must be obtained before disclosing his/her PHI to law enforcement.
  5. Don’t forget to timely notify the affected individuals. If an impermissible disclosure of PHI occurs, do not let the publicizing of such disclosure cause you to forget your breach notification obligations. Failing to timely notify the affected individual could result in additional penalties (see Presence Health: $475,000 settlement). The breach notification clock starts ticking upon the covered entity’s discovery (as defined under HIPAA) of the impermissible disclosure.
  6. Document your responses to impermissible disclosures of PHI and your compliance with HIPAA. HIPAA places the burden on the covered entity to maintain sufficient documentation necessary to prove that it fulfilled all of its administrative obligations under HIPAA (see 78 FR 5566 at 5641). Therefore, once you discover an impermissible disclosure, document how your entity responds, including, without limitation, the breach analysis, proof that the patient notices were timely sent, sanctions imposed upon the responsible workforce members, actions taken to prevent similar impermissible disclosures, etc. Don’t forget, the covered entity is required to maintain such documentation for at least 6 years (see 45 C.F.R. 164.414 and 164.530(j)).

Our partner Elizabeth Litten and I were recently featured again by our good friend Marla Durben Hirsch in her article in the April 2017 issue of Medical Practice Compliance Alert entitled “Business associates who farm out work create more risks for your patients’ PHI.” Full text can be found in the April 2017 issue, but a synopsis is below.

In her article Marla cautioned, “Fully one-third of the settlements inked in 2016 with OCR [the Office of Civil Rights of the U.S. Department of Health and Human Services] dealt with breaches involving business associates.” She pointed out that the telecommuting practices of business associates (“BAs”) and their employees with respect to protected health information (“PHI”) create heightened risks for medical practices that are the covered entities (“CEs”) — CEs are ultimately responsible not only for their own HIPAA breaches but for HIPAA breaches of their BAs as well.

Kline observed, “Telecommuting is on the rise and this trend carries over to organizations that provide services to health care providers, such as billing and coding, telehealth providers, IT support and law firms.” Litten commented, “Most business associate agreements (BAAs) merely say that the business associate will protect the infor­mation but are not specific about how a business associate will do so, let alone how it will when PHI is off site.”

Litten and Kline added, “OCR’s sample business associate agreement is no dif­ferent, using general language that the business associate will use ‘appropriate safeguards’ and will ensure that its subcontractors do so too.”

Kline continued, “You have much less control over [these] people, who you don’t even know . . . . Moreover, frequently practices don’t even know that the business associate is allowing staff or subcontractors to take patient PHI off site. This is a collateral issue that can become the fulcrum of the relationship. And one loss can be a disaster.”

Some conclusions that can be drawn from Marla’s article include the following items which a CE should consider doing  when dealing with BAs:

  1. Select BAs with due care and with references where possible.
  2. Be certain that there is an effective BAA executed and in place with a BA before transmitting any PHI.
  3. Periodically review and update BAAs to ensure that they address changes in technology such as telecommuting, mobile device expansion and PHI use and maintenance practices.
  4. Ask questions of BAs to know where they and their employees use and maintain PHI, such as on laptops, personal mobile devices or network servers, and what encryption or other security practices are in place.
  5. Ask BAs what subcontractors (“SCs”) they may use and where the BAs and SCs are located (consider including a provision in BAAs that requires BAs and their SCs to be legally subject to the jurisdiction of HIPAA, so that HIPAA compliance by the CE and enforcement of the BAA can be more effective).
  6. Transmit PHI to the BA using appropriate security and privacy procedures, such as encryption.
  7. To the extent practicable, alert the BA in advance as to when and how transmission of PHI will take place.
  8. Obtain from each BA a copy of its HIPAA policies and procedures.
  9. Maintain a readily accessible archive of all BAAs in effect to allow quick access and review when PHI issues arise.
  10. Have a HIPAA consultant available who can be contacted promptly to assist in addressing BA issues and provide education as to best practices.
  11. Document all actions taken to reduce risk from sharing PHI with BAs, including items 1 to 10 above.

Minimizing risk of PHI breaches by a CE requires exercising appropriate control over selection of, and contracting and ongoing interaction with, a BA. While there can be no assurance that such care will avoid HIPAA breaches for the CE, evidence of such responsible activity can reduce liability and penalties should violations occur.

It was the wallet comment in the response brief filed by the Federal Trade Commission (FTC) in the U.S. Court of Appeals for the 11th Circuit that prompted me to write this post. In its February 9, 2017 filing, the FTC argues that the likelihood of harm to individuals (patients who used LabMD’s laboratory testing services) whose information was exposed by LabMD roughly a decade ago is high because the “file was exposed to millions of users who easily could have found it – the equivalent of leaving your wallet on a crowded sidewalk.”

However, if one is to liken the LabMD file (referred to throughout the case as the “1718 File”) to a wallet and the patient information to cash or credit cards contained in that wallet, it is more accurate to describe the wallet as having been left on the kitchen counter in an unlocked New York City apartment. Millions of people could have found it, but they would have had to go looking for it, and would have had to walk through the door (or creep through a window) into a private residence to do so.

I promised to continue my discussion of LabMD’s appeal in the U.S. Court of Appeals for the 11th Circuit of the FTC’s Final Order back in January (see prior post here), planning to highlight arguments expressed in briefs filed by various amici curiae in support of LabMD.   Amici include physicians who used LabMD’s cancer testing services for their patients while LabMD was still in business, the non-profit National Federation of Independent Business, the non-profit, nonpartisan think tank TechFreedom, the U.S. Chamber of Commerce, and others. Amici make compelling legal arguments, but also emphasize several key facts that make this case both fascinating and unsettling:

The FTC has spent millions of taxpayer dollars on this case – even though there were no victims (not one has been identified in over seven years), LabMD’s data security practices were already regulated by the HHS under HIPAA, and, according to the FTC’s paid litigation expert, LabMD’s “unreasonableness” ceased no later than 2010. During the litigation, …   a whistleblower testified that the FTC’s staff … were bound up in collusion with Tiversa [the cybersecurity firm that discovered LabMD’s security vulnerability, tried to convince LabMD to purchase its remediation services, then reported LabMD to the FTC], a prototypical shakedown racket – resulting in a Congressional investigation and a devastating report issued by House Oversight Committee staff.” [Excerpt from TechFreedom’s amicus brief]

An image of Tiversa as taking advantage of the visible “counter-top wallet” emerges when reading the facts described in the November 13, 2015 Initial Decision of D. Michael Chappell, the Chief Administrative Law Judge (ALJ), a decision that would be reversed by the FTC in the summer of 2016 when it concluded that the ALJ applied the wrong legal standard for unfairness. The ALJ’s “Findings of Fact” (which are not disputed by the FTC in the reversal, notably) include the following:

“121. On or about February 25, 2008, Mr. Wallace, on behalf of Tiversa, downloaded the 1718 File from a LabMD IP address …

  1. The 1718 File was found by Mr. Wallace, and was downloaded from a peer-to-peer network, using a stand alone computer running a standard peer-to-peer client, such as LimeWire…
  2. Tiversa’s representations in its communications with LabMD … that the 1718 File was being searched for on peer-to-peer networks, and that the 1718 File had spread across peer-to-peer networks, were not true. These assertions were the “usual sales pitch” to encourage the purchase of remediation services from Tiversa… .”

The ALJ found that although the 1718 File was available for peer-to-peer sharing via use of specific search terms from June of 2007 through May of 2008, the 1718 File was actually only downloaded by Tiversa for the purpose of selling its security remediation services. The ALJ also found that there was no contention that Tiversa (or those Tiversa shared the 1718 File with, namely, a Dartmouth professor working on a study and the FTC) used the contents of the file to harm patients.

In short, while LabMD may have left its security “door” unlocked when an employee downloaded LimeWire onto a work computer, only Tiversa actually walked through that door and happened upon LabMD’s wallet on the counter-top. Had the wallet been left out in the open, in a public space (such as on a crowded sidewalk), it’s far more likely its contents would have been misappropriated.

As she has done in January for several years, our good friend Marla Durben Hirsch quoted my partner Elizabeth Litten and me in Medical Practice Compliance Alert in her article entitled “MIPS, OSHA, other compliance trends likely to affect you in 2017.” For her article, Marla asked various health law professionals to make predictions on diverse healthcare matters including HIPAA and enforcement activities. Full text can be found in the January 2017 issue, but excerpts are included below.

Marla also wrote a companion article in the January 2017 issue evaluating the results of predictions she published for 2016. The 2016 predictions appeared to be quite accurate in most respects. However, with the new Trump Administration, we are now embarking on very uncertain territory in multiple aspects of healthcare regulation and enforcement. Nevertheless, with some trepidation, below are some predictions for 2017 by Elizabeth and me taken from Marla’s article.

  1. The Federal Trade Commission’s encroachment into privacy and security will come into question. Litten said, “The new administration, intent on reducing the federal government’s size and interference with businesses, may want to curb this expansion of authority and activity. Other agencies’ wings may be clipped.” Kline added, “However, the other agencies may try to push back because they have bulked up to handle this increased enforcement.”
  2. Telemedicine will run into compliance issues. As telemedicine becomes more common, more legal problems will occur. “For instance, the privacy and the security of the information stored and transmitted will be questioned,” says Litten. “There will also be heightened concern of how clinicians who engage in telemedicine are being regulated,” adds Kline.
  3. The risks relating to the Internet of things will increase. “The proliferation of cyberattacks from hacking, ransomware and denial of service schemes will not abate in 2017, especially with the increase of devices that access the Internet, known as the ‘Internet of things,’” warns Kline. “More devices than ever will be networked, but providers may not protect them as well as they do other electronics and may not even realize that some of them — such as newer HVAC systems, ‘smart’ televisions or security cameras that can be controlled remotely — are also on the Internet and thus vulnerable,” adds Litten. “Those more vulnerable items will then be used to infiltrate providers’ other systems,” Kline observes.
  4. More free enterprise may create opportunities for providers. “For example, there may not be as much of a commitment to examine mergers,” says Kline. “The government may allow more gathering and selling of data in favor of business interests over privacy and security concerns,” says Litten.

The ambitious and multi-faceted foray by the Trump Administration into the world of healthcare among its many initiatives will make 2017 an interesting and controversial year. Predictions are always uncertain, but 2017 brings new and daunting risks to the prognosticators.  Nonetheless, when we look back at 2017, perhaps we may be saying, “The more things change, the more they stay the same.”

It was nearly three years ago that I first blogged about the Federal Trade Commission’s “Wild West” data breach enforcement action brought against now-defunct medical testing company LabMD.   Back then, I was simply astounded that a federal agency (the FTC) with seemingly broad and vague standards pertaining generally to “unfair” practices of a business entity would belligerently gallop onto the scene and allege non-compliance by a company specifically subject by statute to regulation by another federal agency. The other agency, the U.S. Department of Health and Human Services (HHS), has adopted comprehensive regulations containing extremely detailed standards pertaining to data security practices of certain persons and entities holding certain types of data.

The FTC Act governs business practices, in general, and has no implementing regulations, whereas HIPAA specifically governs Covered Entities and Business Associates and their Uses and Disclosures of Protected Health Information (or “PHI”) (capitalized terms that are all specifically defined by regulation). The HIPAA rulemaking process has resulted in hundreds of pages of agency interpretation published within the last 10-15 years, and HHS continuously posts guidance documents and compliance tools on its website. Perhaps I was naively submerged in my health care world, but I had no idea back then that a Covered Entity or Business Associate could have HIPAA-compliant data security practices that could be found to violate the FTC Act and result in a legal battle that would last the better part of a decade.

I’ve spent decades analyzing regulations that specifically pertain to the health care industry, so the realization that the FTC was throwing its regulation-less lasso around the necks of unsuspecting health care companies was both unsettling and disorienting. As I followed the developments in the FTC’s case against LabMD over the past few years (see additional blogs here, here, here and here), I felt like I was moving from the Wild West into Westworld, as the FTC’s arguments (and facts coming to light during the administrative hearings) became more and more surreal.

Finally, though, reality and reason have arrived on the scene as the LabMD saga plays out in the U.S. Court of Appeals for the 11th Circuit. The 11th Circuit issued a temporary stay of the FTC’s Final Order (which reversed the highly-unusual decision against the FTC by the Administrative Law Judge presiding over the administrative action) against LabMD.

The Court summarized the facts as developed in the voluminous record, portraying LabMD as having simply held its ground against the appalling, extortion-like tactics of the company that infiltrated LabMD’s data system. It was that company, Tiversa, that convinced the FTC to pursue LabMD in the first place. According to the Court, Tiversa’s CEO told one of its employees to make sure LabMD was “at the top of the list” of company names turned over to the FTC in the hopes that FTC investigations would pressure the companies into buying Tiversa’s services. As explained by the Court:

In 2008, Tiversa … a data security company, notified LabMD that it had a copy of the [allegedly breached data] file. Tiversa employed forensic analysts to search peer-to-peer networks specifically for files that were likely to contain sensitive personal information in an effort to “monetize” those files through targeted sales of Tiversa’s data security services to companies it was able to infiltrate. Tiversa tried to get LabMD’s business this way. Tiversa repeatedly asked LabMD to buy its breach detection services, and falsely claimed that copies of the 1718 file were being searched for and downloaded on peer-to-peer networks.”

As if the facts behind the FTC’s action weren’t shocking enough, the FTC’s Final Order imposed bizarrely stringent and comprehensive data security measures against LabMD, a now-defunct company, even though its only remaining data resides on an unplugged, disconnected computer stored in a locked room.

The Court, though, stayed the Final Order, finding that, even though the FTC’s interpretation of the FTC Act is entitled to deference,

LabMD … made a strong showing that the FTC’s factual findings and legal interpretations may not be reasonable… [unlike the FTC,] we do not read the word “likely” to include something that has a low likelihood. We do not believe an interpretation [like the FTC’s] that does this is reasonable.”

I was still happily reveling in the refreshingly simple logic of the Court’s words when I read the brief filed in the 11th Circuit by LabMD counsel Douglas Meal and Michelle Visser of Ropes & Gray LLP. Finally, here was the legal rationale for, and clear articulation of, the unease I felt nearly three years ago: Congress (through HIPAA) granted HHS the authority to regulate the data security practices of medical companies like LabMD using and disclosing PHI, and the FTC’s assertion of authority over such companies is “repugnant” to Congress’s grant to HHS.

Continuation of discussion of 11th Circuit case and filings by amicus curiae in support of LabMD to be posted as Part 2.

It may not come as a surprise that Congressman Tom Price, MD (R-GA), a vocal critic of the Affordable Care Act who introduced legislation to replace it last spring, was selected to serve as Secretary of the U.S. Department of Health and Human Services (HHS) in the Trump administration. What may come as a bit of a surprise is how Price’s proposed replacement bill appears to favor transparency over individual privacy when it comes to certain health care claim information.

Section 601 of the “Empowering Patients First” bill (Bill) would require a health insurance issuer to send a report including specific claim information to a health plan, plan sponsor or plan administrator upon request (Report). The Bill would require the Report to include “all information available to the health insurance issuer that is responsive to the request including … protected health information [PHI] … .”

Since a “plan sponsor” includes an employer (in the case of an employee benefit plan established or maintained by the employer), the Bill would entitle an employer to receive certain PHI of employees and employees’ dependents, as long as the employer first certifies to the health insurance issuer that its plan documents comply with HIPAA and that the employer, as plan sponsor, will safeguard the PHI and limit its use and disclosure to plan administrative functions.

The Report would include claim information that would not necessarily be PHI (such as aggregate paid claims experience by month and the total amount of claims pending as of the date of the report), but could also include:

“A separate description and individual claims report for any individual whose total paid claims exceed $15,000 during the 12-month period preceding the date of the report, including the following information related to the claims for that individual –

(i) a unique identifying number, characteristic or code for the individual;

(ii) the amounts paid;

(iii) the dates of service; and

(iv) applicable procedure and diagnosis codes.”

After reviewing the Report and within 10 days of its receipt, the plan, plan sponsor, or plan administrator would be permitted to make a written request for additional information concerning these individuals. If requested, the health insurance issuer must provide additional information on “the prognosis or recovery if available and, for individuals in active case management, the most recent case management information, including any future expected costs and treatment plan, that relate to the claims for that individual.”

Price transparency has been studied as a potentially effective way to lower health care costs, and employers are often in a difficult position when it comes to understanding what they pay, as plan sponsors, to provide health insurance coverage to employees and their families. Laws and tools that increase the transparency of health care costs are desperately needed, and the Empowering Patients First bill valiantly attempts to create a mechanism whereby plan sponsors can identify and plan for certain health care costs. On the other hand, in requiring the disclosure of procedure and diagnosis codes to employers, and in permitting employers to obtain follow-up “case management” information, the bill seems to miss the HIPAA concept of “minimum necessary”. Even if an employer certifies that any PHI it receives will be used only for plan administration functions, employees might be concerned that details regarding their medical condition and treatments might affect employment decisions unfairly and in ways prohibited by HIPAA.

If Dr. Price steps up to lead HHS in the coming Trump administration, let’s hope he takes another look at this Section from the perspective of HHS as the enforcer of HIPAA privacy protections.

According to the latest HIPAA-related guidance (Guidance) published by the U.S. Department of Health and Human Services (HHS), a cloud service provider (CSP) maintaining a client’s protected health information (PHI) is a business associate even when the CSP can’t access or view the PHI. In other words, even where the PHI is encrypted and the CSP lacks the decryption key, the CSP is a business associate because it maintains the PHI and, therefore, has HIPAA-related obligations with respect to the PHI.

HHS explains:

“While encryption protects ePHI by significantly reducing the risk of the information being viewed by unauthorized persons, such protections alone cannot adequately safeguard the confidentiality, integrity and availability of the ePHI, such as ensuring that the information is not corrupted by malware, or ensuring through contingency planning that the data remains available to authorized persons even during emergency or disaster situations. Further, encryption does not address other safeguards that are also important to maintaining confidentiality, such as administrative safeguards to analyze the risks to the ePHI or physical safeguards for systems and services that may house the ePHI.”

It makes sense to treat a CSP as a business associate if it holds PHI, even if it cannot view or access that PHI. After all, a business associate is a person or entity that performs a function or service on behalf of a covered entity (or another business associate) that requires it to create, receive, maintain, or transmit PHI.

Still, HHS’s explanation is less than satisfying, perhaps because it rather crudely mixes together very distinct HIPAA obligations: protecting the confidentiality of PHI, on one hand, and protecting the integrity and availability of PHI, on the other.

Under the HIPAA regulations, a business associate is only required to provide notice to the covered entity following the discovery of a breach of unsecured PHI. “Unsecured” PHI is defined as PHI that is “not rendered unusable, unreadable, or indecipherable to unauthorized persons through the use of a technology or methodology specified by the Secretary [of HHS]…” – in other words, PHI that is not encrypted at a level that meets HHS’s standards. The HIPAA regulations also say that a breach excludes a “disclosure of PHI where a covered entity or business associate has a good faith belief that an unauthorized person to whom the disclosure was made would not reasonably have been able to retain such information.” Obviously, a disclosure of PHI that cannot be viewed cannot be retained, either.

HHS contends that encryption “alone cannot adequately safeguard the confidentiality” of the PHI, but, later in the Guidance, concedes that if the PHI is encrypted at a level that meets HHS’s standards, an unauthorized incident would fall within the breach “safe harbor” and would not need to be reported to the CSP’s customer. In such a case, the confidentiality of the PHI would be adequately safeguarded by encryption alone, and the CSP arguably would not have an obligation to do anything else under HIPAA to protect the confidentiality of the PHI. The CSP would have ongoing obligations, however, to protect the integrity and availability of the encrypted PHI under HIPAA. The encryption “blindfold” would simplify the CSP’s obligations under HIPAA.

A CSP is in a tricky position if it holds encrypted PHI for a customer, but does not know that it holds it. The Guidance emphasizes that if a CSP maintains PHI for a customer that is a covered entity or business associate, it must execute a business associate agreement with the customer, and risks enforcement action (such as reported here) by the Office of Civil Rights (OCR) within HHS if it doesn’t have one.

“OCR recognizes that there may, however, be circumstances where a CSP may not have actual or constructive knowledge that a covered entity or another business associate is using its services to create, receive, maintain, or transmit ePHI.  The HIPAA Rules provide an affirmative defense in cases where a CSP takes action to correct any non-compliance within 30 days … of the time that it knew or should have known of the violation… This affirmative defense does not, however, apply in cases where the CSP was not aware of the violation due to its own willful neglect.”

Two key takeaways from the Guidance for a CSP? If you are blindfolded from viewing the data you maintain or transmit on behalf of a customer, or otherwise do not know whether the data might bring HIPAA obligations along with it, take reasonable steps to find out whether the customer is a covered entity or business associate and whether the data includes PHI. If so, execute a business associate agreement. Then, make sure the blindfold (i.e., the encryption level) meets HHS’s standards and do NOT accept or have access to the decryption key. This way, you can focus your HIPAA compliance efforts on protecting the integrity and availability of the data, not on protecting its confidentiality.

The aftermath of the Orlando nightclub tragedy has led to much discussion about ways that healthcare providers can and should comply with health information privacy requirements in the face of disasters that injure or sicken many individuals in a limited time frame. One challenge is treating patients while simultaneously supplying current and relevant information about patient status to family, friends and the media without breaching HIPAA by improperly disclosing protected health information (PHI).

Our partner Elizabeth Litten has already posted a blog entry on some HIPAA issues that surfaced in the Orlando disaster. She and I were recently featured again by our good friend Marla Durben Hirsch in her article in the August 2016 issue of Medical Practice Compliance Alert entitled “After Orlando: Keep family, friends informed without violating HIPAA.” The full text can be found in that issue; a synopsis is below.

Some of the tips provided by Litten and Kline in the article include the following:

  1. Kline: Review and update your practice’s disaster/emergency plan. “[Orlando] was such a disaster, and [there was an appearance created that] the hospital didn’t approach it with calmness and a professional approach.”
  2. Litten: One of the easily forgotten parts of HIPAA is that a covered entity can exercise professional discretion. “It’s best if the patient can agree [to the disclosure]. But if the patient can’t give consent, the provider has ways to provide information and exercise that discretion.” Kline added, “So there’s no need for a HIPAA waiver; the rule anticipates such situations.”
  3. Litten: Make sure that the practice’s designated spokesperson is knowledgeable about HIPAA. “This includes what can and can’t be divulged to friends, family members and the media.”
  4. Litten: Educate clinicians on professional discretion. “Remember when disclosing information to view it through the eyes of the patient. If you reasonably believe that a patient would want the information communicated, it’s OK. The professional is acting as proxy for a patient who can’t speak.” 
  5. Kline: Share contact information so staff can quickly get guidance from the practice’s compliance officer, especially during emergency situations. “For instance, a clinician being bombarded in the emergency department may have a question regarding whether she can tell a patient’s relative that the patient has been treated and released (she can).”
  6. Kline: Add this information to your practice’s HIPAA compliance program. “If you have policies and procedures on this, document that training occurred, and [if it] can show you attempted to comply with HIPAA, a court would be very hard pressed to find liability if a patient later claims invasion of privacy.”
  7. Kline: Don’t discriminate. “So clinicians exercising their professional discretion in informing friends and family members need to be gender neutral and objective.”
  8. Kline and Litten: Train administrative staff about HIPAA. “Not only should medical staff know the rules, but so should other staff members such as front desk staff, managers and billing personnel. It’s pretty bad when the head of a hospital is so uninformed about HIPAA that he provides misinformation to the mayor.”
  9. Kline and Litten: Highlight the limitations of the disclosure. “You can’t go overboard and reveal more than is allowed. For instance, a provider can tell a friend or family member about an incapacitated patient’s location, general condition or death. But that doesn’t mean that he can divulge that the lab tests indicate the patient has hepatitis. HIPAA also requires that a disclosure be made only of information that’s ‘minimally necessary.'”

Planning ahead can help healthcare providers comply with HIPAA when a disaster situation occurs, keeping family and friends informed as to patient status while contemporaneously carrying out their most important tasks: saving lives, alleviating pain and providing quality care to victims. This approach, combined with a good helping of common sense and professionalism, is not confined to disasters – it should be the practice of providers in non-emergent situations as well.