Individuals who have received notice of a HIPAA breach are often offered free credit monitoring services for some period of time, particularly if the protected health information involved included Social Security numbers. I have not (yet) received such a notice, but was concerned when I learned about the massive Equifax breach (see here to view a post on this topic on our Privacy Compliance and Data Security blog).

The Federal Trade Commission’s Consumer Information page sums it up well:

If you have a credit report, there’s a good chance that you’re one of the 143 million American consumers whose sensitive personal information was exposed in a data breach at Equifax… .”

I read the news reports this morning, and decided to go on the Equifax site, equifaxsecurity2017.com, to see if my information may have been affected and to sign up for credit file monitoring and identity theft protection (the services are free to U.S. consumers, whether or not affected by the breach, for one year).

The Equifax site describes the breach and lets users click on a “Potential Impact” tab to find out whether their information “may have been impacted” by the breach. Users can find out by clicking on the “Check Potential Impact” link and following these steps:

  1. Click on the below link, “Check Potential Impact,” and provide your last name and the last six digits of your Social Security number.
  2. Based on that information, you will receive a message indicating whether your personal information may have been impacted by this incident.
  3. Regardless of whether your information may have been impacted, we will provide you the option to enroll in TrustedID Premier. You will receive an enrollment date. You should return to this site and follow the “How do I enroll?” instructions below on or after that date to continue the enrollment and activation process. The enrollment period ends on Tuesday, November 21, 2017.

Before satisfying my curiosity, though, I decided to click on the “Terms of Use”, that too-rarely-used link typically included at the bottom of a webpage that sets forth the quid pro quo of using a website. Perhaps it was because my law partner (and the firm’s Chief Privacy Officer), Mark McCreary, has instilled some cautiousness in me, or because I wondered if there might be a catch. Why would Equifax offer a free year of credit monitoring to everyone, even those not affected by the breach? What would Equifax get in return?

I skimmed the “Product Agreement and Terms of Use”, noted the bolded text requiring arbitration of disputes and waiving my right to participate in a class action, but wasn’t concerned enough to resist the urge to find out if my information was affected.

I then followed the “Getting Started” process by following the TrustedID Premier link, and quickly received a notice stating that my information “may have been impacted” and that I could enroll on September 11, 2017 (my “designated enrollment date”).

Not more than a couple of hours later, I came across an article warning of the legal rights consumers give up by signing up on Equifax’s website. The article describes the arbitration clause in the Terms of Use provisions, and reports on New York Attorney General Eric Schneiderman’s tweet stating that the arbitration provision is “unacceptable and unenforceable”. The article also reports that, today, Equifax updated the Terms of Use language to include a new provision allowing a user to write to Equifax to opt out of the arbitration provision within 30 days of the date the user first accepts the Product Agreement and Terms of Use.

My curiosity got the best of me and I now know I’m on the “affected persons” list, but I haven’t yet signed up for my free TrustedID Premier credit monitoring service. I have the weekend to decide whether to sign up for the service, and 30 days from Monday (if I actually sign up for the service) to decide whether to accept the “cost” of agreeing to binding arbitration.

 

In some respects, HIPAA has had a design problem from its inception. HIPAA is well known today as the federal law that requires protection of individually identifiable health information (and, though lesser-known, individual access to health information), but privacy and security were practically afterthoughts when HIPAA was enacted back in 1996. HIPAA (the Health Insurance Portability and Accountability Act) was originally described as an act:

To amend the Internal Revenue Code of 1986 to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery, to promote the use of medical savings accounts, to improve access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes.”

The privacy of individually identifiable health information was one of those “other purposes” only peripherally included in the 1996 act. Privacy protection was to be a follow-up, a “to-do” checklist item for the future. HIPAA directed the Secretary of Health and Human Services to recommend privacy standards to specified congressional committees within a year of enactment, and, if Congress did not enact privacy legislation within 3 years of enactment, the Secretary was to proceed with the promulgation of privacy regulations. Security was a bit more urgent, at least in the context of electronic health transactions such as claims, enrollment, eligibility, payment, and coordination of benefits. HIPAA required the Secretary to adopt standards for the security of electronic health information systems within 18 months of enactment.

This historical context casts some light on why our 2017-era electronic health records (EHR) systems often lack interoperability yet are vulnerable to security breaches. HIPAA may be partially to blame, since it was primarily designed to make health insurance more portable and to encourage health insurers and providers to conduct transactions electronically. Privacy and security were the “oh, yeah, that too” add-ons to be fully addressed once electronic health information transactions were underway and the EHR systems needed to support them were already up and running. Since 1996, EHRs have developed on a clunky provider-by-provider (or health system-by-health system) and patient encounter-by-patient encounter basis, which not only makes them less accurate and efficient, but also leaves them vulnerable to privacy and security lapses. (Think of the vast quantity of patient information breached when a hospital’s EHR or a health plan’s claims database is hacked.)

This past June, I participated on a California Israel Medical Technology Summit panel discussing privacy and security issues. An audience member asked the panel whether we thought blockchain technology was the answer to HIPAA and other privacy and security-related legal requirements. I didn’t have a good answer, thinking “isn’t that the technology used to build Bitcoin, the payment system used by data hackers everywhere?”

This past July, Ritesh Gandotra, a director of global outsourcing for Xerox, wrote that blockchain technology could overhaul our “crippled” EHR management system. Gandotra writes “Historically, EHRs were never really designed to manage multi-institutional and lifetime medical records; in fact, patients tend to leave media data scattered across various medical institutes … This transition of data often leads to the loss of patient data.” He goes on to explain how blockchain, the “distributed ledger” technology originally associated with Bitcoin, can be used to link discrete patient records (or data “blocks”) contained in disparate EHRs into “an append-only, immutable, timestamped chain of content.”
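For readers wondering what an “append-only, immutable, timestamped chain of content” actually looks like, below is a minimal, purely illustrative Python sketch of hash-linked record entries. It is not drawn from Gandotra’s article or from any real EHR or blockchain product; the record fields, provider names, and function names are hypothetical, and a production system would distribute the ledger across many nodes rather than keep it in a single Python list.

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Wrap a patient record in a timestamped block linked to the prior block's hash."""
    block = {
        "timestamp": time.time(),
        "record": record,        # e.g., a discrete encounter summary from one provider's EHR
        "prev_hash": prev_hash,  # link to the previous block; this is what chains the records
    }
    # The block's own hash covers its contents, so any later alteration is detectable.
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute each hash and check each link; tampering with any block breaks verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical encounters recorded by two different providers, linked into one chain.
chain = [make_block({"provider": "Clinic A", "note": "annual exam"}, prev_hash="0" * 64)]
chain.append(make_block({"provider": "Hospital B", "note": "lab results"}, prev_hash=chain[-1]["hash"]))
print(verify_chain(chain))  # True; editing any earlier record would make this print False
```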

Using blockchain technology to reconfigure EHRs makes sense. Ironically, the flaw inherent in HIPAA’s original 1996 design (the promotion of electronic health transactions to foster portability and accountability in the health insurance context while treating privacy and security as an afterthought) can be fixed using the very same technology that built the payment network favored by ransomware hackers.

 

This blog recently discussed tips for a covered entity (CE) in dealing with a HIPAA business associate (BA). Now suppose that, even though you have adopted all of those tips and more, in this dangerous and ever more complex data security world one of your BAs suffers a breach, and it becomes your responsibility as the victim CE to respond. What should you do?

Our partner Elizabeth Litten and I discussed aspects of this issue with our good friend Marla Durben Hirsch, who included some of our discussion in her article in the June 2017 issue of Medical Practice Compliance Alert entitled “6 ways practices can reduce the risk of delegating breach-notification duties.” Full text of the article can be found in that issue, but a number of the items included below are drawn from the article.

  1. Locate the most recent Business Associate Agreement (BAA) with the BA that experienced the breach, and see what it says about the post-breach obligations of the CE and the BA. Two important threshold issues are whether the BA complied with the time period in the BAA for reporting breaches to the CE, and how much time, if any, remains for the CE to comply with reporting requirements under HIPAA and state law, remediation and limitation-of-loss requirements, and notification requirements to affected individuals (collectively, the “Requirements”).
  2. Promptly determine the deadlines for notifying insurance carriers if cybersecurity or general liability insurance may be available to the BA and/or the CE to cover the expenses of the breach and its remediation.
  3. Spell out the circumstances in which the BA will handle the consequences of a breach that occurred on its watch, and the scope of its responsibilities versus those of the CE. These can range from delegating the entire range of Requirements to the BA, to having the CE comply with the Requirements itself while the BA pays the costs.
  4. Make sure that required reports and notifications are sent on CE stationery or, if those Requirements are delegated to the BA (especially where the breach affected a number of different CEs), that the notifications make clear that the breach was attributable to the acts of the BA and not the CE. As CE, insist that the final wording of the required reporting and notification documents be subject to your approval.
  5. Ensure that your staff is familiar with the circumstances of the breach so that they will be able to answer questions from affected individuals and the media intelligently. It may be advisable to designate a single trained and articulate person to whom all inquiries are referred, so that the responses are uniform, accurate and clear.
  6.  Assess whether the BA handled the breach adequately and whether you want to retain your relationship with the BA. Did the BA comply with HIPAA and the BAA in the post-breach period? Did the BA cooperate with the CE? What is the likelihood of a repeat breach by the BA? Is the CE assuming the risk of potential repeat HIPAA breaches if the BA relationship is continued?
  7. If you determine as CE that you will continue your relationship with the breaching BA, consider whether the BAA with the BA requires changes based upon the experience of the breach and its aftermath.
  8. As CE, consider modifying, updating and/or strengthening all of your BAAs as a result of your experience.
  9. As CE, you may need to improve and/or change your cybersecurity insurance coverage as a result of your experience with the breach.
  10.  As CE, document all activities and decisions respecting HIPAA made in the post-breach period to defend your actions as reasonable and to provide concrete planning steps for future HIPAA compliance.

While all the precautions in the universe cannot prevent a HIPAA breach by a BA, a CE that is victimized by such a breach can do many things to reduce its liability and reputational damage and to strengthen its own HIPAA compliance and risk avoidance efforts for the future by adopting the steps described above.

On July 23, 2017, Washington State will become the third state (after Illinois and Texas) to statutorily restrict the collection, storage and use of biometric data for commercial purposes. The law focuses on “biometric identifiers,” which it defines as “data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual.”

Notably for our readers, the law excludes all photos, video or audio recordings, or information “collected, used, or stored for health care treatment, payment or operations” subject to HIPAA from the definition of “biometric identifiers.”

We invite you to read Fox partner Gavin Skok’s extensive discussion of the new law and how it handles businesses’ collection, storage and use of biometric identifiers.

Post Contributed by Matthew J. Redding.

On April 26, 2017, Memorial Hermann Health System (“MHHS”) agreed to pay the U.S. Department of Health and Human Services (“HHS”) $2.4 million to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule.

The underlying incident occurred in September of 2015, when a patient presented a falsified Texas driver’s license to MHHS’ staff upon appearing for the patient’s scheduled appointment. MHHS’ staff contacted law enforcement to verify the patient’s identification, and law enforcement thereafter came to the facility and arrested the patient. The incident drew some national attention from immigration activist groups.  Our partner Bill Maruca posted a blog in September 2015 that discussed the event.

It is important to note that the disclosure to law enforcement was not a contributing factor to the alleged HIPAA violation. In fact, a covered entity is permitted under HIPAA to disclose protected health information (“PHI”) to the limited extent necessary to report a crime occurring on its premises to law enforcement (see 45 CFR 164.512(f)(5)). However, in the MHHS case, the potential HIPAA violation occurred when MHHS issued press releases to several media outlets, addressed activist groups and state officials, and published a statement on its website following the incident, identifying the patient by name on each occasion.

Because the MHHS facility was a gynecology clinic, a patient’s name associated with the facility constituted PHI. Therefore, the release of the patient’s name without the patient’s authorization was an impermissible disclosure of PHI under HIPAA.

The OCR alleged that, in addition to the impermissible disclosure of PHI, MHHS failed to document the sanctions imposed on its workforce members responsible for the impermissible disclosures.

6 Takeaways:

Covered entities, such as hospitals, physician practices, and other health care entities, should be cautious in publicizing any event involving their patients so as to avoid impermissibly disclosing PHI. Further, public disclosure could open the door to liability under state statutes and common law (e.g., a patient’s right of privacy, freedom from defamation, and contractual rights). Here are a few takeaways from the MHHS HIPAA settlement:

  1. PHI must remain protected. The disclosure of PHI to law enforcement, or the presence of health information in the public domain generally, does not relieve the covered entity of its obligations under HIPAA. Instead, covered entities have a continuing obligation to protect and maintain the privacy and security of PHI in their possession and control, and to use and disclose only such information as is permitted under HIPAA.
  2. Avoid inadvertently publishing PHI. PHI is not limited to health information that identifies a patient by his/her name, SSN, address or date of birth. It also includes any other health information that could be used to identify the patient in conjunction with publicly available information. We’ve seen other instances where health care entities inadvertently publish PHI in violation of HIPAA, leading to significant fines (see NY Med: $2.2 Million settlement).
  3. Review your HIPAA policies and procedures with respect to your workforce’s publications and disclosures to the media. To the extent not done so already:
    1. Develop a policy prohibiting your general workforce from commenting to the media on patient events.
    2. Develop a policy with respect to monitoring statements published on your website to avoid publishing any PHI.
    3. Designate a workforce member with a sufficient HIPAA background (nudge, nudge, HIPAA Privacy Officer) to handle media inquiries and provide the workforce with contact information of such member.
  4. Review your HIPAA policies and procedures with respect to law enforcement events.
    1.  For events not likely to compromise the health and safety of others, encourage your workforce to handle such events as discreetly as possible, involving only those members of the workforce who have a need to know.
    2. Train your workforce to identify the situations where disclosure of a patient’s PHI to law enforcement is permissible and those situations where the patient’s authorization must be obtained before disclosing his/her PHI to law enforcement.
  5. Don’t forget to timely notify the affected individuals. If an impermissible disclosure of PHI occurs, do not let the publicizing of such disclosure cause you to forget your breach notification obligations. Failing to timely notify the affected individuals could result in additional penalties (see Presence Health: $475,000 settlement). The breach notification clock starts ticking upon the covered entity’s discovery (as defined under HIPAA) of the impermissible disclosure; a simple illustration of that clock appears in the sketch after this list.
  6. Document your responses to impermissible disclosures of PHI and your compliance with HIPAA. HIPAA places the burden on the covered entity to maintain sufficient documentation necessary to prove that it fulfilled all of its administrative obligations under HIPAA (see 78 FR 5566 at 5641). Therefore, once you discover an impermissible disclosure, document how your entity responds, including, without limitation, the breach analysis, proof that the patient notices were timely sent, sanctions imposed upon the responsible workforce members, actions taken to prevent similar impermissible disclosures, etc. Don’t forget, the covered entity is required to maintain such documentation for at least 6 years (see 45 C.F.R. 164.414 and 164.530(j)).
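To make the “clock starts ticking” point concrete, here is a small Python sketch that computes the outer deadline for individual notification — 60 calendar days after discovery, the outside limit set by the Breach Notification Rule (45 CFR 164.404). Notice is still required without unreasonable delay, so the date below is a ceiling, not a target, and the discovery date used is hypothetical.

```python
from datetime import date, timedelta

# HIPAA Breach Notification Rule: notify affected individuals without unreasonable delay,
# and in no case later than 60 calendar days after discovery of the breach (45 CFR 164.404).
NOTIFICATION_WINDOW_DAYS = 60

def notification_deadline(discovery_date: date) -> date:
    """Return the last permissible calendar day for individual notification."""
    return discovery_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Hypothetical example: the CE (or its BA) discovers the impermissible disclosure on March 1, 2017.
discovered = date(2017, 3, 1)
print("Discovered:", discovered)                                          # 2017-03-01
print("Outer notification deadline:", notification_deadline(discovered))  # 2017-04-30
```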

Our partner Elizabeth Litten and I were recently featured again by our good friend Marla Durben Hirsch in her article in the April 2017 issue of Medical Practice Compliance Alert entitled “Business associates who farm out work create more risks for your patients’ PHI.” Full text can be found in that issue, but a synopsis is below.

In her article Marla cautioned, “Fully one-third of the settlements inked in 2016 with OCR [the Office for Civil Rights of the U.S. Department of Health and Human Services] dealt with breaches involving business associates.” She pointed out that the telecommuting practices of business associates (“BAs”) and their employees with respect to protected health information (“PHI”) create heightened risks for medical practices that are the covered entities (“CEs”) — CEs are ultimately responsible not only for their own HIPAA breaches but for HIPAA breaches of their BAs as well.

Kline observed, “Telecommuting is on the rise and this trend carries over to organizations that provide services to health care providers, such as billing and coding, telehealth providers, IT support and law firms.” Litten commented, “Most business associate agreements (BAAs) merely say that the business associate will protect the infor­mation but are not specific about how a business associate will do so, let alone how it will when PHI is off site.”

Litten and Kline added, “OCR’s sample business associate agreement is no dif­ferent, using general language that the business associate will use ‘appropriate safeguards’ and will ensure that its subcontractors do so too.”

Kline continued, “You have much less control over [these] people, who you don’t even know . . . . Moreover, frequently practices don’t even know that the business associate is allowing staff or subcontractors to take patient PHI off site. This is a collateral issue that can become the fulcrum of the relationship. And one loss can be a disaster.”

Some conclusions that can be drawn from Marla’s article include the following items, which a CE should consider when dealing with BAs:

  1. Select BAs with due care and with references where possible.
  2. Be certain that there is an effective BAA executed and in place with a BA before transmitting any PHI.
  3. Periodically review and update BAAs to ensure that they address changes in technology such as telecommuting, mobile device expansion and PHI use and maintenance practices.
  4. Ask questions of BAs to know where they and their employees use and maintain PHI, such as on laptops, personal mobile devices or network servers, and what encryption or other security practices are in place.
  5. Ask BAs what subcontractors (“SCs”) they may use and where the BAs and SCs are located (consider including a provision in BAAs that requires BAs and their SCs to be legally subject to the jurisdiction of HIPAA, so that HIPAA compliance by the CE and enforcement of the BAA can be more effective).
  6. Transmit PHI to the BA using appropriate security and privacy procedures, such as encryption.
  7. To the extent practicable, alert the BA in advance as to when and how transmission of PHI will take place.
  8. Obtain from each BA a copy of its HIPAA policies and procedures.
  9. Maintain a readily accessible archive of all BAAs in effect to allow quick access and review when PHI issues arise.
  10. Have a HIPAA consultant available who can be contacted promptly to assist in addressing BA issues and provide education as to best practices.
  11. Document all actions taken to reduce risk from sharing PHI with BAs, including items 1 to 10 above.

Minimizing risk of PHI breaches by a CE requires exercising appropriate control over selection of, and contracting and ongoing interaction with, a BA. While there can be no assurance that such care will avoid HIPAA breaches for the CE, evidence of such responsible activity can reduce liability and penalties should violations occur.

It was the wallet comment in the response brief filed by the Federal Trade Commission (FTC) in the U.S. Court of Appeals for the 11th Circuit that prompted me to write this post. In its February 9, 2017 filing, the FTC argues that the likelihood of harm to individuals (patients who used LabMD’s laboratory testing services) whose information was exposed by LabMD roughly a decade ago is high because the “file was exposed to millions of users who easily could have found it – the equivalent of leaving your wallet on a crowded sidewalk.”

However, if one is to liken the LabMD file (referred to throughout the case as the “1718 File”) to a wallet and the patient information to cash or credit cards contained in that wallet, it is more accurate to describe the wallet as having been left on the kitchen counter in an unlocked New York City apartment. Millions of people could have found it, but they would have had to go looking for it, and would have had to walk through the door (or creep through a window) into a private residence to do so.

Back in January, I promised to continue my discussion of LabMD’s appeal of the FTC’s Final Order to the U.S. Court of Appeals for the 11th Circuit (see prior post here), planning to highlight arguments expressed in briefs filed by various amici curiae in support of LabMD. Amici include physicians who used LabMD’s cancer testing services for their patients while LabMD was still in business, the non-profit National Federation of Independent Business, the non-profit, nonpartisan think tank TechFreedom, the U.S. Chamber of Commerce, and others. Amici make compelling legal arguments, but also emphasize several key facts that make this case both fascinating and unsettling:

The FTC has spent millions of taxpayer dollars on this case – even though there were no victims (not one has been identified in over seven years), LabMD’s data security practices were already regulated by the HHS under HIPAA, and, according to the FTC’s paid litigation expert, LabMD’s “unreasonableness” ceased no later than 2010. During the litigation, …   a whistleblower testified that the FTC’s staff … were bound up in collusion with Tiversa [the cybersecurity firm that discovered LabMD’s security vulnerability, tried to convince LabMD to purchase its remediation services, then reported LabMD to the FTC], a prototypical shakedown racket – resulting in a Congressional investigation and a devastating report issued by House Oversight Committee staff.” [Excerpt from TechFreedom’s amicus brief]

An image of Tiversa as taking advantage of the visible “counter-top wallet” emerges when reading the facts described in the November 13, 2015 Initial Decision of D. Michael Chappell, the Chief Administrative Law Judge (ALJ), a decision that would be reversed by the FTC in the summer of 2016 when it concluded that the ALJ applied the wrong legal standard for unfairness. The ALJ’s “Findings of Fact” (which are not disputed by the FTC in the reversal, notably) include the following:

“121. On or about February 25, 2008, Mr. Wallace, on behalf of Tiversa, downloaded the 1718 File from a LabMD IP address …

  1. The 1718 File was found by Mr. Wallace, and was downloaded from a peer-to-peer network, using a stand alone computer running a standard peer-to-peer client, such as LimeWire…
  2. Tiversa’s representations in its communications with LabMD … that the 1718 File was being searched for on peer-to-peer networks, and that the 1718 File had spread across peer-to-peer networks, were not true. These assertions were the “usual sales pitch” to encourage the purchase of remediation services from Tiversa… .”

The ALJ found that although the 1718 File was available for peer-to-peer sharing via use of specific search terms from June of 2007 through May of 2008, the 1718 File was actually only downloaded by Tiversa for the purpose of selling its security remediation services. The ALJ also found that there was no contention that Tiversa (or those Tiversa shared the 1718 File with, namely, a Dartmouth professor working on a study and the FTC) used the contents of the file to harm patients.

In short, while LabMD may have left its security “door” unlocked when an employee downloaded LimeWire onto a work computer, only Tiversa actually walked through that door and happened upon LabMD’s wallet on the counter-top. Had the wallet been left out in the open, in a public space (such as on a crowded sidewalk), it’s far more likely its contents would have been misappropriated.

A patient requests a copy of her medical record, and the hospital charges the per-page amount permitted under state law. Does this violate HIPAA? It may.

In the spring of 2016, the Office for Civil Rights (OCR) within the U.S. Department of Health and Human Services, the agency that enforces HIPAA, issued a new guidance document on individuals’ right to access their health information under HIPAA (“Access Guidance”). The Access Guidance reminds covered entities that state laws that provide individuals with a greater right of access (for example, where the state law requires that access be given within a shorter time frame than that required by HIPAA, or allows individuals a free copy of medical records) preempt HIPAA, but state laws that are contrary to HIPAA’s access rights (such as where the state law prohibits disclosure to an individual of certain health information, like test reports) are preempted by HIPAA.

For New Jersey physicians, for example, this means they may not automatically charge $1.00 per page or $100.00 for a copy of the entire medical record, whichever is less, despite the fact that the New Jersey Board of Medical Examiners (“BME”) expressly permits these charges. In fact, according to the Access Guidance, physicians should not charge “per page” fees at all unless they maintain medical records in paper form only. New Jersey physicians also may not charge the “administrative fee” of the lesser of $10.00 or 10% of the cost of reproducing x-rays and other documents that cannot be reproduced by ordinary copying machines. Instead, a New Jersey physician may charge only the lesser of the charges permitted by the BME or those permitted under HIPAA, as described below.

HIPAA limits the amount that covered entities may charge a patient (or third party) requesting access to medical records to only a “reasonable, cost-based fee to provide the individual (or the individual’s personal representative) with a copy” of the record.  Only the following may be charged:   

(1) the reasonable cost of labor for creating and delivering the electronic or paper copy in the form and format requested or agreed upon by the individual, but not costs associated with reviewing the request, searching for or retrieving the records, and segregating or “otherwise preparing” the record for copying;  

(2) the cost of supplies for creating the paper copy (e.g., paper, toner) or electronic media (e.g., CD or USB drive) if the individual requests the records in portable electronic media; and  

(3) actual postage costs, when the individual requests mailing. 

The fee may also include the reasonable cost of labor to prepare an explanation or summary of the record, but only if the individual, in advance, chooses to receive an explanation or summary AND agrees to the fee to be charged for the explanation or summary.

A provider may calculate its actual labor costs each time an individual requests access, or may develop a schedule of costs for labor based on the average (and HIPAA-permitted types of) labor costs incurred in fulfilling standard types of access requests. However, a provider is NOT permitted to charge an average labor cost as a per-page fee unless: (1) the medical record is maintained in paper form; and (2) the individual requests a paper copy or asks that the paper record be scanned into an electronic format. Thus, under HIPAA, a per-page fee is not permitted for medical records that are maintained electronically. As stated in the Access Guidance, “OCR does not consider per page fees for copies of … [protected health information] maintained electronically to be reasonable” for purposes of complying with the HIPAA rules.

A provider may also decide to charge a flat fee of up to $6.50 (inclusive of labor, supplies, and any applicable postage) for requests for electronic copies of medical records maintained electronically.    OCR explains that the $6.50 is not a maximum, simply an alternative that may be used if the provider does not want to go through the process of calculating actual or average allowable costs for requests for electronic copies. 

OCR has identified compliance with “individual access rights” as one of seven areas of focus in the HIPAA audits of covered entities and business associates currently underway, signaling its concern that physicians and other covered entities may be violating HIPAA in this respect.  All covered entities should, therefore, calculate what HIPAA permits them to charge when copies of medical records are requested by an individual (or someone acting at the direction of or as a personal representative of an individual), compare that amount to the applicable state law charge limits, and make sure that only the lesser of the two amounts is charged.
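As a concrete illustration of the “lesser of the two amounts” comparison described above, here is a short, hedged Python sketch. The components follow the HIPAA rules summarized in this post (cost-based labor, supplies, and postage, with the optional $6.50 flat fee for electronic copies of electronically maintained records), but the dollar amounts and the state-law cap are hypothetical; an actual calculation would have to reflect the provider’s real costs and the specific state statute.

```python
def hipaa_cost_based_fee(labor: float, supplies: float, postage: float) -> float:
    """Sum only the components HIPAA permits: labor to create/deliver the copy, supplies, postage."""
    return labor + supplies + postage

def permissible_charge(hipaa_fee: float, state_law_fee: float) -> float:
    """Charge no more than the lesser of the HIPAA-permitted fee and the state-law fee."""
    return min(hipaa_fee, state_law_fee)

# Hypothetical request for an electronic copy of an electronically maintained record.
# Option 1: calculate the actual (or average) allowable costs; no per-page fee is permitted here.
calculated_fee = hipaa_cost_based_fee(labor=4.00, supplies=1.00, postage=0.00)

# Option 2: the flat fee of up to $6.50, available if the provider prefers not to calculate costs.
flat_fee = 6.50

hipaa_fee = calculated_fee   # assume this provider chose to calculate its costs
state_cap = 25.00            # hypothetical state-law maximum for this request

print("Amount that may be charged:", permissible_charge(hipaa_fee, state_cap))  # prints 5.0
```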

 

As she has done each January for several years, our good friend Marla Durben Hirsch quoted my partner Elizabeth Litten and me in Medical Practice Compliance Alert in her article entitled “MIPS, OSHA, other compliance trends likely to affect you in 2017.” For her article, Marla asked various health law professionals to make predictions on diverse healthcare matters including HIPAA and enforcement activities. Full text can be found in the January 2017 issue, but excerpts are included below.

Marla also wrote a companion article in the January 2017 issue evaluating the results of predictions she published for 2016. The 2016 predictions appeared to be quite accurate in most respects. However, with the new Trump Administration, we are now embarking on very uncertain territory in multiple aspects of healthcare regulation and enforcement. Nevertheless, with some trepidation, below are some predictions for 2017 by Elizabeth and me taken from Marla’s article.

  1. The Federal Trade Commission’s encroachment into privacy and security will come into question. Litten said, “The new administration, intent on reducing the federal government’s size and interference with businesses, may want to curb this expansion of authority and activity. Other agencies’ wings may be clipped.” Kline added, “However, the other agencies may try to push back because they have bulked up to handle this increased enforcement.”
  2. Telemedicine will run into compliance issues. As telemedicine becomes more common, more legal problems will occur. “For instance, the privacy and the security of the information stored and transmitted will be questioned,” says Litten. “There will also be heightened concern of how clinicians who engage in telemedicine are being regulated,” adds Kline.
  3. The risks relating to the Internet of things will increase. “The proliferation of cyberattacks from hacking, ransomware and denial of service schemes will not abate in 2017, especially with the increase of devices that access the Internet, known as the ‘Internet of things,’” warns Kline. “More devices than ever will be networked, but providers may not protect them as well as they do other electronics and may not even realize that some of them — such as newer HVAC systems, ‘smart’ televisions or security cameras that can be controlled remotely — are also on the Internet and thus vulnerable,” adds Litten. “Those more vulnerable items will then be used to infiltrate providers’ other systems,” Kline observes.
  4. More free enterprise may create opportunities for providers. “For example, there may not be as much of a commitment to examine mergers,” says Kline. “The government may allow more gathering and selling of data in favor of business interests over privacy and security concerns,” says Litten.

The ambitious and multi-faceted foray by the Trump Administration into the world of healthcare among its many initiatives will make 2017 an interesting and controversial year. Predictions are always uncertain, but 2017 brings new and daunting risks to the prognosticators.  Nonetheless, when we look back at 2017, perhaps we may be saying, “The more things change, the more they stay the same.”

It was nearly three years ago that I first blogged about the Federal Trade Commission’s “Wild West” data breach enforcement action brought against now-defunct medical testing company LabMD.   Back then, I was simply astounded that a federal agency (the FTC) with seemingly broad and vague standards pertaining generally to “unfair” practices of a business entity would belligerently gallop onto the scene and allege non-compliance by a company specifically subject by statute to regulation by another federal agency. The other agency, the U.S. Department of Health and Human Services (HHS), has adopted comprehensive regulations containing extremely detailed standards pertaining to data security practices of certain persons and entities holding certain types of data.

The FTC Act governs business practices, in general, and has no implementing regulations, whereas HIPAA specifically governs Covered Entities and Business Associates and their Uses and Disclosures of Protected Health Information (or “PHI”) (capitalized terms that are all specifically defined by regulation). The HIPAA rulemaking process has resulted in hundreds of pages of agency interpretation published within the last 10-15 years, and HHS continuously posts guidance documents and compliance tools on its website. Perhaps I was naively submerged in my health care world, but I had no idea back then that a Covered Entity or Business Associate could have HIPAA-compliant data security practices that could be found to violate the FTC Act and result in a legal battle that would last the better part of a decade.

I’ve spent decades analyzing regulations that specifically pertain to the health care industry, so the realization that the FTC was throwing its regulation-less lasso around the necks of unsuspecting health care companies was both unsettling and disorienting. As I followed the developments in the FTC’s case against LabMD over the past few years (see additional blogs here, here, here and here), I felt like I was moving from the Wild West into Westworld, as the FTC’s arguments (and facts coming to light during the administrative hearings) became more and more surreal.

Finally, though, reality and reason have arrived on the scene as the LabMD saga plays out in the U.S. Court of Appeals for the 11th Circuit. The 11th Circuit issued a temporary stay of the FTC’s Final Order against LabMD (the Final Order had reversed the highly unusual decision against the FTC by the Administrative Law Judge presiding over the administrative action).

The Court summarized the facts as developed in the voluminous record, portraying LabMD as having simply held its ground against the appalling, extortion-like tactics of the company that infiltrated LabMD’s data system. It was that company, Tiversa, that convinced the FTC to pursue LabMD in the first place. According to the Court, Tiversa’s CEO told one of its employees to make sure LabMD was “at the top of the list” of company names turned over to the FTC in the hopes that FTC investigations would pressure the companies into buying Tiversa’s services. As explained by the Court:

In 2008, Tiversa … a data security company, notified LabMD that it had a copy of the [allegedly breached data] file. Tiversa employed forensic analysts to search peer-to-peer networks specifically for files that were likely to contain sensitive personal information in an effort to “monetize” those files through targeted sales of Tiversa’s data security services to companies it was able to infiltrate. Tiversa tried to get LabMD’s business this way. Tiversa repeatedly asked LabMD to buy its breach detection services, and falsely claimed that copies of the 1718 file were being searched for and downloaded on peer-to-peer networks.”

As if the facts behind the FTC’s action weren’t shocking enough, the FTC’s Final Order imposed bizarrely stringent and comprehensive data security measures against LabMD, a now-defunct company, even though its only remaining data resides on an unplugged, disconnected computer stored in a locked room.

The Court, though, stayed the Final Order, finding that, even though the FTC’s interpretation of the FTC Act is entitled to deference,

LabMD … made a strong showing that the FTC’s factual findings and legal interpretations may not be reasonable… [unlike the FTC,] we do not read the word “likely” to include something that has a low likelihood. We do not believe an interpretation [like the FTC’s] that does this is reasonable.”

I was still happily reveling in the refreshingly simple logic of the Court’s words when I read the brief filed in the 11th Circuit by LabMD counsel Douglas Meal and Michelle Visser of Ropes & Gray LLP. Finally, here was the legal rationale for, and a clear articulation of, the unease I felt nearly three years ago: Congress (through HIPAA) granted HHS the authority to regulate the data security practices of medical companies like LabMD using and disclosing PHI, and the FTC’s assertion of authority over such companies is “repugnant” to Congress’s grant to HHS.

A continuation of the discussion of the 11th Circuit case and the filings by amici curiae in support of LabMD will be posted as Part 2.