You may be surprised to learn that those “extra” benefits your company offers to its employees, such as an employee assistance program (“EAP”) or wellness program, likely are subject to the HIPAA privacy, security and breach notification rules (collectively, the “HIPAA Rules”). Part 1 covers why most EAPs are subject to the HIPAA Rules. Part 2 will discuss wellness programs. In both cases, EAPs and wellness programs must comply with the HIPAA Rules to the extent that they are “group health plans” that provide medical care.

As background, the HIPAA Rules apply to “covered entities” and their “business associates.” Health plans and most healthcare providers are “covered entities.” Employers, in their capacity as employers, are not subject to the HIPAA Rules. However, the HIPAA Rules do apply to any “protected health information” (“PHI”) an employer/plan administrator holds on a health plan’s behalf when the employer designs or administers the plan.

Plan administrators and some EAP vendors may not consider EAPs to be group health plans because they do not think of EAPs as providing medical care. Most EAPs, however, do provide medical care. They are staffed by health care providers, such as licensed counselors, and assist employees who are struggling with family or personal problems that rise to the level of a medical condition, including substance abuse and mental health issues. In contrast, an EAP that provides only referrals on the basis of generally available public information, and that is not staffed by health care providers, such as counselors, does not provide medical care and is not subject to the HIPAA Rules.

A self-insured EAP that provides medical care is subject to the HIPAA Rules, and the employer that sponsors and administers the EAP remains responsible for compliance with the HIPAA Rules because it acts on behalf of the plan. On the other hand, for an EAP that is fully-insured or embedded in a fully-insured policy, such as long-term disability coverage, the insurer has the primary obligations for compliance with the HIPAA Rules for the EAP. The employer will not be responsible for overall compliance with the HIPAA Rules for an insured EAP, even though it provides medical care, but only if the employer does not receive PHI from the insurer or receives only summary health information or enrollment/disenrollment information. Even then, the employer must ensure it does not retaliate against a participant for exercising their rights under the HIPAA Rules, or require waiver of those rights with respect to the EAP.

An EAP that qualifies as an “excepted benefit” for purposes of HIPAA portability and the Affordable Care Act (as is most often the case, because the EAP is offered at no cost, eligibility is not conditioned on participation in another plan (such as a major medical plan), benefits are not coordinated with another plan, and the EAP does not provide “significant benefits in the nature of medical care”) can still be subject to the HIPAA Rules. In other words, just because you’ve determined that your EAP is a HIPAA excepted benefit doesn’t mean the EAP avoids the HIPAA Rules. Most EAPs are HIPAA excepted benefits, yet subject to full compliance with the HIPAA Rules.

Employers/plan administrators facing unexpected compliance obligations under the HIPAA Rules because of a self-insured EAP that provides medical care will need to enter into a HIPAA business associate agreement with the EAP vendor, amend the EAP plan document to include language required by the HIPAA Rules and develop and implement other compliance documents and policies and procedures under the HIPAA Rules. One option is to amend any existing compliance documents and policies and procedures under the HIPAA Rules for another self-insured group health plan to make them apply to the EAP as well. If the EAP is the plan administrator’s only group health plan for which it has compliance responsibility under the HIPAA Rules, the plan administrator should consult with legal counsel to develop and implement all necessary documentation for compliance with the HIPAA Rules.

Text messaging is a convenient way for busy doctors to communicate, but for years, the question has remained: are doctors allowed to convey sensitive health information with other members of their provider team over SMS? The answer is now “yes,” thanks to a memo published last week by the U.S. Department of Health & Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). The memo clarifies that “texting patient information among members of the health care team is permissible if accomplished through a secure platform.”

However, texting patient orders is prohibited “regardless of the platform utilized” under the CMS hospital Conditions of Participation or Conditions of Coverage, and providers should enter orders into an electronic health record (EHR) by Computerized Provider Order Entry (CPOE).

According to the memo, CMS expects providers and organizations to implement policies and procedures that “routinely assess the security and integrity of the texting systems/platforms that are being utilized” to avoid negatively affecting patient care.

What’s interesting about the CMS memo is that texting on a cell phone has become as routine (if not more routine) as speaking into a cell phone – and HHS published guidance way back in 2013 explaining that the HIPAA Privacy Rule permits doctors and other health care providers to share protected health information over the phone. Telling a 21st century doctor not to communicate by text message (within the proper HIPAA parameters, of course) is like telling the President he can’t communicate on Twitter.

CMS’s restriction on texting patient orders appears to relate to concerns about medical record accuracy, not privacy and security. “CMS has held to the long standing practice that a physician … should enter orders into the medical record via a hand written order” or by CPOE, “with an immediate download into the … [EHR, which] would be dated, timed, authenticated, and promptly placed in the medical record.”

I asked a couple of IT security experts here at Fox how a provider or organization would go about “routinely assessing the security and integrity of the texting systems/platforms” being used by doctors. According to Fox partner and Chief Privacy Officer Mark McCreary, CIPP/US, the provider or organization might want to start by:

“… receiv[ing] and review[ing] their third party audits and certifications.  Most platform providers would make those available to customers (if not the public).  They like to tout their security.”

Matthew Bruce, Fox’s Information Security Officer, agreed:

“That is really the only practical way to routinely assess. SMS, which is standard text messaging, isn’t secure, so it would likely require the potential use of a third-party app like Signal. iMessages are encrypted and secure, but only between iPhone users. Both companies should publish their security practices.”

So, providers or organizations participating in Medicare can (continue to) allow doctors to communicate (but not enter treatment orders) by text, but should periodically review the security of the texting systems or platforms the doctors are using. They may also want to remind doctors to make sure they know when and how to preserve text messages, whether by taking screen shots, using an SMS backup app, or some other method.

Long gone are the days when social media consisted solely of Myspace and Facebook, accessible only by logging in through a desktop computer at home or personal laptop. With every single social media platform readily available on personal cellular devices, HIPAA violations through social media outlets are becoming a frequent problem for healthcare providers and individual employees alike. In fact, social media platforms like Snapchat® and Instagram® that offer users the opportunity to post “stories” or send their friends temporary “snaps” seem to be a large vehicle for HIPAA violations, specifically amongst the millennial generation.

In a recent poll of the younger end of the millennial generation, CNBC found that a majority of teens ranked Snapchat and Instagram among their top three favorite apps. One teen claimed that they enjoyed the “instant gratification” of having a quick conversation, and another teen even stated that “Snapchat is a good convenient way to talk to friends (sharing pictures) but you can say things you would regret later because they disappear (I don’t do that though).”

This dangerous and erroneous mentality, while prevalent in teens, exists to some extent among the younger generation of nurses, residents, and other employees working for healthcare providers. With just a few taps and swipes, an employee can post a seemingly innocuous disclosure of PHI. Interns and residents of the younger generation may innocently upload a short-term post (be it a two-second picture or an eight-second video) of a busy hospital room, or even an innocent “selfie,” without realizing that there is visible and identifiable PHI in the corner. Two major categories of HIPAA violations involving Snapchat and Instagram Stories have become apparent to me: (1) the innocent poster, as described above, who does not realize there is PHI in their post; and (2) the poster who knows that their picture or video could constitute a HIPAA violation, but posts it anyway because they think it’s “temporary.”

The first category consists of employees who do not realize that they’re violating HIPAA but can still be punished for such behavior. Think of a resident deciding to post a picture on their “Snapchat story” of a cluttered desk during a hectic day at work, not realizing that there are sensitive documents in clear view. Whether the resident meant to or not, he or she still violated HIPAA.

The second category of violators think that they’re safe from HIPAA violations, but don’t realize that their posts may not be as temporary as they think. Let us imagine a nursing assistant, working at an assisted-living facility, “snapping” a video of an Alzheimer’s patient because the patient “was playing tug of war with her and she thought it was funny.” The video lasts only 24 hours on the nursing assistant’s Snapchat “story,” but it is still a clear breach of HIPAA. In this case (a true story), the nursing assistant was fired from the facility and a criminal complaint was filed against her.

Violations in this category do not even need to be as severe as the one in the scenario with the nursing assistant. An employee at a hospital taking a “snap” with one of their favorite patients and sending it to just one friend on Snapchat directly (instead of posting it on their “story”) is a violation because that friend could easily take a screenshot of the “snap”. In fact, any “snap” is recordable by a receiving party; all the receiving party would have to do is press and hold the home button in conjunction with the side button on their iPhone. Voila, now a third-party has PHI saved on their phone, and worse yet, that third-party can distribute the PHI to the world on any number of social media outlets.

Snapchat posts and Instagram stories are not temporary. In fact, in 2014, Snapchat experienced a security breach that released 100,000 Snapchat photos. The hack – cleverly called “The Snappening” – involved hackers who released a vast database of intercepted Snapchat photos and videos that they had been amassing for years. In that instance, the hackers acquired the files from a third-party site called “Snapsave.com”, which allowed users to send and receive “snaps” from a desktop computer and stored them on its servers. Snapchat argued at the time that its own servers had not been hacked, but since 2016 the app has allowed users to save “snaps” on their phones and within the application before sending them to their friends or stories. Where are those pictures being saved? Could hackers get their hands on them?

The appeal of “instant gratification” and “temporary conversations” is what makes social media platforms such as Snapchat and Instagram dangerous to healthcare providers. To avoid HIPAA violations of this nature, it is important to inform and educate employees, especially those of the millennial generation, about the dangers of posting pictures they think are temporary. A 26-year-old friend of mine is a resident at a hospital that has completely disabled her ability to access Gmail through her phone. While that is a severe solution to a growing issue, and not absolutely necessary, healthcare providers should consider other creative ways to keep their younger employees off their social media apps.

Individuals who have received notice of a HIPAA breach are often offered free credit monitoring services for some period of time, particularly if the protected health information involved included social security numbers.  I have not (yet) received such a notice, but was concerned when I learned about the massive Equifax breach (see here to view a post on this topic on our Privacy Compliance and Data Security blog).

The Federal Trade Commission’s Consumer Information page sums it up well:

“If you have a credit report, there’s a good chance that you’re one of the 143 million American consumers whose sensitive personal information was exposed in a data breach at Equifax… .”

I read the news reports this morning, and decided to go on the Equifax site, equifaxsecurity2017.com, to see if my information may have been affected and to sign up for credit file monitoring and identity theft protection (the services are free to U.S. consumers, whether or not affected by the breach, for one year).

The Equifax site describes the breach and lets users click on a “Potential Impact” tab to find out whether their information “may have been impacted” by the breach. Users can find out by clicking on the “Check Potential Impact” link and following these steps:

  1. Click on the below link, “Check Potential Impact,” and provide your last name and the last six digits of your Social Security number.
  2. Based on that information, you will receive a message indicating whether your personal information may have been impacted by this incident.
  3. Regardless of whether your information may have been impacted, we will provide you the option to enroll in TrustedID Premier. You will receive an enrollment date. You should return to this site and follow the “How do I enroll?” instructions below on or after that date to continue the enrollment and activation process. The enrollment period ends on Tuesday, November 21, 2017.

Before satisfying my curiosity, though, I decided to click on the “Terms of Use”, that too-rarely-used link typically included at the bottom of a webpage that sets forth the quid pro quo of using a website. Perhaps it was because my law partner (and the firm’s Chief Privacy Officer), Mark McCreary, has instilled some cautiousness in me, or because I wondered if there might be a catch. Why would Equifax offer a free year of credit monitoring to everyone, even those not affected by the breach? What would Equifax get in return?

I skimmed the “Product Agreement and Terms of Use”, noted the bolded text requiring arbitration of disputes and waiving my right to participate in a class action, but wasn’t concerned enough to resist the urge to find out if my information was affected.

I then followed the “Getting Started” process by following the TrustedID Premier link, and quickly received a notice stating that my information “may have been impacted” and that I could enroll on September 11, 2017 (my “designated enrollment date”).

Not more than a couple of hours later, I came across an article warning of the legal rights consumers give up by signing up on Equifax’s website. The article describes the arbitration clause in the Terms of Use provisions, and reports on New York Attorney General Eric Schneiderman’s tweet stating that the arbitration provision is “unacceptable and unenforceable”. The article also reports that, today, Equifax updated the Terms of Use language to include a new provision allowing a user to write to Equifax to opt-out of the arbitration provision within 30 days of the date the user first accepts the Product Agreement and Terms of Use.

My curiosity got the best of me and I now know I’m on the “affected persons” list, but I haven’t yet signed up for my free TrustedID Premier credit monitoring service. I have the weekend to decide whether to sign up for the service, and 30 days from Monday (if I actually sign up for the service) to decide whether to accept the “cost” of agreeing to binding arbitration.

 

In some respects, HIPAA has had a design problem from its inception. HIPAA is well known today as the federal law that requires protection of individually identifiable health information (and, though lesser-known, individual access to health information), but privacy and security were practically afterthoughts when HIPAA was enacted back in 1996. HIPAA (the Health Insurance Portability and Accountability Act) was originally described as an act:

To amend the Internal Revenue Code of 1986 to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery, to promote the use of medical savings accounts, to improve access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes.”

The privacy of individually identifiable health information was one of those “other purposes” only peripherally included in the 1996 act. Privacy protection was to be a follow-up, a “to-do” checklist item for the future. HIPAA directed the Secretary of Health and Human Services to recommend privacy standards to specified congressional committees within a year of enactment, and, if Congress did not enact privacy legislation within 3 years of enactment, the Secretary was to proceed with the promulgation of privacy regulations. Security was a bit more urgent, at least in the context of electronic health transactions such as claims, enrollment, eligibility, payment, and coordination of benefits. HIPAA required the Secretary to adopt standards for the security of electronic health information systems within 18 months of enactment.

This historical context casts some light on why our 2017-era electronic health record (EHR) systems often lack interoperability and yet are vulnerable to security breaches. HIPAA may be partially to blame, since it was primarily designed to make health insurance more portable and to encourage health insurers and providers to conduct transactions electronically. Privacy and security were the “oh, yeah, that too” add-ons to be fully addressed once electronic health information transactions were underway and the EHR systems needed to support them were already up and running. Since 1996, EHRs have developed on a clunky provider-by-provider (or health system-by-health system) and patient encounter-by-patient encounter basis, making them not only less accurate and efficient, but also vulnerable to privacy and security lapses. (Think of the vast quantity of patient information breached when a hospital’s EHR or a health plan’s claims database is hacked.)

This past June, I participated on a California Israel Medical Technology Summit panel discussing privacy and security issues. An audience member asked the panel whether we thought blockchain technology was the answer to HIPAA and other privacy and security-related legal requirements. I didn’t have a good answer, thinking “isn’t that the technology used to build Bitcoin, the payment system used by data hackers everywhere?”

This past July, Ritesh Gandotra, a director of global outsourcing for Xerox, wrote that blockchain technology could overhaul our “crippled” EHR management system. Gandotra writes “Historically, EHRs were never really designed to manage multi-institutional and lifetime medical records; in fact, patients tend to leave media data scattered across various medical institutes … This transition of data often leads to the loss of patient data.” He goes on to explain how blockchain, the “distributed ledger” technology originally associated with Bitcoin, can be used to link discrete patient records (or data “blocks”) contained in disparate EHRs into “an append-only, immutable, timestamped chain of content.”
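The “append-only, immutable, timestamped chain” Gandotra describes can be illustrated with a minimal hash-chain sketch. This is not any real EHR or blockchain library; the record contents and helper names below are purely hypothetical, and a production system would add distributed consensus, access controls, and encryption on top of this basic linking idea.

```python
import hashlib
import json
import time

def make_block(record, prev_hash):
    """Create a timestamped block linking one patient record to the prior block."""
    block = {
        "timestamp": time.time(),
        "record": record,        # e.g., a pointer to an encounter in one provider's EHR
        "prev_hash": prev_hash,  # hash of the previous block: this is the "chain"
    }
    # The block's own hash covers all of its contents, so any later edit is detectable.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify_chain(chain):
    """Recompute each block's hash and check each link; tampering breaks verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Append-only: each new encounter links to the hash of the one before it.
genesis = make_block({"encounter": "annual physical, Clinic A"}, prev_hash="0" * 64)
second = make_block({"encounter": "lab work, Hospital B"}, prev_hash=genesis["hash"])
chain = [genesis, second]
print(verify_chain(chain))   # True

# Retroactively altering an earlier record invalidates every later link.
genesis["record"]["encounter"] = "something else"
print(verify_chain(chain))   # False
```

The point of the structure is that records scattered across different institutions can be appended in order but never silently rewritten, which is exactly the property a lifetime, multi-institutional medical record would need.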

Using blockchain technology to reconfigure EHRs makes sense. Ironically, the design flaw inherent in HIPAA’s original 1996 design (the promotion of electronic health transactions to foster portability and accountability in the health insurance context while treating privacy and security as an afterthought) can be fixed using the very same technology that built the payment network favored by ransomware hackers.

Post Contributed by Matthew J. Redding.

On April 26, 2017, Memorial Hermann Health System (“MHHS”) agreed to pay the U.S. Department of Health and Human Services (“HHS”) $2.4 million to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule.

The underlying incident occurred in September of 2015, when a patient presented a falsified Texas driver’s license to MHHS’ staff upon appearing for the patient’s scheduled appointment. MHHS’ staff contacted law enforcement to verify the patient’s identification, and law enforcement thereafter came to the facility and arrested the patient. The incident drew some national attention from immigration activist groups.  Our partner Bill Maruca posted a blog in September 2015 that discussed the event.

It is important to note that the disclosure to law enforcement was not a contributing factor to the alleged HIPAA violation. In fact, a covered entity is permitted under HIPAA to disclose protected health information (“PHI”) to the limited extent necessary to report a crime occurring on its premises to law enforcement (see 45 CFR 164.512(f)(5)). However, in the MHHS case, the potential HIPAA violation occurred when MHHS issued press releases to several media outlets, addressed activist groups and state officials, and published a statement on its website following the incident, identifying the patient by name on each occasion.

The MHHS facility was a gynecology clinic, and its disclosure of a patient’s name associated with the facility constituted PHI. Therefore, the release of the patient’s name without the patient’s authorization was an impermissible disclosure of PHI under HIPAA.

The OCR alleged that, in addition to the impermissible disclosure of PHI, MHHS failed to document the sanctions imposed on its workforce members responsible for the impermissible disclosures.

6 Takeaways:

Covered entities, such as hospitals, physician practices, and other health care entities, should be cautious in publicizing any event involving their patients so as to avoid impermissibly disclosing PHI. Further, public disclosure could open the door to liability under state statutes and common law (e.g., a patient’s right of privacy, freedom from defamation, and contractual rights). Here are a few takeaways from the MHHS HIPAA settlement:

  1. PHI must remain protected. The disclosure of PHI to law enforcement, or the presence of health information in the public domain generally, does not relieve the covered entity of its obligations under HIPAA. Instead, covered entities have a continuing obligation to protect and maintain the privacy and security of PHI in their possession and control, and to use and disclose only such information as is permitted under HIPAA.
  2. Avoid inadvertently publishing PHI. PHI is not limited to health information that identifies a patient by his/her name, SSN, address or date of birth. It also includes any other health information that could be used, in conjunction with publicly available information, to identify the patient. We’ve seen other instances where health care entities inadvertently publish PHI in violation of HIPAA, leading to significant fines (see NY Med: $2.2 Million settlement).
  3. Review your HIPAA policies and procedures with respect to your workforce’s publications and disclosures to the media. To the extent not done so already:
    1. Develop a policy prohibiting your general workforce from commenting to the media on patient events.
    2. Develop a policy with respect to monitoring statements published on your website to avoid publishing any PHI.
    3. Designate a workforce member with a sufficient HIPAA background (nudge, nudge, HIPAA Privacy Officer) to handle media inquiries and provide the workforce with contact information of such member.
  4. Review your HIPAA policies and procedures with respect to law enforcement events.
    1.  For events not likely to compromise the health and safety of others, encourage your workforce to handle such events as discreetly as possible, involving only those members of the workforce who have a need to know.
    2. Train your workforce to identify the situations where disclosure of a patient’s PHI to law enforcement is permissible and those situations where the patient’s authorization must be obtained before disclosing his/her PHI to law enforcement.
  5. Don’t forget to timely notify the affected individuals. If an impermissible disclosure of PHI occurs, do not let the publicizing of such disclosure cause you to forget your breach notification obligations. Failing to timely notify the affected individual could result in additional penalties (see Presence Health: $475,000 settlement). The breach notification clock starts ticking upon the covered entity’s discovery (as defined under HIPAA) of the impermissible disclosure.
  6. Document your responses to impermissible disclosures of PHI and your compliance with HIPAA. HIPAA places the burden on the covered entity to maintain sufficient documentation necessary to prove that it fulfilled all of its administrative obligations under HIPAA (see 78 FR 5566 at 5641). Therefore, once you discover an impermissible disclosure, document how your entity responds, including, without limitation, the breach analysis, proof that the patient notices were timely sent, sanctions imposed upon the responsible workforce members, actions taken to prevent similar impermissible disclosures, etc. Don’t forget, the covered entity is required to maintain such documentation for at least 6 years (see 45 C.F.R. 164.414 and 164.530(j)).

Our partner Elizabeth Litten and I were recently featured again by our good friend Marla Durben Hirsch in her article in the April 2017 issue of Medical Practice Compliance Alert entitled “Business associates who farm out work create more risks for your patients’ PHI.” Full text can be found in that issue, but a synopsis is below.

In her article Marla cautioned, “Fully one-third of the settlements inked in 2016 with OCR [the Office of Civil Rights of the U.S. Department of Health and Human Services] dealt with breaches involving business associates.” She pointed out that the telecommuting practices of business associates (“BAs”) and their employees with respect to protected health information (“PHI”) create heightened risks for medical practices that are the covered entities (“CEs”) — CEs are ultimately responsible not only for their own HIPAA breaches but for HIPAA breaches of their BAs as well.

Kline observed, “Telecommuting is on the rise and this trend carries over to organizations that provide services to health care providers, such as billing and coding, telehealth providers, IT support and law firms.” Litten commented, “Most business associate agreements (BAAs) merely say that the business associate will protect the infor­mation but are not specific about how a business associate will do so, let alone how it will when PHI is off site.”

Litten and Kline added, “OCR’s sample business associate agreement is no dif­ferent, using general language that the business associate will use ‘appropriate safeguards’ and will ensure that its subcontractors do so too.”

Kline continued, “You have much less control over [these] people, who you don’t even know . . . . Moreover, frequently practices don’t even know that the business associate is allowing staff or subcontractors to take patient PHI off site. This is a collateral issue that can become the fulcrum of the relationship. And one loss can be a disaster.”

Some conclusions that can be drawn from Marla’s article include the following items, which a CE should consider when dealing with BAs:

  1. Select BAs with due care and with references where possible.
  2. Be certain that there is an effective BAA executed and in place with a BA before transmitting any PHI.
  3. Periodically review and update BAAs to ensure that they address changes in technology such as telecommuting, mobile device expansion and PHI use and maintenance practices.
  4. Ask questions of BAs to know where they and their employees use and maintain PHI, such as on laptops, personal mobile devices or network servers, and what encryption or other security practices are in place.
  5. Ask BAs what subcontractors (“SCs”) they may use and where the BAs and SCs are located (consider including a provision in BAAs that requires BAs and their SCs to be legally subject to the jurisdiction of HIPAA, so that HIPAA compliance by the CE and enforcement of the BAA can be more effective).
  6. Transmit PHI to the BA using appropriate security and privacy procedures, such as encryption.
  7. To the extent practicable, alert the BA in advance as to when and how transmission of PHI will take place.
  8. Obtain from each BA a copy of its HIPAA policies and procedures.
  9. Maintain a readily accessible archive of all BAAs in effect to allow quick access and review when PHI issues arise.
  10. Have a HIPAA consultant available who can be contacted promptly to assist in addressing BA issues and provide education as to best practices.
  11. Document all actions taken to reduce risk from sharing PHI with BAs, including items 1 to 10 above.

Minimizing risk of PHI breaches by a CE requires exercising appropriate control over selection of, and contracting and ongoing interaction with, a BA. While there can be no assurance that such care will avoid HIPAA breaches for the CE, evidence of such responsible activity can reduce liability and penalties should violations occur.

It was the wallet comment in the response brief filed by the Federal Trade Commission (FTC) in the U.S. Court of Appeals for the 11th Circuit that prompted me to write this post. In its February 9, 2017 filing, the FTC argues that the likelihood of harm to individuals (patients who used LabMD’s laboratory testing services) whose information was exposed by LabMD roughly a decade ago is high because the “file was exposed to millions of users who easily could have found it – the equivalent of leaving your wallet on a crowded sidewalk.”

However, if one is to liken the LabMD file (referred to throughout the case as the “1718 File”) to a wallet and the patient information to cash or credit cards contained in that wallet, it is more accurate to describe the wallet as having been left on the kitchen counter in an unlocked New York City apartment. Millions of people could have found it, but they would have had to go looking for it, and would have had to walk through the door (or creep through a window) into a private residence to do so.

I promised to continue my discussion of LabMD’s appeal in the U.S. Court of Appeals for the 11th Circuit of the FTC’s Final Order back in January (see prior post here), planning to highlight arguments expressed in briefs filed by various amici curiae in support of LabMD. Amici include physicians who used LabMD’s cancer testing services for their patients while LabMD was still in business, the non-profit National Federation of Independent Business, the non-profit, nonpartisan think tank TechFreedom, the U.S. Chamber of Commerce, and others. Amici make compelling legal arguments, but also emphasize several key facts that make this case both fascinating and unsettling:

The FTC has spent millions of taxpayer dollars on this case – even though there were no victims (not one has been identified in over seven years), LabMD’s data security practices were already regulated by HHS under HIPAA, and, according to the FTC’s paid litigation expert, LabMD’s “unreasonableness” ceased no later than 2010. “During the litigation, … a whistleblower testified that the FTC’s staff … were bound up in collusion with Tiversa [the cybersecurity firm that discovered LabMD’s security vulnerability, tried to convince LabMD to purchase its remediation services, then reported LabMD to the FTC], a prototypical shakedown racket – resulting in a Congressional investigation and a devastating report issued by House Oversight Committee staff.” [Excerpt from TechFreedom’s amicus brief]

An image of Tiversa as taking advantage of the visible “counter-top wallet” emerges when reading the facts described in the November 13, 2015 Initial Decision of D. Michael Chappell, the Chief Administrative Law Judge (ALJ), a decision that would be reversed by the FTC in the summer of 2016 when it concluded that the ALJ applied the wrong legal standard for unfairness. The ALJ’s “Findings of Fact” (which are not disputed by the FTC in the reversal, notably) include the following:

“121. On or about February 25, 2008, Mr. Wallace, on behalf of Tiversa, downloaded the 1718 File from a LabMD IP address …

  - The 1718 File was found by Mr. Wallace, and was downloaded from a peer-to-peer network, using a stand-alone computer running a standard peer-to-peer client, such as LimeWire…
  - Tiversa’s representations in its communications with LabMD … that the 1718 File was being searched for on peer-to-peer networks, and that the 1718 File had spread across peer-to-peer networks, were not true. These assertions were the “usual sales pitch” to encourage the purchase of remediation services from Tiversa… .”

The ALJ found that although the 1718 File was available for peer-to-peer sharing via use of specific search terms from June of 2007 through May of 2008, the 1718 File was actually only downloaded by Tiversa for the purpose of selling its security remediation services. The ALJ also found that there was no contention that Tiversa (or those Tiversa shared the 1718 File with, namely, a Dartmouth professor working on a study and the FTC) used the contents of the file to harm patients.

In short, while LabMD may have left its security “door” unlocked when an employee downloaded LimeWire onto a work computer, only Tiversa actually walked through that door and happened upon LabMD’s wallet on the counter-top. Had the wallet been left out in the open, in a public space (such as on a crowded sidewalk), it’s far more likely its contents would have been misappropriated.

As she has done each January for several years, our good friend Marla Durben Hirsch quoted my partner Elizabeth Litten and me in Medical Practice Compliance Alert in her article entitled “MIPS, OSHA, other compliance trends likely to affect you in 2017.” For her article, Marla asked various health law professionals to make predictions on diverse healthcare matters, including HIPAA and enforcement activities. The full text can be found in the January 2017 issue, but excerpts are included below.

Marla also wrote a companion article in the January 2017 issue evaluating the results of the predictions she published for 2016. The 2016 predictions appeared to be quite accurate in most respects. However, with the new Trump Administration, we are now entering very uncertain territory in multiple aspects of healthcare regulation and enforcement. Nevertheless, with some trepidation, below are some predictions for 2017 from Elizabeth and me, taken from Marla’s article.

  1. The Federal Trade Commission’s encroachment into privacy and security will come into question. Litten said, “The new administration, intent on reducing the federal government’s size and interference with businesses, may want to curb this expansion of authority and activity. Other agencies’ wings may be clipped.” Kline added, “However, the other agencies may try to push back because they have bulked up to handle this increased enforcement.”
  2. Telemedicine will run into compliance issues. As telemedicine becomes more common, more legal problems will occur. “For instance, the privacy and the security of the information stored and transmitted will be questioned,” says Litten. “There will also be heightened concern of how clinicians who engage in telemedicine are being regulated,” adds Kline.
  3. The risks relating to the Internet of things will increase. “The proliferation of cyberattacks from hacking, ransomware and denial of service schemes will not abate in 2017, especially with the increase of devices that access the Internet, known as the ‘Internet of things,’” warns Kline. “More devices than ever will be networked, but providers may not protect them as well as they do other electronics and may not even realize that some of them — such as newer HVAC systems, ‘smart’ televisions or security cameras that can be controlled remotely — are also on the Internet and thus vulnerable,” adds Litten. “Those more vulnerable items will then be used to infiltrate providers’ other systems,” Kline observes.
  4. More free enterprise may create opportunities for providers. “For example, there may not be as much of a commitment to examine mergers,” says Kline. “The government may allow more gathering and selling of data in favor of business interests over privacy and security concerns,” says Litten.

The Trump Administration’s ambitious and multi-faceted foray into the world of healthcare, among its many initiatives, will make 2017 an interesting and controversial year. Predictions are always uncertain, but 2017 brings new and daunting risks for prognosticators. Nonetheless, when we look back at 2017, perhaps we will be saying, “The more things change, the more they stay the same.”

It was nearly three years ago that I first blogged about the Federal Trade Commission’s “Wild West” data breach enforcement action brought against now-defunct medical testing company LabMD. Back then, I was simply astounded that a federal agency (the FTC) with seemingly broad and vague standards pertaining generally to “unfair” practices of a business entity would belligerently gallop onto the scene and allege non-compliance by a company specifically subject by statute to regulation by another federal agency. The other agency, the U.S. Department of Health and Human Services (HHS), has adopted comprehensive regulations containing extremely detailed standards pertaining to data security practices of certain persons and entities holding certain types of data.

The FTC Act governs business practices, in general, and has no implementing regulations, whereas HIPAA specifically governs Covered Entities and Business Associates and their Uses and Disclosures of Protected Health Information (or “PHI”) (capitalized terms that are all specifically defined by regulation). The HIPAA rulemaking process has resulted in hundreds of pages of agency interpretation published within the last 10-15 years, and HHS continuously posts guidance documents and compliance tools on its website. Perhaps I was naively submerged in my health care world, but I had no idea back then that a Covered Entity or Business Associate could have HIPAA-compliant data security practices that could be found to violate the FTC Act and result in a legal battle that would last the better part of a decade.

I’ve spent decades analyzing regulations that specifically pertain to the health care industry, so the realization that the FTC was throwing its regulation-less lasso around the necks of unsuspecting health care companies was both unsettling and disorienting. As I followed the developments in the FTC’s case against LabMD over the past few years (see additional blogs here, here, here and here), I felt like I was moving from the Wild West into Westworld, as the FTC’s arguments (and facts coming to light during the administrative hearings) became more and more surreal.

Finally, though, reality and reason have arrived on the scene as the LabMD saga plays out in the U.S. Court of Appeals for the 11th Circuit. The 11th Circuit issued a temporary stay of the FTC’s Final Order (which reversed the highly-unusual decision against the FTC by the Administrative Law Judge presiding over the administrative action) against LabMD.

The Court summarized the facts as developed in the voluminous record, portraying LabMD as having simply held its ground against the appalling, extortion-like tactics of the company that infiltrated LabMD’s data system. It was that company, Tiversa, that convinced the FTC to pursue LabMD in the first place. According to the Court, Tiversa’s CEO told one of its employees to make sure LabMD was “at the top of the list” of company names turned over to the FTC in the hopes that FTC investigations would pressure the companies into buying Tiversa’s services. As explained by the Court:

“In 2008, Tiversa … a data security company, notified LabMD that it had a copy of the [allegedly breached data] file. Tiversa employed forensic analysts to search peer-to-peer networks specifically for files that were likely to contain sensitive personal information in an effort to “monetize” those files through targeted sales of Tiversa’s data security services to companies it was able to infiltrate. Tiversa tried to get LabMD’s business this way. Tiversa repeatedly asked LabMD to buy its breach detection services, and falsely claimed that copies of the 1718 file were being searched for and downloaded on peer-to-peer networks.”

As if the facts behind the FTC’s action weren’t shocking enough, the FTC’s Final Order imposed bizarrely stringent and comprehensive data security measures against LabMD, a now-defunct company, even though its only remaining data resides on an unplugged, disconnected computer stored in a locked room.

The Court, though, stayed the Final Order, finding that even though the FTC’s interpretation of the FTC Act is entitled to deference,

“LabMD … made a strong showing that the FTC’s factual findings and legal interpretations may not be reasonable… [unlike the FTC,] we do not read the word “likely” to include something that has a low likelihood. We do not believe an interpretation [like the FTC’s] that does this is reasonable.”

I was still happily reveling in the refreshingly simple logic of the Court’s words when I read the brief filed in the 11th Circuit by LabMD counsel Douglas Meal and Michelle Visser of Ropes & Gray LLP. Finally, here was the legal rationale for, and clear articulation of, the unease I felt nearly three years ago: Congress (through HIPAA) granted HHS the authority to regulate the data security practices of medical companies like LabMD using and disclosing PHI, and the FTC’s assertion of authority over such companies is “repugnant” to Congress’s grant to HHS.

Continuation of discussion of 11th Circuit case and filings by amicus curiae in support of LabMD to be posted as Part 2.