Yesterday’s listserv announcement from the Office for Civil Rights (OCR) within the U.S. Department of Health and Human Services (HHS) brought to mind this question. The post announces the agreement by a Florida company, Advanced Care Hospitalists PL (ACH), to pay $500,000 and adopt a “substantial corrective action plan.” The first alleged HIPAA violation? Patient information, including names, dates of birth, and social security numbers, was viewable on the website of ACH’s medical billing vendor, and was reported to ACH by a local hospital in 2014.

To add insult (and another alleged HIPAA violation) to injury, according to the HHS Press Release, ACH did not have a business associate agreement (BAA) in place with the vendor, Doctor’s First Choice Billings, Inc. (First Choice), during the period when medical billing services were rendered (an 8-month period running from November of 2011 to June of 2012). Based on the HHS Press Release, it appears that ACH only scrambled to sign a BAA with First Choice in 2014, likely after learning of the website issue. In addition, according to the HHS Press Release, the person hired by ACH to provide the medical billing services used “First Choice’s name and website, but allegedly without any knowledge or permission of First Choice’s owner.”

These allegations are head-spinning, starting with those implicating the “should’ve-been” business associate. First, how does a medical billing company allow an employee or any other individual access to its website without its knowledge or permission? Next, shouldn’t someone at First Choice have noticed that an unauthorized person was posting information on its website back in 2011-2012, or at some point prior to its discovery by an unrelated third party in 2014? Finally, how does a medical billing company (a company that should know, certainly by late 2011, that it’s most likely acting as a business associate when it performs medical billing services) not realize that individually identifiable health information and social security numbers are viewable on its website by outsiders?

ACH’s apparent lackadaisical attitude about its HIPAA obligations is equally stunning. What health care provider engaged in electronic billing was not aware of the need to have a BAA in place with a medical billing vendor in 2011? While the Omnibus Rule wasn’t published until January of 2013 (at which point ACH had another chance to recognize its need for a BAA with First Choice), HHS has been publishing FAQs addressing all kinds of business associate-related issues and requirements since 2002.

It seems pretty obvious that ACH should have had a BAA with First Choice, but, in many instances, having a BAA is neither required by HIPAA nor prudent from the perspective of the covered entity. A BAA generally is not necessary if protected health information is not created, received, maintained or transmitted by or to the vendor in connection with the provision of services on behalf of a covered entity, business associate, or subcontractor, and having one in place may backfire. Consider the following scenario:

* Health Plan (HP), thinking it is acting out of an abundance of HIPAA caution, requires all of its vendors to sign BAAs.

* Small Law Firm (SLF) provides legal advice to HP, but does not create, receive, maintain or transmit protected health information in connection with the services it provides on behalf of HP.

* However, SLF signs HP’s BAA at HP’s request and because SLF thinks it might, at some point, expand the scope of legal services it provides to HP to include matters that require it to receive protected health information from HP.

* SLF suffers a ransomware attack that results in some of its data being encrypted, including data received from HP. It reviews HHS’s fact sheet on Ransomware and HIPAA, and realizes that a HIPAA breach may have occurred, since it cannot rule out the possibility that it received protected health information from HP at some point after it signed the BAA and prior to the attack.

* SLF reports the attack to HP as per the BAA. Neither SLF nor HP can rule out the possibility that protected health information of individuals covered by HP was received by SLF at some point and affected by the attack.

HP is now in the position of having to provide breach notifications to individuals and HHS. Had it been more circumspect at the outset, deciding it would only ask SLF to sign a BAA if/when SLF needed protected health information in order to provide legal services on behalf of HP, it may have avoided these HIPAA implications completely.

So while it seems stunning that a health care provider entity such as ACH would have neglected to sign a BAA with First Choice before 2014, having a BAA in place when it is not necessary can create its own problems. Better to constantly ask (and carefully consider): to BAA or not to BAA?

The new Apple Watch Series 4® is one of the more recent and sophisticated consumer health engagement tools. It includes a sensor that lets wearers take an electrocardiogram (ECG) reading and detect irregular heart rhythms. The U.S. Food & Drug Administration (FDA) recently approved these functions as Class II medical devices, which generally means that they pose a moderate to high risk to the user. The FDA approval letters describe the Apple Watch Series 4 functions as intended for over-the-counter use and not to replace traditional methods of diagnosis or treatment.

Tech developers and HIPAA lawyers often mean different things when describing a health app or medical device as HIPAA compliant. For example, a health app developer will likely focus on infrastructure, whereas the lawyer will likely focus on implementation. When asked about HIPAA, the app developer might rely on International Organization for Standardization (ISO) certification to demonstrate its data privacy and security controls and highlight how the infrastructure supports HIPAA compliance. The HIPAA lawyer, on the other hand, will likely focus on how (and by whom) data is created, received, maintained and transmitted and must look to the HIPAA regulations and guidance documents issued by the U.S. Department of Health and Human Services (HHS) to determine when and whether the data is subject to HIPAA protection. ISO certification does not equate to HIPAA certification; in fact, there is no HIPAA compliance certification process, and it is often difficult from the outset to determine if and when HIPAA applies.

As discussed in this prior blog post, HHS’s guidance on various “Health App Scenarios” underscores the fact that health data collected by an app may be HIPAA-protected in some circumstances and not in others, depending on the relationship between an app developer and a covered entity or business associate. The consumer (or app user) is unlikely to understand exactly when or whether HIPAA applies, particularly if the consumer has no idea whether such a relationship exists.

Back to the Apple Watch Series 4, and the many other consumer-facing medical devices or health apps already on the market or in development. When do the nuances of HIPAA applicability begin to impede the potential health benefits of the device or app? If I connect my Apple Watch to Bluetooth and create a pdf file to share my ECG data with my physician, it becomes protected health information (PHI) upon my physician’s receipt of the data. It likely was not PHI before then (unless my health care provider told me to buy the watch and has a process in place to collect the data from me).

Yet the value of getting real-time ECG data lies not in immediate user access, but in immediate physician/provider access. If my device can immediately communicate with my provider, without my having to take the interim step of moving the data into a separate file or otherwise capturing it, my physician can let me know if something is of medical concern. I may not want my health plan or doctor getting detailed information from my Fitbit® or knowing whether I ate dessert every night last week, but if I’m at risk of experiencing a medical emergency or if my plan or provider gives me an incentive to engage in healthy behavior, I may be willing to allow real-time or ongoing access to my information.

The problem, particularly when it comes to health apps and consumer health devices, is that HIPAA is tricky when it comes to non-linear information flow or information that changes over time. It can be confusing when information shifts between being HIPAA-protected and not, depending on who has received it. As consumers become more engaged and active in managing health conditions, it is important that they realize when or whether HIPAA applies and how their personal data could be used (or misused) by recipients. Findings from Deloitte’s 2018 consumer health care survey suggest that many consumers are interested in using apps to help diagnose and treat their conditions. For example, 29% were interested in using voice recognition software to identify depression or anxiety, but perhaps not all of the 29% would be interested in using the software if they were told their information would not be protected by HIPAA (unless and until received by their provider, or unless the app developer was acting as a business associate at the time of collection).

Perhaps certain HIPAA definitions or provisions can be tweaked to better fit today’s health data world, but, in the meantime, health app users beware.

Registration for the Privacy Summit is open.

Fox Rothschild’s Minneapolis Privacy Summit on November 8 will explore key cybersecurity issues and compliance questions facing company decision-makers. This free event will feature an impressive array of panelists, including cybersecurity leaders, experienced regulatory and compliance professionals, and the Chief Division Counsel of the Minneapolis Division of the FBI.

Attendees receive complimentary breakfast and lunch, and can take advantage of networking opportunities and informative panel sessions:

GDPR and the California Consumer Privacy Act: Compliance in a Time of Change

The European Union’s General Data Protection Regulation has been in effect since May. Companies that process or control EU citizens’ personal data should understand how to maintain compliance and avoid costly fines. Health care businesses should also prepare for the next major privacy mandate: the California Consumer Privacy Act.

Risk Management – How Can Privacy Officers Ensure They Have the Correct Security Policies in Place?

Panelists offer best practices for internal policies, audits and training to help safeguard protected health information (PHI), personally identifiable information (PII) and other sensitive data. Learn cutting-edge strategies to combat the technology threats of phishing and ransomware.

Fireside Chat

Jeffrey Van Nest, Chief Division Counsel of the Minneapolis Division of the FBI, speaks on the state of affairs in regulation and enforcement, including how to partner with the FBI, timelines of engagement and the latest on cyber threat schemes. His insights offer details on forming effective cyber incident response plans.

Keynote Speaker – Ken Barnhart

Ken is the former CEO of the Occam Group, a cybersecurity industry advisor and the founder and principal consultant for Highground Cyber – a spin-off of the Occam Group’s Cybersecurity Practice Group. For more than a decade, he has helped companies of all sizes design, host and secure environments in private, public and hybrid cloud models. Prior to his work in the corporate sector, Ken served as a non-commissioned officer in the United States Marine Corps and is a decorated combat veteran of Operation Desert Shield/Storm with the HQ Battalion of the 2nd Marine Division.

Geared toward an audience of corporate executives, in-house chief privacy officers and general counsel, the summit will provide important take-aways about the latest risks and threats facing the health care industry.

Stay tuned for more agenda details. Registration is open.

The European Union’s General Data Protection Regulation (GDPR) went into effect on May 25, 2018. Whereas HIPAA applies to particular types or classes of data creators, recipients, maintainers or transmitters (U.S. covered entities and their business associates and subcontractors), GDPR applies much more generally – it applies to personal data itself. Granted, it doesn’t apply to personal data that has absolutely no nexus to the EU, but assuming it doesn’t apply to your U.S.-based entity simply because you don’t have a physical location in the EU is a mistake.

So when does GDPR apply to a U.S.-based covered entity, business associate, or subcontractor? As with HIPAA, the devil is in the definitions, so I’ve capitalized certain GDPR-defined terms below. GDPR comprises 99 articles set forth in 11 chapters, and 173 “Recitals” explain the rationales for adoption. Similar to the way regulatory preambles and guidance published by the U.S. Department of Health and Human Services (HHS) can be helpful to understanding HIPAA compliance, the Recitals offer insight into GDPR applicability and scope.

Under Article 3, GDPR applies:

(1) To the Processing of Personal Data in the context of the activities of an establishment of a Controller or Processor in the EU, regardless of whether the Processing takes place in the EU;

(2) To the Processing of Personal Data of data subjects who are in the EU by a Controller or Processor not established in the EU, where the Processing activities are related to:

(a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the EU; or

(b) the monitoring of their behavior as far as their behavior takes place within the EU; and

(3) To the Processing of Personal Data by a Controller not established in the EU, but in a place where EU member state law applies by virtue of public international law.

It is paragraph (2) that seems most likely to capture unwitting U.S.-based covered entities, business associates, and subcontractors that are not established in the EU (though Recital 22 offers further explanation of what it means to be Processing data in the context of the activities of an establishment).

Notably, paragraph (2) makes it clear that while the entity need not be located within the EU for GDPR to apply, the data subject must be. If the U.S. entity offers goods or services to, or monitors the behavior of, data subjects who are “in” the EU, GDPR likely applies. It is the location of the data subject, not his or her citizenship, residency or nationality, that matters. GDPR does not follow the data subject outside the EU, but it does follow the data subject (even an American) into the EU – so long as the Processing of the Personal Data takes place in the EU.

So what does this mean for the U.S.-based covered entity, business associate, or subcontractor not established in the EU? It should carefully review its website, marketing activities, discharge or post-service follow-up procedures, and any other activities that might involve offering goods or services to, or monitoring the behavior of, individuals in the EU. If GDPR applies, the company will need to analyze where its HIPAA privacy and data security policies fall short of GDPR requirements. The company, whether a covered entity, business associate, or subcontractor, should also determine whether any of its vendors process data on its behalf in the EU.

In addition to understanding where data subjects are located and where Processing takes place in order to determine GDPR applicability, covered entities, business associates and subcontractors must determine whether they are acting as Controllers or Processors in order to understand their GDPR compliance obligations.

This can create particular challenges for a business associate. If a covered entity is subject to GDPR, a business associate that creates, receives, maintains or transmits Personal Data on behalf of the covered entity will either be acting as a Processor (for example, where the covered entity simply uses the business associate’s tools or services to conduct its business) or a Controller (for example, where the business associate reaches out directly to plan members or patients, such as by an email campaign). If the business associate’s services agreement or business associate agreement makes no mention of the fact that the covered entity is subject to GDPR, the business associate may not know whether it is also subject to GDPR, let alone whether it is a Controller or Processor.

The bottom line is that focusing on compliance with HIPAA and other federal and state laws pertaining to privacy and security of personal information is not enough, even for companies that view themselves as operating solely within the U.S.  A thorough risk assessment should include not only careful consideration of HIPAA requirements, but of the potential applicability and compliance requirements of GDPR.

Kristen Marotta writes:

Many believe that educated millennials are choosing to work in urban rather than rural areas early in their careers, both because societal milestones are being steadily pushed back and because of the professional opportunities and preferences of young professionals. Recent medical school graduates are a good example of this divide. The shortage of physicians in rural areas is a well-known phenomenon. Over the years, locum tenens staffing has helped to soften the impact and, recently, so has telemedicine.

The growing prevalence of telemedicine around the country is an important consideration for new physicians as they decide where to settle down and establish their careers. In New York, medical graduates should be aware that a $500,000 federal grant was given to New York State’s Office of Mental Health in February 2018 by the U.S. Department of Agriculture Rural Development Distance Learning and Telemedicine program. Using telemedicine to provide mental health services may be a productive and efficient way to deliver healthcare, not only because many mental health examinations would not have to be conducted in-person, but also because of the general shortage of psychiatrists and mental health providers to meet these patient needs. Now, medical graduates who would like to establish their lifestyle in a city can simultaneously care for patients living miles apart from them.

It is essential that health care providers engaging in telemedicine understand the implications of this practice model with respect to compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Providers rendering health care services via telemedicine should update and adjust their security risk assessments and HIPAA privacy and security policies and procedures, because protected health information is likely to be created in two separate locations (i.e., the location of the provider and the location of the patient). Providers should also make sure that their (or their practice’s) Notice of Privacy Practices has been updated to reflect the provision of services via telemedicine, so that the patient has the opportunity to make an informed decision about engaging in this type of health care. Additionally, new business associate agreements may be required with telehealth vendors that do not meet the narrow “mere conduit” exception and with any new parties who will have access to the individual’s protected health information as a result of the provision of services via telemedicine. In connection with these efforts, providers should research and conduct due diligence on vendors to confirm that they understand the services model and are HIPAA-compliant.

As telemedicine emerges and gains more traction in health care, state laws and regulations will also be created and/or updated, and physicians will need to keep abreast of these changes. A good example of this is the State of New York, which has an entire section of mental health regulations dedicated to telepsychiatry. Stay tuned to Fox Rothschild’s Physician Law Blog for further updates on these specific New York regulations, as well as the developments in telemedicine.


Kristen A. Marotta is an associate in the firm’s Health Law Department, based in its New York office.

You may be surprised to learn that those “extra” benefits your company offers to its employees such as your employee assistance program (“EAP”) and wellness program likely are subject to the HIPAA privacy, security and breach notification rules (collectively, “HIPAA Rules”). Part 1 covers why most EAPs are subject to the HIPAA Rules. Part 2 will discuss wellness programs. In both cases, EAPs and wellness programs must comply with the HIPAA Rules to the extent that they are “group health plans” that provide medical care.

As background, the HIPAA Rules apply to “covered entities” and their “business associates.” Health plans and most healthcare providers are “covered entities.” Employers, in their capacity as employers, are not subject to the HIPAA Rules. However, the HIPAA Rules do apply to any “protected health information” (“PHI”) an employer/plan administrator holds on a health plan’s behalf when the employer designs or administers the plan.

Plan administrators and some EAP vendors may not consider EAPs to be group health plans because they do not think of EAPs as providing medical care. Most EAPs, however, do provide medical care. They are staffed by health care providers, such as licensed counselors, and assist employees who are struggling with family or personal problems that rise to the level of a medical condition, including substance abuse and mental health issues. In contrast, an EAP that provides only referrals on the basis of generally available public information, and that is not staffed by health care providers, such as counselors, does not provide medical care and is not subject to the HIPAA Rules.

A self-insured EAP that provides medical care is subject to the HIPAA Rules, and the employer that sponsors and administers the EAP remains responsible for compliance with the HIPAA Rules because it acts on behalf of the plan. On the other hand, for an EAP that is fully insured or embedded in a fully insured policy, such as long-term disability coverage, the insurer will have the primary obligations for compliance with the HIPAA Rules for the EAP. The employer will not be responsible for overall compliance with the HIPAA Rules for an insured EAP even though it provides medical care, but only if the employer does not receive PHI from the insurer or only receives summary health information or enrollment/disenrollment information. Even then, the employer needs to ensure it doesn’t retaliate against a participant for exercising their rights under the HIPAA Rules or require waiver of rights under the HIPAA Rules with respect to the EAP.

An EAP that qualifies as an “excepted benefit” for purposes of HIPAA portability and the Affordable Care Act (as is most often the case because the EAP is offered at no cost, eligibility is not conditioned on participation in another plan (such as a major medical plan), benefits aren’t coordinated with another plan, and the EAP does not provide “significant benefits in the nature of medical care”) can be subject to the HIPAA Rules. In other words, just because you’ve determined that your EAP is a HIPAA excepted benefit doesn’t mean the EAP avoids the HIPAA Rules. Most EAPs are HIPAA excepted benefits, yet subject to full compliance with the HIPAA Rules.

Employers/plan administrators facing unexpected compliance obligations under the HIPAA Rules because of a self-insured EAP that provides medical care will need to enter into a HIPAA business associate agreement with the EAP vendor, amend the EAP plan document to include language required by the HIPAA Rules and develop and implement other compliance documents and policies and procedures under the HIPAA Rules. One option is to amend any existing compliance documents and policies and procedures under the HIPAA Rules for another self-insured group health plan to make them apply to the EAP as well. If the EAP is the plan administrator’s only group health plan for which it has compliance responsibility under the HIPAA Rules, the plan administrator should consult with legal counsel to develop and implement all necessary documentation for compliance with the HIPAA Rules.

Text messaging is a convenient way for busy doctors to communicate, but for years, the question has remained: are doctors allowed to share sensitive health information with other members of their provider team over SMS? The answer is now “yes,” thanks to a memo published last week by the U.S. Department of Health & Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). The memo clarifies that “texting patient information among members of the health care team is permissible if accomplished through a secure platform.”

However, texting patient orders is prohibited “regardless of the platform utilized” under the CMS hospital Conditions of Participation or Conditions of Coverage, and providers should enter orders into an electronic health record (EHR) by Computerized Provider Order Entry (CPOE).

According to the memo, CMS expects providers and organizations to implement policies and procedures that “routinely assess the security and integrity of the texting systems/platforms that are being utilized” to avoid negatively affecting patient care.

What’s interesting about the CMS memo is that texting on a cell phone has become as routine (if not more routine) as speaking into a cell phone – and HHS published guidance way back in 2013 explaining that the HIPAA Privacy Rule permits doctors and other health care providers to share protected health information over the phone. Telling a 21st century doctor not to communicate by text message (within the proper HIPAA parameters, of course) is like telling the President he can’t communicate on Twitter.

CMS’s restriction on texting patient orders appears to relate to concerns about medical record accuracy, not privacy and security. “CMS has held to the long-standing practice that a physician … should enter orders into the medical record via a hand written order” or by CPOE, “with an immediate download into the … [EHR, which] would be dated, timed, authenticated, and promptly placed in the medical record.”

I asked a couple of IT security experts here at Fox how a provider or organization would go about “routinely assessing the security and integrity of the texting systems/platforms” being used by doctors. According to Fox partner and Chief Privacy Officer Mark McCreary, CIPP/US, the provider or organization might want to start by:

“… receiv[ing] and review[ing] their third party audits and certifications.  Most platform providers would make those available to customers (if not the public).  They like to tout their security.”

Matthew Bruce, Fox’s Information Security Officer, agreed:

“That is really the only practical way to routinely assess. SMS, which is standard text messaging, isn’t secure so it would likely require the potential use of third party app like Signal.  iMessages are encrypted and secure but only between iPhone users. Both companies should publish their security practices.”

So, providers or organizations participating in Medicare can (continue to) allow doctors to communicate (but not enter treatment orders) by text, but should periodically review the security of the texting systems or platforms the doctors are using. They may also want to remind doctors to make sure they know when and how to preserve text messages, whether by taking screen shots, using an SMS backup app, or some other method.

Long gone are the days when social media consisted solely of Myspace and Facebook, accessible only by logging in through a desktop computer at home or personal laptop. With every single social media platform readily available on personal cellular devices, HIPAA violations through social media outlets are becoming a frequent problem for healthcare providers and individual employees alike. In fact, social media platforms like Snapchat® and Instagram® that offer users the opportunity to post “stories” or send their friends temporary “snaps” seem to be a large vehicle for HIPAA violations, specifically amongst the millennial generation.

In a recent CNBC poll of the younger end of the millennial generation, a majority of teens ranked Snapchat and Instagram among their top three favorite apps. One teen claimed that they enjoyed the “instant gratification” of having a quick conversation, and another teen even stated that “Snapchat is a good convenient way to talk to friends (sharing pictures) but you can say things you would regret later because they disappear (I don’t do that though).”

This dangerous and erroneous mentality, while prevalent in teens, exists to some extent among the younger generation of nurses, residents, and other employees working for healthcare providers. With just a few taps and swipes, an employee can publish a seemingly innocuous post that discloses PHI. Interns and residents of the younger generation may innocently upload a short-term post (be it a two-second picture or an eight-second video) of a busy hospital room or even an innocent “selfie” without realizing that there is visible and identifiable PHI in the corner. Two major categories of HIPAA violations have become apparent to me in relation to Snapchat and Instagram Stories: (1) the innocent poster, as described above, who does not realize there is PHI in their post; and (2) the poster who knows that their picture or video could constitute a HIPAA violation, but posts it anyway because they think it’s “temporary.”

The first category of violators consists of employees who do not realize that they’re violating HIPAA but can still be punished for such behavior. Think of a resident deciding to post a picture on their “Snapchat story” of a cluttered desk during a hectic day at work, not realizing that there are sensitive documents in clear view. Whether the resident meant to or not, he or she still violated HIPAA.

The second category of violators think that they’re safe from HIPAA violations, but don’t realize that their posts may not be as temporary as they think. Let us imagine a nursing assistant, working at an assisted-living facility, “snapping” a video of an Alzheimer’s patient because the patient “was playing tug of war with her and she thought it was funny.”  The story only lasts 24 hours on the nursing assistant’s Snapchat “story”, but it is still a clear breach of HIPAA. In this case (a true story), the nursing assistant was fired from the facility and a criminal complaint was filed against her.

Violations in this category do not even need to be as severe as the one in the scenario with the nursing assistant. An employee at a hospital taking a “snap” with one of their favorite patients and sending it to just one friend on Snapchat directly (instead of posting it on their “story”) is a violation because that friend could easily take a screenshot of the “snap”. In fact, any “snap” is recordable by a receiving party; all the receiving party would have to do is press and hold the home button in conjunction with the side button on their iPhone. Voila, now a third-party has PHI saved on their phone, and worse yet, that third-party can distribute the PHI to the world on any number of social media outlets.

Snapchat posts and Instagram stories are not temporary. In fact, in 2014, Snapchat experienced a security breach that released 100,000 Snapchat photos. The hack – cleverly called “The Snappening” – involved hackers who released a vast database of intercepted Snapchat photos and videos that they had been amassing for years. In that instance, the hackers acquired the files from a third-party site called “Snapsave.com”, which allowed users to send and receive “snaps” from a desktop computer and stored them on its servers. Snapchat argued at the time that its own servers were not hacked, but since a 2016 change the app itself allows users to save “snaps” on their phones and within the application before sending them to their friends or stories. Where are those pictures being saved? Could hackers get their hands on them?

The appeal of “instant gratification” and “temporary conversations” is what makes social media platforms such as Snapchat and Instagram dangerous to healthcare providers. To avoid HIPAA violations of this nature, it is important to inform and educate employees, especially those of the millennial generation, about the dangers of posting pictures they think are temporary. A 26-year-old friend of mine (who shall remain anonymous) is a resident at a hospital that completely disabled her ability to access Gmail through her phone. While this is a severe solution to a growing issue, and not absolutely necessary, healthcare providers should definitely consider other creative ways to keep their younger employees off social media apps.

Individuals who have received notice of a HIPAA breach are often offered free credit monitoring services for some period of time, particularly if the protected health information involved included social security numbers.  I have not (yet) received such a notice, but was concerned when I learned about the massive Equifax breach (see here to view a post on this topic on our Privacy Compliance and Data Security blog).

The Federal Trade Commission’s Consumer Information page sums it up well:

“If you have a credit report, there’s a good chance that you’re one of the 143 million American consumers whose sensitive personal information was exposed in a data breach at Equifax… .”

I read the news reports this morning, and decided to go on the Equifax site, equifaxsecurity2017.com, to see if my information may have been affected and to sign up for credit file monitoring and identity theft protection (the services are free to U.S. consumers, whether or not affected by the breach, for one year).

The Equifax site describes the breach and lets users click on a “Potential Impact” tab to find out whether their information “may have been impacted” by the breach. Users can find out by clicking on the “Check Potential Impact” link and following these steps:

  1. Click on the below link, “Check Potential Impact,” and provide your last name and the last six digits of your Social Security number.
  2. Based on that information, you will receive a message indicating whether your personal information may have been impacted by this incident.
  3. Regardless of whether your information may have been impacted, we will provide you the option to enroll in TrustedID Premier. You will receive an enrollment date. You should return to this site and follow the “How do I enroll?” instructions below on or after that date to continue the enrollment and activation process. The enrollment period ends on Tuesday, November 21, 2017.

Before satisfying my curiosity, though, I decided to click on the “Terms of Use”, that too-rarely-used link typically included at the bottom of a webpage that sets forth the quid pro quo of using a website. Perhaps it was because my law partner (and the firm’s Chief Privacy Officer), Mark McCreary, has instilled some cautiousness in me, or because I wondered if there might be a catch. Why would Equifax offer a free year of credit monitoring to everyone, even those not affected by the breach? What would Equifax get in return?

I skimmed the “Product Agreement and Terms of Use”, noted the bolded text requiring arbitration of disputes and waiving my right to participate in a class action, but wasn’t concerned enough to resist the urge to find out if my information was affected.

I then followed the “Getting Started” process by following the TrustedID Premier link, and quickly received a notice stating that my information “may have been impacted” and that I could enroll on September 11, 2017 (my “designated enrollment date”).

Not more than a couple of hours later, I came across an article warning of the legal rights consumers give up by signing up on Equifax’s website. The article describes the arbitration clause in the Terms of Use provisions, and reports on New York Attorney General Eric Schneiderman’s tweet stating that the arbitration provision is “unacceptable and unenforceable”. The article also reports that, today, Equifax updated the Terms of Use language to include a new provision allowing a user to write to Equifax to opt-out of the arbitration provision within 30 days of the date the user first accepts the Product Agreement and Terms of Use.

My curiosity got the best of me and I now know I’m on the “affected persons” list, but I haven’t yet signed up for my free TrustedID Premier credit monitoring service. I have the weekend to decide whether to sign up for the service, and 30 days from Monday (if I actually sign up for the service) to decide whether to accept the “cost” of agreeing to binding arbitration.


In some respects, HIPAA has had a design problem from its inception. HIPAA is well known today as the federal law that requires protection of individually identifiable health information (and, though lesser known, individual access to health information), but privacy and security were practically afterthoughts when HIPAA was enacted back in 1996. HIPAA (the Health Insurance Portability and Accountability Act) was originally described as an act:

To amend the Internal Revenue Code of 1986 to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery, to promote the use of medical savings accounts, to improve access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes.”

The privacy of individually identifiable health information was one of those “other purposes” only peripherally included in the 1996 act. Privacy protection was to be a follow-up, a “to-do” checklist item for the future. HIPAA directed the Secretary of Health and Human Services to recommend privacy standards to specified congressional committees within a year of enactment, and, if Congress did not enact privacy legislation within 3 years of enactment, the Secretary was to proceed with the promulgation of privacy regulations. Security was a bit more urgent, at least in the context of electronic health transactions such as claims, enrollment, eligibility, payment, and coordination of benefits. HIPAA required the Secretary to adopt standards for the security of electronic health information systems within 18 months of enactment.

This historical context casts some light on why our 2017-era electronic health record (EHR) systems often lack interoperability and yet are vulnerable to security breaches. HIPAA may be partially to blame, since it was primarily designed to make health insurance more portable and to encourage health insurers and providers to conduct transactions electronically. Privacy and security were the “oh, yeah, that too” add-ons to be fully addressed once electronic health information transactions were underway and the EHR systems needed to support them were already up and running. Since 1996, EHRs have developed on a clunky provider-by-provider (or health system-by-health system) and patient encounter-by-patient encounter basis, making them not only less accurate and efficient, but also vulnerable to privacy and security lapses. (Think of the vast quantity of patient information breached when a hospital’s EHR or a health plan’s claims database is hacked.)

This past June, I participated on a California Israel Medical Technology Summit panel discussing privacy and security issues. An audience member asked the panel whether we thought blockchain technology was the answer to HIPAA and other privacy and security-related legal requirements. I didn’t have a good answer, thinking “isn’t that the technology used to build Bitcoin, the payment system used by data hackers everywhere?”

This past July, Ritesh Gandotra, a director of global outsourcing for Xerox, wrote that blockchain technology could overhaul our “crippled” EHR management system. Gandotra writes “Historically, EHRs were never really designed to manage multi-institutional and lifetime medical records; in fact, patients tend to leave media data scattered across various medical institutes … This transition of data often leads to the loss of patient data.” He goes on to explain how blockchain, the “distributed ledger” technology originally associated with Bitcoin, can be used to link discrete patient records (or data “blocks”) contained in disparate EHRs into “an append-only, immutable, timestamped chain of content.”
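Gandotra’s phrase can be made concrete with a small sketch. What follows is a minimal, hypothetical illustration (not any real EHR product or blockchain platform) of an append-only, hash-linked chain of timestamped records: each “block” carries the hash of the one before it, so tampering with any earlier record invalidates every later link.

```python
import hashlib
import json
import time

def make_block(record, prev_hash):
    """Create a timestamped block whose hash covers its own content
    plus the previous block's hash (the 'chain' link)."""
    block = {
        "timestamp": time.time(),
        "record": record,      # e.g., a pointer to one patient encounter
        "prev_hash": prev_hash,
    }
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "record", "prev_hash")},
        sort_keys=True,        # canonical serialization before hashing
    ).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def append(chain, record):
    """Append-only: each new block references the last block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append(make_block(record, prev_hash))
    return chain

def verify(chain):
    """Recompute every hash; any altered block breaks the chain."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("timestamp", "record", "prev_hash")},
            sort_keys=True,
        ).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, "encounter: 2017-03-01, clinic visit")
append(chain, "encounter: 2017-06-15, lab results")
print(verify(chain))            # an untampered chain verifies
chain[0]["record"] = "altered"
print(verify(chain))            # tampering invalidates the chain
```

Real systems add cryptographic signatures, access controls, and consensus on top of this basic structure, but the core property Gandotra describes – immutability through hash linking – is just this.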

Using blockchain technology to reconfigure EHRs makes sense. Ironically, the flaw inherent in HIPAA’s original 1996 design (the promotion of electronic health transactions to foster portability and accountability in the health insurance context while treating privacy and security as afterthoughts) can be fixed using the very same technology that built the payment network favored by ransomware hackers.