With the explosion of health data flowing through cutting-edge companies, industry stakeholders are left to wonder how wearable devices, wellness programs, health applications, and the like should be regulated.

Contrary to common belief, the Health Insurance Portability and Accountability Act (“HIPAA”) does not regulate all health information. HIPAA regulates health information collected and retained by covered entities and imposes downstream obligations on entities called business associates. HIPAA began with a limited purpose and was not created to cover all health information held by all entities. Enacted in 1996, HIPAA was originally designed to address the exchange of electronic health information and portability, so that an employee could maintain health insurance between employers.

Today’s perceived gaps in HIPAA therefore seem plausible, given its history and the fact that, when HIPAA was created 23 years ago, the health landscape was without today’s innovative health companies, which collect and aggregate health data in new ways and for new purposes, and without the accompanying geometric increase in the complexity and types of risk. While newer health tech companies may find themselves outside the HIPAA regime, a recent Senate Bill hopes to expand HIPAA to cover health information collected by fitness trackers, health-focused social media sites, and direct-to-consumer genetic testing companies. Though the Senate Bill has remained stagnant, companies have seen enforcement beyond the HIPAA regime.

In March 2017, the New York Attorney General announced a settlement with the developers of three health apps, alleging that the creators used misleading claims and irresponsible privacy practices, with unclear and inconsistent statements about how they collected and shared users’ personal information with third parties. The Attorney General alleged violations of New York’s Consumer Protection Act and False Advertising laws.

So what is the moral of the story? Just because your health company does not fit squarely within the HIPAA regime does not mean it is free from regulation. Keep in mind applicable state laws, like a state’s Consumer Fraud Act. Consider obligations to federal regulators, like the FTC’s authority over deceptive consumer practices and the FDA’s oversight of medical devices, for example.

Have a good understanding of what your company is (and what it isn’t). If you’re a covered entity or business associate, your obligation to comply with HIPAA is clear. However, consider wearable devices, like Fitbit and smartwatches, that track users’ heart rate and sync their health data to smartphone apps. Consider wearable biosensors that monitor patients’ vital signs, temperature, and body posture. A deeper analysis of when health information shifts from HIPAA-protected to non-HIPAA-protected can be found in a separate Alert by Elizabeth Litten.

A large New York hospital system learned this lesson the expensive way.  According to a U.S. Department of Health and Human Services (HHS) press release issued earlier this week, the Office for Civil Rights (OCR) investigated a hospital system breach back in 2010 involving the loss of an unencrypted flash drive. According to the press release, OCR provided technical assistance to the hospital system as a result of that breach.

The hospital system apparently didn’t follow or benefit from OCR’s technical assistance, as it reported a breach in 2013 involving the loss of an unencrypted flash drive. According to OCR,

“Despite the previous OCR investigation, and [the hospital system’s] own identification of a lack of encryption as a high risk to ePHI, [the hospital system] permitted the continued use of unencrypted mobile devices.”

The hospital system then reported a third incident involving the theft of an unencrypted mobile device (an unencrypted personal laptop used by a resident surgeon) in 2017. Although the laptop contained the PHI of only 43 patients, it likely wasn’t the size of the breach that triggered the $3 million payment. The high amount seems directed at the hospital system’s apparent continuing failure to implement fairly straightforward security measures.

This hospital system had three strikes involving unencrypted devices before being hit with the $3 million resolution amount, and three important lessons can be learned from this resolution agreement. First, correct identified vulnerabilities. Second, when OCR offers technical assistance, follow it. And third, make sure you have a mobile device policy that requires encryption or addresses why encryption is not feasible.

OCR likely also considered the large size of the hospital system, and the relatively simple security policies and procedures the hospital system could have implemented to prevent the third breach, when it imposed the $3 million penalty and two-year corrective action plan. However, even small covered entities and business associates should pay attention to this resolution agreement and take steps to minimize the risk of mobile device breaches.

“New York Gov. Andrew Cuomo recently signed legislation that will effectively prohibit ambulance and first response service providers from disclosing or selling patient data to third parties for marketing purposes.

The bill was signed into law on October 7. The new law bans the sale of patient data, or individually identifying information to third parties, outside of sales to health providers, the patient’s insurer, and other parties with appropriate legal authority.

Under the law, all information that can be used to identify a patient is protected from sales for marketing purposes, such as advertising, detailing, marketing, promotion, or any activity used to influence sales.”

Details from HealthIT Security.

This post also appears on Fox Rothschild’s Privacy Compliance & Data Security blog.

Artificial Intelligence (“AI”) refers to algorithmic tools that simulate human intelligence, mimic human actions, and can incorporate self-learning software. AI tech can reduce spending, provide alternative treatment ideas, and improve patient experience, diagnosis, and outcomes.

Consider virtual health assistants that deliver medication alerts and patient education, AI used to detect abnormalities in x-rays and MRIs, and AI that gives simultaneous feedback to patients and their physicians based on data captured by patient smartphones and wearable devices. But with the advent of unchecked AI come concerns related to health information privacy and unconscious bias.

Privacy Concerns in AI Health Tech

AI advances could negatively impact health data privacy. The level of impact depends, to some extent, on how we define health data. “Health data” generally refers to information concerning the physical or mental health status of an individual and the delivery of or payment for health services to the individual. It incorporates data exchanged through traditional health technologies used by health care providers and health plans, as well as data exchanged through newer technologies, like wearable devices and virtual assistants.

Regulations do not yet fully address privacy concerns in AI health tech. HIPAA, for example, originally enacted in 1996, requires covered entities and business associates to implement standards to ensure the privacy and security of protected health information. Those standards, however, may not apply to tech companies that use ever-evolving third-party apps or algorithms to access the data. Experts express concerns about companies collaborating to re-identify data formerly considered de-identified.[1] Consider data brokers who mine personal health data with AI tech and then sell the acquired health data to third parties like marketers, employers, and insurers. While a tech company’s relationship with a health insurance company falls under HIPAA’s scope, it is less clear what privacy laws would apply to tech companies that limit their clientele to entities not otherwise directly or indirectly subject to HIPAA, such as life or disability insurers, which may be the case for marketers and employers.
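To make the re-identification concern concrete, here is a minimal sketch (a hypothetical illustration with fabricated records and names, not drawn from the letter cited above) of how a data set stripped of names but still carrying quasi-identifiers such as ZIP code, birth date, and sex can be re-linked to an outside list that shares those fields:

```python
# Hypothetical illustration: re-identification by linking quasi-identifiers.
# All records and names below are fabricated for demonstration.
import pandas as pd

# A "de-identified" health data set: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
deidentified = pd.DataFrame({
    "zip": ["07901", "07901", "10012"],
    "birth_date": ["1980-03-14", "1975-11-02", "1980-03-14"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["type 2 diabetes", "hypertension", "asthma"],
})

# A separate, publicly available list (e.g., a marketing or voter file)
# carrying the same quasi-identifiers alongside names.
public_list = pd.DataFrame({
    "name": ["Jane Roe", "John Doe"],
    "zip": ["07901", "07901"],
    "birth_date": ["1980-03-14", "1975-11-02"],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
relinked = deidentified.merge(public_list, on=["zip", "birth_date", "sex"])
print(relinked[["name", "diagnosis"]])
```

The fewer quasi-identifiers that remain in a released data set, the harder this kind of linkage becomes.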

Bias Concerns in AI Health Tech

AI health tech must be developed and trained responsibly. AI learns by identifying patterns in data collected over many years. If the data collected reflects historical biases against vulnerable populations, the resulting output will only exacerbate those biases. Biases creep into data sets in various ways. Consider whether input data has incomplete data sets for “at-risk” populations (groups with a historically higher risk for certain health conditions or illnesses). Consider the potential for under-diagnosis of health conditions within certain populations. Correcting for these biases in the development of the data set and in training processes, starting with a simple audit like the one sketched below, will help to avoid the creation of biased results in AI health tech.
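As a simple illustration of that kind of audit (hypothetical data and column names, not any particular company's pipeline), the sketch below compares record counts and diagnosis rates across demographic groups in a training data set:

```python
# Hypothetical illustration: auditing a training data set for representation
# and labeling gaps across demographic groups.
import pandas as pd

records = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B"],
    "diagnosed": [1, 0, 1, 0, 0, 0],
})

audit = records.groupby("group")["diagnosed"].agg(
    n_records="count",      # how much data each group contributes
    diagnosis_rate="mean",  # how often the condition is labeled in each group
)
print(audit)
# A group with few records, or an implausibly low diagnosis rate, may signal
# under-representation or under-diagnosis that a model would learn and repeat.
```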

Companies should consider safeguards when deploying their AI health tech. Employee training is paramount. While AI promises a host of benefits, those involved in its creation and use must be aware of the potential for bias. Companies should also consider data integrity and built-in biases. Consider the information the AI tech relies on, and consider rechecking the data reviewed by the AI. Lastly, diversifying those engaged in creating, testing, and using the health care tech could also decrease bias.

[1]              https://www.ncvhs.hhs.gov/wp-content/uploads/2013/12/2017-Ltr-Privacy-DeIdentification-Feb-23-Final-w-sig.pdf

Last May, around the time many schools let out for the summer, the Office for Civil Rights (“OCR”) published guidance entitled “Direct Liability of Business Associates” (the “Guidance”), which focuses, not surprisingly, on OCR’s ability to take enforcement action directly against HIPAA business associates. I meant to write about this guidance before Memorial Day, but since the back-to-school season is a good time to get things (including business associate agreements or “BAAs”) in order, this timing feels right.

The Guidance caught my attention not because it lists ten HIPAA failures or violations for which business associates are directly liable, but because it calls out one specific HIPAA violation that will fall on the shoulders of the contracted covered entity:

… OCR lacks the authority to enforce the “reasonable, cost-based fee” limitation in 45 C.F.R. § 164.524(c)(4) against business associates … .

In other words, the OCR explains that, if a covered entity engages a business associate to fulfill an individual’s request for access to protected health information, it is the covered entity’s responsibility to ensure that the business associate complies with HIPAA’s “reasonable, cost-based fee” limitation (and any more stringent state law requirement).

We’ve posted on the topic of individual access rights under HIPAA (see here and here), and on what amounts can be charged, both under HIPAA and under state law (see here and here). What the Guidance compels me to point out, though, is that covered entities often include a provision in BAAs that requires the business associate to respond to an individual’s access request either by notifying the covered entity of the request or by providing the requested electronic or paper copy directly. The provision may require the business associate to comply with the HIPAA regulatory requirements regarding the timing of the response, either in terms of notifying the covered entity within a specified time period or by responding directly to the individual.

However, a provision stating simply that the business associate must “comply with 45 C.F.R. § 164.524 [the regulation governing individuals’ access rights]” may not be enough to ensure that the business associate limits the amount charged as per the regulation, which potentially creates unexpected exposure to noncompliance for the covered entity. Thus, in light of the Guidance, covered entities should review their BAAs and consider whether updates to such provisions are required. If they don’t, they may end up dealing with an OCR enforcement action that could have been prevented with a few well-placed BAA words.

The Dutch Data Protection Authority has levied a fine of 460,000 euros on Haga Hospital for insufficient security following an investigation revealing that dozens of hospital staff had unnecessarily checked the medical records of a well-known Dutch person.

In addition, if the hospital has not improved security before October 2, 2019, it must pay 100,000 euros every two weeks, up to a maximum of 300,000 euros.

According to DutchNews.nl, the authority’s chairman Aleid Wolfsen said: “The relationship between a healthcare provider and a patient should be completely confidential. Also within the walls of a hospital. It doesn’t matter who you are.”

Key takeaways:
  • Have adequate logs in place: The hospital must regularly check who consults which file.
  • Good security requires authentication that involves at least two factors.

Details from the Dutch Data Protection Authority.

The California Consumer Privacy Act (CCPA) will take effect on January 1, 2020 and regulates most entities that collect personal information of California residents. CCPA, which has been called “GDPR-Lite,” was patterned after the European Union’s General Data Protection Regulation (GDPR), which took effect on May 25, 2018. In May, Fox Rothschild partner Odia Kagan described when CCPA applies in an Alert that listed the categories of entities affected: generally, for-profit businesses that do business in California, collect California consumers’ personal information, determine the purposes and means of processing that information, and either (1) have at least $25 million in annual gross revenues, (2) buy, sell, share and/or receive the personal information of at least 50,000 California consumers, households or devices per year, or (3) derive at least 50 percent of their annual revenue from selling California consumers’ personal information, as well as entities that control or are controlled by such businesses and share common branding. Each of those terms has a technical definition that should be carefully reviewed. But isn’t there a HIPAA exception?

Yes, CCPA contains a carve-out for HIPAA covered entities, but it is not as broad as you may have heard. In a recent alert entitled Where HIPAA Stops, CCPA Begins – Why Covered Entities and Business Associates Cannot Ignore the New California Data Privacy Law, Fox Rothschild partners Odia Kagan and Elizabeth Litten explain when information that appears to be exempt PHI may fall under the new CCPA:

Personal information created, received, maintained or transmitted by companies subject to HIPAA is likely subject to CCPA if it falls into one of the following five categories:

  1. It is not created or collected as part of the payment, treatment or health care operations trifecta
  2. It was never PHI (or is excluded from the definition of PHI) under HIPAA
  3. It was once PHI, but has been de-identified under HIPAA
  4. It is not PHI, but is derived from PHI
  5. It is PHI that is used for research purposes in accordance with HIPAA

The bottom line is that what you think is PHI and exempt from CCPA may not be covered by the carve-out after all. For details, see the Alert.

“The right to be forgotten does not apply in principle to medical records. However, as a patient, you may ask your health care provider to remove data from your medical record,” according to the Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), which has issued guidance on the GDPR and medical records.

Key takeaways:

  • Medical data that are not covered by the Medical Treatment Agreement Act, such as data from nursing care and in-home care, should not be kept longer than necessary.
  • The personal data that you have actively and consciously provided is covered by the right to data portability. This also applies to data that you have provided indirectly through the use of a service or device, such as the data generated by your pacemaker or blood pressure monitor.
  • The right to data portability does not apply to the conclusions, diagnoses, suspicions or treatment plans that your health care provider establishes on the basis of the information you provide.
  • As a health care provider, you must in any case use two-factor authentication, such as logging in with DigiD in combination with SMS.

Read the full guidance.

“TMI” usually means “too much information,” but it was used aptly by the Office for Civil Rights (OCR) as an acronym for a covered entity that exposed protected health information (PHI) of more than 300,000 patients through an insecurely configured server. According to the April 5, 2019 Resolution Agreement, the covered entity, Touchstone Medical Imaging, Inc. (TMI), not only used an insecure file transfer protocol (FTP) server that allowed patient information to be visible via Google searches, but it also seemingly dragged its HIPAA compliance feet upon learning of the PHI exposure.

TMI was notified of its insecure FTP on May 9, 2014 and apparently implemented technical safeguards to limit access rights to the FTP server that maintained PHI to approved persons and software programs, but TMI failed to provide notice to individuals and the media of the breach until October 3, 2014, 147 days after discovery of the breach. Adding insult to injury, TMI failed to enter into a business associate agreement with its IT vendor until June 2, 2016, and (as of the date of the Resolution Agreement) “continues” to engage another business associate “without the protections of a business associate agreement in place.”
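For illustration only, the sketch below (using a placeholder host name, and not describing TMI's actual configuration) shows a minimal check of whether an FTP server accepts anonymous logins and exposes its file listing, the kind of misconfiguration that can leave files visible to search engine crawlers. Run it only against servers you are authorized to test:

```python
# Hypothetical illustration: checking an FTP server for anonymous access.
from ftplib import FTP, error_perm

HOST = "ftp.example.org"  # placeholder; use a server you are authorized to test

try:
    with FTP(HOST, timeout=10) as ftp:
        ftp.login()           # no credentials supplied = anonymous login attempt
        listing = ftp.nlst()  # directory listing visible to any anonymous user
        print(f"Anonymous access allowed; {len(listing)} entries visible.")
except error_perm:
    print("Anonymous access refused.")
except OSError as exc:
    print(f"Could not connect: {exc}")
```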

It is not clear from the Resolution Agreement exactly how the insecurity of the FTP was initially discovered or by whom. The Resolution Agreement states that TMI conducted a HIPAA security risk assessment on April 3, 2014, but the Press Release states that TMI was notified by the FBI and OCR in May of 2014. The Press Release also says that TMI “initially claimed that no patient PHI was exposed,” and that OCR found that TMI did not thoroughly investigate the incident until several months after notice of the breach by both the FBI and OCR.

A more immediate and robust breach response may very well have saved this covered entity millions, not to mention the negative publicity. The PHI exposure was significant (especially when combined with the delayed and seemingly insufficient security risk assessment), but the combination of TMI (as in too much information) and not enough in terms of response activity is the perfect recipe for a HIPAA settlement.

A study shows that “92 percent of 36 mental health apps shared data with at least one third party — mostly services that help with marketing, advertising, or data analytics.”

“About half of those apps did not disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen.”

While some of this information is not immediately identifying, that could soon change.

“We live in an age where, with enough breadcrumbs, it’s possible to reidentify people,” says John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center. “Advertisers could use this to compromise someone’s privacy … For example, if an advertiser discovers someone is trying to quit smoking … would they be interested in electronic cigarettes … Or other similar products, like alcohol?” says Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System.

Details from The Verge.