Flo Health, Inc., which marketed an app used by more than 100 million women to track their personal menstruation and fertility information, seems to be getting off easily compared with HIPAA-covered entities that misuse individual health information. The FTC’s January 13, 2021 press release announcing its proposed settlement with Flo Health sidesteps mention (let alone enforcement) of a federal law and the FTC’s own rule. This puzzling sidestep deserves attention, not only in light of the proliferation of personal health apps, but also given the particularly sensitive nature of the health information collected by the Flo Health app.

The Health Information Technology for Economic and Clinical Health Act (HITECH), enacted as part of the American Recovery and Reinvestment Act of 2009 (the Recovery Act), not only amended HIPAA, but also added HIPAA-like breach notification requirements that apply to vendors of “personal health records” (PHRs) that are not covered entities, business associates, or subcontractors subject to HIPAA. As described by the FTC in a “request for comment” published last May:

The Recovery Act recognized that vendors of personal health records and PHR related entities (i.e., companies that offer products and services through PHR websites or access information in or send information to PHRs) were collecting consumers’ health information but were not subject to the privacy and security requirements of the Health Insurance Portability and Accountability Act (“HIPAA”). The Recovery Act directed the FTC to issue a rule requiring these entities, and their third-party service providers, to provide notification of any breach of unsecured individually identifiable health information. Accordingly, the HBN [Health Breach Notification] Rule requires vendors of PHRs and PHR related entities to provide: (1) Notice to consumers whose unsecured individually identifiable health information has been breached; (2) notice to the media, in many cases; and (3) notice to the Commission…

The [HBN] Rule requires notice “without unreasonable delay and in no case later than 60 calendar days” after discovery of a data breach. If the breach affects 500 or more individuals, notice to the FTC must be provided “as soon as possible and in no case later than ten business days” after discovery of the breach.
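To make those timing rules concrete, here is a minimal sketch of the two outer deadlines in Python. It is an illustration only: the function name is ours, and for simplicity the business-day count ignores federal holidays, which the actual Rule would not.

```python
from datetime import date, timedelta

def hbn_notice_deadlines(discovery: date, affected_individuals: int) -> dict:
    """Illustrative sketch of the HBN Rule's outer time limits."""
    # Individuals: without unreasonable delay, and in no case later than
    # 60 calendar days after discovery of the breach.
    deadlines = {"individuals": discovery + timedelta(days=60)}
    # FTC: if 500 or more individuals are affected, as soon as possible,
    # and in no case later than 10 business days after discovery.
    if affected_individuals >= 500:
        day, business_days = discovery, 0
        while business_days < 10:
            day += timedelta(days=1)
            if day.weekday() < 5:  # Monday (0) through Friday (4)
                business_days += 1
        deadlines["ftc"] = day
    return deadlines

# Example: a large breach "discovered" on February 22, 2019
print(hbn_notice_deadlines(date(2019, 2, 22), affected_individuals=1_000_000))
# {'individuals': datetime.date(2019, 4, 23), 'ftc': datetime.date(2019, 3, 8)}
```

As the example suggests, a breach discovered in February of 2019 would have carried an individual-notice deadline in April of 2019, long before the January 2021 settlement.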

Yet, surprisingly, the FTC’s Flo Health press release and proposed settlement are completely silent with respect to Flo Health’s failure to abide by the Recovery Act and the FTC’s own breach notification rule. Although Flo Health’s impermissible practices seem to have been “discovered” back in February of 2019 (see here for the original WSJ article revealing Flo Health’s data practices), the company failed to notify its millions of female users that it allowed their personal and uniquely sensitive health information to be used by third parties, including Google and Facebook, for their own purposes, including advertising.

While the proposed settlement requires website notice and individual email/mobile app notice within 14 days after the filing of the Consent Order, such notice would come well beyond the 60-days-after-discovery deadline. In addition, as drafted by the FTC, the notice focuses on what was not improperly disclosed (name, address, or birthday), rather than on what was. When a covered entity notifies individuals, regulators, and the media of a HIPAA breach, it must include a description of the types of information involved in the breach.

Two FTC commissioners, Rohit Chopra and Rebecca Kelly Slaughter, picked up on the FTC’s failure to enforce the Recovery Act and the FTC breach notification rule. Their Joint Statement points out that the “explosion in connected health apps” makes the breach notification rule “more important than ever”:

[S]ervices like Flo need to come clean when they experience privacy or security breaches.

Unless they do, health app users will have no idea when their trust is misplaced.

Yesterday’s listserv announcement from the Office for Civil Rights (OCR) within the U.S. Department of Health and Human Services (HHS) brought to mind this question: to BAA or not to BAA? The post announces the agreement by a Florida company, Advanced Care Hospitalists PL (ACH), to pay $500,000 and adopt a “substantial corrective action plan.” The first alleged HIPAA violation? Patient information, including name, date of birth, and social security number, was viewable on the website of ACH’s medical billing vendor, and was reported to ACH by a local hospital in 2014.

To add insult (and another alleged HIPAA violation) to injury, according to the HHS Press Release, ACH did not have a business associate agreement (BAA) in place with the vendor, Doctor’s First Choice Billings, Inc. (First Choice), during the period when medical billing services were rendered (an 8-month period running from November of 2011 to June of 2012). Based on the HHS Press Release, it appears that ACH only scrambled to sign a BAA with First Choice in 2014, likely after learning of the website issue. In addition, according to the HHS Press Release, the person hired by ACH to provide the medical billing services used “First Choice’s name and website, but allegedly without any knowledge or permission of First Choice’s owner.”

These allegations are head-spinning, starting with those implicating the “should’ve-been” business associate. First, how does a medical billing company allow an employee or any other individual access to its website without its knowledge or permission? Next, shouldn’t someone at First Choice have noticed that an unauthorized person was posting information on its website back in 2011-2012, or at some point prior to its discovery by an unrelated third party in 2014? Finally, how does a medical billing company (a company that should have known, certainly by late 2011, that it was most likely acting as a business associate when performing medical billing services) not realize that individually identifiable health information and social security numbers were viewable on its website by outsiders?

ACH’s apparent lackadaisical attitude about its HIPAA obligations is equally stunning. What health care provider engaged in electronic billing was not aware of the need to have a BAA in place with a medical billing vendor in 2011? While the Omnibus Rule wasn’t published until January of 2013 (at which point ACH had another chance to recognize its need for a BAA with First Choice), HHS has been publishing FAQs addressing all kinds of business associate-related issues and requirements since 2002.

It seems pretty obvious that ACH should have had a BAA with First Choice, but, in many instances, having a BAA is neither required by HIPAA nor prudent from the perspective of the covered entity. A BAA generally is not necessary if protected health information is not created, received, maintained or transmitted by or to the vendor in connection with the provision of services on behalf of a covered entity, business associate, or subcontractor, and having one in place may backfire. Consider the following scenario:

* Health Plan (HP), thinking it is acting out of an abundance of HIPAA caution, requires all of its vendors to sign BAAs.

* Small Law Firm (SLF) provides legal advice to HP, but does not create, receive, maintain or transmit protected health information in connection with the services it provides on behalf of HP.

* However, SLF signs HP’s BAA at HP’s request and because SLF thinks it might, at some point, expand the scope of legal services it provides to HP to include matters that require it to receive protected health information from HP.

* SLF suffers a ransomware attack that results in some of its data being encrypted, including data received from HP. It reviews HHS’s fact sheet on Ransomware and HIPAA, and realizes that a HIPAA breach may have occurred, since it cannot rule out the possibility that it received protected health information from HP at some point after it signed the BAA and prior to the attack.

* SLF reports the attack to HP as per the BAA. Neither SLF nor HP can rule out the possibility that protected health information of individuals covered by HP was received by SLF at some point and affected by the attack.

HP is now in the position of having to provide breach notifications to individuals and HHS. Had it been more circumspect at the outset, deciding it would ask SLF to sign a BAA only if and when SLF needed protected health information in order to provide legal services on behalf of HP, it might have avoided these HIPAA implications completely.

So while it seems stunning that a health care provider entity such as ACH would have neglected to sign a BAA with First Choice before 2014, having a BAA in place when it is not necessary can create its own problems. Better to constantly ask (and carefully consider): to BAA or not to BAA?
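For those who like the question reduced to its essentials, the rule of thumb above can be expressed as a toy test. This is a sketch of the post’s suggestion only, with hypothetical parameter names, and of course not a substitute for careful legal analysis.

```python
def baa_warranted(vendor_handles_phi: bool, on_behalf_of_regulated_entity: bool) -> bool:
    """Toy paraphrase of the rule of thumb above: a BAA is generally
    warranted only when the vendor creates, receives, maintains, or
    transmits PHI in connection with services performed on behalf of a
    covered entity, business associate, or subcontractor."""
    return vendor_handles_phi and on_behalf_of_regulated_entity

# The SLF scenario: no PHI actually flows to the law firm today, so a
# "just in case" BAA adds breach-notification exposure without a benefit.
print(baa_warranted(vendor_handles_phi=False, on_behalf_of_regulated_entity=True))  # False
```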

Elizabeth G. Litten, Partner, Fox Rothschild LLP

On November 9, 2017, the Florida Supreme Court ruled in the case of Emma Gayle Weaver, etc. v. Stephen C. Myers, M.D., et al., that the right to privacy under the Florida Constitution does not end upon an individual’s death. Fox partner and HIPAA Privacy & Security Officer Elizabeth Litten recently reacted to the decision in an article in DataGuidance. She noted the decision’s compatibility with HIPAA regulations concerning the protected health information of a deceased patient. She also discussed certain elements of the Florida statutes that were deemed unconstitutional by the court, and how they differ from HIPAA’s judicial and administrative proceedings disclosure rules.

We invite you to read the article and Elizabeth’s remarks.

My partner Elizabeth Litten and I were recently interviewed for an article entitled “Connecticut ‘opens floodgates’ for HIPAA litigation” published in “Privacy this Week” by DataGuidance. The full text of the article can be found in the November 13, 2014 issue of “Privacy this Week,” but a discussion of the article is set forth below.

On November 11, 2014, the Connecticut Supreme Court ruled in the case of Byrne v. Avery Center for Obstetrics and Gynecology, P.C. that (i) an action for negligence arising from a health care provider’s breach of patient privacy is not preempted by the HIPAA statute and regulations, which do not provide a private right of action to individuals, and (ii) HIPAA regulations may well inform the applicable standard of care in certain circumstances. Elizabeth and I have previously posted blog entries respecting the Byrne case that may be read here and here, respectively.

Elizabeth pointed out, “The precedents this case sets may have exponential repercussions and may twist the decision in extreme illogical directions.”

I observed that the Byrne case may have opened the floodgates of litigation because the decision may have established a new level of punishment that is not present under the federal HIPAA law itself. Just consider the liability a doctor could incur if he or she mistakenly leaves a document with personal health data on the wrong nurse station desk. If, for example, someone improperly accesses that information and uploads the data to the Internet, we have a data breach under HIPAA standards, which in turn may be an act of negligence under state tort or malpractice law, exposing the doctor to liability under the principles of the Byrne case.

Elizabeth also stated that there is fear that some of the things HIPAA tries to promote, such as transparency in data breaches, may be undermined. If individuals can resort to state law to seek compensation for data breaches, companies may see benefits in not complying with the transparency aims of HIPAA. “Furthermore there are many other federal standards with implications in data protection, such as the Family Educational Rights and Privacy Act (FERPA), that could follow the case of HIPAA,” Elizabeth noted.

I added my view that it would not be surprising if the scope of HIPAA preemption is eventually taken up by the United States Supreme Court. We certainly haven’t seen the end of it. The Connecticut case may provide a new avenue for an individual plaintiff to sue for a health data breach under state law by using HIPAA indirectly when he or she cannot sue under HIPAA directly. This blog will continue to follow the Byrne case and other cases involving HIPAA and other federal and state law interactions and potential conflicts.

By Michael Kline and Elizabeth Litten


[Capitalized terms not otherwise defined in this Part 3 shall have the meanings assigned to them in Parts 1 and 2.]


The Public Statement reports that SAIC and Tricare are cooperating in the notification process but that no credit monitoring or restoration services will be provided in light of the “low risk of harm.” This stands in contrast to the decision of Nemours, described in the Nemours Report, to provide such services.


Since the release by SAIC of the Public Statement, Law360 has reported that:


(i)   According to Tricare, SAIC was “on the hook for the cost of notifying nearly 5 million program beneficiaries that computer tapes containing their personal data had been stolen”;

(ii)  A putative class action lawsuit was filed against Tricare and DoD (but not SAIC) respecting the 2011 Breach; and

(iii) Another putative class action lawsuit was filed against SAIC (but not Tricare and DoD) respecting the 2011 Breach. 


Further review of SAIC and its incidents regarding PHI reveals that the 2011 Breach was not the first such event for SAIC. However, it appears to be the first such breach since the adoption of the Breach Notification Rule in August of 2009.


On July 21, 2007, The Washington Post reported that SAIC had acknowledged the previous day that “some of its employees sent unencrypted data — such as medical appointments, treatments and diagnoses — across the Internet” that related to 867,000 U.S. service members and their families. The Post article continues:


So far, there is no evidence that personal data have been compromised, but ‘the possibility cannot be ruled out,’ SAIC said in a press release. The firm has fixed the security breach, the release said.


Embedded later in the Post article is the following: 


The [2007] disclosure comes less than two years after a break-in at SAIC’s headquarters that put Social Security numbers and other personal information about tens of thousands of employees at risk. Among those affected were former SAIC executive David A. Kay, who was the chief U.N. weapons inspector in Iraq, and a former director who was a top CIA official.


It is not clear whether the earlier 2005 breach reported in the Post involved PHI or other personal information.

On January 20, 2009, SPAMfighter reported that SAIC had informed the Attorney General of New Hampshire of a data breach involving malware. The SPAMfighter report continues that SAIC sent a letter to many affected users to inform them about the potential compromise of personal information. (A portion of such personal information would have been deemed PHI had it been part of health-related material.)

The SPAMfighter report also discloses the following:

Furthermore, the current [2009] breach at SAIC is not the only one. There was one other last year (2008), when keylogging software managed to bypass SAIC’s malware detection system. That breach had exposed mainly business account information.

As of the date of this blog post, the “News Releases” section on the SAIC Web site has no reference to the 2011 Breach. Nor does the “SEC Filings” section under “Investor Relations” on the SAIC Web site indicate any recent SEC filing that discloses the 2011 Breach. 

Coincidentally, the SEC issued a release on October 13, 2011 containing guidelines for public companies regarding disclosure obligations relating to cybersecurity risks and cyber incidents. For SAIC, an $11 billion company, the actual costs of notification and remediation of the 2011 Breach may run into millions of dollars, yet the 2011 Breach may not be deemed a “material” reportable event for SEC purposes by its management.

It is likely that much more will be heard in the future about the mammoth 2011 Breach and its aftermath, which may give covered entities and their business associates valuable information and guidance to consider in identifying and confronting a future large PHI security breach. The 2011 Breach has not yet even appeared on the HHS List. The regulatory barriers preventing private actions under HIPAA/HITECH may be tested by the putative class action lawsuits. It will also be interesting to see whether the cooperation of SAIC with Tricare and DoD may wither in the face of the pressures of the lawsuits and potential controversy regarding the decision of SAIC not to provide credit monitoring and identity theft protection to affected individuals.

By Elizabeth Litten and Michael Kline

[Capitalized terms not otherwise defined in this Part 2 shall have the meanings assigned to them in Part 1.]


In an October 3, 2011 Securities and Exchange Commission (“SEC”) filing posted on its Web site, SAIC described itself as


a FORTUNE 500® scientific, engineering, and technology applications company that uses its deep domain knowledge to solve problems of vital importance to the nation and the world, in national security, energy and the environment, critical infrastructure, and health. The company’s approximately 41,000 employees serve customers in the U.S. Department of Defense, the intelligence community, the U.S. Department of Homeland Security, other U.S. Government civil agencies and selected commercial markets. Headquartered in McLean, Va., SAIC had annual revenues of approximately $11 billion for its fiscal year ended January 31, 2011.


The SAIC PHI breach, which potentially affected nearly 5 million individuals, was reported despite the fact that the PHI was contained on backup tapes used by the military health system, and despite the following explanation in the Public Statement:


The risk of harm to patients is judged to be low despite the data elements involved since retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure…  [Q and A] Q. Can just anyone access this data? A. No. Retrieving the data on the tapes requires knowledge of and access to specific hardware and software and knowledge of the system and data structure.


The Public Statement goes on to say the following in another answer:


After careful deliberation, we have decided that we will notify all affected beneficiaries. We did not come to this decision lightly. We used a standard matrix to determine the level of risk that is associated with the loss of these tapes. Reading the tapes takes special machinery. Moreover, it takes a highly skilled individual to interpret the data on the tapes. Since we do not believe the tapes were taken with malicious intent, we believe the risk to beneficiaries is low. Nevertheless, the tapes are missing and given the totality of the circumstances, we determined that individual notification was required in accordance with DoD guidance. [Emphasis supplied.]


The linchpin of SAIC’s final decision to notify all of the potentially affected individuals appeared to be the DoD guidance. Given SAIC’s position as an $11 billion contractor heavily dependent on DoD and other U.S. government contracts, as described above, it would appear that SAIC may not have had many practical alternatives but to notify beneficiaries.


SAIC conducted “careful deliberation” before reaching its result and indicated that the risk to beneficiaries was “low.” Had the DoD guidance not been a factor and had SAIC concluded that the case was one where an unlocked file or unencrypted data was discovered to exist, but it appeared that no one had opened such file or viewed such data, would SAIC’s conclusion have been the same? Would SAIC have come to the same conclusion as Nemours and decided to report?

What is clear is that the breach notice determination should involve a careful risk and impact analysis, as SAIC asserts it performed. Even the most deafening sound created by a tree crashing in the forest is unlikely to affect the ears of the airplane passengers flying overhead. Piping that sound into the airplane, though, is very likely to disgruntle (or even unduly panic) the passengers.


[To be continued in Part 3]

By Elizabeth Litten and Michael Kline

A recent public statement (the “Public Statement”) was published regarding a breach (the “2011 Breach”) of protected health information (“PHI”) of nearly 5 million military clinic and hospital patients that involved Science Applications International Corporation (NYSE: SAI) (“SAIC”). The 2011 Breach occurred in SAIC’s apparent role as a business associate and/or subcontractor for Tricare Management Activity, a component of Tricare, the military health plan (collectively, “Tricare”) for active duty service members of the U.S. Department of Defense (“DoD”).


According to the Public Statement, the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the Public Statement says that there is no financial data, such as credit card or bank account information, on the backup tapes.


The 2011 Breach is the largest single PHI security breach reported to date. The 2011 Breach highlights the decision-making process that covered entities and business associates should employ with respect to notifying the Department of Health and Human Services (“HHS”), other regulators and potentially affected individuals of a PHI breach.


The published “interim final rule” governing “Breach Notification for Unsecured Protected Health Information” (the “Breach Notification Rule”)  defines “breach” as “the acquisition, access, use or disclosure of protected health information [“PHI”] in a manner not permitted under subpart E of this part which compromises the security or privacy of the protected health information.” It further explains that “compromises the security or privacy of the protected health information means poses a significant risk of financial, reputational, or other harm to the individual.”  The Breach Notification Rule also defines the term “access” for purposes of the interim final rule as “the ability or the means necessary to read, write, modify, or communicate data/information or otherwise use any system resource.”


These definitions, reviewed in the context of several recent PHI breaches (including those “marchers in the parade” previously discussed on this blog), raise an important issue: at what point does “access” matter?   When is the mere “ability” to read PHI, without evidence that the PHI was actually read or was likely to have been read, enough to trigger the notice requirement under the Breach Notification Rule? Will covered entities provide notice out of an abundance of caution to report every unlocked or unencrypted data file, possibly flooding the HHS website that lists large PHI breaches (the “HHS List”) with potential breaches that have minimal or no likelihood of access and unduly alarming notified individuals? Could such reporting have the unintended effect of diluting the impact of reports involving actual theft and snooping?  


In this regard, an event reported on the Nemours Web site on October 7, 2011 (the “Nemours Report”), concerning a PHI security breach involving approximately 1.9 million individuals at a Nemours facility in Wilmington, DE, is relevant. The Nemours Report stated that three unencrypted computer backup tapes containing patient billing and employee payroll information were missing. The tapes reportedly were stored in a locked cabinet following a computer systems conversion completed in 2004. The tapes and locked cabinet were reported missing on September 8, 2011 and are believed to have been removed on or about August 10, 2011 during a facility remodeling project.

Significantly, the Nemours Report stated the following:

There is no indication that the tapes were stolen or that any of the information on them has been accessed or misused. Independent security experts retained by Nemours determined that highly specialized equipment and specific technical knowledge would be necessary to access the information stored on these backup tapes. There are no medical records on the tapes.

The Nemours Report reveals that, in spite of the low likelihood of access, Nemours not only disclosed the breach but also offered free credit monitoring, identity theft protection, and call center support to affected individuals.


If the analysis as to whether access “poses a significant risk of … harm” takes into account the likelihood that PHI was actually accessed, rather than simply whether a theoretical “ability or means” to read, write, modify, or communicate PHI existed at some point in time, perhaps the “possible breach” floodgates will not burst open unnecessarily.  


[To be continued in Part 2]

By Elizabeth Litten and Michael Kline


What was the highlight of the Macy’s® Thanksgiving Day Parade when we were kids? The Snoopy® float (shown below) was probably right up there, along with the Sesame Street® and Disney® floats. Spectators of the Protected Health Information (“PHI”) Breach Parade (and of the “silent brigade” of Business Associate breaches, discussed in this blog series on August 1, 2011) will be awed by the sight of the recent, somewhat bizarre, Business Associate (“BA”) breach involving Stanford Hospital’s emergency room data, as reported in the New York Times by Kevin Sack on September 8, 2011. The PHI of 20,000 emergency room patients seen in the Palo Alto, CA hospital reportedly somehow made its way from the hospital’s BA, Multi-Specialty Collection Services, to a public website used by students. The publicly posted information included names and diagnoses for patients who visited the emergency room during a 6-month period in 2009.


This PHI breach stands out for a couple of unusual aspects. First, the data was allegedly made publicly accessible in September of 2010 as a spreadsheet attached to a document on the Web site “Student of Fortune,” a site describing itself as “Your source for easy online homework help!” As reported in the Sack article: “Gary Migdol, a spokesman for Stanford Hospital and Clinics, said that the spreadsheet first appeared on the site on Sept. 9, 2010, as an attachment to a question about how to convert the data into a bar graph.” The PHI breach was purportedly discovered on August 22, 2011 by a Stanford Hospital patient and reported to the hospital. The fact that nearly a year had elapsed from the time of the breach to its reported discovery suggests that the PHI was


(i)   not recognized as “real” by viewers,

(ii)  not thought by viewers to be worth noting or reporting, and/or

(iii) not actually viewed by anyone during the year it was accessible to students seeking bar graph tutorials.


Nonetheless, the volume of patients affected, the sensitivity of the PHI data (more on that in a minute), the apparent lack of sufficient care by the BA, and the surprising nonchalance of whoever posted the PHI to be sifted and sorted by “Students of Fortune” accessing a publicly available Web site combine to make an attention-grabbing PHI breach event (the Snoopy float). 


As reported on a New York Times blog by Nick Bilton on September 8, 2011, Senator Richard Blumenthal (D-CT) introduced a bill, the Personal Data Protection and Breach Accountability Act of 2011, that, if passed, would impose strict storage and protection requirements on companies that store online data for more than 10,000 people. (Senator Blumenthal was previously highlighted in several postings in this blog series for his groundbreaking activities as Attorney General of Connecticut in investigations and enforcement actions against entities involved in PHI security breaches.)


While “Student of Fortune” was certainly not “storing” the emergency room PHI, the bill would likely affect BAs such as Multi-Specialty Collection Services. To the extent the Blumenthal bill imposes new or additional privacy and security provisions, Covered Entities and BAs handling large amounts of PHI would be subject to these provisions in addition to existing HIPAA/HITECH and state law requirements.


Back to the Snoopy float – the Stanford Hospital PHI breach (and the manner in which it was reported in the Sack article) stands out for a number of ironies. A large amount of sensitive PHI was accessible to the public, but obscurely so (only to Students of Fortune using a particular learning tool and astute enough to recognize, or care about, the sensitivity of the information). If the Stanford Hospital patient had not noticed and reported the PHI breach, would the breach ever have been noticed? Would any patient have been harmed? (If a tree falls in the forest when no one is present, does it make a sound?)


Even more ironic is the fact that one affected patient may actually have been harmed as a result of the breach reporting, rather than from the breach itself. The Sack article quotes (by name) a patient’s mother who “intercepted” the breach notice mailed from Stanford Hospital to her 21-year-old son (leaving the reader to wonder why Mom is opening her adult son’s mail and whether she was authorized to access his PHI). Mom is quoted as stating (i) that her son received psychiatric treatment at Stanford in 2009 and (ii) “My son, I can tell you [Kevin Sack], is fragile and confused enough that this would have sent him over the edge."  One can only hope that the disclosure of his "fragile" state in a national newspaper will not have a similar effect.  Perhaps, in this post-Facebook and Twitter age, we could all use reminders about what kind of information is private and sensitive, when we should report breaches of it, and with whom we should share it.  The Snoopy float is a good reminder.    


A final irony is that Michael Mucha, the Stanford Hospital Chief Information Security Officer at the time of the Stanford PHI breach, has written extensively and has been widely quoted regarding information security. He has been quoted as saying, “The biggest thing we [Stanford Hospital] focus on with all of this is control of the data.” Unfortunately, the Snoopy float PHI breach belies the level of control of the data that can be exercised by Stanford and other Covered Entities, even with safeguards in place.


This story will undoubtedly have further developments. It will be especially interesting to see what statement, if any, Stanford provides to the U.S. Department of Health and Human Services (“HHS”) about its PHI breach for posting on the HHS list of reported large breaches of unsecured PHI affecting 500 or more individuals.


[Capitalized items that have ® after their names may be registered trademarks of other entities as to which no claim is made.]


[Photo: The Snoopy Balloon floats along Central Park West in the 2000 Thanksgiving Day Parade. Source: About.com.]

Ambulance-chasing meets the age of electronic records. The husband-and-wife team of Ruben E. Rodriguez and Maria Victoria Suarez has been charged with conspiring with an ambulance company worker to steal personal identification information of individuals transported by Randle Eastern Ambulance Service, Inc., d/b/a American Medical Response (“AMR”), and to sell the information to various South Florida personal injury attorneys and clinics. This is the second time the couple has been charged with theft and sale of patient records. In a plea bargain agreement he later renounced, Rodriguez admitted to paying a hospital technologist for information from records of accident victims that he then sold to personal injury lawyers for a percentage of damage awards and settlements. See http://www.miamiherald.com/2010/03/07/1518101/coral-gables-couple-accused-again.html

According to the FBI press release, the couple faces a maximum of five (5) years’ imprisonment for the conspiracy and computer fraud offenses. They also face a mandatory two (2) year term of imprisonment for the aggravated identity theft offenses, to run consecutively to any other sentence.


Last week, the Federal Trade Commission (“FTC”) and the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) issued a joint letter (“Joint Letter”) (https://www.ftc.gov/system/files/ftc_gov/pdf/FTC-OCR-Letter-Third-Party-Trackers-07-20-2023.pdf) to approximately 130 hospitals and telehealth providers, warning that online tracking technologies integrated into their websites and/or mobile apps may be improperly disclosing personal health data to third parties. The FTC and OCR strongly urged monitoring of data flows to third parties via technologies integrated into websites, and warned that disclosure of such information without a consumer’s authorization can, in some circumstances, violate the FTC Act as well as constitute a breach of security under the FTC’s Health Breach Notification Rule.

You can view the full post on the Fox Rothschild Health Care Law Matters blog, here: FTC and OCR Issue Joint Website Tracking Warning Letter | Health Care Law Matters (foxrothschild.com)