Artificial Intelligence (“AI”) refers to algorithmic tools that simulate human intelligence, mimic human actions, and can incorporate self-learning software. AI tech can reduce spending, suggest alternative treatments, and improve patient experience, diagnosis, and outcomes.

Consider virtual health assistants that deliver medication alerts and patient education, AI used to detect abnormalities in x-rays and MRIs, and AI that gives simultaneous feedback to patients and their physicians from data captured on patients’ smartphones and wearable devices. But with the advent of unchecked AI come concerns related to health information privacy and unconscious bias.

Privacy Concerns in AI Health Tech

AI advances could negatively impact health data privacy. The level of impact depends, to some extent, on how we define health data. “Health data” generally refers to information concerning the physical or mental health status of an individual and the delivery of or payment for health services to the individual. It incorporates data exchanged through traditional health technologies used by health care providers and health plans, as well as data exchanged through newer technologies, like wearable devices and virtual assistants.

Regulations do not yet fully address privacy concerns in AI health tech. HIPAA, for example, originally enacted in 1996, requires covered entities and business associates to implement standards to ensure the privacy and security of protected health information. Those standards, however, may not apply to tech companies that use ever-evolving third-party apps or algorithms to access the data. Experts have expressed concern about companies collaborating to re-identify data formerly considered de-identified.[1] Consider data brokers that use AI tech to mine personal health data and then sell it to third parties such as marketers, employers, and insurers. While a tech company’s relationship with a health insurance company falls under HIPAA’s scope, it is less clear what privacy laws apply to tech companies that limit their clientele to entities not directly or indirectly subject to HIPAA, such as life or disability insurers, marketers, and employers.
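To make the re-identification risk concrete, the short Python sketch below shows a simple linkage attack: joining a “de-identified” health data set to a public record on shared quasi-identifiers (ZIP code, birth date, and sex). All records and field names here are invented for illustration.

    # Hypothetical linkage (re-identification) attack; all data invented.
    deidentified_claims = [
        # Names removed, but quasi-identifiers remain.
        {"zip": "02138", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "hypertension"},
        {"zip": "60601", "birth_date": "1985-03-12", "sex": "M", "diagnosis": "diabetes"},
    ]

    public_records = [
        # E.g., a voter roll or marketing list that includes names.
        {"name": "Jane Doe", "zip": "02138", "birth_date": "1961-07-31", "sex": "F"},
    ]

    QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

    def reidentify(claims, public):
        """Join the two data sets on quasi-identifiers alone."""
        matches = []
        for claim in claims:
            for person in public:
                if all(claim[k] == person[k] for k in QUASI_IDENTIFIERS):
                    matches.append({"name": person["name"], "diagnosis": claim["diagnosis"]})
        return matches

    print(reidentify(deidentified_claims, public_records))
    # [{'name': 'Jane Doe', 'diagnosis': 'hypertension'}]

No single field in the health data set names the patient, yet the combination of quasi-identifiers is often enough to single a person out once a second data set enters the picture.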

Bias Concerns in AI Health Tech

AI health tech must be developed and trained responsibly. AI learns by identifying patterns in data collected over many years. If the collected data reflects historical biases against vulnerable populations, the resulting output will only exacerbate those biases. Biases creep into data sets in various ways. Consider whether the input data has incomplete data sets for “at-risk” populations (groups with a historically higher risk for certain health conditions or illnesses). Consider the potential for under-diagnosis of health conditions within certain populations. Correcting for these biases in the development of the data set and in training processes will help avoid biased results in AI health tech.
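As one hedged sketch of what checking for these gaps might involve, the Python snippet below audits a hypothetical training set for under-represented groups and outlier diagnosis rates before any model is trained; the group labels and thresholds are illustrative, not clinical standards.

    from collections import Counter

    # Hypothetical training records: (demographic_group, diagnosed_flag).
    training_data = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
        ("group_a", False), ("group_a", False), ("group_a", False), ("group_a", False),
        ("group_b", False), ("group_b", False),  # sparse and skewed
    ]

    def audit_representation(records, min_share=0.3, max_rate_gap=0.25):
        """Flag groups that are under-represented or have outlier label rates."""
        counts = Counter(group for group, _ in records)
        total = len(records)
        overall_rate = sum(flag for _, flag in records) / total
        for group, n in counts.items():
            rate = sum(flag for g, flag in records if g == group) / n
            if n / total < min_share:
                print(f"{group}: only {n}/{total} records -- possible coverage gap")
            if abs(rate - overall_rate) > max_rate_gap:
                print(f"{group}: diagnosis rate {rate:.0%} vs. overall {overall_rate:.0%}"
                      " -- possible under-diagnosis in the source data")

    audit_representation(training_data)
    # group_b: only 2/10 records -- possible coverage gap
    # group_b: diagnosis rate 0% vs. overall 40% -- possible under-diagnosis in the source data

A flag like either of these is a prompt to collect more representative data or reweight the set before training, not a verdict on its own.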

Companies should consider safeguards when employing their AI health tech. Employee training is paramount: while AI promises a host of benefits, those involved in its creation and use must be aware of the potential for bias. Companies should also scrutinize data integrity and built-in biases by examining the information the AI tech relies on and periodically rechecking the data it reviews, as sketched below. Lastly, diversifying those engaged in creating, testing, and using the health care tech could also decrease bias.
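One concrete form that rechecking could take (again a hedged sketch, with an illustrative metric and made-up numbers rather than a compliance standard) is a periodic comparison of the model’s miss rate across patient groups:

    # Hypothetical post-deployment audit: false-negative rate by group.
    # Records are (group, model_prediction, actual_outcome); values invented.
    predictions = [
        ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
        ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
    ]

    def false_negative_rate(records, group):
        """Share of actual positives in a group that the model missed."""
        positives = [(pred, actual) for g, pred, actual in records
                     if g == group and actual]
        missed = sum(1 for pred, _ in positives if not pred)
        return missed / len(positives) if positives else 0.0

    for group in ("group_a", "group_b"):
        print(f"{group}: false-negative rate {false_negative_rate(predictions, group):.0%}")
    # group_a: false-negative rate 0%
    # group_b: false-negative rate 67%

A persistent gap like the one above would be a signal to revisit the underlying data and retrain before the tool produces disparate outcomes.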

[1] https://www.ncvhs.hhs.gov/wp-content/uploads/2013/12/2017-Ltr-Privacy-DeIdentification-Feb-23-Final-w-sig.pdf