In 1996, the Journal of the American Medical Association published a study of nuns and their distinctive writing styles. Before joining the School Sisters of Notre Dame, young women were asked to write brief autobiographies. Many decades later, researchers studied their medical information, autobiographies, and other personal attributes.

Researchers noticed that distinctions in the nuns’ writing styles “predicted with uncanny accuracy” which of them would become severely afflicted with Alzheimer’s disease 60 years later.1

Results like these and advancements in computing since 1996 inspire further inquiry: Can we use personal information to predict future health conditions?

The State of AI-based Health Forecasting

Today, sophisticated computers can review monumental amounts of data and identify complex patterns. Machine learning takes ordinary computing further by using adaptive algorithms that improve as they process information.

Many in the health care field are familiar with the potential of artificial intelligence (AI) as a diagnostic partner: helping physicians diagnose breast cancer,2 hip fractures,3 and skin cancer.4 Certain AI diagnostic tools are characterized as “black box” algorithms, meaning the underlying logic of the computer’s decision-making is not known to the patients or providers who use them.5

If the prospect of private companies using personal data to forecast medical conditions seems futuristic, it’s worth considering how health forecasting is already being used.

Several companies, including LexisNexis, HBI Solutions, and Milliman, aggregate personal and medical information and generate “risk scores” to predict which patients are at greater risk for opioid overdose. According to Politico, “health insurance giant Cigna and UnitedHealth’s Optum are also using risk scores” for this purpose. Aside from health records, risk scores can also draw from “housing records, and even information about a patient’s friends.”6

In 2018, Google announced technology that interprets eye scans to predict future cardiac events, like heart attacks.7 While one might imagine this technology being confined to the clinic, it may not be: video conferencing, face-recognition software, and photo storage applications scan eyes routinely. Other ophthalmological research by Johns Hopkins and the University of Wisconsin revealed markers in retinal photographs associated with cognitive decline over a 20-year period.8 Predictive assessments may just as readily be built into health apps or incorporated into online advertising (for example, targeting ads for vitamins or prescription drugs to consumers based on markers in their eye scans).

Facebook also uses personal data to evaluate future health risks: it scans user posts with AI tools to forecast suicide risk. When Facebook’s AI predicts an imminent risk that a user will harm themselves or others, Facebook has contacted local first responders to intervene.9

Developing interventions for overdose, heart attack, and suicide is a hugely beneficial endeavor. Still, it is worthwhile to consider what protections exist against 1) AI tools that are ineffective or unreliable; and 2) the use of health forecasts to discriminate against the “future unhealthy.”

Regulation of Predictive AI

In December 2017, the Food and Drug Administration (FDA) announced its intention to regulate certain AI clinical decision-support tools as “medical devices.”10 More than 40 AI medical devices have already been cleared by the FDA.11 However, AI-based tools that allow “for the provider to independently review the basis for the recommendations” are excluded from the FDA’s regulation.12 In short, not all AI-based tools will receive FDA review.

New AI innovations promise real help in the face of clinical uncertainty. At the same time, health care organizations must grapple with potential downsides.

For example, when medical specialists and patients reviewed the cancer treatment recommendations of IBM’s Watson for Oncology, they identified “multiple examples of unsafe and incorrect treatment recommendations.”13 With respect to AI tools that develop risk scores based on personal information, commentators have raised concerns over the potential use of inaccurate or stale personal data, such as outdated addresses or employment information.

Ethicists are also concerned that AI tools, because they analyze data reflecting the status quo, may unwittingly reinforce existing inequities. In 2019, Science published a study reviewing Optum’s AI tool for determining which hospital patients should be referred into personalized care programs. The tool assigned risk scores to patients, with higher risk scores qualifying patients for additional care. The authors concluded:

At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%.
The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise.14
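
The mechanism the authors describe can be illustrated with a toy example. The Python sketch below is purely hypothetical and is not Optum’s actual model: the groups, severity scale, and per-unit spending figures are invented. It simply shows how ranking patients by predicted cost, rather than by illness, skews referrals when one group’s care costs less per unit of sickness.

```python
# Hypothetical illustration of proxy bias; this is NOT Optum's model.
# Two groups have identical illness severities, but Group B historically
# receives less care, so each unit of illness generates lower spending.

patients = [
    ("A1", "Group A", 9), ("A2", "Group A", 6), ("A3", "Group A", 3),
    ("B1", "Group B", 9), ("B2", "Group B", 6), ("B3", "Group B", 3),
]  # (patient_id, group, illness severity on a 0-10 scale)

COST_PER_SEVERITY_UNIT = {"Group A": 1000, "Group B": 600}  # invented figures

def cost_based_risk(group: str, severity: int) -> int:
    """A 'risk score' that predicts spending, not sickness."""
    return severity * COST_PER_SEVERITY_UNIT[group]

# Refer the three highest-scoring patients to the personalized-care program.
ranked = sorted(patients, key=lambda p: cost_based_risk(p[1], p[2]), reverse=True)
for pid, group, severity in ranked[:3]:
    print(pid, group, "severity", severity, "score", cost_based_risk(group, severity))

# Referred: A1, A2, and B1. B2 is passed over even though B2 is exactly as
# sick as A2 -- the ranking tracks dollars spent, not illness.
```

As the quoted passage notes, the fix lies in what the algorithm is asked to predict, not merely in how it weighs its inputs.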

To Optum’s credit, it collaborated with the researchers behind this study, made changes to its algorithm, and substantially reduced bias in the tool’s risk scoring.15

AI Tools and Risk

Once an AI tool is approved by the FDA or deemed not a medical device, the question becomes whether health care organizations are willing to take the practical and legal risks – and whether they accept the price tag. Customers may include providers, payors, and data aggregators. The legal implications vary for each.

Providers may deploy AI tools as part of their preventive health services and to maximize value-based payment incentives. Obtaining informed consent and avoiding medical malpractice will be principal considerations. For AI tools that are self-teaching and “black box,” the inner workings underlying the tool’s treatment recommendation will not be readily available. Providers may prefer tools with greater transparency, so physicians are in the best position to answer patient questions about the tool’s recommendation.

At the same time, Wisconsin’s informed consent law, Wis. Stat. section 448.30, may provide some leeway, because it does not compel technocratic disclosures: “The physician’s duty to inform the patient under this section does not require disclosure of … detailed technical information that in all probability a patient would not understand.”

In the event an AI tool fails to provide competent medical predictions, plaintiffs may seek recovery under product liability or medical malpractice theories. Product liability claims will likely encounter difficulty due to the physician’s adoption of the AI results before proceeding with treatment.

On the other hand, while medical malpractice suits are a possibility, health care providers in Wisconsin may see this risk as part of their organization’s ordinary course of business. Every medical device in hospitals today, from robot-assisted surgery systems to MRI machines, was at one time a “cutting edge” technology. Additionally, providers insure against physician decisions that fall short of the standard of care, and noneconomic damages in Wisconsin malpractice suits are capped at $750,000.16 To encourage adoption of their products, manufacturers could also offer indemnification to health care organizations if the AI’s results become the subject of litigation.

Here, the practical and reputational impacts tend to supersede legal considerations: Is this tool good for patients? Is there sufficient confidence in the product for providers to associate their names with it?

Exploring Boundaries

If health forecasting “risk scores” are adopted by insurers and other payors, the scores may be employed as part of targeted marketing campaigns, incorporated into prior authorization criteria, or used to underwrite large group employer plans.

However, under the Affordable Care Act, insurers cannot use health factors, including the risk of future health conditions, in underwriting for individual and small group members.17 In the large group space, although the Genetic Information Nondiscrimination Act prohibits the use of genetic information to discriminate against a person purchasing insurance, there is no similar prohibition on predictive algorithms based on nongenetic data.18

Perhaps most open-ended is the question of what boundaries exist for data aggregators who may employ AI tools to forecast future health. Unlike most health care organizations, data aggregators fall outside HIPAA’s definition of a “covered entity,” so its restrictions do not reach them.19

Meanwhile, Google, Facebook, or Amazon may hold substantial personal health data (e.g., purchases of pregnancy tests, searches for symptoms). Might hospital employers like to know the opioid abuse risk score for potential hires who would administer medications?

While several employment laws prohibit discrimination based on current and previous medical conditions, in general, these laws do not contemplate discrimination based on predictions of future medical problems.20

Practical Takeaways for Health Care Organizations

The American Medical Association (AMA) adopted its policy on “Augmented Intelligence” in 2018.21 Health care organizations can use key points from the AMA’s policy when assessing whether to implement an AI-based health forecasting tool:

  • Is the underlying logic of the tool transparent so providers can explain its use to patients?
  • Does the tool conform to leading standards for reproducibility with respect to its promised benefits?
  • Has the maker identified and taken steps to address bias and avoid exacerbating health care disparities?
  • How does the tool respect patients’ and other individuals’ privacy interests? What protections are in place to secure data the tool reviews?

In addition, health care organizations would benefit from asking:

  • Is the tool FDA approved? If not, is the only evidence of effectiveness from the company’s studies on its own product?
  • Are the datasets the AI uses to generate its predictions valid, or are the data prone to being outdated or unverified?
  • Is the manufacturer willing to indemnify the customer if the AI’s assessment becomes the subject of a malpractice claim?

Conclusion: Challenging Decisions Ahead

AI-based tools will drive new health care possibilities in the next decade. While AI may not unravel the supposed relationship between writing style and Alzheimer’s in that time, health care organizations will likely need to make challenging decisions about whether to implement novel forecasting tools. Given the nascent state of regulation, they would benefit from setting internal standards for adoption.

This article was originally published on the State Bar of Wisconsin’s Health Law Blog. Visit the State Bar sections or the Health Law Section web pages to learn more about the benefits of section membership.

Endnotes

1 Gina Kolata, “Research Links Writing Style to the Risk of Alzheimer’s,” New York Times, Feb. 21, 1996.

2 Scott Mayer McKinney, Marcin Sieniek, Shravya Shetty, et al., “International evaluation of an AI system for breast cancer screening,” Nature 577, 89–94 (2020).

3 Mats Geijer et al., “A computer-assisted systematic quality monitoring method for cervical hip fracture radiography,” Acta Radiologica Open, Dec. 5, 2016.

4 A. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542(7639):115-118 (2017).

5 W. Nicholson Price II, “Artificial Intelligence in Health Care: Applications and Legal Implications,” The SciTech Lawyer 14, no. 1 (2017).

6 Mohana Ravindranath, “How your health information is sold and turned into ‘risk scores,’” Politico, Feb. 3, 2019.

7 Bill Siwicki, “Google AI now can predict cardiovascular problems from retinal scans,” Healthcare IT News, Feb. 19, 2018.

8 J.A. Deal et al., “Retinal signs and 20-year cognitive decline in the Atherosclerosis Risk in Communities Study,” Neurology, March 27, 2018.

9 Martin Kaste, “Facebook Increasingly Reliant on A.I. To Predict Suicide Risk,” National Public Radio, Nov. 17, 2018.

10 Statement from FDA Commissioner Scott Gottlieb, M.D., on advancing new digital health policies to encourage innovation, bring efficiency and modernization to regulation, Dec. 6, 2017.

11 E.J. Topol, “High-performance medicine: the convergence of human and artificial intelligence,” Nat. Med. 2019; 25:44–56.

12 FDA Statement, Dec. 6, 2017.

13 Casey Ross, “IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show,” Stat News, July 25, 2018; Daniela Hernandez, “IBM Has a Watson Dilemma,” Wall Street Journal, Aug. 11, 2018.

14 Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science 366, 447–453 (2019).

15 Heidi Ledford, “Millions of black people affected by racial bias in health-care algorithms,” Nature, Oct. 26, 2019.

16 Wis. Stat. §893.55 (2017-18).

17 PHSA § 2701(a)(1)(A), as amended by PPACA, Pub. L. No. 111-148, § 1201 (2010).

18 Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233 (2008), 42 U.S.C. §2000ff.

19 Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191 (Aug. 21, 1996); 45 CFR §164.500.

20 Americans with Disabilities Act of 1990 (ADA), 42 U.S.C. §§ 12101-12213 (2018). One notable exception is pregnancy.

21 Augmented Intelligence in Health Care H-480.940, American Medical Association, 2018.