Dr. Albert Hsiao and his colleagues at UC San Diego Health had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on chest X-rays. When the coronavirus hit the United States, they decided to see what it could do.
The researchers quickly deployed the program, which overlays X-ray images with colored patches where lung damage or other signs of pneumonia appear. It has since been applied to more than 6,000 chest X-rays and is providing some value in diagnosis, said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.
His team is one of several across the country that have applied AI programs to the COVID-19 crisis, performing tasks such as deciding which patients face the greatest risk of complications and which can be safely channeled into less intensive care.
Machine learning programs churn through millions of pieces of data to detect patterns that can be hard for human doctors to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So although they often seem helpful, rolling the programs out in the midst of a pandemic could be confusing for doctors and dangerous for patients, some AI experts warn.
"AI is being used for questionable things at the moment," he said Dr. Eric Topol, director of the Translational Research Institute of Scripps and author of several books on health IT.
Topol pointed to a system created by Epic, a major provider of electronic health records software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.
Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published for independent researchers to evaluate, but it was "designed to help clinicians make treatment decisions and is not a substitute for their judgment," said James Hickman, a software developer on Epic's cognitive computing team.
Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.
"My intuition is that it is a little good, bad and ugly," he said. Eric Perakslis, a data science researcher at Duke University and a former director of information for the Food and Drug Administration. "Research in this scenario is important."
Nearly $2 billion was invested in companies touting advances in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.
At least three healthcare AI technology companies have landed funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung imaging company, according to Rock Health.
Overall, though, the use of AI in everyday clinical care is far less common than the hype around the technology would suggest. Still, the coronavirus crisis has inspired some hospital systems to accelerate promising applications.
UCSD accelerated its AI imaging project, launching it in just two weeks.
Hsiao's project, with research funding from Amazon Web Services, the University of California and the National Science Foundation, runs every chest X-ray taken at the hospital through an AI algorithm. Although no data on the implementation has been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Dr. Christopher Longhurst, chief information officer at UC San Diego Health.
"The results so far are very encouraging and we are not seeing any unintended consequences," he said. "Interestingly, we feel it is useful, not harmful."
AI has advanced further in imaging than in other areas of clinical medicine because radiological images contain vast amounts of data for algorithms to process, and more data makes the programs more effective, Longhurst said.
But even as AI specialists work to get the technology to do things like predict sepsis and acute respiratory distress (Johns Hopkins University researchers recently won a National Science Foundation grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into lower-risk areas, such as hospital logistics.
In New York City, two major hospital systems are using AI-enabled algorithms to help decide when and how patients should move to another phase of care or be sent home.
At Mount Sinai Health System, an artificial intelligence algorithm identifies which patients may be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.
Freeman described the AI suggestion as a "conversation starter," meant to help the doctors working on patient cases decide what to do. The AI isn't making the decisions.
NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Dr. Yindalon Aphinyanaphongs, who leads the predictive analytics team at NYU Langone.
The model will be run in a four-to-six-week trial with patients randomized into two groups: one whose doctors receive the alerts and another whose doctors do not. The algorithm is expected to help doctors by generating a list of predictions about which patients may be at risk of complications after being admitted to the hospital, Aphinyanaphongs said.
Some health systems are wary of launching a technology that requires clinical validation in the midst of a pandemic. Others say they did not need AI to deal with the coronavirus.
Stanford Health Care is not using AI to manage hospitalized COVID-19 patients, said Ron Li, the center's medical informatics director for AI clinical integration. The San Francisco Bay Area has not seen the expected surge of patients that would have provided the mass of data needed to ensure that AI works on a given population, he said.
Outside the hospital, AI-enabled risk-factor modeling is being used to help health systems keep tabs on patients who aren't infected with the coronavirus but might be susceptible to complications if they contract COVID-19.
At Scripps Health, doctors are stratifying patients to assess their risk of contracting COVID-19 and experiencing severe symptoms, using a risk-scoring model that considers factors such as age, chronic conditions and recent hospital visits. When a patient scores 7 or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
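For readers curious what such a point-based model looks like in practice, here is a minimal sketch in Python. The factor weights, age cutoffs and the way each factor contributes points are illustrative assumptions, not Scripps' actual model; only the additive point structure and the 7-point outreach threshold come from the description above.

```python
# Hypothetical point-based risk score in the spirit of the Scripps model.
# All weights below are invented for illustration.

def covid_risk_score(age, chronic_conditions, recent_hospital_visits):
    """Return a simple additive risk score from a few patient factors."""
    score = 0
    if age >= 65:
        score += 3                            # older patients score higher
    elif age >= 50:
        score += 2
    score += 2 * chronic_conditions           # e.g., diabetes, COPD
    score += min(recent_hospital_visits, 3)   # cap the visit contribution
    return score

def needs_outreach(score, threshold=7):
    """Patients at or above the threshold get a call from a triage nurse."""
    return score >= threshold
```

With these invented weights, a 70-year-old with two chronic conditions and one recent hospital visit scores 3 + 4 + 1 = 8 points and crosses the outreach threshold; a healthy 40-year-old scores 0.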
While emergencies offer unique opportunities to test advanced tools, health systems must ensure that doctors are comfortable with them, and must use the tools cautiously, with extensive testing and validation, Topol said.
"When people are at the height of the battle and overwhelmed, it would be great to have an algorithm to support them," he said. "We just need to make sure that the algorithm and the AI tool are not misleading, because lives are at risk here."
Gold writes for Kaiser Health News, an editorially independent program of the Kaiser Family Foundation. It is not affiliated with Kaiser Permanente.