While artificial intelligence (AI) helped detect and warn people about the COVID-19 pandemic, the same technology could fuel the spread of misinformation if the proper guardrails aren't in place, says the founder of a Canadian company that was among the first to detect the virus.
As doctors, scientists and policymakers consider how best to use AI to track a possible pandemic, Dr. Kamran Khan, an infectious disease specialist and founder of BlueDot, says the first step is "to make sure we're not creating any potential harm in the process."
Speaking to CTVNews.ca in June at the Collision tech conference in Toronto, where the potential dangers of AI were among the most discussed topics, Khan said "this is a problem that is not just a problem for a government alone," but for the whole of society.
Large language models (LLMs), algorithms that draw on massive sets of data to predict and generate text, can be subject to "hallucination," or making things up, Khan warned.
"We need to create … some guardrails around it, because as you can imagine, LLMs could amplify misinformation and that doesn't help us," he said.
BLUEDOT DETECTS COVID-19
Toronto-based BlueDot became renowned as one of the first to detect signs of what would later be named SARS-CoV-2, the coronavirus that causes COVID-19.
The company accomplished this by using AI to scour tens of thousands of articles a day in dozens of languages, which led its system to spot an article about a "pneumonia of unknown cause" on the morning of Dec. 31, 2019.
BlueDot sent out an alert to its clients the same day, nearly a week before the U.S. Centers for Disease Control and Prevention and the World Health Organization issued their own alerts.
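What that kind of early-warning scan might look like in miniature is sketched below. BlueDot's actual system is proprietary; every feed, pattern and function name here is invented for illustration, and a real system would work across dozens of languages rather than the English-only patterns shown.

```python
# Hypothetical sketch only: illustrates the general idea of scanning news
# text for outbreak signals. Nothing here reflects BlueDot's real pipeline.
import re

# Example phrases (English-only for brevity) that might signal an emerging illness.
SIGNAL_PATTERNS = [
    r"pneumonia of unknown (cause|origin)",
    r"unexplained respiratory illness",
    r"cluster of \w+ infections",
]

def flag_outbreak_signals(articles):
    """Return articles whose text matches any outbreak-signal pattern."""
    flagged = []
    for article in articles:
        text = article["text"].lower()
        if any(re.search(p, text) for p in SIGNAL_PATTERNS):
            flagged.append(article)
    return flagged

# The first item echoes the kind of report BlueDot's system surfaced.
articles = [
    {"source": "example-wire", "text": "Officials report a pneumonia of unknown cause."},
    {"source": "example-wire", "text": "City council approves new transit budget."},
]
for hit in flag_outbreak_signals(articles):
    print("ALERT:", hit["text"])
```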
In June, Harvard Public Health reported that after BlueDot sent its alert to clients, the company's customer base grew by 475 per cent.
LEVERAGING AI TO GET AHEAD OF EMERGING ILLNESSES
Much has been written about the benefits of AI, namely the speed with which it can help identify an emerging illness and send out early warning signals.
Khan said he founded BlueDot because he felt there was a need to be able to respond to infectious disease emergencies quickly and precisely, in ways that were not "necessarily possible in the academic arena."
"We should be leveraging the latest in technology and innovation to get ahead of this problem, which is not just one for Canada, but it's actually one for the rest of the world," he said.
But attempting to do so is "anchored in trust and there's been a lot of erosion of trust in the last several years," Khan added.
The Organisation for Economic Co-operation and Development said in April 2020 that, while AI is not a "silver bullet," policymakers should encourage the sharing of medical, molecular and scientific data to help AI researchers build tools that could assist the medical community, while also ensuring that AI systems are "trustworthy."
"Instead of doing manual data analysis, or starting the data labelling, or spending some time to consolidate the data coming from different resources, we have our AI modules that can process the data and generate some insightful information for the decision-makers in the context," Zahra Shakeri, an assistant professor of health informatics and information visualization at the University of Toronto, told CTVNews.ca in an interview on Sunday.
‘INTEGRATED MIX OF EXPERTS’ REQUIRED
Shakeri, who is also a member of U of T's Institute for Pandemics and director of the school's HIVE Lab, added that while AI could help improve the readiness and resiliency of the health-care system, "it cannot be the only tool that we can use to come up with the final conclusion."
Generative AI models, she said, work by trying to detect relationships between words, not necessarily what is factual.
And while certain text can be flagged to AI as misinformation, not everything will be detected.
One solution could be to have experts from different fields help determine what is true or to make AI models better able to detect misinformation. Increasing public awareness about the potential harms of the information produced by generative AI could also help.
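To see why flagged-claim matching is inherently incomplete, consider the naive filter sketched below. The claims, threshold and similarity measure are all invented for illustration, not drawn from anything Shakeri or Khan described; the point is simply that a model can only catch what resembles claims it has already been shown.

```python
# Hypothetical sketch: a naive misinformation filter that compares new text
# against a small list of claims already labelled false. Novel falsehoods
# that resemble nothing on the list pass through undetected.
from difflib import SequenceMatcher

KNOWN_FALSE_CLAIMS = [
    "drinking bleach cures the virus",
    "the outbreak was caused by a weather balloon",
]

def looks_like_known_misinformation(text, threshold=0.6):
    """True if text closely resembles a claim already flagged as false."""
    text = text.lower()
    return any(
        SequenceMatcher(None, text, claim).ratio() >= threshold
        for claim in KNOWN_FALSE_CLAIMS
    )

print(looks_like_known_misinformation("Drinking bleach cures the virus!"))  # True
print(looks_like_known_misinformation("A brand-new false claim"))  # False: slips through
```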
But Shakeri says more leadership and governance are needed, with researchers, policymakers and stakeholders from different sectors coming together to address the issue, much as happened with the advent of nuclear power.
"It might sound very straightforward to talk about these concepts, but when it comes to the implementation of the solutions, we really need to have more expertise, more support," she said.
Khan also says we need an "integrated mix of experts who understand the problem."
"Like myself as a physician, I'm an epidemiologist. We've got veterinarians, we have other people in public health sciences and then we've got to marry that with the data scientist, machine learning experts and the engineers who will build this whole infrastructure," he said.
It's a matter of "not getting caught flat-footed" and preparing now, he added.
"And I don't think we need to be in panic mode, but we need to use every day well, because the clock's ticking."