Alex Engler, author of a Brookings Institution report, “A guide to healthy skepticism of artificial intelligence and coronavirus,” wrote recently at tech mag Wired that we need more realistic expectations than “AI will save the day”:
As the world confronts the outbreak of coronavirus, many have lauded AI as our omniscient secret weapon. Although corporate press releases and some media coverage sing its praises, AI will play only a marginal role in our fight against Covid-19. While there are undoubtedly ways in which it will be helpful—and even more so in future pandemics—at the current moment, technologies like data reporting, telemedicine, and conventional diagnostic tools are far more impactful.

So how can you avoid falling for the AI hype? In a recent Brookings Institution report, I identified the necessary heuristics for a healthy skepticism of AI claims around Covid-19. Let’s start with the most important rule: always look to the subject matter experts. If they are applying AI, fantastic! If not, be wary of AI applications from software companies that don’t employ those experts.

Data is always dependent on its context, which takes expertise to understand. Does data from China apply to the United States? How long might exponential growth continue? By how much will our interventions reduce transmission? All models, even AI models, make assumptions about questions like these. If the modelers don’t understand those assumptions, their models are more likely to be harmful than helpful.

Alex Engler, “Artificial Intelligence Won’t Save Us From Coronavirus” at Wired
He offers several specific cautions, including: 1) Be wary of claims for a high accuracy rate from AI systems in the current environment and 2) In general, real-life circumstances usually degrade AI performance because no one can predict all the little problems that will arise. He’s positive about AI in principle but counsels, “its advantages need to be hedged in a realistic understanding of its limitations.” It’s a system, not a supermachine.
On March 16, 2020, the White House asked AI experts to mine 29,000 scholarly papers for data that might provide useful information to help fight the coronavirus pandemic (COVID-19). The collective tome was assembled with the help of Microsoft and Alphabet, the parent company of Google.
It’s unclear whether the AI powerhouses Microsoft and Deep Mind, another Alphabet company, tried cracking the problem themselves. Deep Mind was the developer of AlphaGo, the AI computer program that defeated the world champion in Go. Media lauded the success of AlphaGo as a potential solution to half the world’s problems.
A Deep Mind slogan is “What if solving one problem could unlock solutions to thousands more?” Think of the good will generated for Deep Mind if the company had successfully examined the 29,000 papers and found useful coronavirus information! The same for Microsoft. Having first access, both companies may have tried and failed. It could be that, after Microsoft and Deep Mind failed, the decision was made to pass the buck to give the rest of the world a chance.

Robert J. Marks, “Coronavirus: Is data mining failing its first really big test?” at Mind Matters News
Like Engler, Marks stresses that someday AI methods like data mining may be really helpful. But first we need to be sure of the relationship between AI’s capabilities and the problems we are trying to solve.
A technology writer recently listed the problems facing the use of AI:
I find that AI has not yet been impactful against COVID-19. Its use is hampered by a lack of data, and by too much noisy and outlier data. Overcoming these constraints will require a careful balance between data privacy and public health concerns, and more rigorous human-AI interaction. It is unlikely that these will be addressed in time to be of much help during the present pandemic. Instead, AI may “help with the next pandemic”. In the meantime, gathering diagnostic data on who is infectious will be essential to save lives and limit the economic havoc due to containment.

Wim Naudé, “Artificial Intelligence against COVID-19: An Early Review” at Towards Data Science (April 1, 2020)
An Organizational Behaviour prof reminds us of the fundamental difference between human learning and machine learning:
Indeed, the COVID-19 crisis will likely expose some of the key shortfalls of AI. Machine learning, the current form of AI, works by identifying patterns in historical training data. When used wisely, AI has the potential to exceed humans not only through speed but also by detecting patterns in that training data that humans have overlooked.
However, AI systems need a lot of data, with relevant examples in that data, in order to find these patterns. Machine learning also implicitly assumes that conditions today are the same as the conditions represented in the training data. In other words, AI systems implicitly assume that what has worked in the past will still work in the future.

Matissa Hollister, “AI can help with the COVID-19 crisis – but the right human input is key” at World Economic Forum
Don’t humans do that too? Yes, she says, but also notes,
Humans have an advantage over AI, though. We are able to learn lessons from one setting and apply them to novel situations, drawing on our abstract knowledge to make best guesses on what might work or what might happen. AI systems, in contrast, have to learn from scratch whenever the setting or task changes even slightly.
The COVID-19 crisis, therefore, will highlight something that has always been true about AI: it is a tool, and the value of its use in any situation is determined by the humans who design it and use it. In the current crisis, human action and innovation will be particularly critical in leveraging the power of what AI can do.

Matissa Hollister, “AI can help with the COVID-19 crisis – but the right human input is key” at World Economic Forum
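The shortfall Hollister describes can be sketched in a toy example: a rule “learned” from historical data works perfectly in the world it was trained on and fails completely when the underlying relationship changes, as in a crisis. Everything here (the data, the labeling rules, the threshold classifier) is synthetic and purely illustrative, not a model of any real COVID-19 system.

```python
# Toy illustration of distribution shift: a pattern learned from
# "historical" data stops working when conditions change.
# All data is synthetic; this is a sketch, not a real epidemiological model.
import random

random.seed(42)

def make_data(n, rule):
    """Generate (feature, label) pairs under a given labeling rule."""
    return [(x, rule(x)) for x in (random.random() for _ in range(n))]

# "Historical" world: high feature values indicate the outcome.
train = make_data(1000, lambda x: int(x > 0.5))

# "Learning": grid-search the threshold that best fits the training data.
best_t = max((t / 100 for t in range(101)),
             key=lambda t: sum(int(x > t) == y for x, y in train))

def accuracy(data, t):
    """Fraction of examples the learned threshold rule gets right."""
    return sum(int(x > t) == y for x, y in data) / len(data)

# In-distribution: fresh data drawn from the same world. Near-perfect.
print(accuracy(make_data(1000, lambda x: int(x > 0.5)), best_t))

# The "crisis" flips the underlying relationship; the old pattern fails.
print(accuracy(make_data(1000, lambda x: int(x < 0.5)), best_t))
```

The model never “understands” why high values predicted the outcome; it only memorized that they did. When the world changes, a human can reason from the new context, while the learned rule can only keep applying the old pattern, which is exactly the gap Hollister identifies.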
On that view, perhaps the most valuable thing we can do with AI right now is to define more rigorously what we would like it to do the next time a disaster on this scale happens, and to aim for preparedness.
Further reading: Coronavirus: Is data mining failing its first really big test? Computers scanning thousands of papers don’t seem to be providing answers for COVID-19. If Alphabet’s Deep Mind or Microsoft had successfully data mined the 29,000 papers and found useful coronavirus information, that would be pretty impressive. But they appear to be giving others a chance to try instead, raising issues once again about the value of data mining in medicine. (Robert J. Marks)