After seeing users attempt to search for what that suspicious bump or mole on their skin might be, Google developed an AI tool to help them narrow down the possibilities.
The web-based tool, which Google shared a preview of at its annual developer conference, would let people upload an image of their skin and see the three most likely results.
Users take a few images of the spot and answer questions about how long it has been there, their skin type, and other factors that could affect a diagnosis. The model draws on 288 different conditions to give people a list of possible matches.
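Google hasn't published implementation details of this step, but as a rough illustration, the final stage of such a system reduces to ranking a classifier's scores over the 288 conditions and surfacing the top three. The function and condition names below are hypothetical, a minimal sketch rather than Google's actual code:

```python
def top_matches(scores, condition_names, k=3):
    """Return the k highest-scoring conditions, best match first."""
    ranked = sorted(zip(condition_names, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Illustrative only: 288 placeholder condition names and made-up scores.
conditions = [f"condition_{i}" for i in range(288)]
scores = [0.001] * 288
scores[12], scores[200], scores[45] = 0.61, 0.22, 0.09

for name, score in top_matches(scores, conditions):
    print(name, score)
```

In a real system the scores would come from the deep-learning model's output over the images and questionnaire answers; only the ranking step is shown here.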
Like other symptom checkers, the tool can only provide suggestions — it hasn’t been cleared by the Food and Drug Administration as a diagnostic device.
Google built a deep-learning system using de-identified data from 65,000 cases, along with images of skin concerns and examples of healthy skin. Importantly, it included examples from people of different ages, races, and skin types, with the aim of ensuring the model works across demographics.
Given disparities in which patients have the most access to care, and even in how clinicians diagnose skin conditions, it would be easy for bias to creep into a model.
Because of this, Yuan Liu, technical lead on the project, emphasized the importance of thoughtful model development.
“We took extra precautions to obtain data from multiple sources and sourced diverse clinicians and consultants to guide the research and product development, as well as to annotate the data,” she said in a Tuesday press conference. “Because of those challenges, it took us over 3 years of fundamental research to develop a good model.”
The images came from a variety of sources, including licensed clinical partners and cases that had been donated directly, said Dr. Peggy Bui, a project manager at Google Health. Teams of more than 100 dermatologists reviewed them, with the goal of having multiple dermatologists label each case to train a more accurate model.
Users can save or delete their data, but they will also have the option to “donate” their de-identified results to improve the model’s accuracy.
Although the tool hasn’t yet been released to the public in the U.S., Google has published two papers on its performance so far. The first, published in Nature, found its performance comparable to that of six dermatologists when tested on 963 validation cases. A second paper, a randomized study published in JAMA Network Open in which 20 physicians and 20 nurse practitioners evaluated skin conditions either with the algorithm or unassisted, suggested the algorithm could improve primary care providers’ ability to diagnose skin conditions.
But both studies were retrospective, meaning that they didn’t evaluate how physicians used the tool in real time.
How the results are communicated to patients is also important. Google had dermatologists review the language for what they would want people to know, such as whether a condition might be contagious. For potentially serious conditions, such as melanoma, the tool flags the result. The algorithm also flags cases where it is still “learning,” indicating that it may be less certain about a result.
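The article doesn't describe how these flags are computed; one plausible mechanism, sketched below with hypothetical names and an arbitrary threshold, is to always flag a short list of serious conditions and to mark a result as "still learning" when the model's confidence in it falls below a cutoff:

```python
# Hypothetical illustration of result flagging; the condition list,
# threshold, and wording are assumptions, not Google's implementation.
SERIOUS_CONDITIONS = {"melanoma"}

def result_flags(prediction, confidence, threshold=0.5):
    """Return warning flags to show alongside a predicted condition."""
    flags = []
    if prediction in SERIOUS_CONDITIONS:
        flags.append("potentially serious: see a clinician")
    if confidence < threshold:
        flags.append("still learning: lower certainty")
    return flags

print(result_flags("melanoma", 0.8))
print(result_flags("eczema", 0.3))
```

Separating the severity check from the confidence check means a high-confidence melanoma result is still flagged, which matches the behavior described above.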
The tool is currently CE marked in Europe as a Class I medical device, a category for devices with lower perceived risk. It’s not clear whether the company plans to seek FDA clearance in the future.
“There isn’t really a clear cut path for this sort of technology,” Bui said.
Google plans to release the tool in the U.S. later this year as a pilot. While it’s part of a broader effort by the company to grow its footprint in healthcare, it is also Google’s first foray into consumer-facing medical devices.
But while dermatology is less well-trodden than other specialties, such as cardiology, Google will still face a few competitors. VisualDx has built a similar app for checking skin conditions, and Miiskin has developed a tool that helps people map moles and marks to see whether they’ve changed over time, though it does not suggest potential conditions.