Recently, I attended a urology congress at the Amsterdam Medical Center (AMC), where a series of presentations was given to an audience consisting predominantly of medical professionals. Significant parts of the talks were devoted to automated cancer detection and treatment, e.g. in the prostate or the bladder. It struck me that the proposed solutions showed a great deal of congruence.
This congruence, beyond any doubt, is the result of the standardization of convolutional networks and their proven variants, like the popular VGG network for object detection or the U-Net for segmentation. These networks have been widely tested and evaluated and have been shown to give near-optimal or state-of-the-art results in their application areas. Furthermore, hardware and software vendors offer dedicated packages that deploy smoothly and execute efficiently on the prescribed GPUs or CPUs.
This democratization of high-tech applications is a great achievement of artificial intelligence. Only a superficial understanding is needed to apply sophisticated deep learning to medical problems. I was pleasantly surprised by a physician’s presentation that showed state-of-the-art results for cancer detection in a urology field. The work was of high quality, although the presenter was skilled neither in processing imaging data nor in the machine learning used to find the cancer.
At the conference, I noticed something else: the divide between technical people and medical personnel is not as pronounced as it was some years ago. In the recent past, physicians were worried about the big wave of AI rolling in, while the AI people concerned themselves primarily with the performance of their solutions and much less with their practicality.
We’re close to the point where the democratizing effect of standard AI solutions has found a sufficient base in the ever-growing group of medical experts embracing this new technology. As university researchers, we’re increasingly confronted with requests to enhance the methodology of a solution featured in a presentation of our work. These trends imply that AI researchers have to move on to the second wave of AI.
Indeed, there’s still much to be gained. Most of the established, successful solutions in analysis and decision-making are based on a combination of detecting something (like a cancerous polyp) and then classifying it (is it dangerous or malignant?). The U-Net is one example, combining detection and segmentation in a single network. This approach works well for still images but doesn’t scale to moving video.
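To make the detect-then-classify pattern concrete, here’s a minimal sketch in PyTorch. It’s purely illustrative and assumes nothing about the systems presented at the congress: the class name, layer sizes and the 0.5 threshold are hypothetical, and a real system would use a U-Net or a comparable network in place of the toy detector.

```python
import torch
import torch.nn as nn

class DetectThenClassify(nn.Module):
    """Toy two-stage pipeline: localize a suspicious region, then classify it.
    (Hypothetical sketch; layer sizes and threshold are illustrative only.)"""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stage 1: a tiny fully convolutional detector producing a per-pixel
        # probability map; in practice a U-Net would take its place.
        self.detector = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
        )
        # Stage 2: a classifier that judges the detected region
        # (e.g. benign vs. malignant).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(3, num_classes),
        )

    def forward(self, image: torch.Tensor):
        prob_map = self.detector(image)            # where is something?
        masked = image * (prob_map > 0.5).float()  # keep only detected pixels
        logits = self.classifier(masked)           # what is it?
        return prob_map, logits

# Usage on a single 256x256 RGB frame.
model = DetectThenClassify()
frame = torch.rand(1, 3, 256, 256)
mask, logits = model(frame)
```

The point is the split itself: one network answers “where is something?” and a second answers “what is it?” – a structure that works frame by frame but doesn’t by itself exploit the temporal information in a video stream.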
The video domain is essentially different from high-quality still images: a stream of pictures arrives over time, at 30 frames per second for example, while each individual frame is of lower quality than a photograph. AI for this domain is still at an immature stage. In medicine, there are multiple applications involving video, like endoscopic imaging and interventional imaging with catheters.
Another important emerging area of AI-based applications in the medical domain is “explainable” AI. At present, many users of AI have no idea what the network is learning from all the images and thus don’t know what causes it to fail in decision-making. Evidently, this topic is highly relevant for the medical boards that approve medical equipment. Let the companies show what their systems learn from the data, and when and how the network fails in decision-making.
The high-tech industry, too, stands to benefit from a second generation of AI systems. For equipment manufacturers, the robustness and safety of using AI are crucial. When a failure occurs – and this will certainly happen – it shouldn’t have dramatic consequences (such as an autonomous car overlooking traffic and making wrong decisions). These safety aspects are often neglected or not deeply analyzed in system design. Finally, for equipment engineering and industry in general, cost always plays an important role. Embedded networks on an affordable platform also belong to this second generation of AI systems and applications.
All these aspects present a clear roadmap for researchers and system developers in the Netherlands and Belgium for the years ahead.