Peter de With is a professor of video coding and analysis at Eindhoven University of Technology.

1 July

At present, video and image sensing technology advances amazingly fast. For example, consider the imaging capabilities of the mobile phone. High-definition imaging was implemented in the blink of an eye, as were embedded video cameras for front-side video communication and back-side picture and video capture. Now, it seems that component suppliers to mobile phone manufacturers have the third round of innovations ready: getting more light into the device. More light means more information.

The lens size of a mobile phone camera is inherently limited, but given the ever-declining price of CMOS sensors, implementing a stereo camera doesn’t add much to the bill. Alternatively, a faster sensor can take more pictures per second.

Both open up a world of possibilities. With a stereo camera, you can compute the depth of a scene, that is, the distance between the camera and the object of interest. This approach enables 3D object processing and modeling once 3D reconstruction software is added. With a faster sensor, you can vary the capturing speed and trade off the temporal accuracy of capturing moving objects against more detail of static objects captured with a longer sensing time. The final result is high-dynamic-range imaging, yielding higher quality in many situations. These are capabilities we could only dream of a few years ago, but they're now installed on high-end mobile phones.
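The depth computation mentioned above boils down to a simple geometric relation once the stereo pair is calibrated and rectified: depth is the focal length times the camera baseline, divided by the pixel disparity between the two views. A minimal sketch, with illustrative numbers rather than real phone-camera parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: Z = f * B / d.

    disparity_px -- horizontal pixel shift of a point between the two views
    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1000-pixel focal length, 1 cm baseline (typical
# order of magnitude for a phone), 20-pixel disparity.
z = depth_from_disparity(20, 1000, 0.01)  # -> 0.5 meters
```

Note the inverse relation: nearby objects produce large disparities and are measured accurately, while depth resolution degrades quadratically with distance, which is why phone stereo cameras work best at arm's length.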

Adding invisible light sensing to imaging systems will yield even more information. This capability, which is emerging as well, enables capturing information that we're typically not used to seeing and analyzing. For example, for colon and esophageal cancer detection, laser-based imaging systems emitting invisible light have been developed. They may result in earlier diagnosis of cancer, since the radiation used can penetrate a few millimeters into the tissue, thus potentially spotting tumors before they reach the surface. Early detection of cancer improves the chance of survival considerably.

There are more benefits. Using infrared light, we can measure the temperature of an object. A lot of research effort is being spent on making this technology available for surveillance applications at reasonable prices. IR-capable cameras would be able to see persons or traffic passing by, even in a dense winter fog.
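The temperature measurement works because every object radiates heat according to its temperature. In the simplest model, the Stefan-Boltzmann law relates the total radiated power to the fourth power of temperature, so a calibrated IR sensor can invert that relation. A sketch under that simplifying assumption (real thermal cameras work per wavelength band and need careful calibration):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_exitance(exitance_w_per_m2, emissivity=0.95):
    """Invert the Stefan-Boltzmann law M = eps * sigma * T^4.

    exitance_w_per_m2 -- radiated power per unit area measured by the sensor
    emissivity        -- material-dependent factor (0.95 is a typical
                         assumption for human skin; illustrative only)
    """
    return (exitance_w_per_m2 / (emissivity * SIGMA)) ** 0.25

# A surface radiating ~459 W/m^2 at emissivity 0.95 sits near 300 K (27 C).
t_kelvin = temperature_from_exitance(0.95 * SIGMA * 300**4)  # -> 300.0 K
```

The fourth-power dependence is also what makes thermal imaging robust in fog: warm bodies stand out strongly against a cold background even when visible-light contrast has vanished.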

Infrared sensing can be extended by measuring within several frequency ranges beyond visible light simultaneously. This approach is called hyperspectral imaging and offers a spectral fingerprint of the material that's being imaged. With such a fingerprint, we can measure the nature of the material, rather than the visual properties of the surface. For example, we can detect whether a material is made of plastic or metal.
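Matching such a fingerprint against known materials can be as simple as comparing spectra by their angle, a common technique known as spectral angle mapping. A minimal sketch, where the reference fingerprints and band values are hypothetical illustrations, not measured data:

```python
import math

def spectral_angle(a, b):
    """Angle between two spectra viewed as vectors (smaller = more similar).

    Using the angle rather than the Euclidean distance makes the match
    insensitive to overall brightness, which varies with illumination.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))

# Hypothetical 4-band reference fingerprints (illustrative numbers only).
LIBRARY = {
    "plastic": [0.20, 0.50, 0.70, 0.40],
    "metal":   [0.60, 0.60, 0.60, 0.60],
}

def classify(spectrum):
    """Return the library material whose fingerprint is closest in angle."""
    return min(LIBRARY, key=lambda m: spectral_angle(spectrum, LIBRARY[m]))

print(classify([0.21, 0.48, 0.69, 0.41]))  # -> plastic
```

In a real hyperspectral system the library would hold dozens to hundreds of bands per material, but the principle is the same: the shape of the spectrum identifies the material, independent of how brightly it happens to be lit.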

Needless to say, multi- and hyperspectral imaging offer great opportunities for many applications. Besides material analysis, they will also contribute to safety, because suspicious changes in the regular spectral fingerprint may point to local weaknesses in constructions, allowing action to be taken before a collapse occurs.

The advanced imaging data can also be analyzed with artificial intelligence techniques, resulting in clever feature detection and decision-making. For example, in my research group, we used this technique to experimentally obtain a spectral footprint of early cancer. Imagine what the above-described developments can bring for intelligent systems design. All imaging systems will become smarter systems, basically converting more light into more insight.