Transforming smartphone microscopes into laboratory-grade devices
A powerful form of artificial intelligence known as “deep learning” can discern and enhance microscopic details in photos taken by smartphones, researchers at the UCLA Samueli School of Engineering have demonstrated.
Because the technique improves the resolution and color detail of smartphone images so much that they approach the quality of images from laboratory-grade microscopes, the advance could help bring high-quality medical diagnostics to resource-poor regions.
The research was published in ACS Photonics, a journal of the American Chemical Society.
People without access to high-end diagnostic technologies stand to benefit most from this advance.
The technique is not only suited to resource-poor areas; it is also a cost-effective alternative to laboratory equipment. It uses attachments that can be produced inexpensively with a 3D printer, for less than US$100 apiece, versus the thousands of dollars it would cost to buy laboratory-grade equipment that produces images of similar quality.
Because smartphone cameras are not designed to produce high-resolution microscopic images, the researchers developed an attachment that can be placed over the smartphone lens to increase the resolution and the visibility of tiny details in the photos it takes, down to a scale of roughly one millionth of a meter.
But the attachment alone does not fully solve the challenge, because it cannot compensate for the difference in quality between smartphone cameras’ image sensors and lenses and those of high-end lab equipment.
The newly introduced technique compensates for that difference by using artificial intelligence to reproduce the level of resolution and color detail needed for a laboratory analysis.
The research, supported by the National Science Foundation and the Howard Hughes Medical Institute, was led by Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering and Bioengineering, and Yair Rivenson, a UCLA postdoctoral scholar.
It is worth noting that Ozcan’s research group has introduced several other innovations in mobile microscopy and sensing, with a particular focus on developing field-portable medical diagnostics and sensors for resource-poor areas.
The new technique is expected to find numerous applications in global health, telemedicine and diagnostics-related apps.
Initially, the researchers took images of lung tissue samples, blood samples and Pap smears using a standard laboratory-grade microscope, and then again with a smartphone fitted with the 3D-printed microscope attachment.
The researchers then fed the pairs of corresponding images into a computer system that “learns” how to rapidly enhance the mobile phone images. The process relies on deep-learning-based computer code developed by the UCLA researchers.
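The training setup described above, pairing each low-quality capture with a high-quality reference and learning a mapping between them, can be illustrated with a deliberately simplified sketch. The UCLA work trains a deep convolutional network; the toy model below substitutes a plain linear color transform fitted by least squares, and all images and the simulated distortion are synthetic assumptions, not the authors’ data or method.

```python
import numpy as np

# Sketch only: a linear color transform stands in for the deep network,
# to illustrate the paired-image supervised setup. Data is synthetic.
rng = np.random.default_rng(0)

# "High-quality" reference pixels (flattened, RGB in [0, 1])
hq = rng.random((100, 3))

# Simulate the lower-quality smartphone capture: a fixed color
# distortion (channel mixing) plus a brightness offset
distort = np.array([[0.80, 0.10, 0.00],
                    [0.05, 0.70, 0.10],
                    [0.00, 0.10, 0.90]])
lq = hq @ distort.T + 0.05

# "Training": fit a 3x3 matrix plus bias that maps low-quality pixels
# back to their high-quality counterparts (the network generalizes this)
X = np.hstack([lq, np.ones((lq.shape[0], 1))])  # append bias column
W, *_ = np.linalg.lstsq(X, hq, rcond=None)

# "Inference": enhance unseen low-quality pixels with the learned map
hq_test = rng.random((10, 3))
lq_test = hq_test @ distort.T + 0.05
restored = np.hstack([lq_test, np.ones((10, 1))]) @ W

err = np.max(np.abs(restored - hq_test))
print(f"max reconstruction error: {err:.2e}")
```

Because the simulated degradation here is exactly linear, the fitted transform recovers the reference pixels almost perfectly; real smartphone optics introduce nonlinear, spatially varying artifacts, which is why a deep network is needed in practice.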
The researchers also confirmed that the technique works with other types of lower-quality images, using deep learning to successfully perform similar transformations on images that had lost some detail because they were compressed for faster transmission over a computer network or more efficient storage.