Multi-fx SFDI offers accurate tissue measurements but is severely limited by slow optical property (OP) inversion. The conventional method solves the inverse problem with iterative optimization at each pixel, which is time-consuming: processing a 696×520 image takes over 10 hours. The proposed deep learning model reduces the processing time from over 10 hours to 0.2 seconds, speeding up OP inversion by more than 100,000×.
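To make the contrast concrete, the sketch below compares the conventional per-pixel iterative fit with the batched inference a learned model performs. The forward model, spatial frequencies, and model call are placeholder assumptions for illustration, not the actual SFDI pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

FX = np.array([0.0, 0.1, 0.2])   # spatial frequencies in 1/mm (assumed values)

def forward_rd(mu_a, mu_sp, fx=FX):
    # Toy stand-in for the multi-fx diffuse reflectance Rd(mu_a, mu_s', fx);
    # a real pipeline would use a diffusion-approximation or Monte Carlo lookup.
    return mu_sp / (mu_sp + mu_a + fx) * np.exp(-mu_a * (1.0 + 10.0 * fx))

def invert_pixel(rd_meas):
    # Conventional route: iterative least-squares fit of (mu_a, mu_s') at one pixel.
    fit = least_squares(
        lambda p: forward_rd(p[0], p[1]) - rd_meas,
        x0=[0.01, 1.0],
        bounds=([1e-4, 0.1], [1.0, 5.0]),
    )
    return fit.x

rd_meas = forward_rd(0.02, 1.2)    # synthetic measurement at one pixel
print(invert_pixel(rd_meas))       # recovers approximately (0.02, 1.2)
# A 696x520 image requires 361,920 such fits; the deep learning model replaces
# them with one batched forward pass over all pixels at once (hypothetical call:
# ops = model(rd_image)), which is where the >100,000x speedup comes from.
```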
Existing inversion algorithms must first convert the multi-fx diffuse reflectance to optical absorption and then solve a set of linear equations to estimate chromophore concentrations. We present a deep learning framework, a deep residual network (DRN), that directly maps diffuse reflectance to chromophore concentrations. The proposed DRN is over 10× faster than the state-of-the-art method for chromophore inversion and enables a 25× improvement in frame rate for in vivo real-time oxygenation mapping.
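As a minimal sketch of the linear-unmixing step that the conventional two-step route requires (and that the DRN folds into a single mapping), the snippet below solves the Beer's-law system for two chromophores. The wavelengths and extinction coefficients are illustrative placeholders, not tabulated values.

```python
import numpy as np

wavelengths = np.array([660.0, 850.0])   # nm, assumed measurement wavelengths
# Columns: [HbO2, Hb] extinction coefficients (placeholder numbers, not tabulated data)
eps = np.array([[0.08, 0.30],
                [0.25, 0.18]])

mu_a = np.array([0.012, 0.021])          # absorption recovered in step 1 (1/mm)

# Step 2: solve mu_a(lambda) = sum_i eps_i(lambda) * c_i for concentrations c_i.
# Real pipelines often add non-negativity constraints to this least-squares solve.
conc, *_ = np.linalg.lstsq(eps, mu_a, rcond=None)
sto2 = conc[0] / conc.sum()              # oxygen saturation estimate
print(conc, sto2)
```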
The figure above shows an illustration of wavefront shaping and the flowchart of the gradient-assisted phase optimization. Compared with the state-of-the-art genetic algorithm, which has been widely used in wavefront shaping, the gradient-assisted method improves the optimization speed by 60× and achieves a peak-to-background ratio (PBR) of 1000.
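The toy simulation below conveys the idea behind gradient-based phase optimization, assuming a random transmission matrix and a single focal target (it is not the authors' implementation): phases are updated with an analytic intensity gradient instead of evolving a population as a genetic algorithm would.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                        # number of SLM segments (assumed)
t = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * N)  # random transmission row

phi = np.zeros(N)                              # SLM phase pattern (radians)
lr = 0.3                                       # gradient-ascent step size (assumed)

for _ in range(1000):
    field = np.sum(t * np.exp(1j * phi))       # focal field E = sum_k t_k * exp(i*phi_k)
    grad = -2.0 * np.imag(np.conj(field) * t * np.exp(1j * phi))  # dI/dphi_k for I = |E|^2
    phi += lr * grad                           # ascend the focal intensity

background = np.sum(np.abs(t) ** 2)            # expected focal intensity for random phases
pbr = np.abs(np.sum(t * np.exp(1j * phi))) ** 2 / background
print(f"simulated PBR ~ {pbr:.0f}")            # approaches pi/4 * N for ideal phase-only control
```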
The figure above shows an illustration of the proposed single-shot photoacoustic imaging technique with a single-element transducer. (a) Schematic of the experimental system. (b) Photographs of the proposed right-angle prism from different viewing angles, showing its irregularly shaped edges. (c) Single-shot 3D imaging of two objects at different depths.