Abstract
Imaging is a longstanding research topic in optics and photonics and is an important tool for a wide range of scientific and engineering fields. Computational imaging is a powerful framework for designing innovative imaging systems by incorporating signal processing into optics. Conventional approaches involve individually designed optical and signal processing systems, which unnecessarily increases costs. Computational imaging, on the other hand, enhances the imaging performance of optical systems, visualizes invisible targets, and minimizes optical hardware. Digital holography and computer-generated holography are the roots of this field. Recent advances in information science, such as deep learning, and increasing computational power have rapidly driven computational imaging and have led to the reinvention of these imaging technologies. In this paper, I survey recent research topics in computational imaging, where optical randomness is key. Imaging through scattering media, non-interferometric quantitative phase imaging, and real-time computer-generated holography are representative examples. These recent optical sensing and control technologies will serve as the foundations of next-generation imaging systems in various fields, such as biomedicine, security, and astronomy.
1 Introduction
Imaging systems are important tools in various scientific and engineering fields. In the conventional imaging approach, an object is directly imaged on an image sensor, as shown in Fig. 1a. The imaging optics is designed to make the captured image as faithful as possible to the object, and the size and cost of the optics are increased to compensate for aberrations. On the other hand, computational imaging is a novel framework for constructing imaging systems by orchestrating optics and information science, as shown in Fig. 1b [1]. The optics is designed to encode the object into the captured image. The object is reconstructed from the captured image with a decoding computational process. Based on this framework, we can minimize the optical hardware, enhance the imaging performance of optical systems, and visualize invisible targets.
Wavefront sensing and control techniques, such as digital holography and computer-generated holography, serve as the foundational principles of computational imaging [2,3,4]. In addition, Fourier transform profilometry and multi-aperture imaging represent other origins of this field [5,6,7]. These renowned computational imaging techniques have contributed to advances in various fields, including life science, industrial engineering, and machine vision. Alongside recent advances in optical technologies, image sensors, and computers, state-of-the-art techniques in information science, such as compressive sensing and machine learning, have also stimulated the field of computational imaging, where randomness plays an important role due to its redundancy and orthogonality [8,9,10,11].
2 Quantitative phase imaging
Quantitative phase imaging is important for visualizing transparent biological specimens in biomedicine and life science, without invasive fluorescent staining [12, 13]. Digital holography is an established technique for quantitative phase imaging and is one of the longstanding research topics in the field of optical imaging [2, 4]. One drawback with digital holography is the requirement for interferometric measurements, which makes the optical setup bulky and complicated.
Another established technique for quantitative phase imaging is diffraction imaging [14, 15]. In diffraction imaging, the complex amplitude field of an object is reconstructed from a single diffraction intensity image by a phase retrieval algorithm [16,17,18]. As a result, interferometric measurements are not necessary, and compact optical hardware can be implemented. However, the object must be assumed to occupy a limited region, called the support, in phase retrieval to solve the ill-conditioned inverse problem. Ptychography is a diffraction imaging technique that extends the field-of-view by performing multiple measurements through a scanning process; however, it is not applicable to dynamic scenes [19,20,21].
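To make the phase-retrieval step concrete, here is a minimal sketch of Fienup-style error reduction in Python. It assumes a single-FFT (Fraunhofer) propagation model, a known support mask, and a real, non-negative object; the function name and parameters are illustrative, not the cited algorithms verbatim:

```python
import numpy as np

def error_reduction(diffraction_magnitude, support, n_iter=200, seed=0):
    """Fienup-style error-reduction phase retrieval (a simplified sketch).

    diffraction_magnitude: measured |F{object}| (same shape as the object).
    support: boolean mask that is True where the object may be nonzero.
    Returns an estimate of a real, non-negative object.
    """
    rng = np.random.default_rng(seed)
    # Start from a random guess confined to the support.
    estimate = rng.random(diffraction_magnitude.shape) * support
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Fourier-domain constraint: keep the phase, impose the measured magnitude.
        spectrum = diffraction_magnitude * np.exp(1j * np.angle(spectrum))
        estimate = np.fft.ifft2(spectrum).real
        # Object-domain constraints: support and non-negativity.
        estimate = np.clip(estimate, 0, None) * support
    return estimate
```

Each iteration alternates projections between the measured Fourier magnitude and the object-domain constraints; without the support mask, the inverse problem is badly ill-conditioned, which is exactly the limitation noted above.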
To solve the above-mentioned issues in single-shot and multi-shot diffraction imaging, we have presented single-shot, support-free diffraction imaging with coded modulation, as shown in Fig. 2a [22,23,24]. In this method, a complex amplitude object is illuminated with coherent light, and light from the object is captured through a coded aperture, as shown in Fig. 2b [25]. The coded aperture is composed of randomly aligned pinholes to improve the condition of the inverse problem. Our method measures a single intensity image, as shown in Fig. 2c, without interferometric measurements, and therefore, its optical setup is simple and compact compared with that of digital holography. Instead of the coded aperture, randomly structured illumination can also be applied to our single-shot diffraction imaging method [24]. Both the amplitude and phase of the object are reconstructed from the single intensity image by using a phase retrieval algorithm based on compressive sensing, as shown in Fig. 2d, e, respectively [26,27,28]. The pixel count was \(740^2\) and the field of view was \(3.4~\textrm{mm} \times 3.4~\textrm{mm}\). We have extended our single-shot diffraction imaging with coded modulation to single-shot multi-dimensional imaging and single-pixel imaging [29,30,31,32,33].
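As a toy illustration of this kind of forward model (not the authors' exact implementation), the coded-modulation measurement can be sketched as a random binary pinhole mask applied to the complex field, followed by far-field (Fraunhofer) propagation modeled as a single FFT; the 50% pinhole density and array size below are illustrative assumptions:

```python
import numpy as np

def coded_measurement(obj_field, code):
    """Single intensity image of a complex field seen through a coded
    aperture, with far-field (Fraunhofer) propagation modeled as one FFT."""
    return np.abs(np.fft.fft2(obj_field * code)) ** 2

# Illustrative random pinhole code: binary mask with ~50% open pixels.
rng = np.random.default_rng(0)
n = 64
code = (rng.random((n, n)) < 0.5).astype(float)
# A phase-only complex amplitude object for demonstration.
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
image = coded_measurement(obj, code)
```

The random code makes the rows of the measurement operator less mutually coherent, which is the conditioning improvement the pinhole randomness provides.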
3 Computer-generated holography
Computer-generated holography (CGH) is a light control technique for computationally calculating an interference pattern—called a hologram—that reproduces an arbitrary optical field [34]. CGH is promising for laser processing in precision engineering, optical tweezers and stimulation in life science, and three-dimensional displays and near-eye displays for next-generation visual interfaces [35,36,37]. It is possible to dynamically control optical fields with computer-generated holography by means of a spatial light modulator [38]. However, commercially available spatial light modulators control either the amplitude or the phase of light waves. Therefore, iterative algorithms, such as the Gerchberg–Saxton method, have been used to synthesize amplitude-only or phase-only holograms [39]. The iterative process in the algorithms has prevented applications of computer-generated holography in real-time and interactive situations.
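For reference, the Gerchberg–Saxton iteration mentioned above can be sketched in a few lines of Python for a phase-only hologram, assuming a single-FFT (Fraunhofer) propagation model; this is a minimal sketch, not a production hologram engine:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
    """Synthesize a phase-only hologram whose far field approximates
    target_amplitude, using the Gerchberg-Saxton iteration.

    Far-field propagation is modeled as a single FFT (a common simplification).
    Returns the hologram phase in radians.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        # Hologram plane: unit amplitude (phase-only modulator), current phase.
        field = np.exp(1j * phase)
        far = np.fft.fft2(field)
        # Image plane: impose the target amplitude, keep the phase.
        far = target_amplitude * np.exp(1j * np.angle(far))
        phase = np.angle(np.fft.ifft2(far))
    return phase
```

The loop is the iterative bottleneck referred to in the text: each hologram requires tens of FFT round trips, which is what makes real-time use difficult.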
To solve this issue, we introduced deep learning to computer-generated holography for non-iterative hologram synthesis [18, 40, 41]. One of the schemes for computer-generated holography based on deep learning is shown in Fig. 3. A convolutional deep neural network is trained to realize the inverse process of the optical propagation. The training dataset consists of pairs of random input patterns and their computationally propagated results, which are speckle output patterns. Then, the network can synthesize a hologram that optically reproduces an arbitrary target pattern. As a result, hologram synthesis with deep learning is two orders of magnitude faster than with a conventional iterative approach [41]. Nowadays, deep learning approaches are recognized as foundational techniques in computer-generated holography [42,43,44,45].
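The training pairs described above (random patterns and their computationally propagated speckles) could be generated, for example, with an angular-spectrum propagator; the wavelength, pixel pitch, and propagation distance below are illustrative assumptions, and the network itself is omitted:

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field by the angular spectrum method
    (a standard scalar-diffraction model)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Evanescent components are suppressed by clipping the argument at zero.
    arg = np.clip(1.0 / wavelength ** 2 - FX ** 2 - FY ** 2, 0, None)
    transfer = np.exp(2j * np.pi * distance * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def make_training_pair(n=64, wavelength=532e-9, pitch=8e-6, distance=0.05, seed=0):
    """One (random hologram phase, propagated speckle intensity) pair,
    in the spirit of the scheme in Fig. 3."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, (n, n))  # random phase-only hologram
    field = np.exp(1j * phase)
    speckle = np.abs(angular_spectrum(field, wavelength, pitch, distance)) ** 2
    return phase, speckle
```

The network would then be trained to map the speckle back to the phase, i.e., to approximate the inverse of this propagation operator.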
Another challenge in computer-generated holography is the requirement for highly coherent light. This often necessitates large and expensive light sources, which can also raise safety concerns for the eyes. We addressed this problem by introducing spatiotemporally incoherent light to computer-generated holography [46]. An issue with computer-generated holography using incoherent light is the high computational cost due to the many modes in incoherent light propagation. In our method, incoherent light is described as a set of coherent random wavefronts based on stochastic gradient descent [47]. The experimental demonstration of our method using a chip-on-board white light-emitting diode with a diameter of 2.3 cm is shown in Fig. 4. The color image in Fig. 4a is optically reproduced as in Fig. 4b with the two-layered monochrome hologram cascade composed of the first amplitude hologram in Fig. 4c and the second phase hologram in Fig. 4d. The size of the reproduced image was \(2.8~\textrm{mm} \times 2.8~\textrm{mm}\). This approach has been extended to diffractive optics design [48].
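The idea of describing incoherent light as a set of coherent random wavefronts can be illustrated with a simple Monte Carlo average of mode intensities. This is a toy coherent-mode model, not the cited stochastic-gradient-descent optimization itself; the function name and mode count are assumptions:

```python
import numpy as np

def incoherent_intensity(aperture, propagate, n_modes=200, seed=0):
    """Approximate incoherent propagation by averaging the intensities of
    coherently propagated random wavefronts (Monte Carlo coherent modes).

    aperture: source amplitude distribution.
    propagate: any coherent propagation operator (e.g., an FFT or an
               angular-spectrum function).
    """
    rng = np.random.default_rng(seed)
    total = np.zeros(aperture.shape)
    for _ in range(n_modes):
        # Each mode: the source amplitude with an independent random phase.
        mode = aperture * np.exp(2j * np.pi * rng.random(aperture.shape))
        total += np.abs(propagate(mode)) ** 2
    return total / n_modes
```

Because every mode must be propagated coherently, the cost grows linearly with the number of modes, which is the computational burden the text refers to.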
4 Imaging through scattering media
Imaging through scattering media is a longstanding issue in the field of optics for various applications, including biomedicine, astronomy, and security. Recent advances in optics and information science have enabled visualization inside or behind strongly scattering media, where ballistic photons are very few [49,50,51,52,53]. Although various methods have been established for imaging through scattering media, speckle-correlation imaging has advantages over other approaches in terms of its noninvasiveness, single-shot capability, and simple optical hardware [54, 55]. Speckle-correlation imaging utilizes the shift invariance of the scattering random impulse response, which is called the memory effect [56, 57].
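The core of speckle-correlation imaging is that, within the memory-effect range, the autocorrelation of the captured speckle approximates the autocorrelation of the object, from which the object can be recovered by phase retrieval. The autocorrelation step can be sketched via the Wiener–Khinchin theorem; this is a simplified circular-convolution model, not the cited experiments:

```python
import numpy as np

def autocorrelation(img):
    """Mean-subtracted autocorrelation via the Wiener-Khinchin theorem:
    the autocorrelation is the inverse FFT of the power spectrum."""
    img = img - img.mean()
    power_spectrum = np.abs(np.fft.fft2(img)) ** 2
    return np.fft.fftshift(np.fft.ifft2(power_spectrum).real)
```

In a simulation where the captured image is the object convolved with a random impulse response (the memory-effect model), the autocorrelation of the random response is sharply peaked, so the object's structure survives in the speckle autocorrelation.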
We extended speckle-correlation imaging to multidimensional cases. Depth imaging and spectral imaging through scattering media were realized by utilizing the scale invariance of the scattering response along the axial and spectral dimensions, respectively [58,59,60]. The scattering scale invariances in the first and second cases are called the axial memory effect and the spectral (chromatic) memory effect, respectively [61, 62]. The experimental results from spectral speckle-correlation imaging are shown in Fig. 5. Point sources with a diameter of 0.1 mm and wavelengths of 520 nm and 540 nm, shown in Fig. 5a, were captured through scattering media, as shown in Fig. 5b. The two-color point sources were recovered and spectrally resolved, as shown in Fig. 5c.
An issue with speckle-correlation imaging is the limited field-of-view because the range of the memory effect becomes small in a thick scattering medium [54, 55, 63]. To address this issue, we incorporated extrapolation of the speckle correlation into the reconstruction process of speckle-correlation imaging [64]. An experimental demonstration of extrapolated speckle-correlation imaging with an untrained deep neural network called deep image prior is shown in Fig. 6 [65, 66]. The object and its image captured through scattering media without imaging optics are shown in Fig. 6a, b, respectively. The conventional approach without the extrapolation cannot recover the whole object, as shown in Fig. 6c. On the other hand, the field-of-view of the reconstruction result obtained by the proposed approach is extended compared with the conventional case, as shown in Fig. 6d, where the pixel count was \(60^2\).
Blind deconvolution is a noninvasive technique for imaging through scattering media [67, 68]. Blind deconvolution is applicable to scattering media with limited-size impulse responses but not to those with random impulse responses. Conversely, the speckle-correlation imaging mentioned above is applicable to scattering media with random impulse responses but not to those with limited-size impulse responses. One issue with single-shot blind deconvolution is the instability of the reconstruction process, because both the object and the impulse response must be estimated from a single captured image by solving an ill-posed inverse problem.
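An alternating blind deconvolution loop in the spirit of Ayers–Dainty [67] can be sketched as follows; this minimal version uses Wiener-like updates with a regularization constant and non-negativity clipping, all of which are illustrative choices rather than the cited algorithm verbatim:

```python
import numpy as np

def blind_deconvolve(captured, n_iter=50, eps=1e-3, seed=0):
    """Ayers-Dainty-style alternating blind deconvolution (a minimal sketch).

    Alternates Wiener-like updates of the object and the impulse response
    from a single captured image, enforcing non-negativity on both.
    Returns (object estimate, impulse-response estimate).
    """
    rng = np.random.default_rng(seed)
    G = np.fft.fft2(captured)
    obj = rng.random(captured.shape)
    psf = rng.random(captured.shape)
    for _ in range(n_iter):
        # Update the object with the current impulse-response estimate.
        H = np.fft.fft2(psf)
        obj = np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)).real
        obj = np.clip(obj, 0, None)
        # Update the impulse response with the current object estimate.
        F = np.fft.fft2(obj)
        psf = np.fft.ifft2(G * np.conj(F) / (np.abs(F) ** 2 + eps)).real
        psf = np.clip(psf, 0, None)
    return obj, psf
```

The ill-posedness is visible in the structure of the loop: both unknowns are pulled from the same single measurement, which is why the solution can drift without additional constraints such as the coded aperture described next.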
To solve this issue, we introduced a coded aperture into single-shot blind deconvolution, where the coded aperture reduces the number of unknown variables on the aberrated pupil plane and improves the stability of the reconstruction process [69]. The experimental demonstration is shown in Fig. 7. The object in Fig. 7a is composed of point sources, and it is captured under a severe defocus condition with incoherent light, as shown in Fig. 7b. In this case, the blind deconvolution algorithm does not work, as shown in Fig. 7c. The image captured through the coded aperture is shown in Fig. 7d. In the case with the coded aperture, the object is reconstructed well, as shown in Fig. 7e, where the pixel count was \(200^2\). We have extended this approach to quantitative phase imaging through scattering media by introducing coherent light [70].
Digital optical phase conjugation, which is also known as time reversal, enables light shaping behind a scattering medium [71,72,73]. In digital optical phase conjugation, an object inside a scattering medium is illuminated with coherent light, and light from the scattering medium is captured with a wavefront sensor. The scattering medium is then holographically illuminated from the outside with the conjugate of the captured wavefront, and the object inside the scattering medium is optically reproduced. A coherent light source, a wavefront sensor, and a wavefront reproducer, which involve an interferometric setup and a spatial phase modulator, are necessary to realize digital optical phase conjugation.
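Time reversal through a scattering medium can be illustrated with a random-matrix toy model: a point source inside the medium produces a scrambled field outside, and replaying the conjugate of that field back through the (reciprocal) medium refocuses onto the source. This sketch is a numerical illustration under an idealized random transmission matrix, not the authors' experiment:

```python
import numpy as np

def phase_conjugation_demo(n_modes=256, seed=0):
    """Time reversal through a random scattering matrix (a toy model).

    Returns the intensity distribution inside the medium after the
    conjugated field is sent back; it should peak at the source mode.
    """
    rng = np.random.default_rng(seed)
    # Random complex Gaussian matrix as a toy transmission matrix
    # (columns have unit expected norm).
    T = (rng.normal(size=(n_modes, n_modes))
         + 1j * rng.normal(size=(n_modes, n_modes))) / np.sqrt(2 * n_modes)
    source = np.zeros(n_modes, dtype=complex)
    source[n_modes // 2] = 1.0                 # point source inside the medium
    outgoing = T @ source                      # scrambled field captured outside
    replay = T.T @ np.conj(outgoing)           # reciprocity: conjugate sent back
    return np.abs(replay) ** 2
```

The refocused peak rises above the background by roughly the number of controlled modes, which is the usual figure of merit for wavefront-shaping focus enhancement.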
We developed digital optical phase conjugation with incoherent light and without the wavefront sensor or reproducer [74]. The experimental demonstration is shown in Fig. 8. The object in Fig. 8a is illuminated with spatiotemporally incoherent light. The image captured through a scattering medium without imaging optics is shown in Fig. 8b. The captured image is displayed on a display device with incoherent light in front of the scattering medium. The light intensity behind the scattering medium is shown in Fig. 8c. The contrast of the reproduced image is low because of the non-negativity and realness of incoherent light. To address this, we introduced background suppression, in which the captured image in Fig. 8b is randomly pixel-shuffled to optically reproduce the background. The result with background suppression is shown in Fig. 8d, where the contrast is significantly improved.
5 Conclusion
We presented several computational imaging techniques with random optical modulation, including quantitative phase imaging, computer-generated holography, and imaging through scattering media. Information science has played a pivotal role in the progress of computational imaging, particularly where randomness is a significant factor in improving imaging performance. On the other hand, recent advances in optical and sensor devices, such as metalenses and event cameras, open the possibility of offering novel degrees of freedom for designing computational imaging systems [75,76,77]. Insights spanning these fields have the potential to further innovate computational imaging.
Data availability
This review paper includes no original data.
References
Mait, J.N., Euliss, G.W., Athale, R.A.: Computational imaging. Adv. Opt. Photonics 10, 409–483 (2018)
Goodman, J., Lawrence, R.: Digital image formation from electronically detected holograms. Appl. Phys. Lett. 11, 77–79 (1967)
Brown, B.R., Lohmann, A.W.: Complex spatial filtering with binary masks. Appl. Opt. 5, 967–969 (1966)
Nehmetallah, G., Banerjee, P.P.: Applications of digital and analog holography in three-dimensional imaging. Adv. Opt. Photonics 4, 472–553 (2012)
Takeda, M., Ina, H., Kobayashi, S.: Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72, 156–160 (1982)
Takeda, M.: Fourier fringe analysis and its application to metrology of extreme physical phenomena: a review. Appl. Opt. 52, 20–29 (2013)
Tanida, J., Kumagai, T., Yamada, K., Miyatake, S., Ishida, K., Morimoto, T., Kondou, N., Miyazaki, D., Ichioka, Y.: Thin observation module by bound optics (TOMBO): concept and experimental verification. Appl. Opt. 40, 1806–1813 (2001)
Gehm, M.E., Brady, D.J.: Compressive sensing in the EO/IR. Appl. Opt. 54, C14–C22 (2015)
Kilic, V., Tran, T.D., Foster, M.A.: Compressed sensing in photonics: tutorial. J. Opt. Soc. Am. B 40, 28–52 (2023)
Barbastathis, G., Ozcan, A., Situ, G.: On the use of deep learning for computational imaging. Optica 6, 921–943 (2019)
Wetzstein, G., Ozcan, A., Gigan, S., Fan, S., Englund, D., Soljačić, M., Denz, C., Miller, D.A.B., Psaltis, D.: Inference in artificial intelligence with deep optics and photonics. Nature 588, 39–47 (2020)
Park, Y., Depeursinge, C., Popescu, G.: Quantitative phase imaging in biomedicine. Nat. Photonics 12, 578–589 (2018)
Nguyen, T.L., Pradeep, S., Judson-Torres, R.L., Reed, J., Teitell, M.A., Zangle, T.A.: Quantitative phase imaging: recent advances and expanding potential in biomedicine. ACS Nano 16, 11516–11544 (2022)
Chapman, H.N., Nugent, K.A.: Coherent lensless X-ray imaging. Nat. Photonics 4, 833–839 (2010)
Miao, J., Ishikawa, T., Robinson, I.K., Murnane, M.M.: Beyond crystallography: diffractive imaging using coherent X-ray light sources. Science 348, 530–535 (2015)
Fienup, J.R.: Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982)
Fienup, J.R.: Phase retrieval algorithms: a personal tour. Appl. Opt. 52, 45–56 (2013)
Nishizaki, Y., Horisaki, R., Kitaguchi, K., Saito, M., Tanida, J.: Analysis of non-iterative phase retrieval based on machine learning. Opt. Rev. 27, 136–141 (2020)
Zheng, G., Horstmeyer, R., Yang, C.: Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 7, 739–745 (2013)
Pfeiffer, F.: X-ray ptychography. Nat. Photonics 12, 9–17 (2018)
Zheng, G., Shen, C., Jiang, S., Song, P., Yang, C.: Concept, implementations and applications of Fourier ptychography. Nat. Rev. Phys. 3, 207–223 (2021)
Horisaki, R., Ogura, Y., Aino, M., Tanida, J.: Single-shot phase imaging with a coded aperture. Opt. Lett. 39, 6466–6469 (2014)
Horisaki, R., Egami, R., Tanida, J.: Experimental demonstration of single-shot phase imaging with a coded aperture. Opt. Express 23, 28691–28697 (2015)
Horisaki, R., Egami, R., Tanida, J.: Single-shot phase imaging with randomized light (SPIRaL). Opt. Express 24, 3765–3773 (2016)
Egami, R., Horisaki, R., Tian, L., Tanida, J.: Relaxation of mask design for single-shot phase imaging with a coded aperture. Appl. Opt. 55, 1830–1837 (2016)
Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
Baraniuk, R.: Compressive sensing. IEEE Signal Process. Mag. 24, 118–121 (2007)
Candes, E.J., Wakin, M.B.: An introduction to compressive sampling. Signal Process. Mag. IEEE 25, 21–30 (2008)
Horisaki, R., Tanida, J.: Multidimensional object acquisition by single-shot phase imaging with a coded aperture. Opt. Express 23, 9696–9704 (2015)
Horisaki, R., Fujii, K., Tanida, J.: Diffusion-based single-shot diffraction tomography. Opt. Lett. 44, 1964–1967 (2019)
Horisaki, R., Kojima, T., Matsushima, K., Tanida, J.: Subpixel reconstruction for single-shot phase imaging with coded diffraction. Appl. Opt. 56, 7642–7647 (2017)
Horisaki, R., Matsui, H., Egami, R., Tanida, J.: Single-pixel compressive diffractive imaging. Appl. Opt. 56, 1353–1357 (2017)
Horisaki, R., Matsui, H., Tanida, J.: Single-pixel compressive diffractive imaging with structured illumination. Appl. Opt. 56, 4085–4089 (2017)
Matsushima, K.: Introduction to Computer Holography, Series in Display Science and Technology. Springer, Cham (2020)
Malinauskas, M., Žukauskas, A., Hasegawa, S., Hayasaki, Y., Mizeikis, V., Buividas, R., Juodkazis, S.: Ultrafast laser processing of materials: from science to industry. Light Sci. Appl. 5, e16133 (2016)
Dholakia, K., Čižmár, T.: Shaping the future of manipulation. Nat. Photonics 5, 335–342 (2011)
Park, J.-H., Lee, B.: Holographic techniques for augmented reality and virtual reality near-eye displays. Light Adv. Manuf. 3, 137–150 (2022)
Savage, N.: Digital spatial light modulators. Nat. Photonics 3, 170–172 (2009)
Gerchberg, R.W., Saxton, W.O.: A practical algorithm for the determination of the phase from image and diffraction plane pictures. Optik 35, 237–246 (1972)
Horisaki, R., Takagi, R., Tanida, J.: Deep-learning-generated holography. Appl. Opt. 57, 3859–3863 (2018)
Horisaki, R., Nishizaki, Y., Kitaguchi, K., Saito, M., Tanida, J.: Three-dimensional deeply generated holography. Appl. Opt. 60, A323–A328 (2021)
Goi, H., Komuro, K., Nomura, T.: Deep-learning-based binary hologram. Appl. Opt. 59, 7103–7108 (2020)
Peng, Y., Choi, S., Padmanaban, N., Wetzstein, G.: Neural holography with camera-in-the-loop training. ACM Trans. Graph. 39(185), 1–14 (2020)
Shi, L., Li, B., Kim, C., Kellnhofer, P., Matusik, W.: Towards real-time photorealistic 3D holography with deep neural networks. Nature 591, 234–239 (2021)
Shimobaba, T., Blinder, D., Birnbaum, T., Hoshi, I., Shiomi, H., Schelkens, P., Ito, T.: Deep-learning computational holography: a review. Front. Photonics 3(854391), 1–16 (2022)
Suda, R., Naruse, M., Horisaki, R.: Incoherent computer-generated holography. Opt. Lett. 47, 3844–3847 (2022)
Horisaki, R., Aoki, T., Nishizaki, Y., Röhm, A., Chauvet, N., Tanida, J., Naruse, M.: Compressive propagation with coherence. Opt. Lett. 47, 613–616 (2022)
Igarashi, T., Naruse, M., Horisaki, R.: Incoherent diffractive optical elements for extendable field-of-view imaging. Opt. Express 31, 31369–31382 (2023)
Mosk, A.P., Lagendijk, A., Lerosey, G., Fink, M.: Controlling waves in space and time for imaging and focusing in complex media. Nat. Photonics 6, 283–292 (2012)
Horstmeyer, R., Ruan, H., Yang, C.: Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue. Nat. Photonics 9, 563–571 (2015)
Faccio, D., Velten, A., Wetzstein, G.: Non-line-of-sight imaging. Nat. Rev. Phys. 2, 318–327 (2020)
Gigan, S.: Imaging and computing with disorder. Nat. Phys. 18, 980–985 (2022)
Bertolotti, J., Katz, O.: Imaging in complex media. Nat. Phys. 18, 1008–1017 (2022)
Bertolotti, J., van Putten, E.G., Blum, C., Lagendijk, A., Vos, W.L., Mosk, A.P.: Non-invasive imaging through opaque scattering layers. Nature 491, 232–234 (2012)
Katz, O., Heidmann, P., Fink, M., Gigan, S.: Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics 8, 784–790 (2014)
Feng, S., Kane, C., Lee, P.A., Stone, A.D.: Correlations and fluctuations of coherent wave transmission through disordered media. Phys. Rev. Lett. 61, 834–837 (1988)
Freund, I., Rosenbluh, M., Feng, S.: Memory effects in propagation of optical waves through disordered media. Phys. Rev. Lett. 61, 2328–2331 (1988)
Okamoto, Y., Horisaki, R., Tanida, J.: Noninvasive three-dimensional imaging through scattering media by three-dimensional speckle correlation. Opt. Lett. 44, 2526–2529 (2019)
Horisaki, R., Okamoto, Y., Tanida, J.: Single-shot noninvasive three-dimensional imaging through scattering media. Opt. Lett. 44, 4032–4035 (2019)
Ehira, K., Horisaki, R., Nishizaki, Y., Naruse, M., Tanida, J.: Spectral speckle-correlation imaging. Appl. Opt. 60, 2388–2392 (2021)
Singh, A.K., Naik, D.N., Pedrini, G., Takeda, M., Osten, W.: Exploiting scattering media for exploring 3D objects. Light Sci. Appl. 6, e16219 (2016)
Xu, X., Xie, X., Thendiyammal, A., Zhuang, H., Xie, J., Liu, Y., Zhou, J., Mosk, A.P.: Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference. Opt. Express 26, 15073–15083 (2018)
Schott, S., Bertolotti, J., Léger, J.-F., Bourdieu, L., Gigan, S.: Characterization of the angular memory effect of scattered light in biological tissues. Opt. Express 23, 13505–13516 (2015)
Endo, Y., Tanida, J., Naruse, M., Horisaki, R.: Extrapolated speckle-correlation imaging. Intell. Comput. 2022, 9787098 (2022)
Mashiko, R., Tanida, J., Naruse, M., Horisaki, R.: Extrapolated speckle-correlation imaging with an untrained deep neural network. Appl. Opt. 62, 8327–8333 (2023)
Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9446–9454 (2018)
Ayers, G.R., Dainty, J.C.: Iterative blind deconvolution method and its applications. Opt. Lett. 13, 547–549 (1988)
Chaudhuri, S., Velmurugan, R., Rameshan, R.: Blind Image Deconvolution. Springer, Cham (2014)
Muneta, H., Horisaki, R., Nishizaki, Y., Naruse, M., Tanida, J.: Single-shot blind deconvolution with coded aperture. Appl. Opt. 61, 6408–6413 (2022)
Muneta, H., Horisaki, R., Nishizaki, Y., Naruse, M., Tanida, J.: Single-shot blind deconvolution in coherent diffraction imaging with coded aperture. Opt. Rev. 30, 509–515 (2023)
Yaqoob, Z., Psaltis, D., Feld, M.S., Yang, C.: Optical phase conjugation for turbidity suppression in biological samples. Nat. Photonics 2, 110–115 (2008)
Xu, X., Liu, H., Wang, L.V.: Time-reversed ultrasonically encoded optical focusing into scattering media. Nat. Photonics 5, 154–157 (2011)
Aizik, D., Gkioulekas, I., Levin, A.: Fluorescent wavefront shaping using incoherent iterative phase conjugation. Optica 9, 746–754 (2022)
Horisaki, R., Ehira, K., Nishizaki, Y., Naruse, M., Tanida, J.: Incoherent optical phase conjugation. Appl. Opt. 61, 5532–5537 (2022)
Chen, W.T., Zhu, A.Y., Capasso, F.: Flat optics with dispersion-engineered metasurfaces. Nat. Rev. Mater. 5, 604–620 (2020)
Bruschini, C., Homulle, H., Antolovic, I.M., Burri, S., Charbon, E.: Single-photon avalanche diode imagers in biophotonics: review and outlook. Light Sci. Appl. 8, 87 (2019)
Gallego, G., Delbruck, T., Orchard, G.M., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A., Conradt, J., Daniilidis, K., Scaramuzza, D.: Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 154–180 (2022)
Funding
Open Access funding provided by The University of Tokyo. This work was supported by JSPS KAKENHI (JP20H02657, JP20K05361, JP20H05890, JP23H01874, JP23H05444) and Asahi Glass Foundation.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Horisaki, R. Computational imaging with randomness. Opt Rev 31, 282–289 (2024). https://doi.org/10.1007/s10043-024-00881-9