Deep learning enables parallel camera with enhanced-resolution and computational zoom imaging

Shu-Bin Liu, Bing-Kun Xie, Rong-Ying Yuan, Meng-Xuan Zhang, Jian-Cheng Xu, Lei Li, Qiong-Hua Wang

doi: 10.1186/s43074-023-00095-3

Citation: Shu-Bin Liu, Bing-Kun Xie, Rong-Ying Yuan, Meng-Xuan Zhang, Jian-Cheng Xu, Lei Li, Qiong-Hua Wang. Deep learning enables parallel camera with enhanced-resolution and computational zoom imaging[J]. PhotoniX. doi: 10.1186/s43074-023-00095-3
Acknowledgements: We would like to thank Ms. Yuxian Zhang for helping polish the article.
Publication history
  • Received: 2023-03-07
  • Revised: 2023-04-24
  • Accepted: 2023-05-30
  • Published online: 2023-06-13
