Dec 4, 2024
8:15am - 8:45am
Sheraton, Second Floor, Back Bay D
Patrice Genevet<sup>1</sup>
<sup>1</sup>Colorado School of Mines
LiDAR, for Light Detection and Ranging, is a 3D imaging technology that uses a time-triggered pulsed laser source to illuminate the environment and a photodetector to record the arrival times of pulses backscattered by objects in the scene. By measuring the round-trip time of the scattered light, also called the Time-of-Flight (ToF), it is possible to determine the distance of objects from the transmitter/receiver device. This ToF depth-recovery technique can be coupled with a light-scanning engine that sequentially emits the laser pulse into different angular positions, enabling point-by-point measurements that recover full 3D information. Most commercial imaging LiDARs operate using this rather simple architecture to reach high performance in terms of depth of field, field of view (FoV), and angular resolution. However, the frame rate of such an architecture is inherently limited by the speed of the scanning device, which can additionally suffer from mechanical wear. Here, we propose integrating a directional metalens array (MLA) on a time-gated sensor to perform 3D imaging. The design of the MLA is planar but inspired by curved arthropod eyes: it is composed of an assembly of micro-scale metalenses, each designed with a unique phase profile, so that the overall MLA mimics the optical function of a curved dragonfly eye. Realizing a planar directional optical interface facilitates vertical integration on top of planar detection modules, enabling directional imaging measurements without the limitations inherent to conventional curved optical systems. We will present 2D directional imaging and 3D LiDAR scanning experiments with a wide FoV<sup>2</sup>.<br/>The objective here is to mimic the characteristics of the dragonfly eye by subdividing a 120°×120° FoV while reducing as much as possible the size of the shadowed areas.
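The ToF depth recovery described above reduces to a simple relation: the target distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and example timing are illustrative, not from the source):

```python
# Sketch of Time-of-Flight (ToF) depth recovery: the laser pulse's measured
# round-trip time t gives the target distance d = c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time (ToF), in meters."""
    return C * round_trip_time_s / 2.0

# Example: a pulse returning after 100 ns corresponds to roughly 15 m.
d = tof_distance(100e-9)
```

The factor of two accounts for the pulse traveling to the object and back to the co-located transmitter/receiver.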
To perform directional detection, the MLA is coupled to a matrix of detectors (a single-photon avalanche diode (SPAD) array or a fast CCD camera). We align the MLA such that the center of each ML faces a separate active region of the detector (a single SPAD or a designated pixel of the CCD camera). The MLA phase profile is fragmented such that the ith ML collects information associated with its own (θ_opt,i, φ_opt,i) direction. The ith ML thus rectifies the direction of incident light coming from the ith direction and focuses it exactly along the ith ML optical axis, activating only the detector placed at the center of the ith ML. Light impinging on all the other lenses is still focused, but away from their axes, so their focal spots simply do not overlap with the active regions of the other detectors in the matrix. However, the design of the MLA is carefully chosen so that light coming from a different direction j ≠ i is focused along the optical axis of the jth detector, thus mimicking the spherical directional observation properties of the dragonfly's eyes on a planar sensor.<br/><br/>In conclusion, we have fabricated an MLA for directional detection and integrated this detection module in a ToF LiDAR experiment. As a proof of concept, the MLA, designed to focus directional light in a plane, is positioned in front of a gated CCD that is synchronously triggered by the pulse emission of the LiDAR illumination laser source (500 kHz repetition rate). The activation of the detectors by the MLA is in good agreement with the designed system, achieving both spatial (direction) and temporal (depth) detection of the illumination.<br/><br/><b>References</b><br/><b>1- </b>Juliano Martins, R., Marinov, E., Youssef, M.A.B. et al. Metasurface-enhanced light detection and ranging technology. <b>Nat. Commun. 13</b>, 5724 (2022).
https://doi.org/10.1038/s41467-022-33450-2<br/><b>2- </b>Majorel, C. et al. <b>npj Nanophotonics</b> (in press, 2024).
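The direction-to-detector mapping at the heart of the MLA design, where light arriving from the ith design direction activates only the ith detector, can be sketched as a lookup over a subdivided 120°×120° FoV. This is a hypothetical illustration only: the grid size N, the treatment of the two angles as independent planar tilts, and the nearest-cell assignment are assumptions, not the fabricated design.

```python
# Hypothetical sketch of the MLA direction-to-detector mapping: the 120°x120°
# FoV is subdivided into an N x N grid of design directions, and light from an
# incidence direction activates the detector whose metalens was designed for
# the grid cell containing that direction. N and the planar-angle treatment
# are illustrative assumptions, not parameters from the fabricated device.
N = 8         # assumed number of metalenses per side
FOV = 120.0   # full field of view per axis, degrees

def detector_index(theta_deg: float, phi_deg: float) -> tuple:
    """Return the (row, col) of the detector activated by this incidence direction."""
    step = FOV / N
    row = min(N - 1, max(0, int((theta_deg + FOV / 2) // step)))
    col = min(N - 1, max(0, int((phi_deg + FOV / 2) // step)))
    return (row, col)
```

Under these assumptions, normal incidence (0°, 0°) lands on a central detector, while a direction near the edge of the FoV activates a detector at the border of the array, echoing the one-lens-per-direction principle described above.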