Thursday, September 21, 2017

Auger Excitation Shows APD-like Gains

A group of UCSD researchers publishes an open-access Applied Physics Letters paper "An amorphous silicon photodiode with 2 THz gain‐bandwidth product based on cycling excitation process" by Lujiang Yan, Yugang Yu, Alex Ce Zhang, David Hall, Iftikhar Ahmad Niaz, Mohammad Abu Raihan Miah, Yu-Hsin Liu, and Yu-Hwa Lo. The paper proposes an APD-magnitude gain mechanism based on a 30nm-thick amorphous Si film deposited on top of the bulk silicon:

"APDs have relatively high excess noise, a limited gain-bandwidth product, and high operation voltage, presenting a need for alternative signal amplification mechanisms of superior properties. As an amplification mechanism, the cycling excitation process (CEP) was recently reported in a silicon p-n junction with subtle control and balance of the impurity levels and profiles. Realizing that CEP effect depends on Auger excitation involving localized states, we made the counter intuitive hypothesis that disordered materials, such as amorphous silicon, with their abundant localized states, can produce strong CEP effects with high gain and speed at low noise, despite their extremely low mobility and large number of defects. Here, we demonstrate an amorphous silicon low noise photodiode with gain-bandwidth product of over 2 THz, based on a very simple structure."

Wednesday, September 20, 2017

Yole on iPhone X 3D Innovations

Yole Developpement publishes its analysis of the iPhone X 3D camera design and its implications, "Apple iPhone X: unlocking the next decade with a revolution:"

"The infrared camera, proximity ToF detector and flood illuminator seem to be treated as a single block unit. This is supplied by STMicroelectronics, along with Himax for the illuminator subsystem, and Philips Photonics and Finisar for the infrared-light vertical-cavity surface-emitting laser (VCSEL). Then, on the right hand of the speaker, the regular front-facing camera is probably supplied by Cowell, and the sensor chip by Sony. On the far right, the “dot pattern projector” is from ams subsidiary Heptagon... It combines a VCSEL, probably from Lumentum or Princeton Optronics, a wafer level lens and a diffractive optical element (DOE) able to project 30,000 dots of infrared light.

The next step forward should be full ToF array cameras. According to the roadmap Yole has published, this should happen before 2020."
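For context on what a direct ToF array camera measures, a minimal sketch of the round-trip timing relation (the timing value below is illustrative, not from the Yole analysis):

```python
# Direct time-of-flight: depth = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth recovered from the measured round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A 2 ns round trip corresponds to roughly 0.3 m of depth
print(tof_depth_m(2e-9))
```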

Luminar on Automotive LiDAR Progress

OSA publishes a digest of Luminar CTO Jason Eichenholz's talk at the 2017 Frontiers in Optics meeting. A few quotes:

"Surprisingly, however, despite this safety imperative, Eichenholz pointed out that the lidar system used (for example) in Uber’s 2017 self-driving demo has essentially the same technical specifications as the system of the winning vehicle in DARPA’s 2007 autonomous-vehicle grand challenge. “In ten years,” he said, “you have not seen a dramatic improvement in lidar systems to enable fully autonomous driving. There’s been so much progress in computation, so much in machine vision … and yet the technology for the main set of eyes for these cars hasn’t evolved.”

On the requirements side, the array of demands is sobering. They include, of course, a bevy of specific requirements: a 200-m range, to give the vehicle passenger a minimum of seven seconds of reaction time in case of an emergency; laser eye safety; the ability to capture millions of points per second and maintain a 10-fps frame rate; and the ability to handle fog and other unclear conditions.

But Eichenholz also stressed that an autonomous vehicle on the road operates in a “target-rich” environment, with hundreds of other autonomous vehicles shooting out their own laser signals. That environment, he said, creates huge challenges of background noise and interference. And he noted some of the same issues with supply chain, cost control, and zero error tolerance.

Eichenholz outlined some of the approaches and technical steps that Luminar has adopted in its path to meet those many requirements in autonomous-vehicle lidar. One step, he said, was the choice of a 1550-nm, InGaAs laser, which allows both eye safety and a good photon budget. Another was the use of an InGaAs linear avalanche photodiode detector rather than single-photon counting, and scanning the laser signal for field coverage rather than using a detector array. The latter two decisions, he said, substantially reduce problems of background noise and interference. “This is a huge part of our architecture.”"
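The quoted requirements can be sanity-checked with simple arithmetic. The point rate below is an assumed example of "millions of points per second," not a Luminar figure:

```python
# Sanity-check the quoted lidar requirements with simple arithmetic.

RANGE_M = 200.0          # required detection range from the talk
REACTION_S = 7.0         # minimum reaction time from the talk
POINTS_PER_S = 2e6       # assumed example of "millions of points per second"
FRAME_RATE_FPS = 10.0    # required frame rate from the talk

# Closing speed at which a 200 m range still leaves 7 s of reaction time
closing_speed_ms = RANGE_M / REACTION_S       # ~28.6 m/s
closing_speed_kmh = closing_speed_ms * 3.6    # ~103 km/h

# Points available per frame at the required frame rate
points_per_frame = POINTS_PER_S / FRAME_RATE_FPS  # 200,000 points/frame

print(round(closing_speed_kmh), int(points_per_frame))
```

So the 200 m / 7 s pairing roughly corresponds to highway speeds, which is consistent with the safety framing of the talk.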

Wired UK publishes a video interview with Luminar CEO Austin Russell:

Tuesday, September 19, 2017

Functional Safety in Automotive Image Sensors

ON Semi publishes a webinar on Evaluating Functional Safety in Automotive Image Sensors:

Exvision High-Speed Image Sensor-Based Gesture Control

Exvision, a spin-off from University of Tokyo's Ishikawa-Watanabe Laboratory, demos gesture control from far away, based on a high speed image sensor (currently, 120fps Sony IMX208):

SensL Demos 100m LiDAR Range

SensL publishes a demo video of 100m LiDAR based on its 1 x 16 silicon photomultiplier (SiPM) imager scanned over a 5 x 80 deg field of view:

3D Camera Use Cases

Occipital publishes a few videos on 3D camera use cases:

OmniVision Announces Automotive Reference Design

PRNewswire: OmniVision announces an automotive reference design system (ARDS) that allows automotive imaging-system and software developers to mix and match image sensors, ISPs and long-distance serializer modules.

The imaging-system industry is anticipating significant growth in ADAS, including surround-view and rear-view camera systems. NHTSA requires all new vehicles in the U.S. to be equipped with rear-view cameras by 2018. Surround-view systems (SVS) are also expected to become an even more popular feature in the luxury-vehicle segment within the same timeframe. SVSs typically require at least four cameras to provide a 360-degree view.

OmniVision's ARDS demo kits feature OmniVision's 1080p60 OV2775 image sensor, an optional OV495 ISP, and a serializer camera module. The OV2775 is built on a 2.8um OmniBSI-2 Deep Well pixel with a 16-bit linear output from a single exposure.
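A 16-bit linear output from a single exposure implies a large intra-scene dynamic range; a quick calculation of the ideal figure (a sketch, not a number from the press release):

```python
import math

# Ideal dynamic range implied by an N-bit linear output, in dB:
# DR = 20 * log10(2^N), ignoring sensor noise floor effects.
def linear_dynamic_range_db(bits: int) -> float:
    return 20.0 * math.log10(2 ** bits)

print(round(linear_dynamic_range_db(16), 1))  # ~96.3 dB
```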

Monday, September 18, 2017

Samsung to Start Mass Production of 1000fps 3-Layer Sensor

ETNews reports that Samsung follows in Sony's footsteps to develop its own 1000fps image sensor for smartphones:

"Samsung Electronics is going to start mass-producing ‘3-layered image sensor’ in November. This image sensor is made into a layered structure by connecting a system semiconductor (logic chip) that is in charge of calculations and DRAM chip that can temporarily store data through TSV (Through Silicon Via) technology. Samsung Electronics currently ordered special equipment for mass-production and is going to start mass-producing ‘3-layered image sensor’ after doing pilot operation in next month.

SONY established a batch process system that attaches a sensor, a DRAM chip, and a logic chip in a unit of a wafer. On the other hand, it is understood that Samsung Electronics is using a method that makes 2-layered structure with a sensor and a logic chip and attaches DRAM through TC (Thermal Compression) bonding method after flipping over a wafer. From productivity and production cost, SONY has an upper hand. It seems that a reason why Samsung Electronics decided to use its way is because it wanted to avoid using other patents."
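The ETNews report does not give the sensor's resolution or bit depth. Assuming, purely for illustration, a 1080p sensor with 10-bit raw output at 1000fps, the raw data rate the stacked DRAM must absorb can be sketched as:

```python
# Illustrative raw data rate for a DRAM-buffered high-speed sensor.
# Resolution and bit depth here are assumptions; the ETNews report
# only specifies the 1000 fps frame rate.

WIDTH, HEIGHT = 1920, 1080   # assumed 1080p resolution
BITS_PER_PIXEL = 10          # assumed raw bit depth
FPS = 1000                   # frame rate from the report

bits_per_second = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
gbits_per_second = bits_per_second / 1e9   # ~20.7 Gbit/s

# Seconds of burst capture that 1 Gbit of stacked DRAM would buy
burst_seconds = 1e9 / bits_per_second

print(round(gbits_per_second, 1), round(burst_seconds, 3))
```

This is why such sensors buffer bursts in on-stack DRAM rather than streaming continuously over the phone's host interface.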

Turkish Startup Demos CMOS Night Vision

Ankara, Turkey-based PiKSELiM demos the low-light sensitivity of its 640x512 CMOS sensor operating in global shutter mode at 10fps, using f/0.95 C-mount security camera optics: