Wednesday, December 13, 2017

SensL 100m Ranging Demo

SensL demos a 100m range detection with its SiPM sensor:


University at Buffalo paper "ABC: Enabling Smartphone Authentication with Built-in Camera" by Kui Ren, Zhongjie Ba, Sixu Piao, Dimitrios Koutsonikolas, Aziz Mohaisen, and Xinwen Fu proposes to use the camera's photo-response non-uniformity (PRNU) to securely identify a smartphone:

"First observed in conventional digital cameras, PRNU analysis is common in digital forensic science. For example, it can help settle copyright lawsuits involving photographs.

But it hasn’t been applied to cybersecurity — despite the ubiquity of smartphones — because extracting it had required analyzing 50 photos taken by a camera, and experts thought that customers wouldn’t be willing to supply that many photos. Plus, savvy cybercriminals can fake the pattern by analyzing images taken with a smartphone that victims post on unsecured websites.

The study addresses how each of these challenges can be overcome.

Compared to a conventional digital camera, the image sensor of a smartphone is much smaller. The reduction amplifies the pixels’ dimensional non-uniformity and generates a much stronger PRNU. As a result, it’s possible to match a photo to a smartphone camera using one photo instead of the 50 normally required for digital forensics.

When a customer initiates a transaction, the retailer asks the customer (likely through an app) to photograph two QR codes (a type of barcode that contains information about the transaction) presented on an ATM, cash register or other screen.

Using the app, the customer then sends the photograph back to the retailer, which scans the picture to measure the smartphone’s PRNU. The retailer can detect a forgery because the PRNU of the attacker’s camera will alter the PRNU component of the photograph.
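The enrolment-and-matching loop described above boils down to extracting a noise residual from a photo and correlating it against a stored fingerprint. A minimal sketch, with a crude mean-filter denoiser and synthetic gain patterns standing in for the wavelet denoisers and real camera data used in PRNU forensics:

```python
import numpy as np

def denoise(img, k=3):
    """Very crude k x k mean filter, standing in for the wavelet
    denoisers used in real PRNU forensics (an assumption here)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    """PRNU shows up in the residual between an image and its
    denoised version: W = I - F(I)."""
    img = img.astype(float)
    return img - denoise(img)

def ncc(a, b):
    """Normalized cross-correlation for matching a residual
    against a stored camera fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic data: smooth scenes multiplied by a per-pixel gain pattern
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
scene1 = 100 + 60 * np.outer(x, x)
scene2 = 90 + 80 * np.outer(1 - x, x)
prnu_a = 1 + 0.02 * rng.standard_normal((64, 64))  # camera A's pattern
prnu_c = 1 + 0.02 * rng.standard_normal((64, 64))  # camera C's pattern

img_a = scene1 * prnu_a   # enrolment photo, camera A
img_b = scene2 * prnu_a   # transaction photo, same camera
img_c = scene2 * prnu_c   # forgery attempt from a different camera

fp = noise_residual(img_a)                  # stored fingerprint
print(ncc(fp, noise_residual(img_b)))       # same camera: large positive
print(ncc(fp, noise_residual(img_c)))       # other camera: near zero
```

Real systems average residuals over many images (or exploit the stronger smartphone PRNU the paper relies on) and use more robust detectors such as peak-to-correlation energy.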

Sony Sensors Compared

Basler publishes a white paper "Sensor Comparison: Are all IMXs equal?" by Dominik Lappenküper. Some food for thought from the paper:

"The first generation of this new [Pregius] sensor series includes the sensors IMX174 and IMX249. They have a pixel pitch of 5.86 um. In the first generation of the Pregius sensors, a particularly notable feature is the very high saturation capacity of over 32 ke-.

With the second generation of the Pregius series, Sony established a smaller pixel at 3.45 um. Due to the smaller pixels in the sensors of the 2nd generation, their saturation capacity greatly decreases, which results in values that are more typical for the CMOS sensors.


"EMVA1288 standard offers the measured value of the “absolute threshold value for sensitivity”. It states the average number of required photons so that the signal to noise ratio is exactly 1."
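The quoted threshold can be computed from a sensor's quantum efficiency and temporal dark noise by setting SNR = 1. This sketch keeps only shot noise and read noise (the full EMVA1288 model also includes quantization noise), with illustrative numbers not taken from the white paper:

```python
import math

def absolute_sensitivity_threshold(qe, read_noise_e):
    """Mean photon number at which SNR = 1, keeping only shot noise
    and temporal dark/read noise (a simplification of the full
    EMVA1288 model).
    SNR = qe*mu_p / sqrt(qe*mu_p + sigma_d^2) = 1
    -> the signal in electrons s solves s^2 = s + sigma_d^2.
    """
    s = 0.5 * (1 + math.sqrt(1 + 4 * read_noise_e ** 2))  # electrons
    return s / qe                                          # photons

# e.g. 60% QE and 2 e- read noise (illustrative, not from the paper)
print(absolute_sensitivity_threshold(0.60, 2.0))  # ~4.3 photons
```

Note that even an ideal, noiseless sensor needs 1/QE photons on average, since shot noise alone already limits SNR to 1 at one detected electron.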

LiDAR News: Aeye, Ouster, Innovusion

BusinessWire: AEye (former US LADAR) introduces iDAR (Intelligent Detection and Ranging) that combines the world’s first agile MOEMS LiDAR, pre-fused with a low-light camera and embedded artificial intelligence - creating software-definable and extensible hardware that can dynamically adapt to real-time demands.

AEye’s iDAR is designed to intelligently prioritize and interrogate co-located pixels (2D) and voxels (3D) within a frame, enabling the system to target and identify objects within a scene 10-20x more effectively than LiDAR-only products. Additionally, iDAR is capable of overlaying 2D images on 3D point clouds for the creation of True Color LiDAR. The introduction of iDAR follows AEye’s September demonstration of the first 360 degree, vehicle-mounted, solid-state LiDAR system with ranges up to 300 meters at high resolution.

"AEye’s unique architecture has allowed us to address many of the fundamental limitations of first generation spinning or raster scanning LiDAR technologies,” said Luis Dussan, AEye founder and CEO. “These first generation systems silo sensors and use rigid asymmetrical data collection that either oversample or undersample information. This dynamic exposes an inherent tradeoff between density and latency in legacy sensors, which restricts or eliminates the ability to do intelligent sensing. For example, while traditional 64 line systems can hit an object once per frame (every 100ms or so), we can, with intelligent sensing, selectively revisit any chosen object twice within 30 microseconds - an improvement of 3000X. This embedded intelligence optimizes data collection, so we can transfer less data while delivering better quality, more relevant content.

AEye’s iDAR system uses proprietary low-cost, solid-state beam-steering 1550nm MOEMS-based LiDAR, computer vision, and embedded artificial intelligence to enable dynamic control of every co-located pixel and voxel in each frame within rapidly changing scenes.

MIT Technology Review adds: "there’s one point that there is no getting around: AEye’s device only has a 70° field of view, which means that a car would need five or six of the sensors dotted around it for a full 360 degrees. And that raises a killer question: how much will it cost? Dussan won’t commit to a number, but he makes it clear that this is supposed to be a high-end option—not competing with hundred-dollar solid state devices, but challenging high-resolution devices like those made by Velodyne. For a full set of sensors around the car, he says, “if you compare true apples-to-apples, we’re going to be the lowest-cost system around.”"

PRNewswire: Ouster emerges from stealth mode and announces the launch of OS1 model LIDAR, and $27 million in Series A funding, led by Cox Enterprises. The 64-channel OS1 Lidar has begun shipping to customers, and is rapidly ramping up commercial-scale production at a price point approximately 85% below that of its competition (it costs $12,000, according to Ouster's site).

The company was founded by CEO Angus Pacala, co-founder and former head of engineering of Quanergy, and CTO Mark Frichtl, a prior Quanergy Systems engineer who also spent time at Palantir Technologies, First Solar, and the Apple Special Projects Group. Pacala and Frichtl formed Ouster with the vision to create a high-performance, reliable, and small form factor LIDAR sensor that could be manufactured at a scale and price that would allow autonomous technology to continue its rapid expansion without the cost and manufacturing constraints currently present in the market.

"The company has maintained a low-profile for over two years – staying heads-down and focusing on getting the OS1 ready to ship," noted CEO and co-founder Angus Pacala. "I'm incredibly proud of our team for their hard work to produce the most advanced, practical, and scalable LIDAR sensor on the market, and we're very excited about the impact our product will have in autonomous vehicles and other applications in robotics," Pacala added.

Ouster's Series A funding will primarily be used for the manufacturing and the continued development of its next sensor designs. The company anticipates manufacturing capacity in the tens of thousands of sensors in 2018 based on its current product design, and Ouster's product roadmap supports continually improving resolution, range, and alternative form factors. Additionally, the new capital will support the company's expansion from approximately 40 employees today to 100 employees by summer 2018.

PRNewswire: Innovusion too comes out of stealth mode with a LiDAR featuring:
  • Resolution: provides near picture quality with over 300 lines of resolution and several hundred pixels in both the vertical and horizontal dimensions.
  • Range: detects both light and dark objects at distances up to 150 meters away which allows cars to react and make decisions at freeway speeds and during complex driving situations.
  • Sensor fusion: fuses LiDAR raw data with camera video in the hardware layer which dramatically reduces latency, increases computing efficiency and creates a superior sensor experience.
  • Accessibility: enables a compact design which allows for easy and flexible integration without impairing vehicle aerodynamics. Innovusion's products leverage components available from mature supply chain partners, enabling fast time-to-market, affordable pricing and mass production.

Chinese-language sites Ifeng and Kuwankeji publish more info about the Innovusion product:

"Innovusion has also achieved an important capability: the fusion of LiDAR point cloud and camera video data at the hardware level, which greatly reduces the software processing time for sensor data fusion.

The company founders used to work for Velodyne and Baidu.

Objects with 10% reflectivity can be detected at distances of 100 meters or more.

The current product still contains mechanical components, so it can be called a hybrid solid-state LiDAR. Innovusion's team considered design-for-manufacturing at the design stage, and the components used in the product are mature parts. In addition, the prototype uses a 1550 nm laser.

Tuesday, December 12, 2017

MEMSDrive OIS vs. VCM-based OIS

MEMSDrive publishes comparison videos of its OIS against iPhone X and Galaxy Note8 VCM-based OIS:

IEDM Papers Review

Semiconductor Engineering publishes Mark Lapedus review of IEDM 2017 Imaging Session papers:

- TSMC and EPFL presented "a paper on what they call the world’s first back-illuminated 3D-stacked, single-photon avalanche diode (SPAD) in 45nm CMOS technology.

The SPAD achieves a dark count rate of 55.4 cps/μm², a maximum photon detection probability of 31.8% at 600nm, over 5% in the 420-920nm wavelength range, and a timing jitter of 107.7 ps at 2.5V excess bias voltage at room temperature.
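Per-device numbers follow directly from the quoted density. For example, the total dark count rate for a circular SPAD of a given active diameter (the diameter below is illustrative, not from the paper):

```python
import math

def spad_dcr_cps(dcr_density_cps_um2, diameter_um):
    """Total dark count rate of a circular SPAD, from an areal
    DCR density and an assumed active-area diameter."""
    area_um2 = math.pi * (diameter_um / 2) ** 2
    return dcr_density_cps_um2 * area_um2

print(spad_dcr_cps(55.4, 10.0))  # ~4.4 kcps for a 10 um device
```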

- Sony presented "a paper on a CMOS photon detector - a non-electron-multiplying CMOS image sensor photon detector. Based on a 90nm process, Sony’s CMOS photon detector features 15μm pitch active sensor pixels with a complete charge transfer and readout noise of 0.5 e- RMS.

The pixel circuit is a conventional 4T pixel. The pixels are arrayed, resulting in a high conversion gain of 132 µV/e-, according to the paper. The photodiode is expanded to a size of 14.7μm x 13.1μm in a pixel with a pitch of 15μm, resulting in a physical fill factor of 76% without using back illumination. 4 pixels in a column are simultaneously accessed and read.
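With 0.5 e- RMS read noise the sensor sits near, but not quite in, the photon-counting regime. A standard back-of-the-envelope estimate (not a figure from Sony's paper) of how often Gaussian read noise flips a count across the nearest half-electron decision boundary:

```python
import math

def miscount_probability(read_noise_e):
    """Probability that Gaussian read noise pushes a pixel's signal
    across the nearest half-electron boundary, i.e. the chance of
    counting n photoelectrons as n-1 or n+1 (for interior counts,
    n >= 1, where both boundaries exist)."""
    one_sided = 0.5 * math.erfc(0.5 / (read_noise_e * math.sqrt(2)))
    return 2 * one_sided

print(miscount_probability(0.50))  # ~32% at 0.5 e- rms
print(miscount_probability(0.15))  # well below 1% -- true photon counting
```

This is why deep-sub-electron noise (roughly 0.15 e- rms or better) is usually quoted as the bar for reliable photon counting.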

Samsung Improves its Iris Scanner

Korea Herald: The upcoming "Galaxy S9’s iris scanner will have an improved camera lens and functions to better recognize the eyes of users.

The iris camera lens will be improved to 3MP from 2MP of Galaxy S8 and Galaxy Note 8 to capture clearer images. The scanner will better recognize users’ irises even when they wear eyeglasses, move their eyeballs or are in a too dark or too light environment. The response time will also be shorter than the current one second.

Samsung is also on target to expand the iris scanner into budget models possibly late next year or early 2019 with the ultimate aim of replacing physical banks with mobile banking.

A Samsung spokesperson said, “Iris scanner is the safest biometric authentication (among iris, fingerprint and face recognition) and we will continue to improve the system for upcoming smartphones for safer banking transactions.

Galaxy S8 iris scanning

MIPI Alliance Opens Access to its MIPI I3C Sensor Interface Specification

BusinessWire: Starting today, all companies, including those not currently members of MIPI Alliance, may access the MIPI I3C v1.0 specification so they may evaluate the incorporation of the specification into their sensor integration plans and design applications.

"MIPI I3C provides a welcome update to the I2C technology that has been widely adopted over the past 35 years. Extending access provides an opportunity to spur innovation and help other industries beyond mobile," said Joel Huloux, chairman of MIPI Alliance. "It helps MIPI members as well, because it supports greater adoption and interoperability, strengthens the ecosystem and provides for a richer development environment.

Panasonic and Osaka University Develop Blood Vessel Endoscope

Asahi Shimbun: Panasonic and Osaka University have developed what they say is the world’s first vascular endoscopic catheter with an image sensor on its head that they say greatly improves blood vessel observations and could change existing therapies. The new catheter has a color resolution of about 480,000 pixels, about 50x higher than existing devices.

An image sensor, a lens and a fiber-optic illuminator are embedded in the head of the vascular endoscopic catheter, which measures 1.8mm across and 5mm long.

Taisho Biomed Instruments Co., a manufacturer of medical devices, plans to release the new vascular endoscopic catheter for sale to hospitals this year. Panasonic has set a shipping target of about 8,000 units in fiscal 2021.

Synaptics Unveils 2nd Generation Under-display Optical Fingerprint Sensor

PC Perspective: Synaptics unveils FS9500 Clear ID family of optical fingerprint sensors for smartphones with OLED displays. Synaptics attaches a fingerprint sensor module to the underside of the display and uses the OLED display itself as the light source to illuminate the user's fingerprint, so that the optimized CMOS image sensor can scan the fingerprint from the reflected light bounced through the gaps between pixels. The new sensor can work with up to 1.5mm-thick displays.

Synaptics uses "Quantum Matcher" and "PurePrint" machine learning technology to enhance security and adapt to different external lighting environments, including direct sunlight. The company says that its new fingerprint sensor is able to work faster and in more situations than 3D facial recognition systems: its fingerprint sensor is able to read a user's fingerprint in 0.7s versus 1.4s for a facial recognition camera. The Clear ID In-Display fingerprint sensor is said to have an approximately 99% spoof rejection rate due to the AI-powered PurePrint technology that discerns real fingerprints from fakes.

GlobeNewsWire: Synaptics is said to bring its in-display fingerprint sensors to mass production with one of the top five OEMs.

GlobeNewsWire: Synaptics's previous generation Natural ID FS9100 family of optical-based, under-glass fingerprint authentication solutions have been named a CES 2018 Innovation Awards Honoree. The Natural ID FS9100 family is said to be the industry’s first optical-based fingerprint sensors for smartphones enabling secure authentication through 1mm thick cover glass.

Synaptics is currently sampling its third-generation optical solution for in-display fingerprint authentication to select customers with mass production with a Tier 1 OEM expected in the current calendar year.

Monday, December 11, 2017

12nm Pixel Size

There is a funny mistake in Xiaomi Redmi 5 Plus smartphone promotional video:

2-step Column-Parallel Delta-Sigma ADC

CentraleSupelec, France, publishes a paper "A 14-b Two-step Inverter-based Σ∆ ADC for CMOS Image Sensor" by Pierre Bisiaux, Caroline Lelandais-Perrault, Anthony Kolar, Philippe Benabes, and Filipe Vinci dos Santos presented at IEEE International NEWCAS Conference, in June 2017 at Strasbourg, France.

"This paper presents a 14-bit Incremental Sigma Delta (IΣ∆) analog-to-digital converter (ADC) suitable for a column wise integration in a CMOS image sensor. A two-step conversion is performed to improve the conversion speed. As the same Σ∆ modulator is used for both steps, the overall complexity is reduced. Furthermore, the use of inverter-based amplifiers instead of operational transconductance amplifier (OTA) facilitates the integration within the column pitch and decreases power consumption. The proposed ADC is designed in 0.18 µm CMOS technology. The simulation shows that for a 1.8 V voltage supply, a 20 MHz clock frequency and an oversampling ratio (OSR) of 70, the power consumption is 460 µW, achieving an SNR of 83.7 dB."
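The principle behind the paper's converter can be illustrated with a plain first-order incremental ΣΔ modulator. This sketch omits the two-step residue re-conversion and the inverter-based circuit details that give the paper its 14-bit resolution:

```python
def incremental_sd_adc(u, osr):
    """First-order incremental sigma-delta: reset the integrator,
    run `osr` cycles of accumulate-and-quantize, and average the
    1-bit output stream. The estimate is accurate to ~1/osr.
    Input u is assumed to lie in [0, 1)."""
    integ = 0.0
    count = 0
    for _ in range(osr):
        integ += u                 # accumulate the (held) input
        if integ >= 0.5:           # 1-bit quantizer
            integ -= 1.0           # DAC feedback subtracts the reference
            count += 1
    return count / osr             # digital estimate of u

for u in (0.1, 0.37, 0.8):
    print(u, incremental_sd_adc(u, 70))   # OSR of 70, as in the paper
```

The paper's two-step trick reuses the same modulator to re-convert the integrator's residue after the coarse pass, extending resolution well beyond the roughly 1/OSR accuracy of this single-step version.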

Sunday, December 10, 2017

Imec Quantum Dot Sensor

MDPI Special Issue on the 2017 International Image Sensor Workshop (IISW) publishes Imec, KU Leuven, and Ghent University paper "Thin-Film Quantum Dot Photodiode for Monolithic Infrared Image Sensors" by Pawel E. Malinowski, Epimitheas Georgitzikis, Jorick Maes, Ioanna Vamvaka, Fortunato Frazzica, Jan Van Olmen, Piet De Moor, Paul Heremans, Zeger Hens, and David Cheyns. The paper describes an IR image sensor somewhat similar to Invisage's:

"This work describes a CMOS-compatible pixel stack based on lead sulfide quantum dots (PbS QD) with tunable absorption peak. Photodiode with a 150-nm thick absorber in an inverted architecture shows dark current of 10−6 A/cm2 at −2 V reverse bias and EQE above 20% at 1440 nm wavelength. Optical modeling for top illumination architecture can improve the contact transparency to 70%. Additional cooling (193 K) can improve the sensitivity to 60 dB. This stack can be integrated on a CMOS ROIC, enabling order-of-magnitude cost reduction for infrared sensors."
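To put the quoted 10−6 A/cm² dark current in perspective, it can be converted into electrons per pixel per second for an assumed pixel size (the 5 µm pitch below is illustrative; the abstract does not give one):

```python
Q_E = 1.602e-19   # electron charge, C

def dark_electrons_per_second(j_dark_a_per_cm2, pixel_pitch_um):
    """Convert an areal dark-current density into electrons per
    pixel per second, assuming a square pixel of the given pitch."""
    area_cm2 = (pixel_pitch_um * 1e-4) ** 2
    return j_dark_a_per_cm2 * area_cm2 / Q_E

print(dark_electrons_per_second(1e-6, 5.0))   # ~1.6e6 e-/s per pixel
```

At a 30 ms exposure that is roughly 47,000 dark electrons per pixel, which is why the abstract's 193 K cooling figure matters for sensitivity.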

Saturday, December 09, 2017

Canon Global Shutter Sensor Paper

MDPI Special Issue on the 2017 International Image Sensor Workshop (IISW) publishes Canon paper "Development of Gentle Slope Light Guide Structure in a 3.4 μm Pixel Pitch Global Shutter CMOS Image Sensor with Multiple Accumulation Shutter Technology" by Hiroshi Sekine, Masahiro Kobayashi, Yusuke Onuki, Kazunari Kawabata, Toshiki Tsuboi, Yasushi Matsuno, Hidekazu Takahashi, Shunsuke Inoue, and Takeshi Ichikawa.

"CISs with GS function have generally been inferior to the rolling shutter (RS) CIS in performance, because they have more components. This problem is remarkable in small pixel pitch. The newly developed 3.4 µm pitch GS CIS solves this problem by using multiple accumulation shutter technology and the gentle slope light guide structure. As a result, the developed GS pixel achieves 1.8 e− temporal noise and 16,200 e− full well capacity with charge domain memory in 120 fps operation. The sensitivity and parasitic light sensitivity are 28,000 e−/lx·s and −89 dB, respectively. Moreover, the incident light angle dependence of sensitivity and parasitic light sensitivity are improved by the gentle slope light guide structure."
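The quoted noise and full-well figures directly imply the pixel's single-exposure dynamic range:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Single-exposure dynamic range from full-well capacity and
    temporal read noise: DR = 20*log10(FWC / noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

print(dynamic_range_db(16200, 1.8))  # 20*log10(16200/1.8) ~ 79 dB
```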

Friday, December 08, 2017

Huawei is Reported Preparing Triple Rear Camera Smartphone

XDA Developers quotes VentureBeat reporter Evan Blass, who tweeted about the upcoming Huawei smartphone featuring a triple rear camera with 40MP resolution and 5x zoom:

Yole: Camera is Among Major Heat Sources in Smartphones

Yole Developpement report "Smartphones: a significant challenge for thermal management companies" points to the camera and LED flash as one of the complex thermal management problems in smartphones:

Thursday, December 07, 2017

Recent ON Semi CCD Advances

MDPI Special Issue on the 2017 International Image Sensor Workshop (IISW) publishes ON Semi paper "Recent Enhancements to Interline and Electron Multiplying CCD Image Sensors" by Eric G. Stevens, Jeffrey A. Clayhold, Hung Doan, Robert P. Fabinski, Jaroslav Hynecek, Stephen L. Kosman, and Christopher Parks.

"This paper describes recent process modifications made to enhance the performance of interline and electron-multiplying charge-coupled-device (EMCCD) image sensors. By use of MeV ion implantation, quantum efficiency in the NIR region of the spectrum was increased by 2×, and image smear was reduced by 6 dB. By reducing the depth of the shallow photodiode (PD) implants, the photodiode-to-vertical-charge-coupled-device (VCCD) transfer gate voltage required for no-lag operation was reduced by 3 V, and the electronic shutter voltage was reduced by 9 V. The thinner, surface pinning layer also resulted in a reduction of smear by 4 dB in the blue portion of the visible spectrum. For EMCCDs, gain aging was eliminated by providing an oxide-only dielectric under its multiplication phase, while retaining the oxide-nitride-oxide (ONO) gate dielectrics elsewhere in the device."

Queen Elizabeth Prize for Engineering Handed to Image Sensor Inventors

Queen Elizabeth Prize for Engineering has been presented to Eric Fossum, Nobukazu Teranishi and Michael Tompsett. Together with George Smith, who was unable to attend the ceremony, this year’s winners are honored for their contribution to creating digital imaging sensors:

From left to right: Prince Charles, Fossum, Tompsett, Teranishi

Qualcomm Snapdragon 845 Imaging Features

PRNewswire: The new Snapdragon 845 Platform is designed to capture cinema-grade videos and for AR applications:

Spectra 280 ISP:

  • Ultra HD premium capture
  • Qualcomm Spectra Module Program, featuring Active Depth Sensing
  • MCTF video capture
  • Multi-frame noise reduction
  • High performance capture up to 16MP @60FPS
  • Slow motion video capture (720p @480 fps)
  • ImMotion computational photography
  • Dual 14-bit ISPs
  • Hybrid Autofocus
  • Hardware Accelerated Face Detection
  • HDR Video Recording
Adreno 630 Visual Processing Subsystem:

  • 30% improved graphics/video rendering and power reduction compared to previous generation
  • Room-scale 6 DoF with SLAM
  • Adreno foveation, featuring tile rendering, eye tracking, multiView rendering, fine grain preemption
  • Improved 6DoF with hand-tracking and controller support

Hexagon 685 DSP:
  • 3rd Generation Hexagon Vector DSP (HVX) for AI and imaging

Wednesday, December 06, 2017

Leti SPAD Presentation

Leti publishes a presentation on SPAD image sensors it develops together with ST: "Avalanche Diodes for 3D Imaging at Large Distances" by Norbert Moussy. A few slides from the presentation: