Wednesday, December 13, 2017

SensL 100m Ranging Demo

SensL demos 100m range detection with its SiPM sensor:

PRNU ID

University at Buffalo paper "ABC: Enabling Smartphone Authentication with Built-in Camera" by Kui Ren, Zhongjie Ba, Sixu Piao, Dimitrios Koutsonikolas, Aziz Mohaisen, and Xinwen Fu proposes to use PRNU to securely identify a smartphone:

"First observed in conventional digital cameras, PRNU analysis is common in digital forensic science. For example, it can help settle copyright lawsuits involving photographs.

But it hasn’t been applied to cybersecurity — despite the ubiquity of smartphones — because extracting it had required analyzing 50 photos taken by a camera, and experts thought that customers wouldn’t be willing to supply that many photos. Plus, savvy cybercriminals can fake the pattern by analyzing images taken with a smartphone that victims post on unsecured websites.

The study addresses how each of these challenges can be overcome.

Compared to a conventional digital camera, the image sensor of a smartphone is much smaller. The reduction amplifies the pixels’ dimensional non-uniformity and generates a much stronger PRNU. As a result, it’s possible to match a photo to a smartphone camera using one photo instead of the 50 normally required for digital forensics.

When a customer initiates a transaction, the retailer asks the customer (likely through an app) to photograph two QR codes (a type of barcode that contains information about the transaction) presented on an ATM, cash register or other screen.

Using the app, the customer then sends the photograph back to the retailer, which scans the picture to measure the smartphone’s PRNU. The retailer can detect a forgery because the PRNU of the attacker’s camera will alter the PRNU component of the photograph.
"

Sony Sensors Compared

Basler publishes a white paper "Sensor Comparison: Are all IMXs equal?" by Dominik Lappenküper. Some food for thought from the paper:

"The first generation of this new [Pregius] sensor series includes the sensors IMX174 and IMX249. They have a pixel pitch of 5.86 um. In the first generation of the Pregius sensors, a particularly notable feature is the very high saturation capacity of over 32 ke-.

With the second generation of the Pregius series, Sony established a smaller pixel at 3.45 μm. Due to the smaller pixels of the 2nd generation sensors, their saturation capacity decreases greatly, resulting in values more typical for CMOS sensors.
"


"EMVA1288 standard offers the measured value of the “absolute threshold value for sensitivity”. It states the average number of required photons so that the signal to noise ratio is exactly 1."

LiDAR News: Aeye, Ouster, Innovusion

BusinessWire: AEye (former US LADAR) introduces iDAR (Intelligent Detection and Ranging) that combines the world’s first agile MOEMS LiDAR, pre-fused with a low-light camera and embedded artificial intelligence - creating software-definable and extensible hardware that can dynamically adapt to real-time demands.

AEye’s iDAR is designed to intelligently prioritize and interrogate co-located pixels (2D) and voxels (3D) within a frame, enabling the system to target and identify objects within a scene 10-20x more effectively than LiDAR-only products. Additionally, iDAR is capable of overlaying 2D images on 3D point clouds for the creation of True Color LiDAR. The introduction of iDAR follows AEye’s September demonstration of the first 360 degree, vehicle-mounted, solid-state LiDAR system with ranges up to 300 meters at high resolution.
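As a rough illustration of what overlaying camera pixels on a point cloud involves (a generic pinhole-projection sketch, not AEye's implementation; the intrinsics K and the LiDAR-to-camera extrinsics R, t are assumed to come from a prior calibration):

import numpy as np

def colorize_points(points_xyz, image, K, R, t):
    # Transform LiDAR points (N, 3) into the camera frame
    cam = points_xyz @ R.T + t
    cam = cam[cam[:, 2] > 0]                       # keep points in front of the camera
    uv = cam @ K.T                                 # pinhole projection to homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep points that land inside the image
    return cam[ok], image[v[ok], u[ok]]            # 3D points paired with their RGB values

Doing this association in the hardware layer, rather than on a host processor, is presumably what lets both AEye and Innovusion (below) claim lower fusion latency.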

“AEye’s unique architecture has allowed us to address many of the fundamental limitations of first generation spinning or raster scanning LiDAR technologies,” said Luis Dussan, AEye founder and CEO. “These first generation systems silo sensors and use rigid asymmetrical data collection that either oversamples or undersamples information. This dynamic exposes an inherent tradeoff between density and latency in legacy sensors, which restricts or eliminates the ability to do intelligent sensing. For example, while traditional 64 line systems can hit an object once per frame (every 100ms or so), we can, with intelligent sensing, selectively revisit any chosen object twice within 30 microseconds - an improvement of 3000X. This embedded intelligence optimizes data collection, so we can transfer less data while delivering better quality, more relevant content.”
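Taking the quoted numbers at face value, the 3000X figure is simply the ratio of revisit intervals: 100 ms / 30 µs ≈ 3,300, i.e. roughly three thousand times more frequent revisits of a selected object than a fixed-scan sensor that only sees it once per frame.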

AEye’s iDAR system uses proprietary low-cost, solid-state beam-steering 1550nm MOEMS-based LiDAR, computer vision, and embedded artificial intelligence to enable dynamic control of every co-located pixel and voxel in each frame within rapidly changing scenes.


MIT Technology Review adds: "there’s one point that there is no getting around: AEye’s device only has a 70° field of view, which means that a car would need five or six of the sensors dotted around it for a full 360 degrees. And that raises a killer question: how much will it cost? Dussan won’t commit to a number, but he makes it clear that this is supposed to be a high-end option—not competing with hundred-dollar solid state devices, but challenging high-resolution devices like those made by Velodyne. For a full set of sensors around the car, he says, “if you compare true apples-to-apples, we’re going to be the lowest-cost system around.”"

PRNewswire: Ouster emerges from stealth mode and announces the launch of its OS1 LiDAR and $27 million in Series A funding, led by Cox Enterprises. The 64-channel OS1 LiDAR has begun shipping to customers, and the company is rapidly ramping up commercial-scale production at a price point approximately 85% below that of its competition ($12,000, according to Ouster's site).
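Taking both figures at face value, the implied reference price for a competing 64-channel sensor is roughly $12,000 / (1 - 0.85) = $80,000, which is consistent with published prices for high-end 64-channel mechanical LiDARs of that era.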

The company was founded by CEO Angus Pacala, co-founder and former head of engineering of Quanergy, and CTO Mark Frichtl, a prior Quanergy Systems engineer who also spent time at Palantir Technologies, First Solar, and the Apple Special Projects Group. Pacala and Frichtl formed Ouster with the vision to create a high-performance, reliable, and small form factor LIDAR sensor that could be manufactured at a scale and price that would allow autonomous technology to continue its rapid expansion without the cost and manufacturing constraints currently present in the market.

"The company has maintained a low-profile for over two years – staying heads-down and focusing on getting the OS1 ready to ship," noted CEO and co-founder Angus Pacala. "I'm incredibly proud of our team for their hard work to produce the most advanced, practical, and scalable LIDAR sensor on the market, and we're very excited about the impact our product will have in autonomous vehicles and other applications in robotics," Pacala added.

Ouster's Series A funding will primarily be used for the manufacturing and the continued development of its next sensor designs. The company anticipates manufacturing capacity in the tens of thousands of sensors in 2018 based on its current product design, and Ouster's product roadmap supports continually improving resolution, range, and alternative form factors. Additionally, the new capital will support the company's expansion from approximately 40 employees today to 100 employees by summer 2018.


PRNewswire: Innovusion too comes out of stealth mode with a LiDAR featuring:
  • Resolution: provides near picture quality with over 300 lines of resolution and several hundred pixels in both the vertical and horizontal dimensions.
  • Range: detects both light and dark objects at distances up to 150 meters away which allows cars to react and make decisions at freeway speeds and during complex driving situations.
  • Sensor fusion: fuses LiDAR raw data with camera video in the hardware layer which dramatically reduces latency, increases computing efficiency and creates a superior sensor experience.
  • Accessibility: enables a compact design which allows for easy and flexible integration without impairing vehicle aerodynamics. Innovusion's products leverage components available from mature supply chain partners, enabling fast time-to-market, affordable pricing and mass production.

Chinese-language sites Ifeng and Kuwankeji publish more info about the Innovusion product:

"Innovusion also achieved a very important function, is the fusion of laser radar point cloud and camera video data at the hardware level, greatly reducing the sensor data fusion software processing time.

The company founders used to work for Velodyne and Baidu.

For objects with 10% reflectivity, the detection range is 100 meters or more.

The current product still contains mechanical components, so it can be called a hybrid solid-state LiDAR. Innovusion's team considered design-for-manufacturing from the design stage, and the components used in the product are mature parts. In addition, a 1550nm laser is used in the prototype.
"

Tuesday, December 12, 2017

MEMSDrive OIS vs. VCM-based OIS

MEMSDrive publishes comparison videos of its OIS against iPhone X and Galaxy Note8 VCM-based OIS:



IEDM Papers Review

Semiconductor Engineering publishes Mark Lapedus' review of the IEDM 2017 Imaging Session papers:

- TSMC and EPFL presented "a paper on what they call the world’s first back-illuminated 3D-stacked, single-photon avalanche diode (SPAD) in 45nm CMOS technology.

The SPAD achieves a dark count rate of 55.4 cps/μm², a maximum photon detection probability of 31.8% at 600nm and over 5% across the 420-920nm wavelength range, and a timing jitter of 107.7ps at 2.5V excess bias voltage at room temperature.
"

- Sony presented "a paper on a CMOS photon detector - a non-electron-multiplying CMOS image sensor photon detector. Based on a 90nm process, Sony’s CMOS photon detector features 15μm pitch active sensor pixels with a complete charge transfer and readout noise of 0.5 e- RMS.

The pixel circuit is a conventional 4T pixel. The pixels are arrayed, resulting in a high conversion gain of 132 μV/e-, according to the paper. The photodiode is expanded to a size of 14.7μm x 13.1μm in a pixel with a pitch of 15μm, resulting in a physical fill factor of 76% without using back illumination. 4 pixels in a column are simultaneously accessed and read.
"

Samsung Improves its Iris Scanner

Korea Herald: The upcoming "Galaxy S9’s iris scanner will have an improved camera lens and functions that make it better at recognizing users' eyes.

The iris camera will be improved to 3MP from the 2MP of Galaxy S8 and Galaxy Note 8 to capture clearer images. The scanner will better recognize users’ irises even when they wear eyeglasses, move their eyeballs or are in very dark or very bright environments. The response time will also be shorter than the current one second.

Samsung is also on target to expand the iris scanner into budget models possibly late next year or early 2019 with the ultimate aim of replacing physical banks with mobile banking.
"

A Samsung spokesperson said, “Iris scanner is the safest biometric authentication (among iris, fingerprint and face recognition) and we will continue to improve the system for upcoming smartphones for safer banking transactions.”

Galaxy S8 iris scanning

MIPI Alliance Opens Access to its MIPI I3C Sensor Interface Specification

BusinessWire: Starting today, all companies, including those not currently members of MIPI Alliance, may access the MIPI I3C v1.0 specification so they may evaluate the incorporation of the specification into their sensor integration plans and design applications.

"MIPI I3C provides a welcome update to the I2C technology that has been widely adopted over the past 35 years. Extending access provides an opportunity to spur innovation and help other industries beyond mobile," said Joel Huloux, chairman of MIPI Alliance. "It helps MIPI members as well, because it supports greater adoption and interoperability, strengthens the ecosystem and provides for a richer development environment."

Panasonic and Osaka University Develop Blood Vessel Endoscope

Asahi Shimbun: Panasonic and Osaka University have developed what they say is the world’s first vascular endoscopic catheter with an image sensor on its head, which they say greatly improves blood vessel observations and could change existing therapies. The new catheter has a color resolution of about 480,000 pixels, about 50x higher than that of existing devices.

An image sensor, a lens and a fiber-optic illuminator are embedded in the head of the vascular endoscopic catheter, which measures 1.8mm across and 5mm long.

Taisho Biomed Instruments Co., a manufacturer of medical devices, plans to release the new vascular endoscopic catheter for sale to hospitals this year. Panasonic has set a shipping target of about 8,000 units in fiscal 2021.

Synaptics Unveils 2nd Generation Under-display Optical Fingerprint Sensor

PC Perspective: Synaptics unveils the FS9500 Clear ID family of optical fingerprint sensors for smartphones with OLED displays. Synaptics attaches a fingerprint sensor module to the underside of the display and uses the OLED display itself as the light source to illuminate the user's fingerprint, so that the optimized CMOS image sensor can scan the fingerprint from the reflected light bounced through the gaps between pixels. The new sensor can work with displays up to 1.5mm thick.

Synaptics uses "Quantum Matcher" and "PurePrint" machine learning technology to enhance security and adapts to different external lighting environments, including a direct sunlight. The company says that its new fingerprint sensor is able to work faster and in more situations than a 3D facial recognition systems: its fingerprint sensor is able to read a user's fingerprint in 0.7s versus 1.4s for a facial recognition camera. The Clear ID In-Display fingerprint sensor is said to have approximately 99% spoof rejection rate due to the AI-powered PurePrint technology that discerns real fingerprints from fakes.

GlobeNewsWire: Synaptics is said to bring its in-display fingerprint sensors to mass production with one of the top five OEMs.


GlobeNewsWire: Synaptics's previous generation Natural ID FS9100 family of optical-based, under-glass fingerprint authentication solutions have been named a CES 2018 Innovation Awards Honoree. The Natural ID FS9100 family is said to be the industry’s first optical-based fingerprint sensors for smartphones enabling secure authentication through 1mm thick cover glass.

Synaptics is currently sampling its third-generation optical solution for in-display fingerprint authentication to select customers with mass production with a Tier 1 OEM expected in the current calendar year.