Not so new, but definitely cool:
On 2/15/2016 7:50 PM, Manfredi, Albert E wrote:
Imaging Revolution: Forget Frames
2/15/2016 10:05 AM EST
PARIS—For centuries, our desire to reproduce accurate, pretty images for “human
consumption” has driven the advancements of camera technologies. But what if we
were to change the premise and design image sensors for computers to see and
analyze the information?
In that case, the fundamental data an image sensor needs to capture—and how
each pixel should operate—would change completely. Further, processing would be
reinvented and known algorithms would become obsolete.
Nowadays, with drones, robots and autonomous cars increasingly tasked to see
their surroundings, detect obstructions and avoid collisions swiftly, these
machines need an image sensor built from the ground up for computer vision.
Chronocam, a Paris-based startup, happens to have the right technology at the
right time.
Two French scientists, Ryad Benosman and Christoph Posch, both well versed in
neuromorphic engineering, founded Chronocam in 2014. They’ve developed an image
sensor designed to capture images not based on artificially created frames, but
driven by events within view.
Each pixel in Chronocam’s asynchronous time-based image sensor makes an
independent decision to sample different parts of a scene at different rates.
“Each pixel individually controls its sampling – with no clock involved – by
reacting to light, or changes in the amount of incident light it receives,”
explained Posch, Chronocam’s CTO.
Conventional image sensors capture visual information at a predetermined frame
rate, Posch explained. Regardless of dynamic changes in the scene, each frame
conveys information from all pixels, uniformly sampling them at the same time.
“Frame-based video acquisition is fundamentally flawed,” Posch said.
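The clock-free, per-pixel sampling Posch describes can be illustrated with a toy model. This is a generic event-camera sketch under assumed parameters (a log-intensity change threshold and ON/OFF polarity events), not Chronocam's actual circuit design:

```python
import math

def pixel_events(prev_log, frame, threshold=0.15):
    """Toy event-camera model (an illustrative sketch, not Chronocam's
    actual circuit): each pixel independently emits an ON (+1) or
    OFF (-1) event when its log intensity has changed by more than
    `threshold` since the last event it fired. Unchanged pixels stay
    silent, so a static scene produces no data at all."""
    events = []
    for i, lum in enumerate(frame):
        delta = math.log1p(lum) - prev_log[i]
        if abs(delta) > threshold:
            events.append((i, 1 if delta > 0 else -1))
            prev_log[i] = math.log1p(lum)  # fired pixels reset their reference
    return events

# A 16-pixel "scene" with one bright dot moving one pixel per step:
frame = [100.0] * 16
prev_log = [math.log1p(v) for v in frame]

frame[3] = 200.0                   # dot appears at pixel 3
ev1 = pixel_events(prev_log, frame)
frame[3], frame[4] = 100.0, 200.0  # dot moves to pixel 4
ev2 = pixel_events(prev_log, frame)

print(ev1)  # [(3, 1)]           -- one ON event where the dot appeared
print(ev2)  # [(3, -1), (4, 1)]  -- OFF where it left, ON where it arrived
```

A frame-based sensor would ship all 16 pixel values every frame whether or not anything moved; the event stream above carries only the one or two pixels that actually changed, which is the redundancy argument Posch makes.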
Shift from imaging to sensing
Pierre Cambou, Activity Leader at Yole Développement, believes fundamental
changes are happening in the CMOS image sensor market. It’s shifting “toward
sensing in opposition to imaging.” Cambou told EE Times, “I think Chronocam
lies exactly at the forefront of this new wave of innovation that will shape
the industry before the end of the decade.”
In Posch’s view, frame-based video capture is fraught with problems. It could
easily miss important events that might have happened between frames.
Over-sampling and under-sampling happen too often. Frame-based methodology
results in redundancy in the recorded data, which triggers higher power
consumption. Results include inefficient data rates and inflated storage
volume. Frame-based video, at 30 or 60 frames per second, or even a much higher
rate, causes “a catastrophe in image capturing,” Posch concluded.
The inspiration for Chronocam’s event-driven vision sensors comes from the two
co-founders who have studied how the human eye and brain work.
According to Benosman, human eyes and brains “do not record the visual
information based on a series of frames.” Biology is in fact much more
sophisticated. “Humans capture the stuff of interest—spatial and temporal
changes—and send that information to the brain very efficiently,” he said.
Explaining Chronocam’s bio-inspired system, Benosman said, “We didn’t invent
it. We observed [nature].”
Chronocam’s biggest assets lie in its talent, experience and intellectual
property.
Benosman, co-founder and scientific adviser, is a mathematician who has done
original work on event-driven computation, retina prosthetics, and neural
sensing models at the Vision Institute in Paris. In joining Chronocam, Benosman
brought along intellectual property rights, including more than 12 patents,
originally invented at the Institute. Benosman is a full-time professor at
Pierre and Marie Curie University, where many of his students also work on
neural sensing models.
Posch, Chronocam’s CTO, also worked at the Vision Institute, co-directing the
Neuromorphic Vision and Natural Computation group. His research interests
include neuromorphic analog very large scale integration (VLSI), CMOS image and
vision sensors, biology-inspired signal processing, and biomedical devices.
Posch cut his teeth at the Large Hadron Collider (LHC), a particle accelerator
at CERN (the European Organization for Nuclear Research) in Switzerland. He
made contributions to the readout system for the ATLAS semiconductor tracker
there, as well as to the conception and design of the front-end chip for the
ATLAS Muon Detector. Sixty thousand of the chips are
installed and working in ATLAS now.
Beyond its talent and intellectual property, it’s important to note that Chronocam’s
biology-inspired vision system is no longer just a theory. Chronocam has
applied and tested its sensor’s principles in restoring people’s vision at
Pixium Vision, a retina prosthetic company founded by Chronocam’s co-founders.
Pixium Vision developed systems to replace the normal physiological functions
of the eye's photoreceptor cells. The system “electrically stimulates the nerve
cells of the inner retina, which then transmit the visual information to the
brain via the optic nerve,” according to Pixium. The underlying technology
works on the same principle used in Chronocam’s image sensors.
Yole’s Cambou said that event-based imagers had to find a niche market where
their distinctive advantages were valued and where no real competition existed.
Chronocam’s opportunity to supply its cameras made Pixium Vision “almost a
perfect case study.”
Asked about Chronocam’s other technological edge, Cambou said, “First,
Chronocam owns a pixel which is the key building block for the imager.” Second,
Chronocam’s technology involves not just hardware but also software designed to
enable a completely new approach to data handling, he added.
But owning a revolutionary technology is a double-edged sword.
Cambou said, Chronocam “has to develop vertically the full technology ecosystem
to take advantage of its sensor technology.” It’s a “drawback” to swift entry
into the $10 billion imaging industry, he said. But unique technology should be
“very good protection” for a startup.
In Cambou’s view, Chronocam “has been very successful to align all the
different competences needed for their approach to work,” adding that “the
Paris VI (Pierre and Marie Curie University) environment has probably worked a
lot in their favor.”
Luca Verre, Chronocam CEO, acknowledged a certain “reluctance” among the
machine vision community to embrace Chronocam’s technology. “Our approach to
computer vision requires a mindset change in people who are used to
conventional methods. It is a matter of willingness to challenge the status quo
and to move out from a comfort zone.”
Chronocam is working to reverse this attitude “by integrating more and more
computer vision tasks at a system level,” said the Chronocam CEO. “We aim at
delivering the camera running a first layer of event-based computation
(Software Development Kit) to provide an interface to users” closer to what
they are used to today, he explained.
Chronocam has raised €1.5 million so far. Among investors are CEA
Investment and Robert Bosch VC. Without disclosing the sum, Verre said, “We’re
currently in the process to raise more money.”
Today, Chronocam has an asynchronous time-based image sensor (ATIS) and
processing software. Driven by events, the sensor offers low power consumption,
operates at several hundred kHz, and is capable of high-temporal-resolution
computation in real time.
Asked to compare Chronocam’s technology with classic frame-based image sensing,
Yole’s Cambou observed, in the frame-based approach, “the ability to handle and
process a large volume of image data is the key. Therefore it quickly ends up
in a power vs. speed trade-off question.”
In the Chronocam asynchronous approach, “only the significant information is
handed down to the processing unit. Processing power is reduced dramatically in
many cases.” He added, “Reconstructing an image as we know it might take some
significant power, [but] nothing that a decent DSP cannot handle.”
Considering the low processing power, Chronocam’s technology will be “better
exploited for medical and machine vision applications than the typical video
imaging applications,” Cambou said.
Chronocam’s CCMA ATIS 1.1 sensor, whose supply voltage is 3.3 V (analog) and
1.8 V (digital), comes in a 9.9 × 8.2 mm² chip size, featuring a 2/3-inch
optical format. Its array size is 304 × 240 (QVGA), with a pixel size of
30 μm × 30 μm. The power consumption is less than 10 mW.
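A back-of-the-envelope comparison shows why the event-driven approach eases the data-rate problem. The 304 × 240 array size comes from the specs above; the 8-bit depth, 30 fps frame rate, 4-byte event size, and 100k events/s activity level are assumptions for illustration, not Chronocam figures:

```python
# Back-of-the-envelope data-rate comparison. The 304 x 240 array size
# comes from the sensor specs; the bit depth, frame rate, event size,
# and event rate below are ASSUMPTIONS for illustration only.
width, height = 304, 240
bits_per_pixel = 8           # assumed frame bit depth
fps = 30                     # assumed frame rate

frame_rate_bps = width * height * bits_per_pixel * fps  # frame-based stream

bytes_per_event = 4          # assumed: address + polarity + timestamp packing
events_per_second = 100_000  # assumed activity in a moderately dynamic scene

event_rate_bps = bytes_per_event * 8 * events_per_second

print(f"frame-based: {frame_rate_bps / 1e6:.1f} Mbit/s")  # 17.5 Mbit/s
print(f"event-based: {event_rate_bps / 1e6:.1f} Mbit/s")  # 3.2 Mbit/s
print(f"reduction:   {frame_rate_bps / event_rate_bps:.1f}x")
```

Under these assumptions the event stream is several times lighter than the frame stream, and the gap widens further in mostly static scenes, where the event rate drops toward zero while the frame rate stays fixed.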
Asked about the company’s next steps, Verre, the CEO, said, “We aim at
decreasing pixel pitch and increasing resolution. Our next steps are the VGA
tape out and the migration to a CIS process.”
Chronocam is open to licensing its technology to others. The company’s
immediate focus, however, is going after key players to collaborate in such
targeted fields as navigation (drone/robot/car) and security/surveillance,
according to Verre.
He said that all top five players in those fields are “aware of Chronocam. They
have tested or they are testing our technology.”
Yole’s Cambou observed that for Chronocam to succeed, it “will have to be
supported by a strong foundry partner. Currently Chronocam is using a very
standard CMOS imaging process, but in order to reduce the pixel size below ten
microns it will need access to 3D stacking technology.”
Additionally, Cambou cautioned that Chronocam “will have to move extremely fast
if it wants to serve emerging players of the robotic and IoT ecosystem.” He
noted that industry players are making key choices right now.
Chronocam “has to prove it’s the right choice right now for the main players of
autonomous driving, robotics and even IoT. If not, those early players will
solve their technical problem one way or another and move on.”
Referring to a report on “Sensors for Drones and Robots” (To be released Q1,
2016 – Yole Développement) which he is spearheading at Yole, Cambou said, “the
market window is clearly open for five years to come.” This market will reach
$700 million by 2021 and above $1 billion by 2025, he predicted. “By 2025, it
becomes much harder to bring a totally disruptive technology to the fray,” he
said.
Carver Mead disciples
Much of neuromorphic engineering has its roots in a concept of “biological
circuits” developed in the late 1980s by Carver Mead at Caltech. Its mission
was to describe the use of VLSI systems containing electronic analog circuits
to mimic the neuro-biological architectures in the nervous system.
The theory -- and its efforts to understand the brain -- took the world by storm in
the 1990s. Although it never resulted in any actual products, “neuromorphic
engineering never died,” said Benosman. “Its R&D has been kept alive.”
As Yole’s Cambou explains it, the original Caltech team spread across Europe,
first in Zurich, then Vienna and now Paris. Most recent developments took place
in the academic world, he noted. “This is both a story of individual
trajectories but also shared knowledge and collective effort.”
A group of researchers in neuromorphic engineering have been meeting at an
annual workshop in the United States – held in Telluride, Colorado. Meanwhile,
a European workshop has also been held yearly in Sardinia, Italy. Benosman and
Posch met in Sardinia in 2007. “These are real hands-on workshops where we
spend a lot of time debating and challenging one another,” explained Benosman.
Asked to compare Chronocam’s technology to other imaging solutions, Yole’s
Cambou said, “The only technology I can compare it [to] right now is photon counting
arrays, for which the usual suspects (and whose technology centers) are located
in Switzerland, Scotland, France and Belgium.” He added, “Both technologies can
serve high-speed applications and are relatively low resolution due to their
large pixel size.”