Military and Strategic Journal
Issued by the Directorate of Morale Guidance at the General Command of the Armed Forces
United Arab Emirates
Founded in August 1971

2017-08-10

DARPA Searches for Intelligent Imaging Sensors

The U.S. Defense Advanced Research Projects Agency (DARPA) is looking for research proposals that can demonstrate multi-functional imaging sensors reconfigurable through software.
Proposers will build around a common digital framework, customisable for specific applications. DARPA is seeking proposals for both passive and active modes. 
Also of interest are proposals for adaptive algorithms that allow the sensors to optimise information collection in real time.
 
U.S. military researchers are already working with four defence contractors to develop software-reconfigurable multi-function imaging sensors. 
The idea is to create sensors with simultaneous and distinct imaging modes, providing advanced capabilities that previously required several different sensors.
DARPA has awarded contracts to DRS Network & Imaging Systems, BAE Systems’ Electronic Systems, Lockheed Martin Missiles and Fire Control, and Voxtel Inc. 
 
The eyes have it 
By making a detector’s pixels smarter than ever, DARPA aims to lay the foundation for multi-purpose imaging sensors that behave like many types of eyes at once. 
Picture a sensor pixel about the size of a red blood cell. Now envision a million of these pixels—a megapixel’s worth—in an array that covers a thumbnail. 
 
Take one mental trip: dive down onto the surface of the semiconductor hosting all of these pixels and marvel at each pixel’s associated tech-mesh of more than 1,000 integrated transistors, which provide each pixel with a tiny re-programmable brain of its own. That is the vision for DARPA’s new ReImagine program.
 
DARPA aims for a single, multi-talented camera sensor that can detect visual scenes as familiar still and video imagers do, but that also can adapt, change its personality, and effectively morph into the type of imager that provides the most useful information for a given situation.
 
This could mean selecting between different thermal (infrared) emissions or different resolutions or frame rates, or even collecting 3-D Light Detection and Ranging (LIDAR) data for mapping and other jobs that increase situational awareness. The camera ultimately would rely on machine learning to autonomously take note of what is happening in its field of view and reconfigure the imaging sensor based on the context of the situation. 
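 
To make the idea concrete, the sketch below shows how a context-driven controller might map scene cues to a sensor mode. It is a minimal Python illustration under assumed mode names and cue fields, not code drawn from the ReImagine programme itself.

```python
from dataclasses import dataclass

@dataclass
class SceneContext:
    """Simplified scene cues that a learned model might report."""
    small_fast_objects: bool   # e.g. a possible UAV swarm
    low_light: bool            # night or obscured conditions
    needs_3d_map: bool         # terrain or structure mapping requested

def select_mode(ctx: SceneContext) -> str:
    """Toy policy: pick the imaging mode expected to yield the most
    useful information for the current scene context."""
    if ctx.needs_3d_map:
        return "lidar"            # collect range data for mapping
    if ctx.low_light:
        return "thermal"          # infrared emissions are visible at night
    if ctx.small_fast_objects:
        return "high_frame_rate"  # resolve fast motion
    return "video"                # default full-motion imaging

if __name__ == "__main__":
    ctx = SceneContext(small_fast_objects=True, low_light=False, needs_3d_map=False)
    print(select_mode(ctx))  # -> high_frame_rate
```

In a real system this decision would come from machine learning rather than fixed rules, but the shape of the problem, scene context in, sensor configuration out, is the same.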
 
The future sensor DARPA has in mind would be able to perform many of these functions at the same time, because different patches of the sensor’s “carpet” of pixels could be reconfigured (by software) to work in different imaging modes. 
That reconfigurability should enable sensors to toggle between different sensor modes from one lightning-quick frame to the next, as sketched below. No single camera can do that now.
A primary driver here is the shrinking size and cost of militarily important platforms that are finding roles in locations spanning from orbit to the seas. 
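 
As a rough illustration of that patchwork reconfiguration, the following sketch keeps a per-patch mode map that can be rewritten before each frame. The 4x4 grid and the mode codes are assumptions chosen only to make the idea visible.

```python
import numpy as np

# Hypothetical mode codes for each "patch" of pixels.
VIDEO, THERMAL, LIDAR, FAST = 0, 1, 2, 3

def plan_next_frame(prev_plan: np.ndarray, roi: tuple, roi_mode: int) -> np.ndarray:
    """Return a new per-patch mode map: keep the previous plan but switch
    one region of interest to a different mode for the next frame."""
    plan = prev_plan.copy()
    plan[roi] = roi_mode
    return plan

frame_0 = np.full((4, 4), VIDEO)                                   # whole array in video mode
frame_1 = plan_next_frame(frame_0, (slice(0, 2), slice(2, 4)), LIDAR)
print(frame_1)   # top-right patches now collect LIDAR, the rest stay in video mode
```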
 
With multi-functional sensors like those that may come out of a successful ReImagine program, smaller and cheaper platforms would provide a degree of situational awareness that today can only come from suites of single-purpose sensors fitted onto larger airborne, ground, space-based, and naval vehicles and platforms. With more extensive situational awareness comes the most important payoff: more informed decision-making.
 
Developing the layers of complexity
One key feature of the ReImagine programme is that teams will be asked to develop software-configurable applications based on a common digital circuit and software platform. 
During the four-year programme, MIT Lincoln Laboratory, a federally funded research and development centre (FFRDC) whose roots date back to the WWII mission to develop radar technology, will be tasked to provide the common reconfigurable digital layer of what will be the system’s three-layer sensor hardware. 
 
The challenge for successful proposers (“performers” in DARPA speak) will be to design and fabricate various megapixel detector layers and “analog interface” layers, as well as associated software and algorithms for converting a diversity of relevant signals (LIDAR signals for mapping, for example) into digital data. 
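 
As one very simplified example of such a conversion (an assumption for illustration, not the programme’s actual signal chain), a direct-detect LIDAR round-trip time can be turned into a range sample:

```python
# Illustrative only: converting a direct-detect LIDAR round-trip time into
# a range value, the kind of signal-to-data step the analog interface and
# digital layers would carry out in hardware.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def time_of_flight_to_range(round_trip_seconds: float) -> float:
    """Range = c * t / 2, because the pulse travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(round(time_of_flight_to_range(6.67e-7), 1))  # roughly 100 metres
```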
 
That digital data, in turn, should be suitable for processing and for use in machine-learning procedures, through which the sensors could become autonomously aware of specific objects, information, events, and other features within their field of view. 
 
One reason for using a common digital layer is the hope that it will enable a community developing “apps” in software to accelerate the innovation process and unlock new applications for software-reconfigurable imagers.
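 
In spirit, that app model might resemble the registration sketch below. The decorator, application names, and mode lists are hypothetical; the point is only that new imaging applications could be added in software without touching the underlying digital-layer design.

```python
# Hypothetical sketch of an "app" model on a common digital layer: each
# application registers the sensor modes it needs, and the platform can
# list or launch them without changes to the hardware design.
APPS = {}

def sensor_app(name, modes):
    """Decorator that registers an imaging application and the modes it uses."""
    def register(func):
        APPS[name] = {"run": func, "modes": modes}
        return func
    return register

@sensor_app("uav_watch", modes=["thermal", "high_frame_rate"])
def uav_watch(frame):
    return "scanning for small, fast objects"   # placeholder processing

@sensor_app("terrain_map", modes=["lidar"])
def terrain_map(frame):
    return "building a 3-D map"                 # placeholder processing

print({name: app["modes"] for name, app in APPS.items()})
```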
 
Gathering the most useful information
In follow-on phases of the programme, performers will need to demonstrate portability of the developing technology in outdoor testing and, in DARPA’s words, “develop learning algorithms that guide the sensor, through real-time adaptation of sensor control parameters, to collecting the data with the highest content of useful information.” 
 
That adaptation might translate, in response to visual cues, into toggling into a thermal detection mode to characterise a swarm of UAVs, or into hyper-slow-motion (high-frame-rate) video to help tease out how a mechanical device is working.
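 
A minimal sketch of that kind of information-driven control, under the assumption that the controller can predict an outcome distribution for each candidate configuration (the distributions below are invented), is to score each configuration by the entropy of its predicted measurement and choose the most informative one:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical predicted outcome distributions for three candidate modes.
candidates = {
    "video":           [0.70, 0.20, 0.10],   # scene largely predictable already
    "thermal":         [0.40, 0.35, 0.25],   # more uncertainty to resolve
    "high_frame_rate": [0.34, 0.33, 0.33],   # outcome almost unknown
}

# Greedy choice: the mode whose measurement is expected to tell us the most.
best_mode = max(candidates, key=lambda mode: entropy(candidates[mode]))
print(best_mode)
```

The programme’s own algorithms would learn these predictions from data rather than take them as given; the sketch only shows the selection step.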
 
Contractors must also try to develop algorithms that learn to collect the most valuable information when the sensor can be configured for a variety of measurements.
The objective of the ReImagine programme is to demonstrate that a software-reconfigurable imaging system can enable revolutionary capabilities.
 
Performers must present a new approach to application development, one closer to field-programmable gate array (FPGA) based design than to application-specific integrated circuit (ASIC) design, and must develop the underlying theory for algorithms that learn to collect the most valuable information from a configurable sensor. 
 
Optimised for all sensors and spectral bands
The ReImagine programme aims to demonstrate that a single readout integrated circuit (ROIC) architecture can be configured to accommodate multiple modes of imaging operation that may be defined after the chip has been designed. 
 
With the use of 3D integration, it should be possible to customise the sensor to interface with virtually any type of imaging sensor (e.g. photodiode, photoconductor, avalanche photodiode, or bolometer) and to optimise it for any spectral band (e.g. ultraviolet (UV) through very long-wave infrared (VLWIR)). 
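 
Purely as an illustration of what such a software-defined configuration might record (the field names and allowed values are assumptions, not taken from the ReImagine specification), a configuration object could tie a detector technology, a spectral band, and a readout rate together:

```python
from dataclasses import dataclass

# Illustrative option sets, following the examples given in the article.
DETECTORS = {"photodiode", "photoconductor", "avalanche_photodiode", "bolometer"}
BANDS = {"UV", "visible", "SWIR", "MWIR", "LWIR", "VLWIR"}

@dataclass(frozen=True)
class RoicConfig:
    detector: str         # which detector technology sits above the ROIC
    band: str             # spectral band the stack is optimised for
    frame_rate_hz: float  # readout rate requested for this configuration

    def __post_init__(self):
        if self.detector not in DETECTORS:
            raise ValueError(f"unknown detector: {self.detector}")
        if self.band not in BANDS:
            raise ValueError(f"unknown band: {self.band}")

cfg = RoicConfig(detector="avalanche_photodiode", band="SWIR", frame_rate_hz=1000.0)
print(cfg)
```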
 
More importantly, it will be possible to adapt the mode of operation through manual user control, through preset routines that can change many times per second, or in response to context derived from the scene being observed. 
 
For example, a single imager could support simultaneous regions of interest (ROIs), with some running at high resolution and others at high frame rate. ReImagine ROICs will also demonstrate that efficient computation within the ROIC can enable real-time analysis of much more complex scenes than traditional systems allow. 
 
ReImagine will build on this architecture to develop a concept of operation, the application requirements, the modes of operation, and the algorithms that will be used. 
 
Revolutionary capabilities 
In addition to multiple passive imaging functions, the ability to incorporate range detection into a high-resolution, low-noise imaging system offers a potentially revolutionary capability. 
LIDAR systems today are predominantly scanning devices that contain large moving components and do not provide high-quality context imagery. 2D imaging LIDAR systems have been demonstrated and can acquire 3D imagery in framing or asynchronous modes. 
 
Both direct-detect and coherent receiver arrays have been demonstrated, each with distinct advantages for different applications. In all cases, however, high data rates limit the spatial resolution of the sensor, and combined passive imaging and active LIDAR modes have yet to be demonstrated in a large (>1 megapixel) array. 
 
A ReImagine dual-mode sensor would provide the ability to collect high data rate LIDAR measurements within a configurable ROI, while continuing to measure passive context imagery.
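 
A toy model of that dual-mode readout (an assumption for illustration, not the ReImagine design) is sketched below: the full array delivers a passive context frame every cycle, while a small configurable ROI also returns simulated range samples.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in passive context image: a 1-megapixel array of 8-bit grey levels.
passive_frame = rng.integers(0, 256, size=(1024, 1024), dtype=np.uint8)

def read_dual_mode(frame: np.ndarray, roi: tuple):
    """Return the passive frame plus simulated LIDAR range samples for the ROI."""
    roi_shape = frame[roi].shape
    ranges_m = rng.uniform(50.0, 500.0, size=roi_shape)  # stand-in range returns
    return frame, ranges_m

frame, ranges_m = read_dual_mode(passive_frame, (slice(100, 164), slice(200, 264)))
print(frame.shape, ranges_m.shape)  # (1024, 1024) (64, 64)
```
 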
“Even as fast as machine learning and artificial intelligence are moving today, the software still generally does not have control over the sensors that give these tools access to the physical world,” said Jay Lewis, program manager for ReImagine. 
 
“With ReImagine, we would be giving machine-learning and image processing algorithms the ability to change or decide what type of sensor data to collect.” 
 
Importantly, he added, as with eyes and brains, the information would flow both ways: the sensors would inform the algorithms and the algorithms would affect the sensors. 
 
Although defence applications are foremost on his mind, Lewis also envisions commercial spin-offs. Smartphones of the future could have camera sensors that do far more than merely take pictures and video footage, their functions limited only by the imaginations of a new generation of app developers, he suggested.
 
British engineers at BAE Systems’ Advanced Technology Centre are investigating a ‘smart skin’ concept which could be embedded with tens of thousands of micro-sensors. When applied to an aircraft, this will enable it to sense wind speed, temperature, physical strain and movement, far more accurately than current sensor technology allows. 
 
The revolutionary ‘smart skin’ concept will enable aircraft to continually monitor their health, reporting back on potential problems before they become significant.
 
On May 30, Lockheed Martin received a potential $10.2 million contract for the reconfigurable imaging project; on June 5, DRS received a potential $10.1 million contract; on June 1, BAE Systems received a potential $7.5 million contract; and, also on May 30, Voxtel received a potential $5.2 million contract. 
 
 
Reference Text/Photo:
www.darpa.mil
www.voxtel-inc.com
www.lockheedmartin.com 
 
