Immervision is a leading provider of “Deep Seeing Technology”: wide-angle optics, image processing, and sensor fusion for next-generation technology. Here, Immervision AVP Ludimila Centano takes a deep dive into the sensor options available for safe, low-light drone operations. Read on to learn the pros and cons of low-light cameras vs. LiDAR sensors, what actually qualifies as a low-light camera, and what to look for when selecting a sensor.
It isn’t always possible to fly drones in full daylight and in wide-open spaces. There are many applications for which the ability to operate drones in low-light environments is a necessity. Often, the challenge is exacerbated by the need to work in confined spaces (e.g., mines, sewers, waterways in hydroelectric dams) or spaces with obstructions (e.g., factory buildings, warehouses, forests).
A few examples of low-light applications include filmmaking, surveilling individuals and objects of interest, inspecting infrastructure such as the undersides of bridges and the insides of railway tunnels, delivering medications to rural areas and isolated locations, and life-and-death situations like search and rescue operations that must run day and night because every second counts.
New opportunities are being made available to commercial drone operators by the FAA’s granting of Beyond Visual Line-of-Sight (BVLOS) waivers (more information here). In addition to flying over greater ranges and at higher altitudes, this includes flying in low-light conditions and at night.
Unfortunately, these opportunities remain out of reach in the absence of an efficient and effective solution for operating drones safely under less-than-ideal lighting conditions.
Alternative Low-Light Sensor Options
By default, drones aren’t designed to operate in low-light conditions or at night. One option is to augment the drones with specialized sensor technologies.
Ultrasonic sensors are small, light, function in all light conditions, and may be of interest for certain applications, such as detecting the altitude of the drone while landing. However, these sensors have limited range, limited accuracy, inflexible scanning methods, and extremely limited resolution that provides only “something is there” or “nothing is there” information.
Radar sensors suitable for use on drones also work in all light conditions, are tolerant of bad weather (fog, rain, snow), and have a reasonable range. Once again, however, these sensors provide limited resolution, have a narrow Field of View (FoV), and are of limited interest for most low-light applications.
There are two major LiDAR technologies—Time-of-Flight (ToF) and Frequency Modulated Continuous Wave (FMCW)—each with its own advantages and disadvantages. Their major advantage in the case of low-light operations is that they use a laser to “illuminate” the object, which means they are unaffected by the absence of natural light. Although LiDAR can offer significantly higher resolution than radar, this resolution is only a fraction of that offered by camera technologies. Also, LiDAR data is not often colorized, making its interpretation and analysis difficult. Moreover, the Size, Weight, and Power (SWaP) characteristics of LiDAR sensors limit their use to the largest drones.
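The ToF principle is simple enough to illustrate with arithmetic: the sensor times a laser pulse’s round trip, and the range is half the distance the light travels. A minimal sketch (the pulse timing value is illustrative, not taken from any particular sensor):

```python
# Speed of light (m/s); the refractive index of air is ~1.0003,
# so the vacuum value is accurate enough for this illustration.
C = 299_792_458.0

def tof_range_m(round_trip_s: float) -> float:
    """Range to a target from the round-trip time of a ToF LiDAR pulse.

    The pulse travels out to the target and back, so the one-way
    range is half the total distance covered by the light.
    """
    return C * round_trip_s / 2.0

# A pulse returning after ~667 nanoseconds corresponds to a target
# roughly 100 m away.
print(round(tof_range_m(667e-9), 1))  # → 100.0
```

The tight timing budget this implies (a few nanoseconds of error per meter) is one reason ToF LiDAR hardware carries the SWaP penalty noted above.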
All of the sensors discussed above are active in nature, which means they emit energy and measure the reflected or scattered signal. By comparison—assuming they aren’t augmented by an additional light source to illuminate their surroundings—cameras are passive in nature, which means they detect the natural light reflected or emitted by objects in the surrounding environment. This passive capability may be necessary in certain applications. Cameras also offer many other benefits, including low cost, low weight, and low power consumption coupled with high resolution and—when equipped with an appropriate lens subsystem—a 180-degree or 360-degree FoV.
What Actually Qualifies as a Low-Light Camera?
There are many cameras available that claim to offer low-light capabilities. However, there is no good definition as to what actually qualifies as a low-light camera. Humans can subjectively appreciate the quality of an image, but how does one objectively quantify the performance of a low-light system?
At Immervision, we are often asked questions like “How dark can it be while your camera can still see?” This is a tricky question because these things are so subjective. In many respects, the answer depends on what there is to be seen. In the context of computer vision for object detection, for example, the type of object and its shape, color, and size all impact how easily it can be detected. This means that “How dark can it be while your camera can still see?” is the wrong question to ask if one wishes to determine whether a camera is good for low-light conditions… or not.
Fortunately, there are options available that provide a more deterministic and quantitative approach. The Harris detector model, for example, detects transitions in an image (e.g., corners and edges). This model can be used to quantify the image quality produced by a camera for use in machine vision applications. Also, using artificial intelligence (AI) models for object detection and recognition can provide a way to measure the performance of a camera and to compare different options.
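To make the Harris idea concrete, the response can be computed directly from image gradients: corners score positive, edges negative, flat regions near zero. A minimal NumPy sketch, using the common textbook constant k = 0.04 and a 3×3 window (defaults of our choosing, not values from the article):

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.04, win: int = 3) -> np.ndarray:
    """Per-pixel Harris response R = det(M) - k * trace(M)^2, where M is
    the gradient structure tensor averaged over a win x win window."""
    img = img.astype(np.float64)
    iy, ix = np.gradient(img)  # vertical and horizontal gradients

    def box_mean(a: np.ndarray) -> np.ndarray:
        # Average over a win x win neighborhood (edge-padded).
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (win * win)

    sxx, syy, sxy = box_mean(ix * ix), box_mean(iy * iy), box_mean(ix * iy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Synthetic test chart: a bright square on a dark background.
chart = np.zeros((32, 32))
chart[8:24, 8:24] = 1.0
response = harris_response(chart)
# Corner pixels score positive; flat background scores (exactly) zero;
# points along a straight edge score negative.
print(response[8, 8] > 0, response[0, 0] == 0, response[16, 8] < 0)
```

Counting how many strong corner responses survive as the scene darkens (on a fixed test chart) gives exactly the kind of repeatable, quantitative low-light metric the article describes.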
Making a Great Low-Light Camera
There are three major elements that impact a camera’s low-light sensitivity and capabilities: the lens assembly, the sensor, and the image signal processor.
- The Lens Assembly: Many wide-angle lenses cause the resulting image to be “squished” at the edges. To counteract this, the multiple sub-lenses forming the assembly must be crafted in such a way as to result in more “useful pixels” across the entire image. Moreover, with respect to low-light operation, the lens assembly must maximize the concentration of light-per-pixel on the image sensor. This is achieved by increasing the aperture (i.e., the opening of the lens, measured as the F# or “F-number”) to admit more light. The lower the F#, the better the low-light performance. However, lowering the F# comes at a cost because it increases the complexity of the design and—if not implemented correctly—may impact the quality of the image. A good low-light lens assembly must also provide a crisp image, whose sharpness can be measured as the Modulation Transfer Function (MTF) of the lens.
- The Image Sensor: This is the component that converts the light from the lens assembly into a digital equivalent that will be processed by the rest of the system. A good low-light camera must use a sensor with high sensitivity and quantum efficiency. Such sensors typically have a large pixel size, which contributes to the light sensitivity of the camera module by capturing more light-per-pixel.
- The Image Signal Processor: The digital data generated by the image sensor is typically relayed to an Image Signal Processor (ISP). This component (or function in a larger integrated circuit) is tasked with obtaining the best image possible based on the application requirements. The ISP controls the parameters related to the image sensor, such as the exposure, and also applies its own processing. The calibration of an ISP is known as Image Quality tuning (IQ tuning). This is a complex science that has been mastered by only a few companies, of which Immervision is one.
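The lens and sensor factors above combine multiplicatively: irradiance at the sensor scales as 1/F#², and each pixel integrates that irradiance over its area. A back-of-the-envelope comparison (the specific F-numbers and pixel pitches are hypothetical, not Immervision specifications):

```python
def relative_light_per_pixel(f_number: float, pixel_pitch_um: float) -> float:
    """Relative light energy collected per pixel, up to a constant factor.

    Sensor-plane irradiance scales as 1 / F#^2, and each pixel
    integrates it over its area (pitch squared).
    """
    return (pixel_pitch_um ** 2) / (f_number ** 2)

# Hypothetical low-light module (f/1.6 lens, 3.0 um pixels) vs. a
# hypothetical baseline module (f/2.8 lens, 1.4 um pixels).
low_light = relative_light_per_pixel(1.6, 3.0)
baseline = relative_light_per_pixel(2.8, 1.4)
print(f"{low_light / baseline:.1f}x more light per pixel")  # → 14.1x more light per pixel
```

This is why the article treats the lens assembly and the sensor as a single design problem: a fast aperture feeding small pixels (or large pixels behind a slow lens) squanders most of the potential gain.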
What’s Available Now
The latest advancements in low-light cameras and vision systems are helping expand the scope of applications (e.g., location mapping, visual odometry, and obstacle avoidance) and improve operational capabilities by empowering drones to take off, navigate, and land efficiently in difficult lighting conditions and adverse weather scenarios.
At Immervision, we are developing advanced vision systems combining optics, image processing, and sensor fusion technology. Blue UAS is a holistic and continuous approach to rapidly prototyping and scaling commercial UAS technology for the DoD. As part of the Blue UAS program, the Immervision InnovationLab team developed a wide-angle navigation camera called IMVISIO-ML that can operate in extreme low-light environments under 1 lux.
Along with the camera module, an advanced image processing library is available with features such as dewarping, sensor fusion, camera stitching, image stabilization, and more. We also provide IQ tuning services to optimize the performance of the system based on the target application.
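Dewarping, in general terms, maps pixel positions in the distorted wide-angle image back to viewing angles in the scene. As a generic illustration only (not Immervision’s proprietary library or lens model), the classic equidistant fisheye model relates image radius to field angle by r = f·θ:

```python
import math

def equidistant_angle_deg(radius_px: float, focal_px: float) -> float:
    """Field angle in degrees off the optical axis for a pixel at a given
    radial distance from the image center, under the equidistant fisheye
    model r = f * theta (theta in radians)."""
    return math.degrees(radius_px / focal_px)

# Hypothetical lens: a focal length of ~570 px puts the 90-degree
# half-field (i.e., a full 180-degree FoV) at a radius of ~895 px.
print(round(equidistant_angle_deg(895.0, 570.0), 1))  # → 90.0
```

A dewarping routine inverts this kind of mapping for every output pixel and resamples the source image, which is why it pairs naturally with the stitching and stabilization features listed above.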
The IMVISIO-ML low-light navigation camera system is now broadly available to drone and robotics manufacturers. Integrated with the Qualcomm RB5 and ModalAI VOXL2 platforms, this camera module is already being adopted by drone manufacturers such as Teal Drones, a leading drone provider from the Blue UAS Cleared list. As reported here on DroneLife, the latest model of Teal’s Golden Eagle drone will be equipped with two Immervision low-light camera modules, which will improve navigation in low-light conditions and provide stereoscopic vision to Teal’s autonomous pilot system.