Time-of-flight (ToF) cameras have gained attention as the depth sensing method of choice for their smaller form factor, wide dynamic range of sensing, and ability to operate in a variety of environments. Though ToF technology has been used for years in the scientific and military fields, it has become more prevalent since the early 2000s with advances in image sensing technology. This evolution in performance means that technologies such as ADI's ToF technology will become more ubiquitously deployed beyond the consumer market, where they are currently being designed into smartphones, consumer devices, and gaming devices. As the technology matures, there will be opportunities to further leverage mainstream manufacturing processes to increase system efficiencies in the design, manufacturing, and transport of goods.
Figure 1. Time of flight (ToF): a technology to detect the distance to an object.
Logistics, quality inspection, navigation, robotics, facial recognition, security, surveillance, safety, healthcare, and driver monitoring are all application use cases that can leverage 3D depth sensing ToF technology to solve many problems that traditional 2D technology struggles with today. Combining high resolution depth data with powerful classification algorithms and AI will uncover new applications.
This article will examine the basics of, and the two dominant methods for, ToF depth sensing, and compare it to other prevalent depth sensing technologies. It will then provide a detailed introduction to ADI's 3D depth sensing ToF technology, leveraging the ADDI analog front end, a complete ToF signal processing device that integrates a depth processor that converts the raw image data from a VGA CCD sensor into depth/pixel data. We will also discuss how ADI can scale this technology to our broad market customer base through an ecosystem of hardware partners.
Figure 2. Simple diagram of time-of-flight measurement.
A ToF camera measures distance by actively illuminating an object with a modulated light source (such as a laser or an LED), while a sensor that is sensitive to the laser's wavelength captures the reflected light (Figure 2). The sensor measures the time delay ΔT between when the light is emitted and when the reflected light is received by the camera. The time delay is proportional to twice the distance between the camera and the object (round-trip); therefore, the depth can be estimated as:
d = (c × ΔT)/2 (Equation 1)

where c is the speed of light. The goal of a ToF camera is to estimate the delay ΔT between the emitted signal and the return signal.
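As a minimal numerical sketch of Equation 1 (the delay value below is purely illustrative):

```python
# Minimal sketch of Equation 1: depth from a measured round-trip delay.
C = 299_792_458.0  # speed of light, m/s

def depth_from_delay(delta_t_s: float) -> float:
    """Return the distance in meters for a round-trip delay in seconds."""
    return C * delta_t_s / 2.0

# Example: a 20 ns round-trip delay corresponds to roughly 3 m.
print(depth_from_delay(20e-9))
```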
There are different methods for measuring ΔT, of which two have become the most prevalent: the continuous-wave (CW) method and the pulse-based method.
Figure 3. Illustration of a continuous-wave ToF system.
In the CW method, a periodic modulated signal is used for the active illumination (Figure 3), and the phase shift of the reflected light is measured by homodyne demodulation of the received signal.
For example, a sinusoidal modulation can be used, where the emitted signal is:

s(t) = As cos(2πfmod t) + Bs

where As is the amplitude of the signal, Bs is its offset, and fmod is the modulation frequency.
The received signal r(t) is a delayed and attenuated version of the emitted signal:

r(t) = α(As cos(2πfmod(t − ΔT)) + Bs)

where 0 ≤ α < 1 is an attenuation coefficient that depends on the distance and the surface reflectivity, and ΔT is the delay of the return signal.
Continuous-wave time-of-flight sensors measure the distance at each pixel by sampling the correlation function between the received signal r(t) and a demodulating signal g(t) with the same frequency as s(t). In the ideal case, the demodulating signal is also a sine wave:

g(t) = Ag cos(2πfmod t) + Bg
The operation performed by the pixel is a correlation operation:

c(τ) = lim(T→∞) (1/T) ∫[0,T] r(t) g(t + τ) dt
When both the emitted signal and the demodulating signal are sine waves, the correlation value as a function of the delay τ applied to the demodulating signal is:

c(τ) = A cos(2πfmod(ΔT + τ)) + B

where A = AgAsα/2 and B = αBgBs.
The correlation function c(τ) is then sampled at four equally spaced points over one period, by changing the illumination phase shift in steps of 90° (Figure 4). The phase offset Φ = 2πfmodΔT between the emitted signal and the demodulating signal can be estimated using Equation 7:

Φ = arctan((c(τ270°) − c(τ90°))/(c(τ0°) − c(τ180°))) (Equation 7)

and the depth, which is proportional to the phase shift, is:

d = (c/(4πfmod)) × Φ
Figure 4. Illustration of the correlation function sampling process.
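To make the four-phase measurement concrete, the sketch below simulates the four ideal correlation samples for a chosen target distance and recovers that distance via Equation 7. This is a minimal numerical sketch; the modulation frequency, amplitude, and offset values are illustrative assumptions, not ADI parameters.

```python
import numpy as np

# Minimal sketch of CW ToF four-phase demodulation (illustrative values).
C = 299_792_458.0      # speed of light, m/s
F_MOD = 50e6           # assumed modulation frequency, Hz
A, B = 1.0, 0.5        # assumed correlation amplitude and offset

def correlation_samples(distance_m: float) -> np.ndarray:
    """Ideal correlation samples c(tau) at 0, 90, 180, and 270 degrees."""
    phi = 4.0 * np.pi * F_MOD * distance_m / C   # Phi = 2*pi*f_mod*dT, dT = 2d/c
    taus = np.array([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])  # 90 degree steps
    return A * np.cos(phi + taus) + B

def depth_from_samples(c0, c90, c180, c270) -> float:
    """Recover depth from the four samples (Equation 7, via arctan2 for robustness)."""
    phi = np.arctan2(c270 - c90, c0 - c180)
    return C * phi / (4.0 * np.pi * F_MOD)       # d = c * Phi / (4*pi*f_mod)

samples = correlation_samples(1.2)               # simulate a target at 1.2 m
print(depth_from_samples(*samples))              # ~1.2
```

Note that the recovered distance is only unambiguous within c/(2 × fmod), which is why practical CW systems must manage the aliasing range of the chosen modulation frequency.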
In the pulse-based method, an illumination source emits a series of N short light pulses, which are reflected back to a sensor equipped with an electronic shutter that enables the capture of light in a series of short temporal windows. In Figure 5, three shutter windows, or pulses, are used to capture the reflected light pulse. The BG window captures the ambient light, which is then subtracted from the other measurements.
Figure 5. Illustration of the shutter windows to capture reflected light.
The ToF ΔT is estimated from the measured values corresponding to the different shutter windows according to the following equation:

ΔT = Tp × (S1 − BG)/((S0 − BG) + (S1 − BG)) (Equation 9)

where Tp is the width of the light pulse, S0 and S1 are the values captured in the two signal shutter windows, and BG is the value captured in the background window. The distance can then be calculated by replacing ΔT with the expression in Equation 9 in Equation 1, which gives us Equation 10:

d = (c × Tp)/2 × (S1 − BG)/((S0 − BG) + (S1 − BG)) (Equation 10)
It should be noted that these equations are predicated on the assumption that the pulses are perfect rectangular pulses, which is impossible given hardware limitations. Additionally, in practical situations, several hundred to several thousand illumination pulses need to be accumulated in order to get a sufficient signal-to-noise ratio (SNR) for measurement.
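A minimal numerical sketch of Equations 9 and 10, assuming two signal windows S0 and S1 plus a background window BG as in Figure 5 (the pulse width and sample values are illustrative):

```python
# Minimal sketch of pulsed ToF depth estimation (Equations 9 and 10).
C = 299_792_458.0   # speed of light, m/s
T_P = 30e-9         # assumed light pulse width, s (illustrative)

def pulsed_depth(s0: float, s1: float, bg: float) -> float:
    """Depth from the two shutter-window values and a background value."""
    s0, s1 = s0 - bg, s1 - bg          # remove the ambient-light contribution
    delta_t = T_P * s1 / (s0 + s1)     # Equation 9
    return C * delta_t / 2.0           # Equation 10 (via Equation 1)

# Example: equal energy in both windows -> delay of T_P/2 -> ~2.25 m.
print(pulsed_depth(s0=800.0, s1=800.0, bg=200.0))
```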
Both approaches to ToF have their advantages and disadvantages relative to the application's use case. Issues such as the distances being measured, the environment in which the system is being used, accuracy requirements, thermal/power dissipation restrictions, form factor, and power supply need to be considered. It should be noted that the vast majority of CW ToF systems that have been implemented and are currently on the market use CMOS sensors, while pulsed ToF systems use non-CMOS sensors (notably CCDs). For this reason, the advantages and disadvantages of each approach should be weighed with those sensor assumptions in mind.
It is instructive to be familiar with other depth mapping technologies to understand the trade-offs; as previously mentioned, all depth detection systems have advantages and disadvantages depending on the use case and the application requirement.
Stereo vision for depth sensing works by using two (or more) cameras separated by a baseline distance (Figure 6). Like the human eyes, a given reference point in space will appear at a different position in each camera's image, allowing the system to calculate that point's position in space once the correspondence of that point is established between the two cameras. Determining this correspondence involves computationally intensive and complex algorithms.
Figure 6. 3D depth sensing using stereo vision.
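Once correspondence is established, the depth itself follows from simple triangulation: Z = f × b/d, where f is the focal length in pixels, b is the baseline, and d is the pixel disparity. A minimal sketch with illustrative camera parameters:

```python
# Minimal sketch of stereo triangulation (illustrative camera parameters).
FOCAL_PX = 700.0    # assumed focal length, pixels
BASELINE_M = 0.10   # assumed distance between the two cameras, m

def stereo_depth(disparity_px: float) -> float:
    """Depth in meters from the pixel disparity of a matched point."""
    return FOCAL_PX * BASELINE_M / disparity_px

# Example: a 35-pixel disparity corresponds to a point ~2 m away.
print(stereo_depth(35.0))
```

The hard part is not this formula but computing a dense, reliable disparity map, which is what drives the processing cost noted above.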
The structured light method works by projecting a known reference pattern of dots onto an object. The 3D object distorts this reference pattern, and a 2D camera captures the distortion. The captured pattern is then compared to the projected reference pattern, and a depth map is calculated based on the degree of distortion.
Figure 7. Illustration of depth mapping using the structured light method.
ADI's ToF technology is a pulse-based ToF CCD system (Figure 8) that utilizes a high performance ToF CCD and the ADDI, a complete ToF signal processing device that integrates a 12-bit ADC, the depth processor (which processes the raw image data from the CCD into depth/pixel data), and a high precision clock generator that generates the timing for both the CCD and the laser. The precision timing core of the timing generator allows the adjustment of the clocks and the LD output with approximately 174 ps resolution at a clock frequency of 45 MHz (the 22.2 ns clock period subdivided into 128 steps).
Figure 8. Block diagram of the ADI ToF system.
ADI's ToF system differentiates itself from other solutions in several ways, described below.
Figure 9. Photon flux vs. wavelength of sunlight.
A pseudo-randomization algorithm, combined with special image processing integrated in the depth processor, enables interference cancellation (as previously mentioned). This allows multiple ToF systems to operate in the same environment.
Figure 10. Depth map comparison of an outdoor image.
Figure 10 shows an example where three different depth measurement systems were used outdoors to measure distance. Note that while a CMOS ToF system using an 850 nm light source has difficulty distinguishing both the person and the tripod, the ToF CCD system is able to distinguish both more clearly.
As mentioned in the introduction, the addition of depth information to a 2D image allows useful information to be extracted and can greatly improve the quality of scene information. For example, 2D sensing cannot distinguish between a real person and a photograph; extracting depth information allows better classification of people, tracking both their facial and body features. ToF depth sensing can provide high quality and reliable face recognition for security authentication. The higher the resolution and depth accuracy, the better the classification algorithm performs. This can be used for simple functions such as granting access to mobile devices or our personal home spaces, or for high end use cases such as secure door access control in commercially sensitive areas.
Figure 11. Digital face recognition.
As depth sensing technologies achieve higher resolutions and depth accuracies, the classification and tracking of people will become easier. The use of artificial intelligence will allow for classification with very high confidence, which in turn will create new and emerging application areas. One use case is for a commercial automatic door opening function, especially in areas of strong sunlight. Ensuring a door only opens for a person and not anything else can provide efficiencies in building management, as well as in security and safety.
Figure 12. People classification for automatic door opening.
As 3D algorithms mature further, data analytics will be leveraged to gather meaningful information about people's behavior. The first wave of this is likely to happen in building control applications such as door entry/exit systems for people-counting. The addition of depth information from a vertically mounted sensor means that people can be counted with very high accuracy. Another use case is a smart automatic door opener (Figure 13), where people can be classified and the door only opens when a real person is detected. ADI is developing software algorithms in the area of people-counting and classification.
Depth information allows high accuracy classification of people under many challenging conditions, such as environments with low or no ambient light, areas with a significant density of people, and situations where people are well camouflaged (for example, with hats, scarves, etc.). Most importantly, false triggering of the people count is virtually eliminated. Today, stereo cameras can be used for entry/exit detection, but due to the limitations of mechanical size (two sensors) and high processing needs, stereo solutions tend to be expensive and of a large form factor. As the ADI ToF technology directly outputs a depth map and has only one sensor, both the form factor and the processing needs are much reduced.
Figure 13. People-tracking algorithm using depth sensing technology.
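To make the overhead people-counting idea concrete, the sketch below segments head-height blobs in a depth frame from a hypothetical ceiling-mounted sensor and counts them. It is a minimal sketch only; the mounting height, height band, and blob-size threshold are illustrative assumptions, not ADI's algorithm.

```python
import numpy as np
from scipy import ndimage

# Minimal people-counting sketch for a ceiling-mounted depth sensor.
SENSOR_HEIGHT_M = 3.0       # assumed mounting height above the floor
HEAD_BAND_M = (1.2, 2.0)    # assumed band of plausible person heights
MIN_BLOB_PX = 150           # assumed minimum head/shoulder blob size

def count_people(depth_m: np.ndarray) -> int:
    """Count head-height blobs in a top-down depth frame (meters per pixel)."""
    height = SENSOR_HEIGHT_M - depth_m             # height above the floor
    mask = (height > HEAD_BAND_M[0]) & (height < HEAD_BAND_M[1])
    labels, n = ndimage.label(mask)                # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= MIN_BLOB_PX))       # ignore small noise blobs
```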
An important application of depth sensing will be in industrial, manufacturing, and construction processes. The ability to accurately dimension and classify objects in real time through a production process is not trivial. Accurate depth sensing can determine the space utilization of warehouse bays. Products that come off a production line need to be dimensioned quickly for transfer. High resolution depth sensing allows the edges and lines of target objects to be determined in real time, and fast volume calculations to be made. A neural network approach is already being used in such volume determination.
Figure 14. 3D dimensioning.
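As a simple illustration of depth-based dimensioning, the sketch below estimates an object's volume as seen by a hypothetical top-down depth camera by integrating the object's height over the pixels it covers. The floor distance and per-pixel footprint are illustrative assumptions (a real system would account for the footprint varying with depth and with lens geometry):

```python
import numpy as np

# Minimal volume-estimation sketch for a top-down depth camera.
FLOOR_DEPTH_M = 2.5      # assumed camera-to-floor distance, m
PIXEL_AREA_M2 = 1e-6     # assumed floor area covered by one pixel, m^2
MIN_HEIGHT_M = 0.02      # ignore height readings below sensor noise

def estimate_volume(depth_m: np.ndarray) -> float:
    """Approximate object volume (m^3) from a top-down depth frame."""
    height = FLOOR_DEPTH_M - depth_m          # surface height above the floor
    height[height < MIN_HEIGHT_M] = 0.0       # suppress floor/noise pixels
    return float(np.sum(height) * PIXEL_AREA_M2)

# Example: a 0.2 m tall box covering 200 x 200 pixels -> ~0.008 m^3.
frame = np.full((480, 640), FLOOR_DEPTH_M)
frame[100:300, 100:300] = FLOOR_DEPTH_M - 0.2
print(estimate_volume(frame))
```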
The autonomous transfer of product within factories continues to increase. Autonomous vehicles such as AGVs (automated guided vehicles) will need to self-navigate faster through the factory and warehouse. High accuracy depth sensing technology will allow sensors to map out their environment in real time, localize themselves within that map, and then plot the most efficient navigation path. One of the biggest challenges with the deployment of such technology in factory automation is interference from other sensors that may operate in the same area. ADI's interference cancellation IP will allow many such sensors to operate directly in each other's line of sight without affecting performance.
Figure 15. Depth sensing use cases in manufacturing.
ADI has developed an optical sensor board (AD-96TOF1-EBZ) that is compatible with the Arrow 96Boards application processor platform. This 96TOF1 board's optical specifications are listed in Table 1.
Figure 16. ADI's 96TOF optical depth sensing board.

Table 1. ADI's 96TOF Optical Board Specifications
Range: <6 m
FOV: 90° × 69.2°
Wavelength: 940 nm
Frame rate: 30 fps max
Resolution: 640 × 480 pixels
This board can interface directly to Arrow's 96Boards family. The 96Boards family is a range of hardware processor platforms that makes the latest ARM®-based processors available to developers at a reasonable cost. Boards produced to the 96Boards specifications are suitable for rapid prototyping; Qualcomm® Snapdragon™, NXP, and NVIDIA® processors are all supported within the 96Boards platform.
ToF depth sensing is a complex technology. There is significant optical expertise needed to achieve the highest performance of the VGA sensor. Optical calibration, high speed pulse timing patterns, temperature drifts, and compensation will all affect depth accuracy. Design cycles can be long to achieve the desired performance. While ADI can support chip-down designs for qualified customer opportunities, many customers are looking for an easier, faster, and more efficient approach to get to market.
Many customers are interested in a simple demonstration module to first evaluate the performance of the technology before moving forward with a real project. ADI has worked with various hardware partners to provide different levels of hardware offerings. The DCAM710 demonstration module, offered by one of our hardware partners (Pico), supports USB depth image streaming directly to a PC.
Figure 17. DCAM710 VGA depth sensing and RGB camera.
The DCAM710 ToF camera module streams depth data over USB and supports Linux® and Windows® 7/8/10 host operating systems. The Pico SDK software platform supports Windows and Linux operating systems and provides several software functions. A point cloud, which generates a set of data points in the space around an object, and which is often used to generate 3D models, is easily produced through the SDK.
Figure 18. Depth sensing point cloud.
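For illustration, a depth map can be converted into such a point cloud using the camera's pinhole intrinsics. The sketch below uses plain NumPy with assumed intrinsic parameters; it shows the underlying math, not the Pico SDK API:

```python
import numpy as np

# Minimal depth-map-to-point-cloud sketch (illustrative pinhole intrinsics).
FX, FY = 580.0, 580.0   # assumed focal lengths, pixels
CX, CY = 320.0, 240.0   # assumed principal point for a 640 x 480 frame

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth map in meters to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX                     # pinhole back-projection
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]           # drop invalid (zero-depth) pixels
```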
As the demonstration platform streams raw data to a computer via USB, it is easy to develop simple software application algorithms to get customers developing code quickly.
Figure 19. VGA depth sensing streaming to PC via USB.

ADI provides simple sample code in Python to support customer evaluations. The example below shows a screenshot of real-time Python source code being used to detect and classify a person, and then apply a depth measurement to determine where the person is relative to the sensor. Other algorithms that are available include edge detection, object tracking, and a 3D safety curtain.
Figure 20. People classification and range detection.
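A hedged sketch of what such an application can look like, combining OpenCV's stock HOG person detector on the intensity image with a median depth readout inside each detection box. This is illustrative only, not ADI's sample code, and frame acquisition is left out:

```python
import cv2
import numpy as np

# Illustrative person detection plus range readout (not ADI's sample code).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people_with_range(gray: np.ndarray, depth_m: np.ndarray):
    """Return (x, y, w, h, range_m) for each person found in the frame."""
    boxes, _ = hog.detectMultiScale(gray, winStride=(8, 8))
    results = []
    for (x, y, w, h) in boxes:
        roi = depth_m[y:y + h, x:x + w]       # depth pixels inside the box
        valid = roi[roi > 0]                  # ignore invalid (zero) pixels
        if valid.size:
            results.append((x, y, w, h, float(np.median(valid))))
    return results
```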
While the ADI 96TOF reference design is useful for customers doing a chip-down design, and the DCAM710 demonstration platform is a cost-efficient way to evaluate the technology, in many cases customers may need a different or more customized solution when they go to production. For example, in AGV systems, a GigE or Ethernet output from an edge node sensing module is often desirable. This provides a robust method of sending high speed, raw depth data from an edge node sensing module to a centralized CPU/GPU controller.
Figure 21. Depth sensing in industrial AGVs (navigation/collision avoidance).
In other applications, customers may want to implement some edge node processing and only send metadata back to the controller. In this case, a small form factor depth node module with an integrated edge node processor supporting ARM or an FPGA is more desirable. ADI has developed a set of third-party ecosystem partners that can meet all customers' requirements. These third parties offer a range of capabilities, from complete camera products to small optical modules without external housing that can be integrated into bigger systems; the diagram below shows one such tiny MIPI module that can easily be mechanically integrated into a larger system. ADI's partner network can also offer customization of hardware, optics, and application processors if needed. Examples of the modules our partners offer today include USB, Ethernet, Wi-Fi, and MIPI variants, with a range of integrated edge node processors available.
Both ADI and our hardware partners also work with external software partners who bring algorithm expertise in depth processing at the application level.
The advantages of high resolution depth imaging for solving difficult and complex tasks in new and emerging application areas are driving our customers to adopt it quickly. The fastest, lowest risk, and cheapest path to market is through affordable, small form factor, high precision modules that can be integrated into larger systems. ADI's 96TOF reference design platform provides a complete embedded evaluation platform, enabling customers to immediately evaluate the technology and start developing application code. Please contact ADI for more information about ADI's ToF technology, hardware, or our hardware partners.
Table 1 Camera module options for industrial equipment
Fig. 2 Combining a MIPI camera and V-by-One HS
However, integrating a camera module into industrial equipment requires overcoming several obstacles, although they are by no means complicated. The first challenge engineers at industrial equipment manufacturers encounter is camera module selection. At present, there are four major camera module options: USB Video Class (UVC) cameras, Internet Protocol (IP) cameras, Wi-Fi cameras, and Mobile Industry Processor Interface (MIPI) cameras. These four camera module types are briefly described below.

UVC camera modules are connected via a USB cable. Most webcams connected to PCs for online conferencing and video streaming use these UVC camera modules. The advantage of these modules lies in their ease of use: because the module is compliant with the USB Video Class (UVC) standard, the driver software is preinstalled in the computer's OS. The user simply connects the UVC camera to their computer, and the image is displayed on their monitor. Therefore, there is no need to develop driver software when integrating one into industrial equipment. On the other hand, there are some disadvantages. Since UVC cameras are end products in themselves, their external dimensions are large, and their cost is high for camera modules to be incorporated into industrial equipment.

IP cameras are modules connected via Ethernet cables. The camera has an IP address and can be used simply by connecting the module to the Internet. These devices are sometimes also called network cameras. The main applications for these devices are in surveillance, but they can also be integrated into industrial equipment. Their advantage is the same ease of use as with UVC cameras: there is no need to develop driver software when integrating a module into industrial equipment. Another advantage is that the connection distance can be as long as 100 m, so these modules can be used to monitor remote locations. The disadvantages are large external dimensions and high cost, similar to UVC cameras. A further disadvantage is that video signals are compressed for transmission, making real-time transmission impossible and resulting in a video delay of several frames.

Wi-Fi cameras are positioned as a wireless LAN (Wi-Fi) version of the IP camera described above, allowing wireless communication between the camera module and the monitor displaying the images. This wireless communication is their biggest advantage. Moreover, the connection distance can be long, up to approximately 200 m, making them highly user-friendly. The disadvantage of these modules is that, as with IP cameras, real-time transmission is not possible. Furthermore, there is a risk of disconnection depending on the surrounding environment, the robustness of the connection is relatively low, and there are security concerns. In addition, the large external dimensions and high cost for a camera module to be incorporated into industrial equipment may be cited as disadvantages.

MIPI cameras are modules that use MIPI CSI-2, a transmission standard that connects the camera to the SoC inside mobile devices such as smartphones, as their video output interface. The advantages of these modules are small external dimensions and low cost, because they were originally intended for mobile devices. Further advantages include the possibility of real-time transmission and the robustness of the connection. On the other hand, there are two disadvantages. One is that the connection distance is short, 30 cm at most.
This makes it difficult to apply them to industrial equipment where the camera module is far from the video processor. The other disadvantage is the need to develop driver software and write register code when integrating one into industrial equipment. Unlike UVC and IP cameras, MIPI cameras cannot display video footage when simply connected.

However, the disadvantage related to connection distance can be eliminated to some extent if appropriate measures are taken. Specifically, MIPI cameras can be combined with the V-by-One HS high-speed serial interface from THine Electronics ((5) MIPI Camera + V-by-One HS in Table 1). This combination extends the connection distance to about 15 m (Fig. 2). Moreover, few disadvantages arise from this combination; the only one is a slight increase in component and development costs. The advantages of small external dimensions and connection robustness remain unchanged.

The disadvantage of increased development costs when combining MIPI cameras with V-by-One HS can be largely eliminated with the MIPI Camera SerDes Starter Kit offered by THine Electronics. This starter kit includes a complete set of software required to send and receive MIPI CSI-2 signals and also provides a graphical user interface (GUI) tool that automatically generates register codes.