There are quite a lot of different optical 3D technologies to generate a point cloud. If you want very accurate and precise point clouds, though, you’re basically left with one choice: blue light fringe technology. With this technology you can scan a complete surface at once, create point clouds with micron precision, and easily detect and filter outliers. That’s why we use this technique in our val-IT Flex.
So how does it work? To make it work well, you need two cameras and one projector. The projector can be seen as an ‘inverse camera’, and its optical resolution typically determines the resolution of your measurement system. The camera resolution needs to be at least twice as high to resolve all individual projector pixels.
The second camera is used for validation: with one camera and a projector you can already create a point cloud, but with the second camera you can validate that point cloud and remove outliers. For accurate measurements you need an industrial DLP projector with special optics, so you won’t lose information due to poor lens performance or due to changes in the construction (as a result of temperature differences and vibrations). Regular projectors have a 100% offset, really poor optics and a cheap LED controller, so you can imagine it requires some nifty engineering to make an industrial projector.
The projector generates projector pixels that need to be measured by the camera. This is done by encoding all projector pixels in a sequence of images with the help of Gray codes, so each pixel gets a unique, individual identifier that the camera can detect. Depending on the projector resolution, this requires 20 to 30 Gray-code images.
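The Gray-code idea can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical 1024-column projector (one projected pattern per bit of the column index); the real system encodes rows and columns at its own resolution.

```python
# Sketch of Gray-code column indexing, assuming a 1024-column projector.
# One binary pattern is projected per bit of the Gray code.

def to_gray(n: int) -> int:
    """Convert a binary column index to its Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Decode a Gray code back to the original column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

columns = 1024                       # hypothetical projector width
patterns = columns.bit_length() - 1  # 10 patterns suffice for 1024 columns

# Adjacent columns differ in exactly one bit, so a decoding error at a
# pattern edge shifts the index by at most one column.
assert all(bin(to_gray(c) ^ to_gray(c + 1)).count("1") == 1
           for c in range(columns - 1))
assert all(from_gray(to_gray(c)) == c for c in range(columns))
```

The one-bit-per-step property is why Gray codes are preferred over plain binary here: a misread pattern near a stripe boundary costs at most one projector column instead of an arbitrarily large jump.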
In theory, you have now identified the projector pixels, which you can match with camera 1 to generate a point cloud. But there is one catch! If you want submicron precision, your weakest link is the projector, which has a relatively low optical resolution.
In order to increase the precision of the projector pixels, we look at projected waves, where the limit of your measurement is the frequency of the wave in combination with the wave accuracy of your projector. For that you need a short-wavelength (blue or UV) LED and a projector that can generate a perfect sine wave.
So we project a known sine wave with the projector, and due to tiny differences in Z, the measured sine wave is shifted. With phase shifting (3–10 phases) you can distinguish these shifts from other optical effects, estimate the phase robustly and filter out outliers.
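Standard N-step phase-shift analysis recovers the phase at each pixel independently. The sketch below simulates one pixel with made-up intensity values (A, B and the true phase are illustrative, not val-IT Flex parameters) and recovers the phase with a least-squares estimate.

```python
import numpy as np

# Sketch of N-step phase-shift recovery; parameters are illustrative.
N = 4                                  # number of phase shifts (3-10 in practice)
shifts = 2 * np.pi * np.arange(N) / N  # equally spaced phase offsets

# Simulated camera intensities for one pixel:
# I_k = A + B * cos(phi + shift_k), with unknown surface phase phi.
A, B, phi_true = 0.5, 0.4, 1.2
I = A + B * np.cos(phi_true + shifts)

# Least-squares phase estimate, independent per pixel:
num = -np.sum(I * np.sin(shifts))
den = np.sum(I * np.cos(shifts))
phi_est = np.arctan2(num, den)  # recovers phi_true (up to 2*pi wrapping)
```

Because the estimate is a per-pixel arctangent of two weighted sums, it is trivially data-parallel, which is also why this kind of processing maps so well onto GPUs.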
For proper results, you need to do a radiometric calibration (or at least gamma correction) for the cameras and the projector. You can find more on this subject in our previous Tech Talk blog. There are also other techniques that can be used to increase the accuracy of your measurement.
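To show why gamma correction matters for the sine patterns, here is a minimal sketch assuming a display gamma of 2.2 (a typical value; a real radiometric calibration would measure the projector’s actual response curve rather than assume a power law).

```python
import numpy as np

# Sketch of gamma pre-correction, assuming a power-law response with
# gamma = 2.2 (an assumed typical value, not a measured one).
gamma = 2.2

def pre_correct(desired):
    """Pre-distort the pattern so the projector's gamma yields a clean sine."""
    return np.power(desired, 1.0 / gamma)

def projector_response(signal):
    """The nonlinearity we are compensating for."""
    return np.power(signal, gamma)

x = np.linspace(0, 2 * np.pi, 8)
desired = 0.5 + 0.5 * np.cos(x)      # ideal sine fringe, values in [0, 1]
emitted = projector_response(pre_correct(desired))
assert np.allclose(emitted, desired)  # corrected output matches the ideal
```

Without this step, the projected “sine” is distorted by the projector’s nonlinear response, which shows up as systematic phase errors in the recovered surface.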
Now that you have your submicron projector pixels, everything becomes easier. You can treat your projector as an ‘inverse’ camera and use triangulation to convert the pixel displacements to 3D positions. The second camera is used to detect any differences due to optical effects, and filter them if required.
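The triangulation step can be sketched as intersecting a camera ray with the matching projector ray. The geometry below (camera at the origin, projector offset by 0.2 units) is entirely made up for illustration; a calibrated system would use its measured intrinsics and extrinsics.

```python
import numpy as np

# Sketch of ray-ray triangulation: intersect a camera ray with the matching
# projector ("inverse camera") ray. Positions here are illustrative only.

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    a, c, e = d1 @ d1, d1 @ d2, d1 @ b
    f, g = d2 @ d2, d2 @ b
    denom = a * f - c * c          # zero only for parallel rays
    t1 = (e * f - c * g) / denom
    t2 = (c * e - a * g) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Camera at the origin, projector offset along X; both rays see (0, 0, 1).
cam_o = np.array([0.0, 0.0, 0.0])
proj_o = np.array([0.2, 0.0, 0.0])
point = np.array([0.0, 0.0, 1.0])
p = triangulate(cam_o, point - cam_o, proj_o, point - proj_o)
# p ≈ [0, 0, 1]
```

Taking the midpoint of the shortest connecting segment is a common, robust choice because measured rays never intersect exactly; the length of that segment also gives a cheap per-point quality metric.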
For fringe scanning technology to work, you must wait a moment (~1 second) until all images are captured. During that time the object must not move, because even the slightest vibration can have a large impact on the precision of your point cloud. The high number of pixels (~360 million) then needs to be processed in less than a second. For that amount of data we make use of parallelization on graphics cards with CUDA.
So in sum: with a projector and two cameras you can build a very precise measuring device that can be used for numerous types of applications. Check our website to see some showcases!