This page explains how and why we calibrate images used for measuring the reflectance of materials such as vegetation in a crop field.
The sun emits a broad spectrum of light, which is reflected by objects on the Earth's surface. A camera can capture this reflected light in the wavelengths its sensor is sensitive to. The sensors we supply are silicon-based, which makes them sensitive across the visible and near-infrared spectrum, from about 400-1200nm. Using band-pass filters that only allow a narrow band of light to reach the sensor, we can capture how much of that particular band of light objects reflect.
For instance, if the camera's filter selects a 25nm-wide band with a peak wavelength of 650nm, it will only capture the reflected "red" light emitted by the sun. This is what the Survey2 Red model camera captures in each image it takes. Each pixel in the image is thus a percentage of the reflected light allowed to pass through the "red" filter.
Pixels have a value that ranges from a minimum to a maximum based on the image's bit depth. The higher the bit depth, the more information that can be stored in each pixel. A sensor captures each image in a RAW format and then either saves the RAW or converts it to a more common format (typically by compressing it). The Survey2 cameras capture 16-bit RAW photos, which means each pixel can hold one of 2^16 (65,536) possible values, ranging from 0 to 65,535. When the camera saves a JPG it compresses the image down to 8 bits per channel, leaving a range of only 0 to 255. Since we are capturing the reflectance of light and not trying to make a "pretty picture", we always want to use the RAW format. If you need a JPG you can easily convert the RAW to TIFF and then to JPG using our MAPIR QGIS plugin (see below).
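To make those value ranges concrete, here is a minimal Python sketch (the function name is just illustrative) of the 16-bit and 8-bit ranges, and why the RAW-to-JPG conversion is lossy:

```python
# A 16-bit pixel can take 2**16 = 65,536 distinct values (0 to 65,535);
# an 8-bit JPG pixel can only take 2**8 = 256 values (0 to 255).
BITS_RAW = 16
BITS_JPG = 8

levels_raw = 2 ** BITS_RAW  # 65536
levels_jpg = 2 ** BITS_JPG  # 256

def to_8bit(value_16bit):
    """Lossy conversion: 256 adjacent 16-bit values collapse to one 8-bit value."""
    return value_16bit // 256

print(levels_raw, levels_jpg)          # 65536 256
print(to_8bit(65535), to_8bit(40000))  # 255 156
```

Every 256 distinct RAW reflectance levels map onto a single JPG level, which is exactly the information we do not want to throw away before calibration.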
It is also important to make sure the camera settings (shutter speed, ISO, EV) are adjusted so that no pixels reach the maximum pixel value. If a pixel would otherwise be brighter than the max value, that information is clipped and lost. You may notice that the TIFF images from the Survey2 cameras seem dark. This is normal: we have chosen default settings that keep the pixels from reaching the max value. Remember, you are capturing a percentage of reflectance, not making a "pretty picture".
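One quick way to check a capture for clipped pixels is to count how many sit at the maximum value. A small NumPy sketch (the sample array stands in for a decoded RAW frame, which you would load with your own RAW reader):

```python
import numpy as np

MAX_16BIT = 65535  # maximum value a 16-bit pixel can hold

# Stand-in for a decoded 16-bit RAW frame.
image = np.array([[1200, 340],
                  [65535, 9800]], dtype=np.uint16)

# Any pixel sitting exactly at the max may have been clipped.
saturated = int(np.count_nonzero(image == MAX_16BIT))
if saturated > 0:
    # Lower the shutter speed, ISO, or EV compensation and re-shoot.
    print(f"{saturated} saturated pixel(s) found")
```

If the count is non-zero, reduce the exposure settings until the brightest subjects in the scene stay below the maximum.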
That brings us to calibration. We have captured a percentage of reflectance in each pixel, but how do we know if it is correct? We need to calibrate each pixel against a known reflectance value. We do this by capturing a photo of our MAPIR Camera Reflectance Calibration Ground Target Package just before each survey. The package contains 3 targets whose reflectance has been measured at incremental wavelengths by a spectrometer (a calibrated lab instrument). The pixel values in the captured target image are then compared with the known reflectance values of the targets. Using this information, our MAPIR QGIS plugin transforms the pixel values and thus calibrates the survey images.
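The underlying idea can be sketched as a simple linear fit: pair the known target reflectances with the pixel values measured on each target, fit a line, and apply that line to every survey pixel. The reflectance and pixel numbers below are made up for illustration, not real MAPIR target values, and this is only a sketch of the concept, not the plugin's exact method:

```python
import numpy as np

# Known target reflectances (0-1) paired with the mean pixel value
# measured on each target in the capture. Illustrative values only.
known_reflectance = np.array([0.02, 0.27, 0.85])
measured_pixels = np.array([1500.0, 14200.0, 46100.0])

# Least-squares line: reflectance ~= slope * pixel + intercept
slope, intercept = np.polyfit(measured_pixels, known_reflectance, 1)

def calibrate(pixels):
    """Transform raw pixel values into reflectance estimates (0-1)."""
    return slope * pixels + intercept

survey = np.array([9000.0, 30000.0])
print(calibrate(survey))  # estimated reflectance for two survey pixels
```

A pixel value near a target's measured value then maps to a reflectance near that target's known reflectance, which is exactly what "calibrated" means here.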
Once the images are calibrated you can stitch them together into a single image called an ortho-mosaic, or "ortho" for short. If you are simultaneously capturing images with multiple sensors, make sure your software supports alignment of multiple sensors; examples of such software are Pix4D's Pix4Dmapper Pro and Agisoft's PhotoScan. Index calculations can then be performed on the pixels of the resulting ortho images to produce different types of analysis. If the ortho-generation software has difficulty finding tie points between the calibrated images, you can instead stitch the ortho first from the non-calibrated images and then export and calibrate the ortho in QGIS afterwards.
The normalized difference vegetation index (NDVI) is the most common analysis; it compares reflected red and near-infrared (NIR) light to assess where plants are most "healthy". We assume that, within a sample area of a single crop, the plants reflecting more NIR light are performing more photosynthesis and are thus healthier (and vice versa). If a certain region of plants has a higher NDVI value (values range from -1 to +1), then the plants are likely healthier there. No matter what index analysis you perform, it is vital that you also physically inspect the subject area to verify your results, a process commonly called "ground truthing".
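The NDVI formula itself is NDVI = (NIR - Red) / (NIR + Red). A short NumPy sketch with two hypothetical pixels (the reflectance values are invented for illustration):

```python
import numpy as np

# Hypothetical calibrated reflectance values (0-1).
red = np.array([0.08, 0.30])  # a vegetation pixel, a bare-soil pixel
nir = np.array([0.50, 0.35])

# NDVI = (NIR - Red) / (NIR + Red); the result is always in [-1, +1].
ndvi = (nir - red) / (nir + red)
print(ndvi)  # the vegetation pixel scores much higher than the soil pixel
```

The vegetation pixel (low red, high NIR) comes out around 0.72, while the soil pixel comes out near 0.08, matching the rule of thumb that healthy vegetation sits well above bare ground on the NDVI scale.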
After you have pre-processed your images you can either stitch them into the final orthomosaic first or calibrate each photo before stitching. Typically it is better to stitch the non-calibrated images and then calibrate the orthomosaic, since calibrated images can cause stitching issues. This is especially true with point-cloud software like Pix4D and PhotoScan: calibrate after ortho generation.
To begin calibration, open the QGIS MAPIR plugin and click on the Calibrate tab:
Select your camera model from the drop-down menu.
If you have an image of the MAPIR Camera Reflectance Calibration Ground Target Package taken just before your survey (recommended), click the first Browse button and select the QR target image. When the software has detected the QR target, the dialog box on the right will let you know whether it was successful.
If none of the images can be used to detect the QR code, the program will automatically fall back to the hard-coded values we supply, which were measured on a clear sunny day. The hard-coded values may introduce a slight inaccuracy, so make sure to capture a few good images of the target shortly before your survey for the best results.
After the plugin has obtained the necessary calibration values click the lower Browse button to select the input image directory. A Calibrated folder will be automatically created in the input folder for the calibrated images. If you would like the calibrated TIFFs to be converted to JPGs please select the "Convert Calibrated TIFFs to JPGs" box. To begin calibration select the "Calibrate Images" button.
Each pixel in the images now represents a percentage of reflectance for the photographed area. They will likely be very dark, but don't worry, this is normal. Remember, you are capturing reflectance information, not a pretty picture.
If you calibrated each individual photo, you will now want to upload the images to the software you have chosen for generating the ortho-mosaic image.
Here is an example of a calibrated image set taken by the Survey2 NDVI camera processed on Drone Deploy with the RGN filter and NDVI algorithm:
Here is a non-calibrated TIFF image taken from a Survey2 NDVI camera flying over a winery (grapes):
(Image has been converted to JPG for proper web display)
In this NDVI camera model the reflected red light is captured in the image's red channel and the reflected near infrared (NIR) light is mainly in the image's blue channel.
If you split the image into its RGB channels, here is the red channel (left/top) and the blue channel (right/bottom):
(Brightness and contrast have been increased so you can see the ground subjects. The image was also cropped.)
The rows are the grape vines and show up as black in the red channel and white in the blue channel. Pixel values range from black (no reflectance) to white (full reflectance). Plants reflect lots of NIR light during photosynthesis so that's why the blue NIR channel shows the plants as white and the ground as black. The ground reflects more red light than the green healthy plants do so that's why the ground is white in the red channel and the plants are black.
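As a small illustration, splitting an image array into its channels is a one-line slice with NumPy. The pixel values below are invented to mimic the description above (plants dark in red, bright in the NIR/blue channel):

```python
import numpy as np

# Hypothetical 2x2 RGN capture as a height x width x channel array.
# In the Survey2 NDVI model, reflected red light lands in channel 0 (red)
# and reflected NIR light lands mainly in channel 2 (blue).
image = np.zeros((2, 2, 3), dtype=np.uint16)
image[..., 0] = [[100, 60000], [100, 60000]]  # red: vines dark, ground bright
image[..., 2] = [[60000, 100], [60000, 100]]  # blue (NIR): vines bright, ground dark

red_channel = image[..., 0]  # grape vines appear black here
nir_channel = image[..., 2]  # grape vines appear white here
```

The same slicing works on a real image loaded as an array, regardless of resolution.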
After calibration using the MAPIR Camera Reflectance Calibration Ground Target Package image this is how the same unedited image looks:
(Image has been converted to JPG for proper web display)
Again, yes it's very dark, but remember each pixel now represents a calibrated reflectance percentage of the surveyed objects. Let's bring the image back into QGIS.
Using the Raster Calculator at the top of the QGIS window, we choose each of the image channels corresponding to the information required for the NDVI formula:
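For reference, with the Survey2 NDVI (RGN) camera the red information is in band 1 and the NIR information is mainly in band 3 (the blue channel), so the Raster Calculator expression takes this general form (the layer name "my_ortho" is a placeholder for your own layer):

```
("my_ortho@3" - "my_ortho@1") / ("my_ortho@3" + "my_ortho@1")
```

QGIS references raster bands with the "layername@bandnumber" syntax shown here.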
After choosing a new name in the top right by clicking on the 3 dots "..." we click OK.
The resulting image is a black and white index image, meaning that each pixel represents the calibrated NDVI value, with a max range of -1 to +1. Here is the information from QGIS of the NDVI index image:
As you can see, the image's pixels range from 0.082 to 0.704. Vegetation will typically have an NDVI value from about 0.2 to 0.9. Now let's add some color so we can better see the contrast between the ground and plants, and also within the plants themselves.
Double-click on the photo's name in the left panel, then in the window that pops up click the Style tab and change the "Render type" to "Singleband pseudocolor".
Under "Generate new color map" you can choose whichever color gradient you prefer. Most people like a red-to-green lookup table (LUT), so let's choose "RdYlGn" from the drop-down menu. Change the Mode to "Equal interval" so the value spacing between each color is even. You can choose any number of classes you prefer; the more classes, the more contrast between the colors (and thus the more colors). You can also edit the color gradient ramp if you want to get creative with the color map. Clicking the "Classify" button will assign the colors to the value ranges you have selected and generate the key on the left side of the window. Click the Apply button at the bottom.
You now have an image showing the "health" of the vegetation, with green being the healthiest and red the least healthy. The problem is that we've scaled the color gradient across all the pixel values, which also includes non-vegetation (like the ground). Let's go back into the style and adjust the values to try to isolate the vegetation.
Above we mentioned that NDVI values for plants typically start around 0.2 so let's change the minimum value to 0.2 for our color map:
Click Classify and then Apply at the bottom.
As you can see the contrast within the vegetation is beginning to become more apparent.
One thing to know about applying color gradients to images in QGIS is how they are saved. Right-click on the image in the left "Layers Panel" and go to Save As. In the save window make sure to select "Rendered image" to save the color gradient; otherwise the resulting image will be the black-and-white index image.
If you are overlaying this image on a color RGB layer, we'd recommend adjusting the LUT to only cover the values for vegetation and making the rest of the pixels transparent, a method known as "clamping".
Back in the Style menu of QGIS, click the "+" symbol above the LUT, click the down arrow to sort, and double-click the value next to the pink box. Change the value to 0.001 less than the first color box (red). Then double-click the default pink color box:
Moving the Opacity slider to the left makes those pixels see-through, meaning all pixels below the red color will be transparent and the pixels from the RGB image will show through. You can also rename the transparent entry something useful like CLAMP. Here's what the LUT looks like then:
And the resulting clamped NDVI layer:
Assuming the RGB ortho was also aligned with the NDVI ortho (a function of point-cloud software like Pix4D and PhotoScan), you can simply bring both layers together with the NDVI one above the RGB one. If the orthos were not already aligned you will need to align them manually, which can be tedious. There are many options available, but the one we keep going back to is the Exact Aligner plugin in GIMP, due to its simplicity.