This page explains how and why we calibrate images used for measuring the reflectance of materials such as vegetation in a crop field.
The sun emits a broad spectrum of light, which is reflected by objects on the Earth's surface. A camera can capture this reflected light in the wavelengths its sensor is sensitive to. The sensors we supply are based on silicon, which is sensitive in the Visible and Near Infrared spectrum from about 400-1200nm. Using band-pass filters that only allow a narrow spectrum of light to reach the sensor, we can capture how much of that particular band of light an object reflects.
For instance, if the camera's filter selects a 25nm wide band with a peak wavelength of 650nm, it will only capture the reflected "red" light emitted by the sun. Each pixel in the image thus records the percentage of reflected light allowed to pass through the "red" filter.
Pixels have a value that ranges from a minimum to a maximum based on the image's bit depth. The higher the bit depth, the more information can be stored in each pixel. A sensor captures each image in a RAW format and then either saves the RAW or converts it to a more common format (typically by compressing it). The Survey3 cameras capture 16-bit RAW photos per RGB channel, which means each pixel can store one of 65,536 (2^16) values, ranging from 0 to 65,535. When the camera saves an 8-bit JPG it compresses that information down to a range of only 0 to 255. Since we are capturing the reflectance of light and not trying to make a "pretty picture" we always want to use the RAW format. If you need a JPG you can easily convert the RAW to TIFF to JPG using MAPIR Camera Control (MCC) (see below).
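To see why the RAW format matters, here is a minimal sketch of the precision lost when a 16-bit value is reduced to 8 bits (the function name is ours, for illustration only):

```python
# Sketch: how much precision is lost when a 16-bit RAW value is reduced to 8 bits.
# A 16-bit channel stores 65,536 levels (0-65535); an 8-bit channel only 256 (0-255).

def to_8bit(value_16bit):
    """Scale a 16-bit pixel value (0-65535) down to 8 bits (0-255)."""
    return value_16bit >> 8  # keep the top 8 bits, discard the rest

# 256 distinct 16-bit values all collapse into one 8-bit value:
print(to_8bit(29952))  # 117
print(to_8bit(30207))  # 117 -- same as above, the difference is gone
print(to_8bit(65535))  # 255
```

Every group of 256 neighboring 16-bit reflectance values becomes a single 8-bit value, which is why calibration should always start from the RAW data.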
It is also important to make sure the camera settings (shutter speed, ISO, EV) are adjusted so that no pixels reach the maximum pixel value. Any pixel that would otherwise exceed the maximum value is clipped, and that information is permanently lost. You may notice that images from the Survey3 cameras seem dark. This is normal: we have set the default settings to keep the pixels from reaching the max value. Remember, you are capturing a percentage of reflectance, not making a "pretty picture".
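A simple check for this clipping condition can be sketched as follows (the helper is hypothetical, not part of MCC):

```python
# Sketch (hypothetical helper): flag an image that contains clipped pixels,
# i.e. pixels that hit the sensor's maximum value and lost reflectance information.
MAX_16BIT = 65535

def has_clipped_pixels(pixels, max_value=MAX_16BIT):
    """Return True if any pixel value has reached the maximum."""
    return any(p >= max_value for p in pixels)

print(has_clipped_pixels([1200, 40000, 65535]))  # True  -- information was lost
print(has_clipped_pixels([1200, 40000, 60000]))  # False -- safely below the max
```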
That brings us to calibration. We have captured a percentage of reflectance in each pixel, but how do we know if it is correct? We need a reference with known reflectance values against which to calibrate each pixel. We do this by capturing a photo of our MAPIR Camera Reflectance Calibration Ground Target Package just before each survey. The package contains 4 targets that have been measured at incremental wavelengths by a spectrometer (a calibrated lab instrument). The pixel values of the captured target image are then compared with the known reflectance values of the targets. Using this information, MAPIR Camera Control (MCC) transforms the pixel values and thus calibrates the survey images.
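MCC's exact transform is internal to the software, but the principle can be sketched with a simple linear fit (often called the empirical line method): fit a line mapping the targets' pixel values to their known reflectances, then apply that line to every survey pixel. The target values below are hypothetical, for illustration only:

```python
# Sketch of the idea behind target-based calibration (empirical line method).
# Fit reflectance = slope * pixel + intercept from the 4 known targets.

def fit_line(pixel_values, known_reflectances):
    """Least-squares fit of a line through (pixel, reflectance) pairs."""
    n = len(pixel_values)
    mean_x = sum(pixel_values) / n
    mean_y = sum(known_reflectances) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(pixel_values, known_reflectances))
    var = sum((x - mean_x) ** 2 for x in pixel_values)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical example values: average pixel value of each target in the photo,
# and each target's lab-measured reflectance (0.0 to 1.0).
targets_px = [5000, 18000, 33000, 52000]
targets_refl = [0.02, 0.23, 0.44, 0.71]
slope, intercept = fit_line(targets_px, targets_refl)

def calibrate(pixel):
    """Map a raw pixel value to an estimated reflectance."""
    return slope * pixel + intercept

print(round(calibrate(33000), 3))
```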
Once the images are calibrated you can stitch them together into a single image called an ortho-mosaic, or "ortho" for short. If you are simultaneously capturing images using multiple sensors you will need to make sure your software supports alignment of multiple sensors. Examples of such software are Pix4D's Pix4Dmapper Pro and Agisoft's Photoscan. The resulting ortho images can then have the index calculations performed on the pixels to produce different types of analysis. If you are finding that the ortho generation software is having difficulty finding tie points between the calibrated images you can choose to stitch the ortho first with the non-calibrated images and then export and calibrate the ortho in MCC afterwards.
The normalized difference vegetation index (NDVI) is the most common analysis and compares reflected red and near infrared (NIR) light to assess where plants are most "healthy". We assume that given a sample area of a single crop, the plants which are reflecting more NIR light will be performing more photosynthesis and thus are healthier (and vice versa). If a certain region of plants has a higher NDVI value (values range from -1 to +1), then the plants are likely healthier there. No matter what index analysis you perform, it is vital that you also physically inspect the subject area to verify your results, a process commonly called "ground truthing".
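The NDVI formula itself is short, and can be sketched directly from its definition:

```python
# NDVI = (NIR - red) / (NIR + red); the result always falls between -1 and +1.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two reflectance values."""
    if nir + red == 0:
        return 0.0  # avoid dividing by zero on empty pixels
    return (nir - red) / (nir + red)

# Healthy vegetation reflects much more NIR than red:
print(round(ndvi(0.50, 0.08), 2))  # 0.72
# Bare soil reflects the two more evenly, giving a value near zero:
print(round(ndvi(0.30, 0.25), 2))  # 0.09
```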
After you have pre-processed your images you have the option of calibrating each photo before stitching, or stitching first and calibrating the result. Typically it is better to stitch the non-calibrated images and then calibrate the orthomosaic, since calibrated images can cause stitching issues. This is especially important with point-cloud software like Pix4D and Photoscan, where you should calibrate after ortho generation.
To begin calibration, open MAPIR Camera Control (MCC) and click on the Calibrate tab:
In order for the scaling of the pixel data to be consistent, you need to load every camera you want to calibrate into the calibration window. For instance, if you wanted to calibrate a Survey3W_RGN camera alongside a Survey3W_NGB camera, you would select the camera model, lens and filter for both cameras as such:
If you have an image of the MAPIR Camera Reflectance Calibration Ground Target Package taken just before your survey (recommended) then click the first Browse button and select the QR target image. Then click the "Generate Calibration Values" button next to that browse button. When the software has detected the QR target the dialog box on the right will let you know if it was successful. Do this for each camera you are calibrating.
If the QR code cannot be detected in any of the images, the program will automatically fall back to the hard-coded values we supply, which were measured on a clear sunny day. The hard-coded values may be slightly inaccurate for your conditions, so make sure to capture a few good images of the target shortly before your survey for the best results.
After the plugin has obtained the necessary calibration values for each camera, click the Browse button below the "Generate Calibration Values" button to select the input image directory for each camera. You are browsing for a folder, not a single image, so you will not see images in the input folder browser. The calibration will process ALL images in ALL input folders, so make sure there are no images in there that you don't need. It's typically best to clean up your input folders by deleting or moving unneeded photos to another folder, such as those captured before and after your main survey. Then press the Calibrate button at the bottom of the window. The program will likely freeze and become unresponsive while calibrating; this is normal. Please let it process without interrupting it.
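The "clean your folders first" step can be sketched as a simple filter; the filenames and prefix below are hypothetical, purely to illustrate separating survey photos from stray shots:

```python
# Sketch (hypothetical filenames): everything in the input folders gets calibrated,
# so separate out photos that aren't part of the survey before pressing Calibrate.

def survey_images(filenames, keep_prefix="2023_survey"):
    """Keep only the TIFFs that belong to the survey (prefix is an assumption)."""
    return [f for f in filenames
            if f.lower().endswith(".tif") and f.startswith(keep_prefix)]

folder = ["2023_survey_0001.tif", "2023_survey_0002.tif",
          "test_shot.tif", "notes.txt"]
print(survey_images(folder))  # ['2023_survey_0001.tif', '2023_survey_0002.tif']
```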
A Calibrated_1 folder will be automatically created in the input folders for the calibrated images. If you would like the calibrated TIFFs to be converted to JPGs please select the "Convert Calibrated TIFFs to JPGs" box prior to pressing the Calibrate button.
Each pixel in the images now represents a percentage of reflectance for the photographed area. They will likely be very dark, but don't worry, this is normal. Remember, you are capturing reflectance information, not a pretty picture.
If you calibrated each individual photo, you will now want to upload the images to the software you have chosen to generate the orthomosaic image.
Open MAPIR Camera Control (MCC) and click on the Viewer tab at the top. The Viewer allows you to look at images that are normally too dark to see in other photo browsers. You can view and convert single images directly from the camera or your stitched orthomosaics. Click the Browse button to open a TIFF image that was previously converted from RAW in the Process tab of MCC (you can also open JPGs in the Viewer).
Here is a non-calibrated TIFF image taken from a Survey3W RGN camera flying over a winery (grapes):
In this RGN camera model the reflected red light is captured in the image's red channel and the reflected near infrared (NIR) light is mainly in the image's blue channel.
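The channel mapping above can be sketched as follows; the nested-list image layout is an assumption for illustration, not MCC's internal format:

```python
# Sketch: extracting the red and NIR bands from an RGN image, where each pixel
# is an (R, G, B) triple and the blue position mainly holds NIR.

def split_red_nir(image):
    """image is a list of rows of (R, G, B) pixel tuples from an RGN camera."""
    red_band = [[px[0] for px in row] for row in image]
    nir_band = [[px[2] for px in row] for row in image]  # NIR lives in blue
    return red_band, nir_band

tiny = [[(100, 50, 200), (90, 40, 180)]]  # hypothetical 1x2 image
red, nir = split_red_nir(tiny)
print(red)  # [[100, 90]]
print(nir)  # [[200, 180]]
```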
If you were to split the image into its RGB channels, here is the red channel (left/top) and the blue channel (right/bottom):
(The image was cropped.)
The white/gray rows are the grape vines; they show up gray in the red channel and white in the blue channel. Pixel values range from black (no reflectance) to white (full reflectance). Plants reflect lots of NIR light while performing photosynthesis, which is why the blue (NIR) channel shows the plants as white and the ground as black. We haven't calibrated the images yet, so let's do that now:
Under the Calibrate tab of MCC, load the photo of our MAPIR Camera Reflectance Calibration Ground Target Package and select the folder containing the TIFF image of the grapes. Click the "Generate Calibration Values" button and then Calibrate at the bottom:
Going back to the Viewer tab, browse for the calibrated photo:
Notice that the image is overall much less green, this is one result of calibration. The other is that the index values are now calibrated properly, so let's look at those next.
Click the "Calculate Index" button to show the raster calculator. Let's choose the NDVI index for this tutorial. The NIR light is stored mostly in the blue image channel, so change the Y drop-down to @Band3(Blue Channel). The X drop-down should be @Band1(Red Channel) to represent the red image channel:
Click the "Apply" button and the image in the viewer will become black and white. This is the NDVI index image, with black pixels representing a low index value and white pixels representing a high value. You can see the pixel value range on the right side in the Legend area. For the NDVI index, a low (black) pixel value means that pixel reflected more red light than NIR, and thus was not healthy vegetation performing photosynthesis. The opposite is true for the high (white) pixel values, which reflected more NIR than red light and typically represent healthy vegetation.
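What the raster calculator computes can be sketched per pixel, with Y = @Band3 (blue, holding NIR) and X = @Band1 (red) as set above; the tiny bands below are hypothetical reflectance values:

```python
# Sketch of the raster calculator's per-pixel operation: (Y - X) / (Y + X),
# applied element-wise over the red band (Band1) and blue/NIR band (Band3).

def raster_ndvi(red_band, blue_band):
    """Element-wise (NIR - red) / (NIR + red) over two bands (lists of rows)."""
    return [[(n - r) / (n + r) if (n + r) != 0 else 0.0
             for r, n in zip(red_row, nir_row)]
            for red_row, nir_row in zip(red_band, blue_band)]

red  = [[0.08, 0.30]]  # hypothetical calibrated red reflectances
blue = [[0.50, 0.25]]  # hypothetical NIR reflectances stored in the blue band
result = raster_ndvi(red, blue)
print([round(v, 2) for v in result[0]])
```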
As you can see, the image's pixels range from -0.36 to 0.44. Vegetation will typically have an NDVI value from about 0.2 to 0.9. Now let's add some color so we can better see the contrast between the ground and plants, and also within the plants themselves.
Click the "Configure LUT" button to show the Color Map (LUT) window. Let's choose the Lut: RrYyGg, Classes: 7 Colors and Clip: Background Grayscale. Click the "Apply" button:
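The idea behind a discrete-class LUT can be sketched as simple bucketing; the thresholds below use this example's min/max and are illustrative, not MCC's exact color boundaries:

```python
# Sketch (hypothetical thresholds): how a 7-class LUT like RrYyGg can bucket
# index values into discrete color classes across the image's min/max range.

def lut_class(value, vmin=-0.36, vmax=0.44, n_classes=7):
    """Return a class index 0..n_classes-1 for an index value."""
    if value <= vmin:
        return 0
    if value >= vmax:
        return n_classes - 1
    step = (vmax - vmin) / n_classes
    return min(int((value - vmin) / step), n_classes - 1)

print(lut_class(-0.36))  # 0 (lowest class, e.g. red end of the ramp)
print(lut_class(0.44))   # 6 (highest class, e.g. green end of the ramp)
```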
Looking back at the Viewer main screen, you can see the index image is now colored according to your LUT:
In the still-open Color Map (LUT) window, you'll notice there are editable Min and Max pixel values. These define the range of pixel values to which the colors are applied, based on the clipping option chosen. Here are the clipping options:
Going back to our example, let's change the Min to 0 and change the clipping to "Background Original", so the LUT colors only the NDVI index pixels from 0 to 0.44 and the original image shows through in the remaining pixels:
You can then click the Save button to save this LUT image.