Image processing
Initialization
When a video is loaded, its frame rate (frequency) and resolution are detected automatically. The fields in the Image Processing tab are updated accordingly and the information appears in a dialog box.
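For orientation, the same metadata can be read outside the software; the sketch below is a minimal Python example using OpenCV, where the file name video.mp4 is only a placeholder.

```python
import cv2

# Read the properties that are detected automatically when a video is loaded
cap = cv2.VideoCapture("video.mp4")               # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS)                   # frame rate (frequency), in Hz
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # resolution: width in pixels
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # resolution: height in pixels
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) # number of frames
cap.release()

print(f"{n_frames} frames, {fps:.2f} fps, {width}x{height} px")
```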
Mask and Filter
It is recommended that the user's first operation be the definition of a mask. Areas outside the flow area can be masked, including areas containing GRPs (see the transformation section). This avoids many irrelevant detections. A mask is defined by a closed polyline forming a polygon. To create a mask, press the Draw Mask button, then enter the points of the outline with left clicks. To close the polyline and create the polygon, right-click. The region of interest (ROI) values in the Image Processing panel are updated automatically, using the smallest rectangle that contains the unmasked area. The operation can be repeated to improve the mask as long as no image processing has been performed.
To save the mask as a binary image, go to File>>Write/Export>>Mask. This mask can then be reloaded automatically with the command File>>Read/Import>>Mask.
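As an illustration of what the mask and ROI represent (not the software's internal code), the following Python/OpenCV sketch rasterizes a masked polygon into a binary image, exports it, and computes the smallest rectangle containing the unmasked area; the image size and polygon coordinates are placeholders.

```python
import cv2
import numpy as np

def add_mask(mask, polygon_points):
    """Mask a polygon (area excluded from the analysis) in a binary mask image."""
    pts = np.array(polygon_points, dtype=np.int32)
    cv2.fillPoly(mask, [pts], 0)   # masked pixels are set to 0, unmasked pixels stay at 255
    return mask

def roi_of_unmasked(mask):
    """Smallest rectangle (x, y, width, height) containing the unmasked area."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)

mask = np.full((1080, 1920), 255, dtype=np.uint8)                  # placeholder image size
mask = add_mask(mask, [(0, 0), (1919, 0), (1919, 300), (0, 250)])  # e.g. a bank or a GRP area
cv2.imwrite("mask.png", mask)                                      # export the mask as a binary image
print("ROI (x, y, width, height):", roi_of_unmasked(mask))
```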
The Mask check box determines whether the mask is taken into account during processing. WARNING: once a processing run has been performed with a mask, you must close and reopen the video before creating a new mask. This avoids conflicts between calculation steps that would otherwise not be performed on the same images.
It is also possible to filter the images with a Gaussian filter, which smooths images that may be noisy. A drop-down menu sets the width of the Gaussian filter in pixels.
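The sketch below shows what such a filtering step amounts to, assuming OpenCV's GaussianBlur as the smoothing operation; the file name and filter width are placeholders.

```python
import cv2

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
width_px = 5                                                 # filter width in pixels (must be odd here)
# sigma=0 lets OpenCV derive the standard deviation from the kernel width
smoothed = cv2.GaussianBlur(frame, (width_px, width_px), 0)
cv2.imwrite("frame_0001_smoothed.png", smoothed)
```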
Time window
To define the temporal window of the analysis, enter the minimum and maximum frame numbers in the Frames, Min and Frames, Max boxes respectively. When the contents of the boxes are deleted, the minimum and maximum values of the video file before processing appear in gray.
The Frames, step box defines whether some frames are skipped. For example, a step of 2 means that only one frame in two is processed. Using a step reduces the computation time and can also improve precision, since displacements between processed frames become larger than the detection uncertainty (of the order of one pixel). However, the step cannot be increased indefinitely; otherwise the software will struggle to match a particle between two consecutive processed images. The optimal step depends on the flow velocity relative to the acquisition frequency, and also on the number and density of particles. To help with this choice, an uncertainty bar is provided during the analysis of the results.
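As a rough order-of-magnitude check of this trade-off, one can estimate the particle displacement between two processed frames and compare it with the ~1 pixel detection uncertainty; the velocity, frame rate and pixel size below are illustrative assumptions only.

```python
# Illustrative check for choosing the frame step (all values are assumptions)
flow_velocity = 0.8   # m/s, assumed surface velocity
frame_rate = 25.0     # Hz, assumed acquisition frequency
pixel_size = 0.02     # m/pixel, assumed ground size of one pixel

for step in (1, 2, 4, 8):
    dt = step / frame_rate                              # time between two processed frames
    displacement_px = flow_velocity * dt / pixel_size   # displacement in pixels
    print(f"step {step}: {displacement_px:.1f} px between processed frames")

# Displacements well above ~1 px reduce the relative detection uncertainty,
# but too large a step makes matching the same particle between two images difficult.
```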
Color
Particle detection can be performed either on the original images (assumed to be RGB), on images converted to grayscale, or on images in HSV format. Most algorithms use grayscale images, as does all existing software. Color images (RGB or HSV) are only available for the Threshold and Histogram methods. For all methods, grayscale analysis is recommended except in the following cases (a conversion sketch is given after the list):
* If colored particles that are easy to identify in a natural environment are used, for example red ones. Color makes it easier to discriminate particles from stray reflections or foam.
* If you are tracking an object with a particular texture, such as fish or large floating bodies (tree trunk, boat, etc.).
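The color-space choices above correspond to standard conversions; the sketch below, assuming OpenCV and illustrative threshold values, shows a grayscale conversion, an HSV conversion, and a simple isolation of red particles.

```python
import cv2
import numpy as np

frame_bgr = cv2.imread("frame_0001.png")             # placeholder; OpenCV loads images as BGR
gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # grayscale: the recommended default
hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)     # HSV: convenient for colored particles

# Isolate red particles in HSV (hue/saturation/value bounds are illustrative only)
red_mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
```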
Background image
From each image it is possible to subtract a background image corresponding to the average of the images over the time window. This processing removes bright objects that do not move in the river, in order to limit the computation time. If the number of particles is large, the time window should also be large, to limit the influence of the particles on the average.
The background is not based on the minimum image, so that random or periodic fluctuations (reflections, shadows, surface waves) do not disturb the pixel values.
The use of a background image is only possible when the images are processed in grayscale.
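The sketch below illustrates this kind of background subtraction on grayscale images, assuming a list of frame files on disk (the file names are placeholders); it is only a schematic example, not the software's implementation.

```python
import cv2
import numpy as np

frame_files = [f"frame_{i:04d}.png" for i in range(1, 101)]   # placeholder file names

# Background = average of the grayscale images over the time window
stack = np.stack([cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32) for f in frame_files])
background = stack.mean(axis=0)

# Subtracting the background makes moving particles stand out while static bright objects vanish
for name, frame in zip(frame_files, stack):
    foreground = np.clip(frame - background, 0, 255).astype(np.uint8)
    cv2.imwrite(name.replace(".png", "_fg.png"), foreground)
```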