Motion Model

General

The following motion models can be used regardless of the detection model chosen previously.

The displacements are calculated by clicking the Motion Model button. For each point, the result is stored as: trajectory identifier, X position, Y position, X velocity, Y velocity. For algorithms that do not produce trajectories, the identifier is set to -1. The displacements are grouped by image.
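
For illustration, assuming the displacements are exported as plain text with one whitespace-separated record per point (the actual export format may differ; the field layout below follows the description above, everything else is hypothetical), such records could be read as follows:

    from dataclasses import dataclass

    @dataclass
    class Displacement:
        traj_id: int   # -1 when the algorithm does not build trajectories
        x: float       # X position
        y: float       # Y position
        u: float       # X velocity
        v: float       # Y velocity

    def parse_displacements(lines):
        """Parse records of the form 'traj_id x y u v' (hypothetical layout)."""
        records = []
        for line in lines:
            fields = line.split()
            if len(fields) != 5:
                continue  # skip headers, blank lines or image separators
            tid, x, y, u, v = fields
            records.append(Displacement(int(tid), float(x), float(y),
                                        float(u), float(v)))
        return records

    # Example: one point on a trajectory, one from an algorithm without trajectories.
    sample = ["3 120.5 87.2 1.4 -0.3", "-1 54.0 210.8 0.9 0.1"]
    print(parse_displacements(sample))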

The displacements found can be visualized after the calculation by selecting Processed_with_Moves or original_with_Moves in the display menu. A blue line shows the displacement between the previous and the current image. The size of the lines drawn on the image can be adjusted with the "up" and "down" arrow keys. Compared with the detection step, the number of objects may be reduced, depending on the filtering applied by the motion model.

If the method identifies trajectories, they can be visualized after the calculation by selecting Trajectories in the display menu.

KLT

The KLT algorithm is directly derived from the scripts available at https://github.com/groussea/opyflow, although the trajectory calculation is slightly different. It is based on the Lucas-Kanade algorithm for determining displacement by optical flow. Its parameters are not accessible through the graphical interface, because the default values have given good results on all the tested flows. Moreover, the algorithm is fast, so it is recommended to try it first. By default, the calculation of trajectories (association of points between frames) is activated; it can be deactivated to save a little computation time. A maximum search distance must then be specified for matching the points.

_images/deplacement01.png

Displacements obtained with Opyflow.
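
As an illustration of the underlying technique, here is a minimal sparse optical-flow sketch using OpenCV's pyramidal Lucas-Kanade implementation. This is not opyflow's actual code; the synthetic frames and parameter values are illustrative.

    import cv2
    import numpy as np

    # Two consecutive grayscale frames (synthetic stand-ins for real images).
    rng = np.random.default_rng(0)
    prev = (rng.random((240, 320)) * 255).astype(np.uint8)
    curr = np.roll(prev, shift=2, axis=1)  # fake a 2-pixel horizontal motion

    # Detect good features to track in the previous frame.
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01,
                                 minDistance=5)

    # Track them into the current frame with pyramidal Lucas-Kanade.
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                               winSize=(15, 15), maxLevel=2)

    good_old = p0[status.flatten() == 1].reshape(-1, 2)
    good_new = p1[status.flatten() == 1].reshape(-1, 2)
    displacements = good_new - good_old  # (u, v) per tracked point, in pixels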

Nearest

The Nearest algorithm is suited to videos containing particles of different shapes and intensities; it is therefore most relevant with the Threshold detection method. The principle is to match particles between two images based on a likelihood of similarity (distance, size, intensity).

The first parameter is the estimated maximum displacement, used to reduce the number of comparisons between one particle and all the others. This maximum displacement can be estimated visually from the detections of the previous step. The value is given in pixels.

The likelihood is the maximum variation in size and intensity for which a particle can still be considered the same as one in the previous image. This parameter is given as a percentage of variation. If the particles are small, it can exceed 100 % so as not to eliminate particles that would grow from 1 to 2 pixels.

The quantities used to compare two particles must then be specified. The distance method takes the closest particle among the most likely ones (i.e. those satisfying the likelihood criterion above). Size and intensity can also be taken into account. These values are weighted so as to be of the same order of magnitude: the square root of the area for the size, and the relative variation of intensity (between 0 and 1).
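
A minimal sketch of this matching logic, assuming each detected particle carries a position, an area and a mean intensity (the data structure, names and weighting are illustrative, not the tool's exact code):

    import numpy as np

    def match_nearest(prev_particles, curr_particles, max_disp=20.0,
                      max_likelihood=0.5, use_size=True, use_intensity=True):
        """Match each particle of the current image to the most similar one
        of the previous image. Particles are dicts with keys 'x', 'y',
        'area', 'intensity' (illustrative structure)."""
        matches = []
        for j, c in enumerate(curr_particles):
            best, best_cost = None, np.inf
            for i, p in enumerate(prev_particles):
                d = np.hypot(c['x'] - p['x'], c['y'] - p['y'])
                if d > max_disp:
                    continue  # outside the estimated maximum displacement
                # Relative variations in size and intensity (likelihood criterion).
                ds = abs(c['area'] - p['area']) / max(p['area'], 1e-9)
                di = abs(c['intensity'] - p['intensity']) / max(p['intensity'], 1e-9)
                if ds > max_likelihood or di > max_likelihood:
                    continue  # too dissimilar to be the same particle
                # Comparable magnitudes: distance, sqrt of area, relative intensity.
                cost = d
                if use_size:
                    cost += abs(np.sqrt(c['area']) - np.sqrt(p['area']))
                if use_intensity:
                    cost += di
                if cost < best_cost:
                    best, best_cost = i, cost
            if best is not None:
                matches.append((best, j))
        return matches

    prev = [{'x': 10.0, 'y': 10.0, 'area': 9.0, 'intensity': 200.0}]
    curr = [{'x': 12.0, 'y': 10.5, 'area': 10.0, 'intensity': 195.0}]
    print(match_nearest(prev, curr))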

Nearest prediction

This algorithm is similar to the previous one, except that the comparison is made between the detected particle and the predicted positions of the particles from the previous time steps. The prediction is made with a Kalman filter; detailed explanations are available at https://github.com/L42Project/. The interest of Kalman filtering here is that positions can still be predicted even when a particle is no longer detected for some time steps (shadow zone). It also has a smoothing effect on the trajectories.
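
A minimal constant-velocity Kalman filter sketch with OpenCV (state = position and velocity, measurement = position); this illustrates the prediction step described above, not the tool's exact implementation, and all numerical values are illustrative.

    import numpy as np
    import cv2

    # Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[10.0], [5.0], [2.0], [1.0]], dtype=np.float32)

    # One detection per time step; None simulates a missed detection (shadow zone).
    for detection in [(12.1, 5.9), (14.0, 7.1), None, (18.2, 9.0)]:
        predicted = kf.predict()  # predicted (x, y, vx, vy)
        if detection is not None:
            kf.correct(np.array(detection, dtype=np.float32).reshape(2, 1))
        # When detection is None, the prediction alone carries the trajectory,
        # which also yields a smoothing effect on the measured positions.
        print(predicted[:2].ravel())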

The parameters are identical to those of the Nearest method. An additional parameter, Max steps prevision, deletes the predictions, and thus the trajectories, associated with a particle that has not been detected for this number of time steps.

Using and updating the Kalman filter at each step makes this algorithm slower than the others. It should therefore be reserved for specific cases (detection losses, unidirectional movement).

Tractrac

The tractrac algorithm comes directly from the scripts available at https://perso.univ-rennes1.fr/joris.heyman/tractrac.html. It is quite fast for a large number of particles. It extracts trajectories and filters the detections according to the probable displacement over several time steps (filters on position, velocity and acceleration). This selection of probable trajectories must be activated with the motion option. A number of iterations for verifying the matches is then defined, as well as a maximum search distance. If the option is not checked, a single iteration is performed and the calculation is then close to a minimum-distance matching (as in the Nearest method) but over 2 consecutive time steps. The temporal filtering of the initial version is also available; however, in our applications, with few particles and ensemble movements, this filtering tends to remove many probable trajectories. Time Filter is therefore not recommended for surface velocity measurements.
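
A minimal sketch of the idea behind this iterative matching (predict positions with a motion model, match to the nearest detection within the search distance, update the model, repeat); this illustrates the principle only and is not tractrac's actual code. The global median-displacement model used here is a crude stand-in for tractrac's motion model.

    import numpy as np
    from scipy.spatial import cKDTree

    def iterative_match(prev_pts, curr_pts, n_iter=3, max_dist=20.0):
        """prev_pts, curr_pts: (N, 2) and (M, 2) arrays of positions.
        Returns, for each previous point, the index of the matched current
        point (-1 when no match lies within max_dist)."""
        velocity = np.zeros_like(prev_pts)    # initial motion model: no motion
        tree = cKDTree(curr_pts)
        match = np.full(len(prev_pts), -1, dtype=int)
        for _ in range(n_iter):
            predicted = prev_pts + velocity   # motion-model prediction
            dist, idx = tree.query(predicted, distance_upper_bound=max_dist)
            ok = np.isfinite(dist)
            match = np.where(ok, idx, -1)
            if not ok.any():
                break
            # Update the motion model with the median displacement of the matches.
            med = np.median(curr_pts[idx[ok]] - prev_pts[ok], axis=0)
            velocity = np.broadcast_to(med, prev_pts.shape).copy()
        return match

    prev_pts = np.array([[10.0, 10.0], [50.0, 40.0]])
    curr_pts = np.array([[12.0, 10.5], [52.1, 40.4]])
    print(iterative_match(prev_pts, curr_pts))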

DenseOpticalFlow

The DenseOpticalFlow algorithm is directly based on the calcOpticalFlowFarneback function of OpenCV. The optical flow is calculated for every pixel of the image; to limit storage space, the result is then averaged over blocks whose size (in pixels) is set by the user, 20 by default. The calculation time is significantly higher. Moreover, this method measures displacements of light intensity that are not necessarily correlated with the movement of the water. It is therefore reserved for specific conditions such as the absence of distinct tracers, reflections, or surface waves. For the measurement of wave displacements, however, this method can be particularly suitable. For the time being, a detection must be performed to activate the method, even though the detected positions are not involved in the calculation.

_images/deplacement02.png

DenseOpticalFlow displacements
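
A minimal sketch of Farneback dense optical flow followed by block averaging, as described above; the synthetic frames and the Farneback parameter values are illustrative, not the tool's exact settings.

    import cv2
    import numpy as np

    # Synthetic pair of grayscale frames; the second is shifted by 2 pixels.
    rng = np.random.default_rng(0)
    prev = (rng.random((240, 320)) * 255).astype(np.uint8)
    curr = np.roll(prev, shift=2, axis=1)

    # Dense optical flow computed for every pixel (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

    # Average over n x n pixel blocks to limit storage (n = 20 by default).
    n = 20
    h, w = flow.shape[:2]
    hb, wb = h // n, w // n
    block_flow = flow[:hb * n, :wb * n].reshape(hb, n, wb, n, 2).mean(axis=(1, 3))
    # block_flow[i, j] is the mean (u, v) displacement of block (i, j) in pixels.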

PIV

The PIV (Particle Image Velocimetry) calculation is based on the pyprocess module available at https://github.com/OpenPIV/openpiv-python/blob/master/openpiv/. The consecutive images are passed in grayscale and the method has not been modified. The parameters are the size of the correlation window, the number of overlapping pixels, and the size of the search window for the displacements. Strictly speaking, the method should be applied to transformed images, but computing the cross-correlation on untransformed images also gives consistent results.
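
A minimal usage sketch with openpiv-python's pyprocess module; the function names and arguments below are as found in recent versions of the library (check against the installed version), and the synthetic frames and parameter values are illustrative.

    import numpy as np
    from openpiv import pyprocess

    # Synthetic pair of grayscale frames: the second is shifted by 3 pixels.
    rng = np.random.default_rng(0)
    frame_a = rng.random((256, 256)).astype(np.float32)
    frame_b = np.roll(frame_a, shift=3, axis=1)

    # window_size: correlation window; overlap: overlapping pixels;
    # search_area_size: search window for the displacements.
    u, v, sig2noise = pyprocess.extended_search_area_piv(
        frame_a, frame_b,
        window_size=32,
        overlap=16,
        search_area_size=64,
        dt=1.0,
        sig2noise_method="peak2peak",
    )

    # Grid coordinates of the resulting velocity vectors.
    x, y = pyprocess.get_coordinates(frame_a.shape,
                                     search_area_size=64, overlap=16)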