Geometric transformation
Principle
By definition, the measurement is assumed to be made on a plane representing the free surface. This plane may not be horizontal, which limits the approximation for fishways. The objective of the geometric transformation step is to turn the pixel image into an image where each pixel corresponds to an identical size in the real world. In other words, the image is distorted as if it had been taken vertically (more precisely, along the normal to the free surface). This geometric transformation is the inverse of the projection of the real 3D world onto the plane of the camera sensor. That projection can be described mathematically by a homography. The properties of these homographies that directly concern field measurement are given below.

It is important to keep in mind that this step is essential: even the No transformation mode is in fact a homography with simplifying assumptions. This transformation step is relatively fast thanks to the algorithms available in OpenCV. However, it is always faster to transform only a limited number of points rather than all the pixels of all the frames. This is why the detection can be done either on the transformed image (Before detection option) or on the raw image, with only the detected positions transformed afterwards (After detection option).

In ANDROMEDE, the transformations are processed using the photogrammetric functions developed in the FlowVeloTool project (https://github.com/AnetteEltner/FlowVeloTool). These scripts use specific OpenCV functions based on a pinhole camera model. The scripts related to the integration of drone data were developed for the ANDROMEDE project.
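For reference, the homography can be written as a single 3 × 3 matrix acting on homogeneous coordinates:

.. math::

   s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = H \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}

where :math:`(u, v)` are the pixel coordinates, :math:`(X, Y)` the metric coordinates on the free-surface plane and :math:`s` a scale factor; the geometric transformation described above applies :math:`H^{-1}`.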
No transformation
The No transformation option is suited to vertical images without optical distortion. This case corresponds to images taken by a drone, for which the distortions are assumed to have already been corrected.
The principle of this mode is to define a constant resolution (real distance corresponding to 1 pixel) over the whole image. The parameter of the transformation is the inverse of the resolution, namely the number of pixels per meter. To measure this parameter directly on the images, a specific ANDROMEDE tool can be used (figure :ref:`fig-transf01`).
To do this, the image must contain two reference points (at the level of the free surface) whose real distance from each other is known. This can be a civil engineering element or a ruler placed on the ground at the level of the free surface.
Double-click the first marker, then press CTRL and click the second marker. You can use the zoom feature for greater accuracy.
In the interface that opens, the distance in pixels appears. You must then enter the actual distance in meters between the two markers.
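The underlying computation is a simple ratio; here is a minimal sketch in Python (the marker coordinates and the 3 m distance are illustrative values, not taken from ANDROMEDE's code):

.. code-block:: python

    import math

    # Pixel coordinates of the two clicked markers (illustrative values)
    p1 = (412.0, 380.0)
    p2 = (1530.0, 402.0)

    # Known real distance between the markers, in meters (illustrative value)
    real_distance_m = 3.0

    # Euclidean distance between the markers, in pixels
    pixel_distance = math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    # Transformation parameter: pixels per meter (inverse of the resolution)
    pixels_per_meter = pixel_distance / real_distance_m
    print(f"{pixels_per_meter:.1f} pixels/m")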
In this mode, no uncertainties are associated with the transformation step. A homography-based analysis is required to quantify the uncertainties, which depend on the position of the drone and the camera used.
Homography
Projection file
The first step of the transformation is to provide the correspondence between the real position of objects (in meters) and their position on the image (in pixels). For this, the real positions of landmarks (noted GRP: ground reference point) must be known.
2D homography
In a 2D homography, the real points are assumed to lie on the plane onto which we want to project. They must therefore be as close as possible to the free surface. Likewise, to gain accuracy, they must be as far apart as possible, ideally at the four corners of the area of interest. In the chosen algorithm, exactly 4 GRPs are necessary and sufficient (figure :ref:`fig-transf02`). If the direction of the real axes is opposite to the direction of the pixels on the image (from left to right and from top to bottom), a mirror effect may appear on the transformed image. In this case, one or two axes must be inverted by multiplying the corresponding real positions by -1.
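As an illustration, the 2D case can be reproduced with OpenCV from the four correspondences; this is a minimal sketch with illustrative coordinates, standing in for the FlowVeloTool routines actually used by ANDROMEDE:

.. code-block:: python

    import numpy as np
    import cv2

    # Pixel positions of the 4 GRPs on the image (illustrative values)
    pts_pixel = np.float32([[102, 740], [1815, 690], [1650, 95], [240, 130]])

    # Real positions of the same GRPs on the free-surface plane, in meters
    pts_real = np.float32([[0.0, 0.0], [12.5, 0.0], [12.5, 6.0], [0.0, 6.0]])

    # Homography mapping pixel coordinates to metric coordinates
    H = cv2.getPerspectiveTransform(pts_pixel, pts_real)

    # Transform any pixel position into real coordinates
    pt = np.float32([[[960, 420]]])
    print(cv2.perspectiveTransform(pt, H))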
3D homography
The principle of 3D homography is identical to that of 2D homography, with data corresponding to the real vertical dimension of the GRPs added. In the default mode, the minimum number of GRPs is 4, but an intrinsic matrix of the camera must be supplied. This is a text file containing one line with the parameters (table :ref:`tab:camera`):
The file must be named “interiorGeometry.txt” and placed in the directory of the video, so that it is loaded automatically. If it does not exist, default values are taken (pixel size = 10 µm and focal length = half of the sensor size). The distortion coefficients can be obtained by an independent calibration. The camera position values are optional and can be used in the following transformation mode (Camera position). The calculation of the transformation matrix is performed by the scripts available at https://github.com/AnetteEltner/FlowVeloTool. During the image processing stage, the estimated position of the camera and the associated uncertainty appear in the console.
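To illustrate the idea of the 3D case, the sketch below recovers a camera pose from GRPs with known vertical dimension using OpenCV's ``cv2.solvePnP``. All values are synthetic, the routine stands in for the FlowVeloTool scripts, and six points are used rather than the four of the default mode so that the generic solver applies:

.. code-block:: python

    import numpy as np
    import cv2

    # Intrinsic matrix (focal length and principal point in pixels, illustrative)
    K = np.float64([[2800.0, 0, 960.0], [0, 2800.0, 540.0], [0, 0, 1]])
    dist = np.zeros(5)  # distortion coefficients, e.g. from an independent calibration

    # 3D GRP positions in meters; Z is the vertical dimension (illustrative values)
    pts_real = np.float64([[0, 0, 0.2], [12.5, 0, 0.1], [12.5, 6, 0.3],
                           [0, 6, 0.2], [6, 0, 0.15], [6, 6, 0.25]])

    # Synthetic "measured" pixel positions: the GRPs seen by a camera 30 m above
    rvec_true = np.float64([np.pi, 0, 0])        # looking straight down
    tvec_true = np.float64([[-6.0], [3.0], [30.0]])
    pts_pixel, _ = cv2.projectPoints(pts_real, rvec_true, tvec_true, K, dist)

    # Recover the camera pose from the GRP correspondences
    ok, rvec, tvec = cv2.solvePnP(pts_real, pts_pixel, K, dist)
    print(ok, tvec.ravel())  # should match tvec_true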
Before detection
All the images (of the area of interest) are transformed. The resolution of the transformed image is given by the pixels per meter parameter. To identify particles of a few centimeters, the order of magnitude of this parameter must be about one hundred (= 1/0.01 m). However, care must be taken not to produce too many pixels in the transformed image, for memory reasons. This can happen if the deformation is too large; in that case the After detection method is necessary. Moreover, the Before detection method is only justified when being very selective about the size and/or shape of the particles. However, if a PIV model is used, it may be logical to transform the images beforehand. This ensures consistency in the average velocity as a function of the spatial resolution of the analysis window.
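A minimal sketch of this mode's geometry, reusing the illustrative 4-GRP homography from the 2D example and a synthetic frame:

.. code-block:: python

    import numpy as np
    import cv2

    pixels_per_meter = 100.0  # one transformed pixel covers 1 cm

    # Pixel -> meter homography from 4 GRPs (illustrative coordinates)
    pts_pixel = np.float32([[102, 740], [1815, 690], [1650, 95], [240, 130]])
    pts_real = np.float32([[0, 0], [12.5, 0], [12.5, 6], [0, 6]])
    H_px2m = cv2.getPerspectiveTransform(pts_pixel, pts_real)

    # Scale meters back to pixels of the output image
    S = np.float64([[pixels_per_meter, 0, 0],
                    [0, pixels_per_meter, 0],
                    [0, 0, 1]])

    # Output size: a 12.5 m x 6 m area of interest at the chosen resolution
    out_size = (int(12.5 * pixels_per_meter), int(6.0 * pixels_per_meter))

    frame = np.zeros((1080, 1920, 3), np.uint8)  # stand-in for a video frame
    warped = cv2.warpPerspective(frame, S @ H_px2m, out_size)
    print(warped.shape)  # (600, 1250, 3)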
After detection
The After detection method has the advantage of reducing the computation time and the memory space required. Detections are done on the original image, so identical particles appear with different sizes depending on their position. However, some detection algorithms allow the use of a range of variation rather than a fixed value. The transformation of the particle positions is done in the analysis step.
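A minimal sketch of the position-only transformation, again with the illustrative 4-GRP homography (``cv2.perspectiveTransform`` applies the homography to points rather than to whole images):

.. code-block:: python

    import numpy as np
    import cv2

    # Pixel -> meter homography from 4 GRPs (same illustrative values as above)
    pts_pixel = np.float32([[102, 740], [1815, 690], [1650, 95], [240, 130]])
    pts_real = np.float32([[0, 0], [12.5, 0], [12.5, 6], [0, 6]])
    H_px2m = cv2.getPerspectiveTransform(pts_pixel, pts_real)

    # Particle positions detected on the raw image, shape (N, 1, 2)
    detections_px = np.float32([[[530, 410]], [[544, 402]], [[1210, 515]]])

    # Only the detected positions are transformed, not whole frames
    detections_m = cv2.perspectiveTransform(detections_px, H_px2m)
    print(detections_m.reshape(-1, 2))  # positions in meters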
Camera position
The Camera position mode determines the transformation from the characteristics of the camera, in particular its position. The main advantage is being free of ground reference points (GRPs). The camera position is related to the use of a drone, i.e. the angles are defined by the orientation of the camera sensor relative to the horizontal. These angles are, in order: roll, pitch, yaw (figure :ref:`fig-transfaxis`). For example, for a vertical view seen from above, the angles are respectively 0, -90 and 0 degrees. The position of the camera is given in meters. The resolution of the image depends on the focal length of the lens and the chosen position. To change the position of the camera, use the parameter file as described in the general description. It is also necessary to define the extent of the free surface, which must be given in the same reference frame as the position of the camera. Currently, with this mode the detections are only done on the raw images.
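The orientation part can be illustrated by composing elementary rotations from the three angles; the sketch below assumes the roll-pitch-yaw order given above, while the exact axis conventions used internally may differ:

.. code-block:: python

    import numpy as np

    def rotation_from_angles(roll, pitch, yaw):
        """Compose a 3x3 rotation matrix from roll, pitch, yaw in degrees."""
        r, p, y = np.radians([roll, pitch, yaw])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(r), -np.sin(r)],
                       [0, np.sin(r), np.cos(r)]])
        Ry = np.array([[np.cos(p), 0, np.sin(p)],
                       [0, 1, 0],
                       [-np.sin(p), 0, np.cos(p)]])
        Rz = np.array([[np.cos(y), -np.sin(y), 0],
                       [np.sin(y), np.cos(y), 0],
                       [0, 0, 1]])
        return Rz @ Ry @ Rx

    # Vertical view from above: roll = 0, pitch = -90, yaw = 0 degrees
    R = rotation_from_angles(0.0, -90.0, 0.0)
    print(np.round(R, 3))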
UAV
This mode allows the transformation of pixel positions to real-world positions without ground references, even when the camera is in motion. It is thus possible to measure larger areas while following the same particles. Detection is done on the raw images, as in the After detection option of the previous modes. The transformation into real coordinates is done during the analysis phase.
After loading the video and choosing the relevant detection and movement modes, select the UAV mode in the drop-down menu of the PreProcessing tab.
Then launch a complete calculation (Full Processing). The software asks for a drone flight file containing the camera positions as a function of time. The positions are georeferenced in the UTM system for X and Y, and the vertical position is given relative to the takeoff point. The software automatically reads flight plans in DJI format. It is also necessary to give the relative position of the free surface with respect to the takeoff point; this value is 0 by default and can be changed in the parameter file.
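A minimal sketch of what such a flight file provides, with synthetic values and a hypothetical ``camera_position`` helper (the actual DJI parsing is built into the software):

.. code-block:: python

    import numpy as np

    # Flight data already parsed from the drone log (illustrative values):
    # timestamps in seconds, UTM easting/northing in meters, height above takeoff
    t = np.array([0.0, 1.0, 2.0, 3.0])
    x_utm = np.array([448210.0, 448211.2, 448212.5, 448213.9])
    y_utm = np.array([5411873.0, 5411873.4, 5411873.9, 5411874.5])
    z_rel = np.array([30.0, 30.1, 30.1, 30.2])

    # Relative elevation of the free surface with respect to the takeoff point
    free_surface_z = -1.5  # parameter-file value, 0 by default

    def camera_position(frame_time):
        """Linearly interpolate the camera position at a frame timestamp."""
        return (np.interp(frame_time, t, x_utm),
                np.interp(frame_time, t, y_utm),
                np.interp(frame_time, t, z_rel) - free_surface_z)

    print(camera_position(1.6))  # camera height above the free surface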
As the transformation is done directly in real coordinates, no visualization of the area in the form of images is available for the moment. Superimposing the velocity fields on an orthophoto requires complementary software (BlueKenue, QGIS, Google Earth).