Module MotionModel

This module manages motion models. It allows choosing the method used to determine the most probable displacement for each particle detected by the ObjectDetection module. A horizontal and vertical velocity is associated with each particle. For the Nearest and Tractrac methods a trajectory ID is also provided. The dense optical flow method does not use the detections, even though they could be used to initialize the computation.
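A rough usage sketch of the module follows; the parameter keys "motion_model" and "distance_max", and the exact call order, are assumptions for illustration, not documented names:

    # Hypothetical usage sketch; the parameter keys and values are assumptions.
    import cv2
    import MotionModel

    video = cv2.VideoCapture("particles.avi")      # the video to analyse
    params = {"motion_model": "nearest", "distance_max": 20.0}
    results = []                                   # detections from ObjectDetection

    id_method = MotionModel.select_method(params)  # map method choice to an int id
    moves = MotionModel.process_motion_model_all(video, results, params, id_method)
    # moves: [ID trajectory, X, Y, Vx, Vy] per particle and time step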

MotionModel.PIV(curr_gray, prev_gray, params)[source]

Wraps the pyprocess method of OpenPIV.

Parameters:

  • curr_gray – current frame in grayscale

  • prev_gray – previous frame in grayscale

  • resolutionDenseOF – resolution of the result grid (its spacing can be coarser than one pixel)

Returns:

list of movements detected for the current time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
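A hedged sketch of the OpenPIV call this wrapper presumably builds on; the window size, overlap, and grid helper are illustrative choices, not values taken from this module:

    # Minimal sketch of the underlying OpenPIV call; window sizes are illustrative.
    import numpy as np
    from openpiv import pyprocess

    def piv_sketch(curr_gray, prev_gray, window_size=32, overlap=16):
        u, v, sig2noise = pyprocess.extended_search_area_piv(
            prev_gray.astype(np.int32), curr_gray.astype(np.int32),
            window_size=window_size, overlap=overlap,
            dt=1.0, search_area_size=window_size)
        # grid node coordinates matching the (u, v) fields
        x, y = pyprocess.get_coordinates(prev_gray.shape, window_size, overlap)
        return x, y, u, v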

MotionModel.distance_objs(current_obj, prev_objs, params, size_and_intensity)[source]

Compute the distance between a selected point (current frame) and all possible previous points (previous frame). This method is used by the nearest method (simple + Kalman filter). The likelihood parameter avoids computing non-realistic displacements. The distance depends on the chosen type (Euclidean; Euclidean + square root of area; Euclidean + square root of area + normalised intensity).

Parameters:

  • current_obj – x and y position of the input point

  • prev_objs – x and y positions of all points detected in the previous frame

Returns:

list of movements detected at each time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
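A minimal sketch of the plain Euclidean variant with a likelihood cutoff; distance_objs_sketch and the distance_max argument are hypothetical names used only for illustration:

    # Sketch of the plain Euclidean distance variant with a cutoff that
    # skips non-realistic displacements (distance_max is an assumed name).
    import numpy as np

    def distance_objs_sketch(current_obj, prev_objs, distance_max):
        prev_objs = np.asarray(prev_objs, dtype=float)   # shape (N, 2): x, y
        d = np.hypot(prev_objs[:, 0] - current_obj[0],
                     prev_objs[:, 1] - current_obj[1])
        d[d > distance_max] = np.inf                     # exclude unlikely matches
        return d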

MotionModel.farneback_andromede(curr_gray, prev_gray, params)[source]

Wraps the Farneback method of OpenCV.

Parameters:
  • curr_gray – current frame in grayscale

  • prev_gray – previous frame in grayscale

  • curr_X – x and y positions detected on the current frame

  • prev_X – x and y positions detected on the previous frame

  • resolutionDenseOF – resolution of the result grid (its spacing can be coarser than one pixel)

  • pyr_scale – image scale (<1) used to build the pyramid for each image; pyr_scale=0.5 means a classical pyramid, where each next layer is half the size of the previous one.

  • levels – number of pyramid layers, including the initial image; levels=1 means that no extra layers are created and only the original images are used.

  • winsize – averaging window size; larger values increase the algorithm's robustness to image noise and give more chance of detecting fast motion, but yield a more blurred motion field.

  • iterations – number of iterations the algorithm does at each pyramid level.

  • poly_n – size of the pixel neighborhood used to find the polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically poly_n = 5 or 7.

  • poly_sigma – standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5, you can set poly_sigma=1.1, for poly_n=7, a good value would be poly_sigma=1.5.

Returns:

list of movements detected for the current time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
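A hedged sketch of the wrapped OpenCV call, sampled on a coarse grid; the 16 px grid step and the parameter values are illustrative defaults, not this module's settings:

    # Sketch of the underlying OpenCV Farneback call, sampled on a coarse grid.
    import numpy as np
    import cv2

    def farneback_sketch(curr_gray, prev_gray, step=16):
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, curr_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
        ys, xs = np.mgrid[0:curr_gray.shape[0]:step, 0:curr_gray.shape[1]:step]
        vx = flow[ys, xs, 0]
        vy = flow[ys, xs, 1]
        # one [x, y, vx, vy] row per grid node (no trajectory ID for dense flow)
        return np.column_stack([xs.ravel(), ys.ravel(), vx.ravel(), vy.ravel()])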

MotionModel.initTractrac(current_objs, prev_objs, id_method)[source]

Initialize variables for the tractrac method.

The variables contain the positions of the detected particles for the first two frames.

Parameters:
  • current_objs – x and y positions detected on the current frame

  • prev_objs – x and y positions detected on the previous frame

  • id_method – type of the chosen method

Returns:

list of variables for the movement of the particles in the first two frames.

Return type:

list
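A loose sketch under the assumption that the initial state simply stores the detections of the first two frames; the real variable likely holds more fields (connectivity, errors, etc.):

    # Hedged sketch: the real tractrac state likely holds more fields; here
    # it only stores the detections of the first two frames as float arrays.
    import numpy as np

    def init_tractrac_sketch(current_objs, prev_objs):
        return [np.asarray(prev_objs, dtype=float),
                np.asarray(current_objs, dtype=float)]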

MotionModel.nearest(current_objs, prev_objs, id_image, params)[source]

Find the displacement by computing the minimum distance between the previous and current frames. The distance can take particle size and intensity into account.

Parameters:
  • current_objs – x and y positions detected on the current frame

  • prev_objs – x and y positions detected on the previous frame

  • id_image – frame identifier

  • distance_max – maximal possible distance for computation

Returns:

list of movements detected at each time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
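A minimal sketch of nearest-neighbour matching with a distance cutoff; using scipy's cKDTree is an implementation choice for the illustration, not necessarily what the module does:

    # Sketch of a nearest-neighbour match with a distance cutoff.
    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_sketch(current_objs, prev_objs, distance_max):
        prev_objs = np.asarray(prev_objs, dtype=float)
        tree = cKDTree(prev_objs)
        moves = []
        # enumeration stands in for persistent trajectory IDs
        for tid, (x, y) in enumerate(np.asarray(current_objs, dtype=float)):
            d, j = tree.query([x, y], distance_upper_bound=distance_max)
            if np.isfinite(d):                     # a previous point within reach
                vx, vy = x - prev_objs[j, 0], y - prev_objs[j, 1]
                moves.append([tid, x, y, vx, vy])  # [ID, X, Y, Vx, Vy]
        return moves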

MotionModel.nearest_prediction(U_KF, current_objs, params)[source]

Find the displacement by computing the minimum distance between the current frame and the prediction from the Kalman filter (KF). The distance can take particle size and intensity into account. The Kalman filter predicts the position of the particles and keeps predicting if a particle has disappeared from the current frame. The filter is updated at each time step.

Parameters:
  • U_KF – Kalman filter variables (trajectory)

  • current_objs – x and y positions detected on the current frame

  • step_max_prevision – maximal number of time steps for prediction without a new detection

  • distance_max – maximal possible distance for computation

Returns:

list of movements detected for the current time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
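A sketch of a constant-velocity Kalman filter for a single trajectory, assuming cv2.KalmanFilter as the backend; the noise covariances and the measurement values are illustrative:

    # Sketch of a constant-velocity Kalman filter for one trajectory
    # (cv2.KalmanFilter is one possible backend; matrices are illustrative).
    import numpy as np
    import cv2

    kf = cv2.KalmanFilter(4, 2)             # state [x, y, vx, vy], measure [x, y]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

    predicted = kf.predict()                # predicted position used for matching
    # if a detection was matched within distance_max, feed it back:
    kf.correct(np.array([[12.0], [34.0]], np.float32))
    # if the particle disappeared, keep predicting for up to
    # step_max_prevision steps before dropping the trajectory.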

MotionModel.opyflow_andromede(curr_gray, prev_gray, curr_X, prev_X, params)[source]

Find the displacement using the method in OpyFlow. It is based on the Lucas-Kanade algorithm.

Parameters:
  • curr_gray – current frame in grayscale

  • prev_gray – previous frame in grayscale

  • curr_X – x and y positions detected on the current frame

  • prev_X – x and y positions detected on the previous frame

  • distance_max_opy – maximal possible distance for computation

  • winSize – size of the search window at each pyramid level.

  • maxLevel – 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed to the input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.

  • criteria – termination criteria of the iterative search algorithm (stop after the maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).

Returns:

list of movements detected for the current time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
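A hedged sketch of the pyramidal Lucas-Kanade call that this method builds on; the window size and termination criteria are illustrative defaults:

    # Sketch of the pyramidal Lucas-Kanade call with a distance cutoff.
    import numpy as np
    import cv2

    def opyflow_sketch(curr_gray, prev_gray, prev_X, distance_max_opy=20.0):
        pts = np.asarray(prev_X, dtype=np.float32).reshape(-1, 1, 2)
        nxt, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, pts, None,
            winSize=(15, 15), maxLevel=2,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
        v = (nxt - pts).reshape(-1, 2)
        # keep tracked points whose displacement is realistic
        ok = (status.ravel() == 1) & (np.hypot(v[:, 0], v[:, 1]) <= distance_max_opy)
        return np.column_stack([pts.reshape(-1, 2)[ok], v[ok]])  # [X, Y, Vx, Vy]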

MotionModel.process_motion_model_all(video, results, params, id_method)[source]

Process the selected motion model.

Parameters:
  • video – the video being read

  • params – parameter dictionary

  • id_method – motion model method

Returns:

list of movements detected at each time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
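A hedged sketch of the per-frame dispatch this function presumably performs; the integer method ids and the assumption that video and results can be iterated frame by frame are illustrative, not documented behaviour:

    # Hedged dispatch sketch; method ids and iteration scheme are assumptions.
    def process_motion_model_all_sketch(video, results, params, id_method):
        movements = []
        prev_gray, prev_objs = None, None
        for id_image, (curr_gray, curr_objs) in enumerate(zip(video, results)):
            if prev_gray is not None:
                if id_method == 0:                   # nearest
                    movements.append(MotionModel.nearest(
                        curr_objs, prev_objs, id_image, params))
                elif id_method == 4:                 # dense optical flow
                    movements.append(MotionModel.farneback_andromede(
                        curr_gray, prev_gray, params))
            prev_gray, prev_objs = curr_gray, curr_objs
        return movements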

MotionModel.select_method(params)[source]

Function for motion model selection. The available methods are: nearest, nearest + Kalman filter, Tractrac, OpyFlow, and dense optical flow (Farneback).

Parameters:

params – parameter dictionary

Return type:

int
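A minimal sketch of a name-to-id mapping; the key "motion_model" and the id values are assumptions consistent with the dispatch sketch above:

    # Sketch of a name-to-id mapping; keys and id values are assumptions.
    METHOD_IDS = {"nearest": 0, "nearest_kalman": 1, "tractrac": 2,
                  "opyflow": 3, "dense_optical_flow": 4}

    def select_method_sketch(params):
        return METHOD_IDS[params["motion_model"]]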

MotionModel.tractrac(U_tractrac, params)[source]

Main loop of the tractrac method.

Parameters:
  • U_tractrac – variables for the 2 previous frames (connectivity, displacement, error, etc.)

  • motion – (True/False) enables the motion method (iterations to determine the most probable displacement)

  • motion_it – number of iterations for the motion method

  • filter – (True/False) filter outliers if necessary

  • filter_time – number of time steps for the temporal filter

Returns:

list of movements detected for the current time step [ID trajectory, X position, Y position, Velocity along x-axis, Velocity along y-axis]

Return type:

list
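The actual tractrac iteration is more involved; purely as a loose illustration of the temporal filter mentioned above, velocities deviating too far from a running median over filter_time steps could be flagged like this:

    # Loose illustration only: flag the latest velocity as an outlier when it
    # deviates from the median of the last filter_time steps (MAD-based test).
    import numpy as np

    def filter_outliers_sketch(velocities, filter_time=5, n_sigma=3.0):
        v = np.asarray(velocities, dtype=float)      # shape (T, 2): vx, vy per step
        recent = v[-filter_time:]
        med = np.median(recent, axis=0)
        mad = np.median(np.abs(recent - med), axis=0) + 1e-9
        keep = np.all(np.abs(v[-1] - med) <= n_sigma * 1.4826 * mad)
        return keep                                  # False -> treat last step as outlier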
