

We will learn a fast and robust implementation of a digital video stabilization algorithm in this post. It is based on a two-dimensional motion model where we apply a Euclidean (a.k.a. Similarity) transformation incorporating translation, rotation, and scaling.

There are three main steps: 1) motion estimation, 2) motion smoothing, and 3) image composition. The transformation parameters between two consecutive frames are derived in the first stage. The second stage filters out unwanted motion, and in the last stage the stabilized video is reconstructed.
The first part of motion estimation (step 3.1) is finding good features to track in the previous frame. Smooth regions are bad for tracking, and textured regions with lots of corners are good. Fortunately, OpenCV has a fast feature detector that detects features that are ideal for tracking. It is called goodFeaturesToTrack (no kidding!).
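A minimal sketch of this step is shown below. The file name input.mp4 is a placeholder, and the detector parameters are illustrative starting points rather than tuned recommendations.

import cv2

cap = cv2.VideoCapture("input.mp4")
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Read the first frame; goodFeaturesToTrack expects a single-channel image
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Detect up to 200 strong corners in the previous frame
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=30,
                                   blockSize=3)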

In step 3.2, once we have found good features in the previous frame, we track them in the next frame using an algorithm called Lucas-Kanade Optical Flow, named after the inventors of the algorithm. It is implemented using the function calcOpticalFlowPyrLK in OpenCV. In the name calcOpticalFlowPyrLK, LK stands for Lucas-Kanade, and Pyr stands for pyramid. An image pyramid in computer vision is used to process an image at different scales (resolutions).

calcOpticalFlowPyrLK may not be able to calculate the motion of all the points for a variety of reasons. For example, a feature point in the current frame could get occluded by another object in the next frame. Fortunately, as you will see in the code below, the status flag returned by calcOpticalFlowPyrLK can be used to filter out such points.
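Here is a sketch of the tracking step, continuing from the snippet above; note how the status array is used to discard points that could not be tracked.

import numpy as np

# Read the next frame and convert it the same way as prev_gray
ok, curr_frame = cap.read()
curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

# Track prev_pts into the current frame; status[i] is 1 where point i was found
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                 prev_pts, None)

# Keep only the points that were tracked successfully in both frames
idx = np.where(status == 1)[0]
prev_pts = prev_pts[idx]
curr_pts = curr_pts[idx]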

To recap: in step 3.1, we found good features to track in the previous frame, and in step 3.2, we used optical flow to track those features. In other words, we found the location of the features in the current frame, and we already knew their location in the previous frame. So we can use these two sets of points to find the rigid (Euclidean) transformation that maps the previous frame to the current frame. This is done using the function estimateRigidTransform. Once we have estimated the motion, we can decompose it into x and y translation and rotation (angle). We store these values in an array so we can change them smoothly.
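A sketch of this step follows, using the filtered point sets from above. Note that estimateRigidTransform is deprecated and no longer available in recent OpenCV releases; estimateAffinePartial2D is the usual replacement, and it is what this sketch uses.

# Estimate the Euclidean-like transform mapping prev_pts to curr_pts
# (stands in for the deprecated estimateRigidTransform(..., fullAffine=False))
m, inliers = cv2.estimateAffinePartial2D(prev_pts, curr_pts)

# The 2x3 matrix encodes rotation, translation, and scale; pull out the
# x/y translation and the rotation angle
dx = m[0, 2]
dy = m[1, 2]
da = np.arctan2(m[1, 0], m[0, 0])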

The code below goes over steps 3.1 to 3.3. Make sure to read the comments in the code to follow along.

# Pre-define the transformation-store array: one (dx, dy, da) row per frame pair
transforms = np.zeros((n_frames-1, 3), np.float32)
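Folding the sketches above into a single pass over the video, the motion-estimation stage might look like the following. This is a sketch under the same assumptions as before, not the exact listing from the post.

for i in range(n_frames - 1):
    # 3.1: detect trackable corners in the previous (grayscale) frame
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30,
                                       blockSize=3)

    ok, curr = cap.read()
    if not ok:
        break
    curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

    # 3.2: track the corners into the current frame and keep the valid ones
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                     prev_pts, None)
    idx = np.where(status == 1)[0]
    prev_pts, curr_pts = prev_pts[idx], curr_pts[idx]

    # 3.3: estimate the frame-to-frame transform and store dx, dy, da
    m, _ = cv2.estimateAffinePartial2D(prev_pts, curr_pts)
    if m is None:  # estimation can fail if too few points survive tracking
        continue
    transforms[i] = [m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])]

    # The current frame becomes the previous frame for the next iteration
    prev_gray = curr_gray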
