What we will cover
Learn how to use Alegion Video Annotation to the fullest with features that adapt to your workflow as video length increases.
Video Transcript
This tutorial covers capabilities of Alegion Video Annotation designed for long videos with sparse annotation. But we’ll also cover techniques for short videos with dense annotation as we look at how the overall design adapts to different scenarios.
Videos with dense annotation and sparse annotation present different design challenges, but we think Alegion Video Annotation covers both, and everything in between. Let’s get into the details. Understanding the design of the navigation tools can greatly enhance your productivity.
For videos with dense annotation, limiting what is in view and what can be edited is key. Hiding and locking entities is the best way to focus on specific aspects of a densely annotated scene.
However, as videos get longer, annotations tend to be more sparsely distributed and navigation techniques become more important. Alegion Video Annotation has different tools that come into play as video length increases.
To understand the navigation and annotation techniques, it’s best to start at the playhead in the timeline view and work outward.
The playhead is always your current location. It represents the location, expressed as either a frame or a time code, where an annotation will be recorded.
Moving the playhead, either by scrubbing, using the play controls, or a shortcut key, is the most common and obvious way to navigate through a video.
However, at a local level, it's not the most effective way to set or review a classification. When finding the right frame for a classification, it's disruptive to move the playhead to look forward or backward in the video: doing so changes your frame of reference and requires mental recall to navigate back to the original frame.
This is why we support hover scrubbing. When this option is enabled, you can quickly cover a small number of frames without losing the playhead location as your frame of reference. I can go to a keyframe location and quickly step forward and backward to verify that my current frame is the best frame for a classification.
I can hover to preview and click to navigate through the video. However, the steps I can take are limited by the zoom factor of the timeline, which tops out at 500 frames. When you are working with dense annotation, this is perfect.
However, in a longer video with sparse classifications the approach becomes less efficient. Let’s take this one and a half hour video of a mining site with almost 162,000 frames. A video of this type may have large gaps in activity or specific events of interest within the longer video.
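The numbers above imply a frame rate of about 30 fps (162,000 frames over 90 minutes). As a rough sketch of that frame/time arithmetic (the 30 fps rate is inferred from the figures quoted here, not stated anywhere else):

```python
# Frame/time arithmetic for long videos. Assumption: the frame rate is
# inferred from the numbers above (~162,000 frames in a 90-minute video,
# i.e. about 30 fps); it is not taken from any product specification.
duration_s = 90 * 60             # 1.5 hours in seconds
total_frames = 162_000           # approximate frame count quoted above
fps = total_frames / duration_s  # = 30.0 frames per second

def frame_at(hours: int, minutes: int, seconds: int = 0,
             rate: float = fps) -> int:
    """Convert a timecode into an approximate frame index."""
    return round((hours * 3600 + minutes * 60 + seconds) * rate)

print(fps)              # 30.0
print(frame_at(1, 17))  # 138600 -- roughly where the bucket first appears
```

This is why scrubbing alone doesn't scale: a 500-frame timeline window covers only about 17 seconds of a video this long.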
So, let’s move up a bit to thumbnail views. The timeline scroll bar encompasses the entire length of the video. If I mouse over the scroll bar, I can maintain my current location and look at areas of interest far beyond the scope of the frames in the timeline. This is effective for longer running videos for use cases like medical procedures, security cameras, or driver attentiveness, to name a few.
Now let’s go beyond searching for areas of interest and look at how we can quickly navigate through annotations within a long-running video. Perhaps you are reviewing work previously annotated by a human annotator, or imported model predictions.
Here the keyframe navigation tools come into play. From any point in the video, I can navigate through position and visibility keyframes for any entity using the keyframe buttons and Details panel.
Using keyframe navigation, I can jump from one keyframe to the next regardless of its location in the video. Once in the right local position, hover scrubbing allows me to quickly assess the quality of a temporal classification without losing my location.
In this example, the scenes of interest are dispersed throughout this 90-minute video and across multiple entities. Using the keyframe navigation buttons or the more granular Details panel, I can quickly jump to the scenes of interest for each entity. In the case of the second loading bucket, I’ll jump directly to where it first comes into frame, approximately one hour and seventeen minutes into the video, and then skip to each event of interest. This saves an immense amount of time.
I hope I made a convincing case that Alegion Video Annotation scales from short videos with dense annotation to ones that are hours long, and everything in between. Keeping these navigation and annotation techniques in mind will improve your speed and quality across the board.