Aug 8, 2024
We’ve integrated the latest version of Meta’s computer vision segmentation model into V7. The updated Segment Anything Model 2 (SAM 2) offers improved accuracy when segmenting complex objects with minimal input, and it is better suited to video annotation. Often, a single click is enough to track an object throughout an entire video sequence.
This implementation includes:
More accurate segmentation with easier adjustment of results
Consistent movement tracking in clips and longer videos
Improved model performance for domain-specific scenarios
The model maintains context for objects even when they temporarily leave the frame or are occluded, which is valuable in challenging real-world scenarios. This functionality also pairs naturally with V7's In and Out of View detection.
To get the most out of SAM 2, you can also use the timeline to select a range of frames and recalculate auto-tracking for that section of the clip. This extends SAM 2's capabilities beyond what the model offers out of the box.
Learn more: https://ai.meta.com/blog/segment-anything-2/
Andrea Azzini
Head of Product