Playbooks

How to Use Segment Anything Model (SAM) in V7

6 min read

May 11, 2023

Discover the updated V7 Auto-Annotate tool powered by Segment Anything (SAM). Learn how the new SAM engine can enhance your ML training data and solve segmentation tasks.

Casimir Rajnerowicz

Product Content Writer


We are excited to introduce the updated V7 Auto-Annotate tool, now powered by the Segment Anything Model (SAM) developed by Meta. It is a great all-purpose tool for automatic zero-shot segmentation tasks.

New SAM-based Auto-Annotate

Pick, combine, and merge pre-segmented sections of images. You can pre-process the whole image and create semantic masks with the SAM engine, which gives you finer control over your annotations and more accurate results.

Traditional Auto-Annotate

The previous version of the Auto-Annotate tool detects and segments objects within a specified bounding box area.

SAM is now the default option in V7, but you can switch between it and the traditional Auto-Annotate tool. Both produce polygon annotations as their final output, so you can use them interchangeably or combine them across different scenarios.
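If you want to reproduce the traditional box-driven behavior outside of the V7 UI, SAM itself accepts box prompts. Below is a minimal sketch using Meta's open-source segment-anything package (not V7's hosted integration); the checkpoint is Meta's publicly released ViT-H weights, and the image path and box coordinates are placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (downloadable from Meta's segment-anything repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 array; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("factory_line.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a bounding box (x1, y1, x2, y2), mimicking the
# traditional box-based Auto-Annotate workflow.
box = np.array([120, 80, 480, 360])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # (1, H, W) boolean mask and its confidence
```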

Benefits of using the SAM zero-shot model for automatic labeling

The updated Auto-Annotate tool, powered by SAM, provides numerous benefits for your training data workflows and AI product development:

  • Improved accuracy. The tool delivers better segmentation masks, ensuring high-quality annotations for your projects.

  • Faster annotation. With pre-segmented and pre-processed images, the annotation process becomes faster and more efficient.

  • Scalability. The advanced capabilities of the new labeling engine allow the tool to handle large volumes of data, making it suitable for projects of any size.

  • Enhanced collaboration. The pre-segmented images and semantic masks make it easier for teams to collaborate on projects, ensuring consistency and accuracy across all annotations.

  • Easier segmentation of complex shapes. The tool makes it simpler to segment unusual shapes and create polygon masks with "holes" or multiple "parts" forming a larger entity.

Notice how the areas between the pink wolf's front and hind legs are automatically excluded from the annotation

How does SAM work, and how do you use it in V7?

SAM is now the primary engine for our updated Auto-Annotate tool. Our goal was to enhance your experience with dataset labeling. SAM offers advanced segmentation capabilities and intuitive mechanics for annotation tasks.

The Segment Anything Model (SAM) is an AI-driven image segmentation tool developed by Meta. With just a single click, it can isolate any object within an image and generalize to unfamiliar objects without additional training.
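To make the single-click mechanic concrete, here is a hedged sketch of what a click translates to in Meta's open-source segment-anything package: a single foreground point prompt. The image path and click coordinates are illustrative placeholders.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("fruit_stand.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One "click" = one foreground point (label 1) at pixel (x, y).
point = np.array([[350, 210]])
label = np.array([1])
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,  # returns 3 candidate masks at different scales
)
best = masks[np.argmax(scores)]  # keep the highest-scoring candidate
```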

To leverage all of these powerful features in V7, there are a few simple steps to follow.

Step 1: Pre-process the image with the Auto-Annotate tool

Pick a file in your dataset and go to the annotation panel. Click the Auto-Annotate button (or press N on your keyboard) to pre-process your image with SAM. A short animation with dots should appear; once it finishes, you can pick any of the highlighted objects in the image.
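Conceptually, this pre-processing step resembles running SAM's automatic mask generator over the entire image. The sketch below uses the open-source package to illustrate the idea; it is not V7's actual backend, and the image path is a placeholder.

```python
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("fruit_stand.jpg"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)  # one dict per proposed region

# Each entry carries the binary mask plus metadata you can filter on.
for m in sorted(masks, key=lambda m: m["area"], reverse=True)[:5]:
    print(m["area"], m["bbox"], m["predicted_iou"])
```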

Step 2: Choose the objects to annotate and save annotations

Once an area of the image has been highlighted with SAM, save the annotation by clicking the Save button or by pressing Enter. Be sure to select the class you want to map the SAM annotations onto.

For instance, to select apples, simply pick an apple, create the "Apple" class, press Enter, choose another apple, press Enter, and so on. Notice that you can work with only one class at a time. When selecting a banana, remember to change the annotation class to "Banana."

Step 3: Fine-tune annotations as needed

For complex objects, adjust the shape of your annotations by adding positive and negative points. The model will attempt to predict the correct area of the annotation based on the points you include or exclude.

Positive points are marked in blue, while negative points are red. Adding more points makes it easier for the model to predict the correct shape of the annotation, even when it consists of multiple separate segmentation masks.
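In prompt terms, positive and negative points correspond to point labels 1 and 0. The sketch below, again using the open-source package with placeholder paths and coordinates, also shows how the low-resolution logits from one prediction can be fed back to refine the next.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB))

# Positive points (label 1) mark the object; negative points (label 0)
# mark regions the mask should exclude.
points = np.array([[300, 200], [340, 260], [280, 380]])
labels = np.array([1, 1, 0])  # two positive clicks, one negative

masks, scores, logits = predictor.predict(
    point_coords=points, point_labels=labels, multimask_output=False
)

# Feed the low-res logits back in to refine as more points are added.
masks, scores, _ = predictor.predict(
    point_coords=points,
    point_labels=labels,
    mask_input=logits,  # (1, 256, 256) low-res mask from the previous call
    multimask_output=False,
)
```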

Once the polygon mask is saved, you can modify it with other tools from the panel. For example, you can add or erase parts of your new annotation with the Brush tool.

Auto-Annotate + SAM use cases and examples

SAM is a versatile model for all sorts of segmentation tasks. Here are some more examples to help you understand the practical applications of the upgraded Auto-Annotate tool.

1. All-purpose auto-labeling with a single click

As the name suggests, SAM can auto-segment anything (for example, a fruit stand) with a single click. Once the animation is complete, you will see that the image is pre-segmented, with the different fruits highlighted.

Now, all you have to do is select a fruit, pick the right class, and press Enter. The SAM engine will automatically create a polygon mask around the selected fruit, significantly speeding up the annotation process.

2. Labeling objects with multiple parts at different levels of granularity

You can choose the level of detail you want in your annotations. The SAM engine is very good at predicting whether you are interested in the whole object or just a specific part of it.

For example, if you click on the pizza, the entire pizza, including the ingredients, will be selected. However, if you want to annotate specific ingredients, like basil or tomatoes, simply click on them. Only that part of the pizza will be highlighted. This gives you granular control over your annotations, allowing you to annotate complex scenes with ease.
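This behavior maps onto SAM's ambiguity-aware output: a single click can return several nested candidate masks at different scales. A short sketch with the open-source package (placeholder image path and click coordinates):

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(cv2.cvtColor(cv2.imread("pizza.jpg"), cv2.COLOR_BGR2RGB))

# One ambiguous click returns up to three nested candidates, roughly
# "whole object", "part", and "sub-part"; pick by score or by area.
click = np.array([[400, 300]])
masks, scores, _ = predictor.predict(
    point_coords=click, point_labels=np.array([1]), multimask_output=True
)
for mask, score in zip(masks, scores):
    print(f"candidate: area={int(mask.sum())} px, score={score:.3f}")
```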

3. Multi-polygon annotations of objects that are partially occluded

Let’s consider a photo of a factory line with cans of soda being manufactured. Some cans are fully visible, while others are partially obscured by machinery or other cans. With the traditional Auto-Annotate tool, labeling such images could be challenging. However, with the SAM integration, this task becomes much easier.

The updated auto-annotation engine pre-segments the image, highlighting each can individually, even those that are partially obscured. When you select a can and save the segmentation mask, the engine creates a multi-polygon annotation that accurately represents the visible parts of the can.
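To illustrate how a single mask with disconnected visible regions becomes a multi-polygon annotation, here is a sketch that traces each blob separately with OpenCV's contour finder. It demonstrates the general idea, not V7's internal conversion.

```python
import cv2
import numpy as np

def mask_to_polygons(mask: np.ndarray) -> list[np.ndarray]:
    """Trace each disconnected blob in a boolean mask as its own polygon."""
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    # Each contour is an (N, 1, 2) array of (x, y) vertices; squeeze to (N, 2).
    return [c.squeeze(1) for c in contours if len(c) >= 3]

# A can split in two by occluding machinery yields two polygons
# that together form one multi-polygon annotation.
mask = np.zeros((200, 200), dtype=bool)
mask[40:90, 60:120] = True    # visible top of the can
mask[130:180, 60:120] = True  # visible bottom of the can
polygons = mask_to_polygons(mask)
print(len(polygons), "polygons in one annotation")  # -> 2
```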

Future developments and considerations

While the integration of the Segment Anything Model (SAM) represents a significant upgrade for the V7 Auto-Annotate tool, it's important to acknowledge some limitations:

  • Currently, SAM does not support video or volumetric data, such as DICOM image series. However, it's possible to extract individual frames from these data types and process them as regular images (see the frame-extraction sketch after this list).

  • Once an annotation has been converted to a polygon, it is not possible to modify the polygon by adding or removing positive and negative points.

  • At present, the SAM model cannot be incorporated as a model stage within a V7 workflow.
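As a starting point for the frame-extraction workaround mentioned above, here is a minimal OpenCV sketch; the video filename and sampling rate are placeholders.

```python
import os
import cv2

# Extract every Nth frame so each can be annotated as a still image.
os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("port_inspection.mp4")
step = 30  # keep one frame in every 30 (about 1 fps for 30 fps footage)

index = saved = 0
while True:
    ok, frame = video.read()
    if not ok:  # end of stream or read error
        break
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    index += 1
video.release()
print(f"wrote {saved} frames")
```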

Despite these considerations, the integration of SAM into the V7 Auto-Annotate tool brings several key improvements and will play a significant role in our future updates:

  • Enhanced object recognition. With the incorporation of language models, the tool can more effectively understand and recognize different objects. This enhancement paves the way for features such as text-to-object segmentation and other tasks based on language-based prompts.

  • Adaptive learning. The tool will store segmentation information in memory, improving the accuracy of your segmentation masks the more you use it. This improvement will occur without any additional model training, thanks to reinforcement learning from human feedback.

  • Autolabel feature. Soon, you will be able to combine SAM-powered segmentation tools with our upcoming image-guided detection feature. This innovative functionality will allow you to identify and automatically label similar objects within the same image en masse.

As we continue to enhance and refine our tools, we hope to address current limitations while introducing new features that will further improve your ML workflows.

Read more: Segment Anything Model (SAM): Documentation


Casimir Rajnerowicz

Product Content Writer at V7

Casimir is a tech journalist and content writer with a keen interest in all things AI. His main areas of focus are computer vision, AI-generated art, and deep learning. He's also a fan of contemporary digital art and photography.

Next steps

Label videos with V7.

Rewind less, achieve more.

Try our free tier or talk to one of our experts.