Computer vision
Labeling with LabelMe: Step-by-step Guide [Alternatives + Datasets]
10 min read
Oct 14, 2022
LabelMe is a free graphical annotation tool for image and video data. Learn how to install LabelMe, use it to annotate your training data, and then train a model on that data with V7.
Hmrishav Bandyopadhyay
Annotating data for machine learning doesn’t have to suck.
Or… at least it doesn’t have to be crazy expensive.
In this article, we’ll explore yet another open-source image annotation tool - LabelMe, and show you how to label your data quickly and efficiently. You can then use your labeled dataset to train your computer vision model on V7 (yep, it’s easier than you think!).
If LabelMe does not live up to your expectations, be sure to check out our CVAT and LabelImg guides, which are both free to use.
Here’s what you’ll learn in a few minutes:
What is LabelMe
LabelMe features & limitations
LabelMe installation guide
Annotating data with LabelMe
LabelMe alternatives (free & paid)
Bonus: How to train your model on V7
In two words: exciting stuff.
And hey—while we’re not here to advertise ourselves and brag about V7’s 5-star reviews and extensive functionalities, we can’t help but let you know that LabelMe is not the only option out there.
In fact, if you are looking for open datasets and a more advanced tool to label your data efficiently, you are in the right place.
What is LabelMe?
LabelMe is an open-source graphical annotation tool for image and video data publicly available on GitHub. It’s written in Python, and it uses Qt for its graphical interface. LabelMe is extremely lightweight and easy to use, making it a popular choice as an open-source visual annotation tool.
You can use it to create annotations for object detection, semantic segmentation, and panoptic segmentation for both images and videos.
Pro tip: Check out What is Data Labeling and How to Do It Efficiently [Tutorial].
LabelMe features & limitations
While LabelMe is equipped with a plethora of features like batch processing of images, a number of available annotation tools, and multiple export formats, it also comes with several limitations.
Here’s the breakdown of the features and limitations of LabelMe.
LabelMe features
Let’s have a look.
1. Annotation tools
LabelMe offers polygons, rectangles, circles, lines, and points for image annotation. You can also annotate videos using polygons.
Video annotation example on LabelMe (source)
2. Batch processing of multiple files
LabelMe allows batch processing of multiple files. You can open a folder containing all your files and move through them one by one as you annotate. A small, searchable file list also lets you jump to any file in the folder in whatever order you like while batch processing.
3. Lightweight application & standalone executables
LabelMe is an extremely light application that is easy to install on Windows, Mac, and Linux systems. As an alternative to installing it, LabelMe also provides standalone executables for all three operating systems, which makes it even more accessible. These executables are quite small, with the Windows version being only 62 megabytes in size.
4. Multiple export formats
LabelMe lets you export annotated images to multiple popular formats, such as Pascal VOC and COCO, for both instance and semantic segmentation of images and videos.
The full list of features is described on the project's GitHub page.
LabelMe limitations
While LabelMe is a good tool for getting started with labeling, you might quickly discard it as a viable option for more serious computer vision projects. Here are a few limitations you'll encounter when using LabelMe.
1. Not built for collaboration
LabelMe offers a limited set of annotation tools and no features for easy collaboration. If you are working on a large project involving multiple stakeholders, you might find the experience frustrating.
2. Limited export options
LabelMe does not allow you to export your datasets into popular formats like YOLO, OpenImages, and CreateML. This limits your options for training your ML models.
3. No dataset management
LabelMe does not provide tools for backing up and managing datasets, which makes it hard to use for annotating large amounts of image or video data. Furthermore, LabelMe does not support data augmentation or any other form of image manipulation.
Pro tip: Check out V7 Dataset Management.
Now let’s get our hands dirty and install LabelMe on our local machines.
LabelMe installation guide
You can install LabelMe with the help of command line tools. Depending on the platform and the operating system you are running, here are your options:
Installation through Anaconda (recommended): a single command from the command line installs LabelMe along with all of its dependencies, as long as Anaconda is already on your system (see the example commands after this list).
If Anaconda is not installed, installation can be done with Docker on both macOS and Linux.
Installation without Anaconda or Docker is also possible with Python's pip on macOS, Linux, and Windows.
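For reference, here are the commands the LabelMe README listed at the time of writing; treat them as a starting point and check the repository if anything has changed:

```shell
# Option 1 (recommended): install into a fresh Anaconda environment
conda create --name=labelme python=3
conda activate labelme
pip install labelme

# Option 2: plain pip install, without Anaconda or Docker
pip install labelme

# Launch the application
labelme
```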
More detailed, step-by-step installation instructions are available in the LabelMe repository: https://github.com/wkentaro/labelme
If you do not want to install LabelMe, the repository also offers standalone executables for Windows, macOS, and Linux, which you can download here: https://github.com/wkentaro/labelme/releases/tag/v5.0.2. To get started, simply run the executable.
Annotating data with LabelMe
LabelMe lets you annotate data for computer vision problems like classification, object detection, and segmentation. You can annotate your data using circles, rectangles (bounding boxes), lines, points, and polygons.
Here’s a short guide to getting started.
1. Open LabelMe and point it to the directory where you have stored the images you want to annotate. Opening a whole directory enables batch processing of multiple images, which speeds up annotation.
2. Select the image you want to annotate from the file list in the bottom right corner.
3. Start annotating the image by selecting the Create Polygons option on the left. Click to place the polygon's starting point on the image, then click on the next keypoint to draw a line between them.
Keep clicking to add keypoints until you have traced the whole object, then click the starting keypoint again to close the polygon and complete the annotation.
4. For other annotation types, open the Edit menu in the menu bar and pick the shape you need.
5. Label the annotation by typing in the label or selecting a label from the label list.
6. Save your annotation as a JSON file by selecting the output directory and choosing a filename for the annotation.
You can keep the automatically generated filename for ease of access. You can also enable automatic saving via "Save Automatically" in the File menu in the top-left corner.
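To get a feel for what LabelMe actually saves, here is a minimal Python sketch that reads one of these JSON files back in and lists its shapes. The filename example.json is just a placeholder for one of your saved annotations:

```python
import json

# Load one LabelMe annotation file (the JSON saved in step 6).
with open("example.json") as f:
    annotation = json.load(f)

# Basic image metadata is stored alongside the shapes.
print(annotation["imagePath"], annotation["imageWidth"], annotation["imageHeight"])

# Each shape carries a label, a shape type (polygon, rectangle, circle,
# line, or point), and a list of [x, y] points in pixel coordinates.
for shape in annotation["shapes"]:
    print(shape["label"], shape["shape_type"], len(shape["points"]), "points")
```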
Finally, here are a couple of examples of data annotated using LabelMe.
Semantic segmentation: Semantic segmentation is a form of segmentation in which all objects belonging to the same category are treated as one. So, when annotating for semantic segmentation, we annotate the combined boundary of overlapping objects of the same class rather than each object separately.
Instance segmentation: In instance segmentation, each object of the same category is treated as a separate instance. Since instances are distinguished from one another, we have to annotate every instance individually (the code sketch below shows how these polygon annotations translate into semantic and instance masks).
Bounding box annotations: Bounding box annotations for tasks like object detection can also be created with LabelMe by selecting the Create Rectangle tool from the Edit menu in the top-left corner.
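If you plan to consume these annotations in your own training code, here is a minimal sketch of how the polygons in a saved LabelMe JSON could be rasterized into a semantic mask and per-instance masks, with a tight bounding box derived from each polygon. It assumes Pillow and NumPy are installed; example.json is a placeholder, and the CLASS_IDS mapping is a made-up example for illustration:

```python
import json

import numpy as np
from PIL import Image, ImageDraw

# Hypothetical class-name -> integer id mapping for this sketch.
CLASS_IDS = {"dog": 1, "person": 2}

with open("example.json") as f:
    annotation = json.load(f)

width, height = annotation["imageWidth"], annotation["imageHeight"]

# Semantic mask: polygons of the same class share one id (background stays 0).
semantic_mask = Image.new("L", (width, height), 0)
semantic_draw = ImageDraw.Draw(semantic_mask)

# Instance masks: one binary mask per annotated polygon.
instance_masks = []

for shape in annotation["shapes"]:
    if shape["shape_type"] != "polygon":
        continue  # this sketch only handles polygons
    points = [tuple(p) for p in shape["points"]]
    semantic_draw.polygon(points, fill=CLASS_IDS.get(shape["label"], 0))

    instance = Image.new("L", (width, height), 0)
    ImageDraw.Draw(instance).polygon(points, fill=1)
    instance_masks.append(np.array(instance, dtype=np.uint8))

    # A tight bounding box follows directly from the polygon's extremes.
    xs, ys = zip(*points)
    print(shape["label"], "bbox:", (min(xs), min(ys), max(xs), max(ys)))

semantic_mask = np.array(semantic_mask, dtype=np.uint8)
```

In practice, the export scripts covered later in this article do this kind of conversion for you, so you would only hand-roll it for a custom pipeline.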
Pro tip: Have a look at Annotating With Bounding Boxes: Quality Best Practices.
Best practices for labeling with LabelMe
Here are some of the best practices for labeling your images using LabelMe.
Handling occlusion
While annotating images for segmentation, make sure your polygon covers the whole visible extent of the object and nothing that is not part of the object. Regions where the object is occluded, i.e. hidden behind something in the foreground, should not be covered by the polygon.
Tightness
Make sure your polygon fits the object you are annotating tightly. Polygons that are too small or too large for the object will lower accuracy when you train segmentation models on the exported data.
Annotating instances
When annotating for instance segmentation, make sure to annotate every instance of the object; otherwise, the model will be penalized during training for detecting instances that were missed during annotation.
Overlapping annotations
When annotating for semantic segmentation, if multiple objects of the same class overlap each other, make sure to capture the combined boundary of all the objects rather than the individual boundary of each one.
Exporting data from LabelMe
If you want to export LabelMe annotations to other formats such as COCO or PascalVOC, have a look at these instructions: Export data from LabelMe.
To export annotations to either of these formats, first save them as JSON files using the "Save" option within LabelMe. Then use the utility Python scripts labelme2coco.py and labelme2voc.py (found in the repository's examples folder) to convert the LabelMe annotations to COCO and Pascal VOC format, respectively.
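As an illustration, the example folders in the LabelMe repository invoke these scripts roughly like this; the folder names are placeholders and the exact arguments may differ between versions, so check each script's --help:

```shell
# Convert a folder of images + LabelMe JSON files into a COCO-style dataset
./labelme2coco.py data_annotated data_dataset_coco --labels labels.txt

# Convert the same folder into a Pascal VOC-style dataset
./labelme2voc.py data_annotated data_dataset_voc --labels labels.txt
```

Here data_annotated is the folder holding your images and JSON annotations, the second argument is the output directory, and labels.txt lists your class names, one per line.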
LabelMe alternatives
While LabelMe is a good tool for annotating small projects with limited data, large-scale datasets call for more advanced annotation tools that offer annotation backups and multiple export formats for easier handling of annotated data.
Here are a couple of free and open-source LabelMe alternatives:
1. CVAT
CVAT is one of the most popular free image and video annotation tools. It was developed by Intel. You can either use it online (with some limitations) or install it on your local machine. CVAT is used for labeling data for solving computer vision tasks such as Image Classification, Object Detection, Object Tracking, Image Segmentation, or Pose Estimation.
2. LabelImg
LabelImg is a graphical image annotation tool, written in Python, for labeling objects in images with bounding boxes. You can export your annotations as XML files in the Pascal VOC format.
3. VoTT
VoTT (Visual Object Tagging Tool) is a free and open-source image annotation and labeling tool developed by Microsoft.
4. ImgLab
ImgLab is an open-source, web-based image annotation tool. It provides multiple label types such as points, circles, bounding boxes, and polygons.
5. V7 Free Edu Plan
For students and researchers, V7’s free education plan serves as a great balance between the paid and free alternatives, as it comes equipped with multiple annotation tools and an inbuilt pipeline to train your models. You can apply via our Verify Academia form.
For a detailed comparison of paid and free data annotation tools, check out 13 Best Image Annotation Tools.
Bonus: How to train your model on V7
Finally, let's train your computer vision model on V7. Get the data you labeled with LabelMe ready!
To begin, you need to sign up for a 14-day free trial to get access to our tool (or apply for a free Edu Plan). Once you are in, here's what comes next.
1. Upload your labeled data
Dataset tab view on V7
V7 also allows you to upload your data via API and CLI SDK.
2. Choose your model
Head over to the “Models” tab and click “Train a Model” to pick the model you want to train.
Depending on how you labeled your data, you can choose to train an Instance Segmentation model (polygons), an Object Detector (bounding boxes), or an Image Classifier (tags).
Training a computer vision model on V7
Name your model, and click “Continue.”
3. Choose your dataset and review class distribution
Pick your labeled dataset and check whether your class distribution is balanced. Avoid situations where some classes are overrepresented or underrepresented, as class imbalance will hinder your model's performance.
Class distribution view
Next, V7 will show you the split between your training, validation, and test set. It will also calculate the time and cost of this training session.
Training, validation, and test set split
All you have to do is click “Start training” and voila—
You trained your first computer vision model! You can go ahead and work with it or keep re-training your model to improve its performance.
V7 supports model-assisted labeling, where your model keeps learning from the data you label and helps you annotate new batches of data as much as 10x faster.
Got questions? Let us know or head over to V7 Academy.
We hope to see you training your models on V7.
Good luck!