Computers see the world through pixels: tiny squares that make up an image. Each pixel is represented by a number (or a set of numbers, such as red, green and blue values), and those numbers encode a colour.
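As a minimal sketch in plain Python (no imaging library assumed), a tiny 2×2 image can be represented as a grid of pixels, each holding red, green and blue values from 0 to 255:

```python
# A 2x2 image as a grid of pixels; each pixel is an (R, G, B) triple.
# 0 is the darkest value for a channel, 255 the brightest.
image = [
    [(255, 0, 0), (0, 255, 0)],      # top row: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # bottom row: a blue pixel, a white pixel
]

# A greyscale image needs only one number per pixel:
# here each pixel is collapsed to its average brightness.
grey = [[sum(px) // 3 for px in row] for row in image]
print(grey)  # [[85, 85], [85, 255]]
```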
Images typically come in two common formats: PNG and JPEG (often written JPG; the two names refer to the same format). PNG is a lossless format, which means no image data is discarded and the original can be restored exactly, whereas JPEG is a lossy format, which means some data is discarded during compression and cannot be restored to the original.
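The lossless idea can be illustrated with a lossless compressor from Python's standard library (`zlib` here, used purely as an analogy for PNG's lossless compression, not as an image codec): the decompressed bytes are identical to the original, byte for byte.

```python
import zlib

# Raw pixel data for an image region: a lossless codec must restore it exactly.
raw = bytes([200, 200, 200, 10, 10, 10] * 100)

compressed = zlib.compress(raw)
restored = zlib.decompress(compressed)

print(len(raw), len(compressed))  # the repetitive data compresses well
print(restored == raw)            # True: no information was lost
```

A lossy codec such as JPEG has no equivalent round trip: decoding gives back an approximation of the original pixels, not the exact values.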
TonkaBI has found image quality to be a crucial aspect of developing computer vision software, as it has a profound impact on identification. Check out this unique use case on why quality matters.
Image classification answers the question “what is in this image?” The task for the image classifier is to describe the image against a set of labels; an example could be labelling a photo as a sports photo. More broadly, a classifier can categorise image and video data into types based on the labels it was trained against.
Classification can be viewed as descriptive, whereas object detection and segmentation are specific to a particular part or object within a video or image. Take this image as an example: an image classifier can identify the sport, and an object detection model can then count the player(s) in the image.
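A hedged sketch of the idea, not a real model: a classifier maps a whole image to one label from a fixed set. Here a toy rule classifies a greyscale image as “day” or “night” by its average brightness (the labels and the threshold are invented for illustration; real classifiers learn this mapping from data):

```python
LABELS = ("day", "night")  # the fixed label set the classifier chooses from

def classify(image):
    """Toy classifier: pick one label from LABELS based on mean brightness."""
    pixels = [px for row in image for px in row]
    mean = sum(pixels) / len(pixels)
    return "day" if mean > 127 else "night"

bright = [[200, 220], [210, 230]]
dark = [[10, 30], [20, 5]]
print(classify(bright))  # day
print(classify(dark))    # night
```

Note that the classifier says nothing about *where* anything is in the image; that is what detection and segmentation add.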
Segmentation provides an exact outline of an object within an image or video: it segments the exact object or area of interest. Like object detection, segmentation is used to differentiate between two or more objects within the image or video, but unlike object detection it does not pick up the background. The annotation and labelling process is also different.
Segmentation provides a pixel-by-pixel mask around the object, which gives greater accuracy and a finer-grained understanding of the object within the image than object detection.
A great use for segmentation is when the object(s) within the image or video lack consistency in shape, volume, colour, location and texture such as vehicle damage or satellite imagery.
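The pixel-by-pixel mask can be sketched in plain Python, with simple thresholding standing in for a real segmentation model: every pixel gets its own in/out decision, so the result traces the object exactly and includes no background.

```python
def segment(image, threshold=128):
    """Return a binary mask: 1 where the pixel belongs to the object, 0 for background."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# A bright object on a dark background (greyscale values).
image = [
    [10,  12, 200, 210],
    [11, 199, 205,  15],
    [13,  14,  16,  12],
]
for row in segment(image):
    print(row)  # the 1s trace the object's exact pixels; everything else is 0
```

Real segmentation models make the same per-pixel decision, but learn it from annotated masks rather than a fixed threshold.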
Similar to image classification, object detection deals with distinguishing between objects within an image or video. Object detection uses classification together with bounding boxes to locate objects; an example of this is facial recognition. Classification would say the image contains faces, whereas object detection can locate each face in the image.
Object detection should be used selectively, because it has limitations when differentiating between two similar objects within an image or video; this is largely due to the box itself, which includes background.
Object detection can be used generically when classifying, because the bounding box provides an area of interest but also picks up background objects, whereas image segmentation targets an object or area of interest more precisely.
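A minimal sketch of the bounding-box idea (a hand-rolled scan over a mask, not a real detector): the box is the smallest rectangle enclosing the object's pixels, so it necessarily includes background pixels that fall inside the rectangle.

```python
def bounding_box(mask):
    """Smallest (top, left, bottom, right) rectangle enclosing all 1-pixels."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, px in enumerate(row) if px]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

# Object pixels marked 1; note the 0 that ends up inside the resulting box.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(bounding_box(mask))  # (1, 1, 2, 2): includes the background pixel at (2, 1)
```

Compare this with the segmentation mask above a detector reports only the rectangle, which is cheaper to annotate and compute but mixes object and background.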