In one of our previous posts we showed that the small size of objects isn’t a concern for our image recognition technology. Today we continue with another example of how tolerant our algorithm can be while still delivering highly accurate results.
To obtain the most accurate results, one might think the best approach is to provide the highest-quality query images. That is a reasonable approach, but it comes with a significant shortcoming: with great quality comes great file size. A major factor influencing a JPEG’s file size is its compression level. When sending files over the Internet you want to use as little bandwidth as possible, especially if you’re a web or mobile developer, since smaller files translate into a faster user experience.
In this post we demonstrate how our technology performs as query image quality decreases. We run a series of multiple-object recognition tests based on 1,200 reference images and 200 query images. In each successive test we lower the JPEG quality of the query images by a further 10%. The image below presents the original query picture and some of its compressed versions (with matched objects outlined):
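A sweep like this is easy to reproduce. The sketch below (assuming the Pillow library; the gradient test image and function name are illustrative, not part of our pipeline) re-encodes an image at a series of JPEG quality levels and records the resulting file sizes:

```python
from io import BytesIO

from PIL import Image  # Pillow, assumed to be installed


def quality_sweep(image, qualities=(90, 80, 70, 60, 50, 40, 30, 20, 10)):
    """Re-encode an image at each JPEG quality level; return {quality: bytes}."""
    sizes = {}
    for q in qualities:
        buf = BytesIO()
        image.save(buf, format="JPEG", quality=q)
        sizes[q] = buf.tell()
    return sizes


# A synthetic gradient image stands in for a real query photo.
img = Image.new("RGB", (320, 240))
img.putdata([(x % 256, y % 256, (x + y) % 256)
             for y in range(240) for x in range(320)])

sizes = quality_sweep(img)
```

Feeding each re-encoded version back as a query image then gives one accuracy data point per quality level.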
To measure the accuracy of our algorithm we use the F1 score. Below you can see how it varies with JPEG quality loss:
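For reference, the F1 score is the harmonic mean of precision and recall. A minimal sketch (the counts in the comment are made-up illustrative numbers, not our test results):

```python
def f1_score(tp, fp, fn):
    """F1 score from match counts.

    tp: correctly matched objects
    fp: spurious matches reported by the algorithm
    fn: objects the algorithm failed to find
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# e.g. 90 correct matches, 5 spurious, 10 missed:
# precision = 90/95 ≈ 0.947, recall = 90/100 = 0.9
```

Because the harmonic mean punishes imbalance, a run must keep both precision and recall high to score well, which is why F1 is a stricter summary than raw match rate.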
Also notice how lowering JPEG quality affects file size:
The charts above show that our algorithm performs accurately without caring much about image quality. Images at 20% JPEG quality produce results comparable to the originals, while taking up only 7% of their file size.
Bear in mind that this test concerns recognition of multiple objects, so each matched object occupies only a portion of the entire image. Still, we are able to recognize them correctly. The image below shows that we can recognize objects as small as 240×140 pixels in an image at 10% JPEG quality:
I bet we could push it even further. Suppose we ran a single-object recognition test instead. In that case the matched object would most likely fill the majority of the image’s area, providing much more information about itself, and a good deal of that information would survive even heavy compression. Imagine how far you could push image size and compression level while still achieving comparable recognition results.
Of course, besides imagining it, you could just try it by using our freshly released public API. Have fun discovering our product and feel free to give us your feedback :)