Google is releasing a new TensorFlow object detection API to make it easier for developers and researchers to identify objects within images. Google is trying to offer the best of simplicity and performance — the models being released today have performed well in benchmarking and have become regularly used in research.

The handful of models included in the detection API range from heavy-duty Inception-based convolutional neural networks to streamlined models designed to operate on less sophisticated machines — a MobileNets single-shot detector comes optimized to run in real time on a smartphone.

Earlier this week Google announced the open sourcing of MobileNets, a family of lightweight computer vision models. These models can handle tasks like object detection, facial recognition and landmark recognition.

Today’s smartphones don’t possess the computational resources of larger-scale desktop and server-based setups, leaving developers with two options. Machine learning models can run in the cloud, but that adds latency and requires an internet connection — non-starters for a lot of common use cases. The alternative approach is simplifying the models themselves, trading some accuracy and capability in the interest of more ubiquitous deployment.

Google, Facebook and Apple have been pouring resources into these mobile models. Last fall, Facebook announced Caffe2Go, its framework for building models that run on smartphones — the first big implementation of this was Facebook’s real-time style transfer. This spring at I/O, Google unveiled TensorFlow Lite, its version of a streamlined machine learning framework. And most recently at WWDC, Apple introduced Core ML, its attempt to reduce the difficulty of running machine learning models on iOS devices.

Of course, Google’s public cloud offerings give it differentiated positioning with respect to both Facebook and Apple, and it’s not new to delivering computer vision services at scale vis-à-vis the Google Cloud Vision API.

Today’s TensorFlow object detection API can be found on GitHub. Google wants to make it extra easy to play with and implement, so the entire kit comes prepackaged with weights and a Jupyter notebook.
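For the curious, here is a minimal sketch of what running one of the prepackaged models could look like, written against the TensorFlow 1.x API the kit shipped with. The local paths ("ssd_mobilenet_v1_coco", "test.jpg") are placeholders, and the tensor names reflect the detection graphs as released; treat this as an illustration of the workflow rather than the official quickstart — the bundled Jupyter notebook is the canonical walkthrough.

```python
# Sketch: inference with a prepackaged detection model (TF 1.x style).
# Assumes a pretrained model archive has been extracted to MODEL_DIR.
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_DIR = "ssd_mobilenet_v1_coco"  # placeholder local path
GRAPH_PATH = MODEL_DIR + "/frozen_inference_graph.pb"

# Load the frozen detection graph that ships with the pretrained weights.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # Read an image and add a batch dimension: (1, height, width, 3).
    image = np.expand_dims(np.array(Image.open("test.jpg")), axis=0)
    # Fetch boxes, scores, class IDs and detection count in one pass.
    boxes, scores, classes, num = sess.run(
        ["detection_boxes:0", "detection_scores:0",
         "detection_classes:0", "num_detections:0"],
        feed_dict={"image_tensor:0": image})
    print("Top detection score:", scores[0][0])
```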
