In June 2017, during the opening keynote of WWDC in San Jose, California, Apple introduced Core ML, a machine learning framework designed to help developers build amazing user experiences.
Two frameworks built on top of Core ML are Vision and NLP (Natural Language Processing).
Vision allows you to integrate machine learning image features into your applications, such as face tracking, face detection, facial landmarks, text detection, rectangle detection, barcode detection, object tracking, and image registration.
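To give a taste of how one of those Vision features looks in code, here is a minimal sketch of face detection, assuming a `UIImage` input (the function name `detectFaces` is my own):

```swift
import UIKit
import Vision

// A minimal sketch of face detection with the Vision framework (iOS 11+).
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        print("Found \(faces.count) face(s)")
        for face in faces {
            // boundingBox is in normalized coordinates (0–1, origin bottom-left).
            print(face.boundingBox)
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

The other detection requests (barcodes, rectangles, text) follow the same request-plus-handler pattern, just with a different `VNRequest` subclass.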
NLP, in turn, is focused on language identification, tokenization, lemmatization, part-of-speech tagging, and named entity recognition.
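The NLP side is exposed through `NSLinguisticTagger`, which gained new per-unit tagging APIs in iOS 11. A small sketch of named entity recognition (the sample sentence is my own):

```swift
import Foundation

// A minimal sketch of named entity recognition with NSLinguisticTagger (iOS 11+).
let text = "Tim Cook introduced Core ML at WWDC in San Jose."

let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]

tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, let swiftRange = Range(tokenRange, in: text) {
        // Prints tokens tagged as PersonalName, PlaceName, or OrganizationName.
        print("\(text[swiftRange]): \(tag.rawValue)")
    }
}
```

Swapping the scheme to `.lemma`, `.lexicalClass`, or `.language` covers the other NLP features from the list above with the same enumeration call.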
This is a very big step forward for developers. This set of tools and APIs will let us, over the next five months, build a new category of applications and bring them to customers' smartphones by the time the new iPhone 8 (?) and iOS 11 are delivered by Apple.
Just as an example, imagine an application that lets you point your camera at a dog, flower, tree, piece of furniture, etc. and get full information about it: name, breed, classification, price, and so much more.
With this exciting news in mind, I've created an example application and a custom Image Recognizer class written in Swift that takes a user-provided picture, processes it, and returns the name of the recognized object along with a confidence score (0–100%). See the example screenshot below.
The full GitHub project for the working example application can be found here.
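The heart of such a recognizer class can be sketched in a few lines by feeding a Core ML model through Vision. This is an assumption-laden sketch, not the exact code from the project: `Inceptionv3` stands in for whichever `.mlmodel` you add to Xcode (which generates the class for it), and the `ImageRecognizer` name and completion signature are illustrative:

```swift
import UIKit
import CoreML
import Vision

// A minimal sketch of an image recognizer built on Core ML + Vision (iOS 11+).
// "Inceptionv3" is a placeholder for any .mlmodel added to the Xcode project.
final class ImageRecognizer {
    func recognize(_ image: UIImage, completion: @escaping (String, Float) -> Void) {
        guard let ciImage = CIImage(image: image),
              let model = try? VNCoreMLModel(for: Inceptionv3().model) else { return }

        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first
            else { return }
            // confidence is 0.0–1.0, so multiply by 100 for a percentage.
            completion(top.identifier, top.confidence * 100)
        }
        let handler = VNImageRequestHandler(ciImage: ciImage)
        try? handler.perform([request])
    }
}
```

Vision handles resizing and converting the image to the model's expected input format, which is exactly the boilerplate you would otherwise write by hand against the raw Core ML API.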