There is a nice API for face/person recognition at http://www.face.com/.
Their API provides quite a lot of interesting data, such as:
- Position of the eyes
- Position of the nose
- Position of the mouth (right, center, left bounds)
- Yaw/roll/pitch rotation of the detected head
- Attributes: gender, wearing glasses, smiling, mood, …
And best of all, it’s a free service that gives you 5,000 API calls per hour, which is more than enough for a little experiment.
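To give you an idea of what comes back, here is a minimal Objective-C sketch of a detect call. Fair warning: the endpoint (faces/detect.json), the api_key/api_secret/urls parameters and the JSON field names are written from memory of the Face.com REST docs, so treat them as assumptions and check them against the official documentation.

```objc
// Minimal sketch of a Face.com detect call — endpoint, parameters and JSON
// field names are assumptions based on my reading of their REST docs.
#import <Foundation/Foundation.h>

static NSString * const kAPIKey    = @"YOUR_API_KEY";     // placeholder
static NSString * const kAPISecret = @"YOUR_API_SECRET";  // placeholder

static void detectFaces(NSString *photoURL) // publicly reachable image URL
{
    // URL-encoding of photoURL is omitted to keep the sketch short.
    NSString *query = [NSString stringWithFormat:
        @"http://api.face.com/faces/detect.json?api_key=%@&api_secret=%@&urls=%@&attributes=all",
        kAPIKey, kAPISecret, photoURL];
    NSURL *url = [NSURL URLWithString:query];

    [[[NSURLSession sharedSession] dataTaskWithURL:url
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (!data) { NSLog(@"detect failed: %@", error); return; }

            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                                 options:0
                                                                   error:NULL];
            // Detected faces are nested under photos -> tags (field names from memory).
            NSDictionary *firstPhoto = [json[@"photos"] firstObject];
            for (NSDictionary *tag in firstPhoto[@"tags"]) {
                NSLog(@"tag id: %@", tag[@"tid"]);                 // temporary tag id
                NSLog(@"eyes: %@ / %@", tag[@"eye_left"], tag[@"eye_right"]);
                NSLog(@"mouth: %@ %@ %@", tag[@"mouth_left"],
                      tag[@"mouth_center"], tag[@"mouth_right"]);
                NSLog(@"head rotation: yaw=%@ roll=%@ pitch=%@",
                      tag[@"yaw"], tag[@"roll"], tag[@"pitch"]);
                NSLog(@"attributes: %@", tag[@"attributes"]);      // gender, glasses, smiling, mood, …
            }
        }] resume];
}
```

The tag id logged there (assuming I remember the field name correctly) is the temporary tag id you need for the Train step described below.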
I’ve created a little trainer app (iOS) in which you can test their service and see what data gets returned. The app has three steps:
- Detect – Take a picture (make sure you take it in landscape mode, home button on the right side), add a first and last name to it (I’ve implemented a Core Data database so you don’t have to re-enter the name for the same user), and send the photo to Face.com. They will return a temporary tag id which you can use in the next step.
- Train – Train the user with that tag id, so Face.com starts to learn the face (see the sketch after this list).
- Recognize – After you’ve trained the user with a few pictures, you will get positive hits when you take a picture of a person you have trained.
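For the Train and Recognize steps the app essentially fires a couple more REST calls. The sketch below shows the idea; again, the endpoints (tags/save.json, faces/train.json, faces/recognize.json), the parameter names and the "name@namespace" uid format are my recollection of the Face.com docs rather than verbatim code from the app, so verify them before relying on them.

```objc
// Sketch of the Train and Recognize steps — same caveat as above: endpoint
// and parameter names are assumptions and may need adjusting.
#import <Foundation/Foundation.h>

static NSString * const kAPIKey    = @"YOUR_API_KEY";     // placeholder
static NSString * const kAPISecret = @"YOUR_API_SECRET";  // placeholder

// Small helper: run a GET request against the REST API and log the raw JSON.
static void callFaceAPI(NSString *urlString)
{
    NSURL *url = [NSURL URLWithString:urlString];
    [[[NSURLSession sharedSession] dataTaskWithURL:url
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (data) {
                NSLog(@"%@ -> %@", url.path,
                      [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding]);
            } else {
                NSLog(@"%@ failed: %@", url.path, error);
            }
        }] resume];
}

// Train: attach the temporary tag id from the Detect step to a user id,
// then ask Face.com to (re)build the model for that user. In a real app you
// would wait for tags/save to succeed before calling faces/train; here the
// two calls are just kicked off for brevity.
static void trainUser(NSString *tid, NSString *uid) // e.g. uid = @"john.doe@myapp"
{
    callFaceAPI([NSString stringWithFormat:
        @"http://api.face.com/tags/save.json?api_key=%@&api_secret=%@&tids=%@&uid=%@",
        kAPIKey, kAPISecret, tid, uid]);
    callFaceAPI([NSString stringWithFormat:
        @"http://api.face.com/faces/train.json?api_key=%@&api_secret=%@&uids=%@",
        kAPIKey, kAPISecret, uid]);
}

// Recognize: ask Face.com whose face is in a (publicly reachable) photo URL.
static void recognize(NSString *photoURL, NSString *uids) // e.g. uids = @"all@myapp"
{
    callFaceAPI([NSString stringWithFormat:
        @"http://api.face.com/faces/recognize.json?api_key=%@&api_secret=%@&urls=%@&uids=%@",
        kAPIKey, kAPISecret, photoURL, uids]);
}
```

The sketches pass a hosted image via the urls parameter to keep them short; sending the camera picture itself would require a file upload instead.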
If you want to dive in and try creating a cool mashup for yourself, you can download my source files on GitHub. The code could use some cleanup, but it’s a start to get you going!
If you want to build the project, you will need to add your API key and secret to the FaceConstants.h file!
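That header is just a tiny file holding your credentials. The actual constant names in the repo may differ, but it looks roughly like this:

```objc
// FaceConstants.h
// Placeholder names — check the real header in the repo for the exact constants.
#define API_KEY    @"your-face.com-api-key"
#define API_SECRET @"your-face.com-api-secret"
```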
Happy coding!