How Facebook recognises faces for its new auto-tagging feature
Facebook’s freaky auto-tagging feature, which has been rolling out across the US, has raised hackles – but also complaints that it simply doesn’t work very well.
Differences in lighting, colouring, camera angle and the person’s expression can trip the system up.
So how does Facebook’s facial recognition work? Will it work better in future? Should we start getting scared?
Computers can’t see faces, of course – they read pixels. Each pixel is represented by a string of numbers giving its colour code and its position in the picture.
Patterns in these colour blocks are unique to people’s faces, depending on the shape of their nose, where their eyes are, what their cheekbones look like, etc.
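A toy sketch in Python shows what the computer actually “sees”. The tiny greyscale image below is entirely made up for illustration – a real photo would be millions of pixels, each with colour values rather than a single brightness number.

```python
# A tiny 4x4 greyscale "image": each number is one pixel's brightness (0-255).
# The values here are invented purely for illustration.
image = [
    [ 34,  40, 210, 215],
    [ 38,  45, 220, 225],
    [200, 205,  50,  55],
    [210, 212,  48,  52],
]

# The computer has no notion of a "face" -- it only sees this grid of numbers,
# where each value's row and column give the pixel's position in the picture.
for row in image:
    print(row)
```

The patterns the recognition software looks for are regularities in grids like this one, at a vastly larger scale.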
So when a computer sees a face, it compares selected facial features from the image with a facial database. In Facebook’s case, that’s the album of photos tagged with you on your profile.
It runs a mathematical calculation for comparison. “Where we see a face, a computer only sees numbers,” Andrew Ng of Stanford University, who has worked on artificial intelligence and machine-learning problems throughout his career, told PCWorld. “It assigns a value to every pixel, and it’s the computer’s task to decide that the values add up to ‘This is my good friend Joe.'”
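One common way to do that “mathematical calculation for comparison” is to boil each face down to a short list of numbers (a feature vector) and measure the distance between two such lists. The vectors and threshold below are invented for illustration; Facebook’s actual features and matching rule are proprietary.

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical feature vectors extracted from two photos.
known_joe = [0.12, 0.80, 0.33, 0.51]   # from a photo already tagged as Joe
new_photo = [0.10, 0.78, 0.35, 0.49]   # from the freshly uploaded photo

THRESHOLD = 0.1  # made-up cut-off: below this distance, call it a match
if distance(known_joe, new_photo) < THRESHOLD:
    print("This is my good friend Joe.")
```

The closer the two lists of numbers, the more confident the system can be that it is looking at the same face.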
The more photos of you in the database, the more easily the computer can match the data up and recognise you. And the more those pictures vary in angle and lighting, the better the recognition works.
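Why more photos help can be sketched the same way: with several tagged photos taken from different angles and in different light, the new picture only needs to land close to one of them. Again, every vector here is invented for illustration.

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical feature vectors from several tagged photos of the same person,
# taken in different lighting and from different angles.
tagged_photos = [
    [0.12, 0.80, 0.33],  # frontal, daylight
    [0.30, 0.65, 0.40],  # side-on, indoors
    [0.20, 0.72, 0.37],  # half-profile, flash
]

new_photo = [0.29, 0.66, 0.41]  # similar angle and lighting to the second photo

# Match against the closest tagged photo: the more varied the album,
# the better the chance that one photo is close to the new picture.
best = min(distance(t, new_photo) for t in tagged_photos)
print(best)
```

Here the side-on indoor shot rescues the match that the frontal daylight photo alone would have missed.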
We’re not sure if joke tags throw the system off or not.