A picture might be worth a thousand words, but your computer’s not likely to pick up on any of them. That looks set to change in the future, though, thanks to computer scientists at the University of Rochester. Professor Jiebo Luo and his colleagues wanted to try teaching computers to recognise the emotional information that photos convey.
They started with an existing database called SentiBank, which was set up by researchers at Columbia University. It contains a large number of Flickr images labelled with emotions by a machine algorithm, and each one is also labelled with how confident the algorithm is in that label.
The Rochester team started by teaching their computers to discard the least accurate emotional labels, and then trained them to pick out different sentiments with increasing accuracy. They’ve detailed their system, which they call a progressively trained deep convolutional neural network (or CNN), in a paper presented at the Association for the Advancement of Artificial Intelligence conference.
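The core idea of progressive training, as described, is to start from noisily machine-labelled data, drop the least trustworthy labels, and keep refining from there. Here's a minimal, purely illustrative sketch of that filtering loop in Python; the data format, function names, and thresholds are all my own invention, not details from the Rochester paper, and a real system would fine-tune a CNN at each step rather than just prune.

```python
def filter_by_confidence(samples, threshold):
    """Keep only samples whose machine-assigned label confidence passes the bar."""
    return [s for s in samples if s["confidence"] >= threshold]

def progressive_train(samples, rounds=3, start=0.5, step=0.15):
    """Each round, raise the confidence bar and keep only the survivors.

    In the real system, a CNN would be fine-tuned on the surviving
    samples after each round; here we only model the data filtering.
    """
    kept = samples
    for r in range(rounds):
        kept = filter_by_confidence(kept, start + r * step)
    return kept

# Toy machine-labelled data in the spirit of SentiBank's labels + confidences.
data = [
    {"image": "a.jpg", "label": "positive", "confidence": 0.9},
    {"image": "b.jpg", "label": "negative", "confidence": 0.4},
    {"image": "c.jpg", "label": "positive", "confidence": 0.7},
]

survivors = progressive_train(data)
print([s["image"] for s in survivors])  # only the most confidently labelled image remains
```

With thresholds rising from 0.5 to 0.8 across three rounds, only the 0.9-confidence sample survives, which is the gist: each pass trains on cleaner data than the last.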
They say that thanks to their CNN, it’s now easier for machines to understand the sentiment behind a photo attached to a tweet than to assess the meaning of the tweet itself. That kind of analysis could prove especially useful during elections.
People on Twitter and other social networks often share photos of candidates alongside a text-based post, but the meaning of a gurning Nigel Farage changes depending on whether you think he’s a top bloke or a gibbering goon. Firms running sentiment analysis and trying to predict results could be much more accurate if their systems could speedily analyse photos as well as text. I’m guessing that next up, they’ll be teaching computers to understand video.
Image by Galymzhan Abdugalimov via Unsplash.