An artificial intelligence algorithm can predict whether a photo will be "memorable" and even pinpoint which parts of it will stick in a viewer's memory, researchers at MIT report.
Their "MemNet" algorithm relies on "deep learning," an AI technique whose pattern recognition and processing loosely resemble how the human brain works, says the team at MIT's Computer Science and Artificial Intelligence Laboratory.
"Understanding memorability can help us make systems to capture the most important information, or, conversely, to store information that humans will most likely forget," explains graduate student Aditya Khosla. "It's like having an instant focus group that tells you how likely it is that someone will remember a visual message."
Want to give it a try? The MIT team has put the algorithm online, and you can feed it an image from your Instagram feed for an instant "memorability" check.
While MemNet currently only matches human performance at assessing memorability, the MIT researchers expect it to improve over time.
They began by feeding MemNet thousands of photos, each tagged with a score indicating how well people had been able to remember it.
Turned loose to analyze the visual features of those photos, MemNet learned to identify patterns in which images proved memorable and which objects within them contributed most to that memorability.
The researchers then fed MemNet photos with the human memorability scores withheld; the system predicted those scores about as accurately as a human observer could, they found.
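For readers curious what that supervised setup looks like in practice, the sketch below is a minimal, illustrative version of the idea: a small convolutional network trained to reproduce human memorability scores and then asked to predict the score of a new photo. It is not MIT's MemNet code; the toy architecture, synthetic data, and use of the PyTorch framework are assumptions made purely for the sake of example.

```python
# Minimal sketch (not MIT's MemNet): a small convolutional network that
# regresses a per-image memorability score in [0, 1], mirroring the
# supervised setup described in the article. All data here is synthetic.
import torch
import torch.nn as nn

class MemorabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor (stand-in for a deep backbone).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: one scalar memorability score per image.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

# Synthetic stand-in for photos paired with human memorability scores.
images = torch.rand(64, 3, 64, 64)   # 64 RGB images, 64x64 pixels
scores = torch.rand(64, 1)           # human-derived scores in [0, 1]

model = MemorabilityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: learn to reproduce the human memorability scores.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), scores)
    loss.backward()
    optimizer.step()

# Prediction: score a new photo whose human rating is withheld.
with torch.no_grad():
    new_photo = torch.rand(1, 3, 64, 64)
    print("Predicted memorability:", model(new_photo).item())
```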
The MIT team says the system could have a number of applications, from improving the visual content of ads and social media postings to developing better teaching resources.
Companies like Apple, Google and Facebook have invested hundreds of millions of dollars in "deep learning" startups to support their technologies.
The significance of the MIT work lies in its ability to match, and perhaps someday surpass, human judgment of what the brain will remember.
"While deep-learning has propelled much progress in object recognition and scene understanding, predicting human memory has often been viewed as a higher-level cognitive process that computer scientists will never be able to tackle," says research scientist Aude Oliva, senior investigator for the study. "Well, we can, and we did!"
The researchers presented their work at the International Conference on Computer Vision in Santiago, Chile.