What did we experiment with?
We wanted to see what information we could extract from our images by using commercially available machine learning tools for object recognition. We took about 3,200 images from the SAGE Publishing corpus and had a look at the kind of results we could get from them.
How did we do it?
We used Cloudy Vision (a Python tool for managing the images, developed by Gaurav Oberoi) and compared the resulting tags between the Google Vision API and Clarifai. This is what the raw output looked like (we are not sharing the actual pictures due to copyright):
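The tag comparison can be sketched roughly as follows, assuming each service's output has already been reduced to a mapping from image filename to a set of tag strings (the real Cloudy Vision output is richer than this, and the tag sets below are invented for illustration):

```python
def compare_tags(google_tags, clarifai_tags):
    """Return the per-image tag overlap between two tagging services.

    Both arguments map image filenames to sets of tag strings.
    """
    comparison = {}
    for image in google_tags.keys() & clarifai_tags.keys():
        g, c = google_tags[image], clarifai_tags[image]
        comparison[image] = {
            "both": g & c,            # tags the services agree on
            "google_only": g - c,     # tags only Google Vision produced
            "clarifai_only": c - g,   # tags only Clarifai produced
        }
    return comparison

# Hypothetical example data, not real output from either API.
google = {"fig1.png": {"person", "chart", "text"}}
clarifai = {"fig1.png": {"chart", "graph", "text"}}
result = compare_tags(google, clarifai)
```

A comparison like this makes it easy to spot where the two services agree (often the most reliable tags) and where each one contributes something unique.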
We analysed the results to identify ways in which we could apply the image tags in SAGE products. For example, we visualized the results to explore potential relationships between the tagged concepts and to understand whether there was any value in the data and how it linked together. We used Gephi to construct a network of tags, where the thickness of each link represents how often the two tags occur together.
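The co-occurrence counts that drive the link thickness can be computed with a short script like the one below. This is a sketch, not our actual pipeline: it takes one set of tags per image (invented data here) and produces a weighted pair count that could be exported as an edge list for Gephi:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(tagged_images):
    """Count how often each pair of tags appears on the same image.

    tagged_images: an iterable of tag sets, one per image.
    Returns a Counter keyed by (tag_a, tag_b) pairs in sorted order,
    suitable for writing out as a weighted edge list.
    """
    edges = Counter()
    for tags in tagged_images:
        # Sorting makes the pair key order-independent, so
        # ("chart", "text") and ("text", "chart") count as one edge.
        for a, b in combinations(sorted(tags), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical tag sets for three images.
images = [
    {"chart", "text"},
    {"chart", "text", "person"},
    {"person", "chart"},
]
edges = cooccurrence_edges(images)
```

Writing `edges` out as a CSV of `source,target,weight` rows gives a file Gephi can import directly as a weighted undirected edge list.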
More ideas and what we learned
While scrolling through the results, the team started thinking about different applications for this experiment. Most importantly, we all agreed that this would be an easy way to enrich image metadata and make the images more accessible and discoverable. We also learned that similar services that extract text from images are quite effective and can provide powerful ways of surfacing information locked inside an image, such as mentions of gene names, diseases, and research methods.
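Once text has been extracted from an image, matching it against a watchlist of domain terms is straightforward. The sketch below uses an invented term list and sample text purely for illustration; a real system would use a curated vocabulary or an entity-recognition service:

```python
# Hypothetical watchlist of gene names; a real list would be curated.
GENE_TERMS = {"brca1", "tp53"}

def find_terms(extracted_text, terms):
    """Return which watchlist terms appear in OCR-extracted text.

    Tokenizes naively on whitespace, strips common punctuation, and
    matches case-insensitively.
    """
    words = {w.strip(".,;()").lower() for w in extracted_text.split()}
    return sorted(terms & words)

# Invented example of text that OCR might pull out of a figure.
text = "Expression of BRCA1 and TP53 was measured (Figure 2)."
hits = find_terms(text, GENE_TERMS)
```

Even this naive matching hints at how text locked inside figures could be fed into search indexes and made discoverable alongside the article text.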