Earlier this week, Microsoft released a fascinating app for iOS devices called Seeing AI. Seeing AI lets users take pictures of the world around them, and then uses the iPhone’s on-phone intelligence to describe what’s in the picture. It’s designed as an app for people with low vision, but even if that description doesn’t apply to you at the moment, using it is a provocative way to think about how these devices will mediate the world around us, especially if Apple continues to build out support for augmented reality.
Here’s Microsoft’s promotional video describing the app:
Using it is super-easy. When you open the app and give it permission to access your camera, you get a little overlay over the normal camera view:
To have Seeing AI describe the world to you, just pick the category you want it to use as a filter, then tap that little blue icon in the middle-left of the screen. (To have it read text, just tap “short text,” and it starts reading text right away. I tested it on Imre Kertész’s Kaddish for a Child Not Born, and it worked great, except for lines where my wife’s annotations blotted out the original text.)
Here are a couple of sample results, which are interestingly close-but-not-quite:
As you can see, Seeing AI says that this photo is “probably a dog that is lying down and looking at the camera,” which is not a terrible guess! Of course, there are two dogs, and neither is looking anywhere near the camera, but ¯\_(ツ)_/¯
Here’s another try, one that I think speculates too far once it recognizes a bookshelf:
Here, my phone thinks this is “probably a living room filled with furniture and a book shelf.” It’s right that we’re in the living room, but, given the dual-English-Ph.D. family, it’s probably righter to say it’s “filled with book shelves and a piece of furniture.” But a decent guess all the same.
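That recurring “probably a …” phrasing suggests the app hedges its captions based on the model’s confidence in them. Seeing AI’s actual pipeline isn’t public, so this is purely a speculative sketch of how such hedged phrasing might be assembled; the function name, thresholds, and hedge words are all my assumptions:

```python
# Illustrative sketch only: Seeing AI's real caption pipeline is not public.
# This guesses at how a description like "probably a dog that is lying down"
# might be built from a model's raw caption plus a confidence score.
# All names, thresholds, and hedge words here are assumptions.

def hedge_caption(caption: str, confidence: float) -> str:
    """Prefix a raw model caption with a hedge word chosen by confidence."""
    if confidence >= 0.9:
        prefix = ""            # high confidence: state it plainly
    elif confidence >= 0.5:
        prefix = "probably "   # moderate confidence: hedge, as Seeing AI seems to
    else:
        prefix = "possibly "   # low confidence: hedge more strongly
    return prefix + caption

print(hedge_caption("a dog that is lying down and looking at the camera", 0.62))
# -> probably a dog that is lying down and looking at the camera
```

If something like this is going on, the hedge is doing useful accessibility work: it tells a low-vision user how much to trust the description before acting on it.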
Have you tried Seeing AI? What are some of its limitations? How might it prove interesting in the classroom or in research? Please share in comments!