Here at Supersense, we are a team of innovators, researchers, designers, and social entrepreneurs who are always looking for ways to make life more accessible for all people, and we highly value openness and collaboration. That is why today is a very exciting day for us: we are launching our blog to share our lives, our work, and our perspective with you.
Since starting out at MIT in 2017, we have been exploring and experimenting with artificial intelligence and augmented reality to empower people in their everyday lives. These technologies have always fascinated us, but we think there are two ways to approach them. The first is to look for ways they can replace labor. The second is to work on how they can enrich lives. We never liked the idea of using AI to automate people out of their jobs. We want AI to complement and empower human beings so they can lead more productive and enjoyable lives.
We knew that others, such as the team behind Seeing AI, had started bringing these technologies to blind people. We also knew that there was a lot more to do. To explore the field and find the specific problems our product could solve, we interviewed more than 50 people with varying degrees of vision impairment, from around the US and the world. Along the way, we made connections that turned into long-lasting friendships, and we are still grateful for their guidance and support.
There are many ways AI can assist blind people in everyday life. Text scanning and reading and GPS navigation are highly developed areas in this field. But there is another, less traveled road: indoor navigation. When we started working on indoor navigation, we had to create a system that could understand physical space using a smartphone camera. Fortunately for us, this aligned precisely with Cagri’s AI research at MIT.
So, our first task was designing an application that locates objects in a given space to help people move freely and safely. This seemed like an attainable task for AI. Our first concept was based on recognizing objects in a particular area by scanning the environment with a mobile phone camera. We shared it on a few email lists, where the idea was well received, and soon after we released it as an Android application. Later, when other applications started copying it, we figured we were on the right track.
Finding a name for the app was another challenge. We had four alternatives: Indicate, Sniffer, Guide Dog, and Supersense. Supersense got the most votes, and we went with it. Once Supersense was released, it became a massive hit all around the world, especially in India and Latin America. It is still one of the top apps for the blind on the Google Play Store.
In the meantime, our users were asking for a text scanner. The market was overflowing with scanner apps, and we didn't want to add another one. Yet, as we talked to more people, we realized that the vast majority of our users struggled to use the existing apps because they couldn't point the camera accurately, and because of other UX problems. So, we decided to create a very basic text scanner that guides users on how to hold their camera.
This feature helped us reach a wider audience, and many users switched to our basic reader because of the added guidance. It showed us that usability, or as we like to call it, “super accessibility,” is an essential requirement for blind users to be able to use apps at all.
Therefore, we wanted Supersense to be so simple that anyone could use it without having to tap any buttons or struggle to figure out how to hold their phone. With this in mind, we designed the iOS version around simplifying the entire scanning process and developed a smart scan mode, which automatically classifies what kind of document the user is looking at. The user doesn't have to choose a reading mode and can simply start scanning. The app also guides them to point the camera at the right angle.
But this is just the first part of our journey. Today, we continue to develop our initial project, indoor navigation. We received a National Science Foundation grant in partnership with MIT, with support from the Perkins School for the Blind and the St. Louis Society for the Blind and Visually Impaired. This grant is aimed at developing novel AI architectures to solve various spatial tasks, from locating objects and identifying navigation paths to generating detailed descriptions of the surroundings. We believe this will be groundbreaking research in the field of spatial cognition, and we can’t wait to bring its results into Supersense.
We have always believed that hard work and a sense of community keep us moving forward! So we would like to thank you for your trust, support, and contributions. We hope to grow together with you.
The project is an MIT spinoff funded by the National Science Foundation and the US Department of Veterans Affairs. You can find the details of the NSF project here: https://www.sbir.gov/sbirsearch/detail/1585353.
Mediate, the parent company of Supersense, was founded in 2017. Previously, the team worked on facilitating virtual collaboration: https://www.cagrizaman.com/projects/2tc7rm72fcrhgycdnfk5fhb38ra37z
Cagri is a designer and computer scientist specializing in spatial cognition and artificial intelligence. He recently completed his PhD in design computation and AI with his dissertation "Spatial Experience in Humans and Machines."
Emre has an extensive background in social entrepreneurship. For years he built nonprofits that provide people with upward social mobility.
Supersense is one of the most downloaded and highest-rated apps for the blind on both the App Store and Google Play. We constantly add new features and enhance existing ones. Download and try it for free!
We’d love to have a conversation. If you are part of the blind and visually impaired community, would like to be part of our mission, or want to share your ideas and collaborate with us, get in touch.
We are based in Cambridge, Massachusetts.
Fill out the form below to reach us or email us at email@example.com