With the introduction of the Explore and Find features in its app, Supersense has offered insight into the future use of AI by the blind. Supersense's approach of delivering critical information about the surrounding environment in real time, while still in its early stages, provides a working start for applying AI in a meaningful way to achieve greater independence, accessibility, and mobility for blind people.
Previously, object identification was achieved via photo capture. The user had to take a photo of the object and send it to a cloud database for comparative analysis; the resulting description was then returned to the user for VoiceOver presentation. This method had several shortcomings: the user had to locate the desired object, capture it in a photo, maintain a connection to a cloud database, and endure a long wait for the description. While this form of object identification historically proved a valuable service to the blind, its inability to provide real-time feedback and the number of steps involved made it a time- and energy-consuming process. Still, it was a solid first step toward what is possible through the continued development of AI.
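To make the chain of steps concrete, here is a minimal sketch of that photo-based flow. All function names are invented for illustration, and the cloud lookup is stubbed with a local dictionary; in the real app this step was a network round trip, which is why the wait was long.

```python
def capture_photo(object_in_view: str) -> bytes:
    """Stand-in for the camera: 'photographs' whatever is in view."""
    return object_in_view.encode("utf-8")

def cloud_identify(photo: bytes) -> str:
    """Stand-in for the cloud database's comparative analysis (stubbed locally)."""
    known = {b"coffee mug": "a coffee mug", b"door": "a closed door"}
    return known.get(photo, "an unrecognized object")

def announce(description: str) -> str:
    """Stand-in for handing the text to VoiceOver."""
    return f"VoiceOver: This looks like {description}."

def identify(object_in_view: str) -> str:
    # Each stage below was a separate user-visible delay in the old flow:
    # framing the shot, uploading, waiting for analysis, and the announcement.
    return announce(cloud_identify(capture_photo(object_in_view)))
```

For example, `identify("coffee mug")` walks through all three stages before the user hears anything, which is exactly the latency the newer real-time features are designed to remove.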
Supersense's Explore and Find features highlight the beginnings of a new path. Achieving sufficiently fast and reliable real-time object identification feedback opens up possibilities for independent living and mobility for the blind. This glimpse into a future of real-time object identification helps bridge the functional mobility gap between sighted and blind people.
Currently, multiple developers, including major players like Google and Microsoft, are working on real-time AI object identification technology. But Supersense's continued commitment to direct communication with its clients is a rare attribute. It also offers a unique opportunity to uncover the actual needs of the blind rather than the needs others assume they have. Obviously, there is a long way to go in development, and extensive beta testing by blind users will be needed before a completely reliable system is achieved. Delivering better directional prompting within a workable timeline will probably require collaboration with fellow developers who are already at advanced stages of building interactive mobility applications for the blind.
So I imagine a future endpoint for these mobility applications: a blind individual moves around using real-time object identification data gathered through wearable glasses with a Bluetooth LiDAR 3D camera. A smartphone in the pocket analyzes the surroundings, adjusting and tracking navigation through haptic feedback, with control via Voice Control commands or smartwatch swipe and tap gestures.
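As a purely speculative illustration of that imagined loop, the sketch below turns a frame of object detections from the glasses into prioritized haptic cues on the phone. Every name, threshold, and pattern here is invented; it is only meant to show the shape of such a pipeline, not any real implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    bearing_deg: float   # negative = left of the walker, positive = right
    distance_m: float

def haptic_cue(d: Detection) -> str:
    """Map one detection to a simple haptic pattern (invented encoding)."""
    side = "left" if d.bearing_deg < 0 else "right"
    urgency = "double-pulse" if d.distance_m < 2.0 else "single-pulse"
    return f"{urgency} {side}: {d.label} at {d.distance_m:.0f} m"

def navigation_cues(frame: list[Detection]) -> list[str]:
    # Closest obstacles first, so the most urgent cue is delivered first.
    return [haptic_cue(d) for d in sorted(frame, key=lambda d: d.distance_m)]
```

In this sketch, a lamppost 1.4 m away on the walker's left would produce a double-pulse on the left side before a distant bench is announced at all, which is the kind of immediate, hands-free prioritization a photo-based flow could never offer.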
Most of these advancements are part of our world now; they may not be fully integrated yet, but they were unimaginable only a short time ago. Achieving this desirable future for the blind traveler requires a broad vision, continued development of AI, more powerful smartphones, and strong collaboration between developers and their blind clients.
Currently, a blind traveler's independent mobility requires the following three components: orientation and mobility skills, guide dog or white cane skills, and mobile smart technology powered by applications for the blind. The burning question is: will this technology evolve to a level of sophistication capable of delivering real-time information and analysis that makes it an integral part of this trio?
The short-term answer is probably no. But with the developments in AI, the long-term answer remains to be formulated. As a blind walker who desires a more fluid, natural walking experience, I will continue to provide input to developers, such as Supersense, who will hopefully remain willing to interact directly with their blind clients. I challenge all fellow blind technology users to establish collaborative relationships with the developers of their choice and contribute their real blind-world knowledge, guiding the development of applications for the blind forward from our "Blind Perspective"!
We love to talk to people from the visually impaired and blind community, people who want to help our mission, or people who just want to see if we can collaborate in any way.
We are based in Cambridge, Massachusetts.
Fill out the form below to reach us or email us at email@example.com