Real-Time Multimodal Assistive Technology for Navigation, Social Interaction, and Financial Management in Visually Impaired Individuals
Abstract
Living with vision loss presents significant day-to-day challenges in hazard recognition, person identification, and money management. Traditional aids such as white canes and guide dogs are valuable, but they offer limited support for many daily activities, especially in rich and varied environments. In this study, we present an AI-powered wearable device that combines face recognition, obstacle detection, and currency identification in a single unit. Using established algorithms, including the Grassmann model for face recognition, convolutional neural networks (CNNs) for currency, and YOLO for obstacles, the device delivers context-sensitive audio feedback in real time, enabling users to navigate and carry out everyday tasks independently and safely. Consolidating multiple functions into one compact device also removes the need to carry several separate tools, improving both usability and practicality. An adaptive learning algorithm allows detection accuracy to improve with continued use, and the cost-efficient design keeps the device affordable and readily deployable. Together, these capabilities give visually impaired people greater independence, safety, and confidence in holding in-person conversations, exploring new environments, and completing day-to-day tasks.
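To illustrate how the three recognition modules and the audio channel described above might fit together, the following minimal Python sketch wires stub detectors into a prioritized announcement loop. The stub functions, the Detection type, and the priority ordering are illustrative assumptions only; the abstract does not specify this structure or these names.

```python
"""Illustrative sketch of the multimodal assistive pipeline.

All detector stubs and the priority scheme below are assumptions
for illustration, not the authors' implementation.
"""

from dataclasses import dataclass


@dataclass
class Detection:
    kind: str          # "obstacle" | "face" | "currency"
    message: str       # spoken announcement for the user
    confidence: float  # detector confidence in [0, 1]


def detect_obstacles(frame) -> list[Detection]:
    """Placeholder for a YOLO-style obstacle detector."""
    return [Detection("obstacle", "chair, two meters ahead", 0.91)]


def recognize_faces(frame) -> list[Detection]:
    """Placeholder for a Grassmann-model face recognizer."""
    return [Detection("face", "Alice is in front of you", 0.87)]


def identify_currency(frame) -> list[Detection]:
    """Placeholder for a CNN-based banknote classifier."""
    return [Detection("currency", "twenty dollar note", 0.95)]


# Assumed ordering: safety-critical obstacle warnings are spoken first.
PRIORITY = {"obstacle": 0, "face": 1, "currency": 2}


def speak(message: str) -> None:
    """Stand-in for the device's text-to-speech output."""
    print(f"[audio] {message}")


def process_frame(frame, min_confidence: float = 0.8) -> None:
    """Run all three detectors on one camera frame and announce results."""
    detections = (
        detect_obstacles(frame)
        + recognize_faces(frame)
        + identify_currency(frame)
    )
    # Drop low-confidence results, then announce in priority order.
    confident = [d for d in detections if d.confidence >= min_confidence]
    for d in sorted(confident, key=lambda d: PRIORITY[d.kind]):
        speak(d.message)


if __name__ == "__main__":
    process_frame(frame=None)  # a real device would pass live camera frames
```

In a deployed system, the stubs would be replaced by trained models and the print call by on-device text-to-speech; the fixed priority dictionary is one simple way to ensure safety warnings always preempt social and financial announcements.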