Traditional aids for the visually impaired, such as canes and guide dogs, offer only short detection range, limited coverage, and slow information transfer. We designed a device through which the visually impaired can perceive nearby obstacles via surround sound, allowing them to navigate easily and safely. It employs a five-step data pipeline: simultaneous capture of images with two cameras, coordinated wireless transfer of the images, extraction of depth information from the images, transformation of the depth information into distance-direction vectors, and projection of the vectors into surround sound. We eliminate radial distortion from the images and generate a stereo depth map representing the relative distances of objects in the picture to the camera. The depth map, originally 1024 x 768 pixels, is scaled down to an 8 x 6 map of average depths. The downscaled depth map is then converted into audio sources in a virtual 3D world: the position of each sound source reflects the position of its corresponding area in the depth map, and hence its direction relative to the user, while the intensity of each sound source reflects the average depth of that area, i.e., the average distance between the user and the objects in it. The device we have developed mimics the function of the sense a visually impaired person lacks: sight.
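The downscaling and vector-mapping steps described above can be sketched in code. The following is a minimal illustration, not the project's actual implementation: it assumes the depth map is a NumPy array whose larger values mean farther objects, block-averages it from 1024 x 768 down to 8 x 6, and maps each cell to a hypothetical (azimuth, elevation, intensity) triple, with the fields of view and the nearer-is-louder intensity rule chosen purely for illustration.

```python
import numpy as np

def downscale_depth(depth, grid_w=8, grid_h=6):
    """Block-average a (H, W) depth map into a (grid_h, grid_w) map of mean depths."""
    h, w = depth.shape
    bh, bw = h // grid_h, w // grid_w
    # Crop to a multiple of the block size, split into blocks, and average each block.
    cropped = depth[:grid_h * bh, :grid_w * bw]
    return cropped.reshape(grid_h, bh, grid_w, bw).mean(axis=(1, 3))

def cells_to_sources(small, fov_h_deg=60.0, fov_v_deg=45.0, max_depth=255.0):
    """Map each grid cell to an (azimuth, elevation, intensity) sound source.

    Azimuth/elevation are derived from the cell's position in the image,
    given assumed horizontal/vertical camera fields of view; intensity
    grows as average depth shrinks, so nearer obstacles sound louder.
    """
    gh, gw = small.shape
    sources = []
    for r in range(gh):
        for c in range(gw):
            az = ((c + 0.5) / gw - 0.5) * fov_h_deg   # left/right angle in degrees
            el = (0.5 - (r + 0.5) / gh) * fov_v_deg   # up/down angle in degrees
            intensity = 1.0 - small[r, c] / max_depth  # nearer -> louder
            sources.append((az, el, intensity))
    return sources

# Example with a synthetic depth map in place of real stereo output.
depth = np.random.randint(0, 256, size=(768, 1024)).astype(float)
small = downscale_depth(depth)        # shape (6, 8)
sources = cells_to_sources(small)     # 48 sound sources
```

Each resulting triple can then be handed to a 3D audio engine, which positions the source at the given angles around the listener and plays it at the given volume.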
Oracle Academy: Award of $5,000 for outstanding project in the systems software category.
GoDaddy: $750 Joining Forces for the Community Award.
Second Award of $1,500.