COGNITIVE ADEQUACY OF ACCESSIBLE CONTENTS AND RESOURCES FOR BLIND PEOPLE IN THE APP ‘QR-UGR’. DESAM PROJECT
Semantic structures implied in the verbal utterances received by early and late blind subjects seem to be responsible for the activation of certain areas of the visual cortex in this population, as shown by some neuroimaging studies (Röder et al. 2002; Burton 2003; Bedny et al. 2011), whereas sighted subjects’ visual areas mostly activate when visual input is received (Tootell, Tsao & Vanduffel 2003; Barry 2005; Kosslyn & Smith 2006).
Within the framework of the educational innovation project DESAM: Development of contents for a universal accessibility low-cost cross-platform system for description, location and guidance in buildings of the University of Granada, led by the research group TRACCE (Translation and Accessibility), a prototype app for Android devices was developed (‘UGR QR’). It includes a series of accessible resources, such as audio description, audio-guidance and audio-location, suitable for tourist and museum environments.
In this paper, we analyze the viability of some of the app’s resources, as well as their semantic adequacy to the cognitive features of blind users, based on the abovementioned studies. We also draw on the cognitive operations enabling the second phase of the translation process involved in the production of audio described contents, i.e. the representation of visual knowledge by audio description professionals. This process enables the transfer of visual information into a verbal text comprehensible to visually deprived people. Supporting evidence can be found in Bartels & Zeki (1998, 2005, 2006), regarding the properties of visual image processing, and in Chica (2013), in relation to the access to visual knowledge and its verbal representation as it takes place in the accessible translation modality of audio description.