A PICTURE SPEAKS LOUDER THAN A THOUSAND WORDS: DECODING TEXTUAL MNEMONICS INTO VISUAL REPRESENTATIONS
York University (CANADA)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Mnemonics are widely used in education today. However, the strength of the association between textual mnemonics and their target information (textual and visual) is understudied. Hence, in the context of dual-coding theory, we aimed to answer the research question, “Is the association between mnemonics and their target information (both of which are textual) stronger than the association between the mnemonics and the visual representation of the target information?” Addressing this question will help reveal the extent to which visual mnemonics are essential in promoting long-term learning.
Consequently, we designed a web-based mnemonics-decoding study using a 2 × 2 × 4 experimental design. The first term in the design is a within-group variable, which refers to the two target modalities into which the textual mnemonic cues are decoded (text and image). The second term is a between-group variable, which refers to the order in which the decoding was done (decoding-into-text first or decoding-into-image first). The third term is a within-group variable, which refers to the four repeated trials completed by each participant. Twenty-two students were randomly assigned to the two decoding groups. Each participant was remunerated with course bonus points or $10 in appreciation of their time. In the experiment, two sets of textual mnemonic cues were decoded into their respective targets in the two mutually exclusive orders. The first set (Small Man Ate Raw Toast) encoded the five attributes of goal-setting (Specific, Measurable, Attainable, Realistic, Timebound), while the second set (Old Cat Eats All Nuts) encoded the Big-Five personality traits (Openness, Conscientiousness, Extroversion, Agreeableness, Neuroticism). These targets were chosen because they were familiar and participants could relate to them easily.
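The abstract leaves the exact trial structure open; one plausible reading, sketched below with purely illustrative names (not the study's code), is that on every trial each participant decodes into both modalities, in the fixed order assigned to their group.

```python
# Hypothetical sketch of the 2 (modality, within) x 2 (group, between)
# x 4 (trial, within) schedule. Names and structure are illustrative
# assumptions, not taken from the study.
GROUPS = ("text_first", "image_first")
TRIALS = (1, 2, 3, 4)

def schedule(group):
    """Return the ordered (trial, modality) pairs for one participant."""
    first, second = (("text", "image") if group == "text_first"
                     else ("image", "text"))
    # Each participant decodes into both modalities on every trial,
    # in the order fixed by their between-group assignment.
    return [(t, m) for t in TRIALS for m in (first, second)]

for g in GROUPS:
    print(g, schedule(g))
```

Under this reading, each participant contributes 8 observations (4 trials × 2 modalities), and only the within-trial ordering differs between the groups.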
The results showed that the overall means of decoding accuracy did not differ significantly between the two groups, as confirmed by an analysis of variance (F(1, 20) = 0.000, ns). Likewise, the main effect of modality was not significant (F(1, 20) = 2.441, p > .05), indicating no overall difference between the two modalities. Accuracy improved significantly across trials (F(3, 60) = 14.536, p < .0001), showing a learning effect over repeated practice. Neither the Modality × Group interaction (F(1, 20) = 0.414, ns) nor the Trial × Group interaction (F(3, 60) = 1.162, p > .05) was significant. However, the Modality × Trial interaction was significant (F(3, 60) = 3.143, p < .05), revealing that the relative effect of modality changed across trials. While there was a significant difference between the two modalities on the first trial (F(1, 21) = 4.408, p < .05), with decoding of the mnemonics into target images outperforming decoding into target words, there was none on the later trials, owing to the learning effect.
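To make the F-statistics above concrete, here is a minimal sketch of how the Trial effect could be tested with a one-way repeated-measures ANOVA on simulated accuracy data shaped like the study's (22 subjects × 4 trials). This is not the study's analysis: it omits the between-group factor, so the error degrees of freedom are 63 rather than the paper's 60, and the data are fabricated to show a learning effect.

```python
import numpy as np

# Simulated data only: 22 subjects x 4 trials, with accuracy
# rising across trials to mimic a learning effect.
rng = np.random.default_rng(42)
n_subj, n_trial = 22, 4
trial_means = np.array([0.60, 0.72, 0.80, 0.85])
acc = np.clip(trial_means + rng.normal(0, 0.08, size=(n_subj, n_trial)), 0, 1)

# Partition the total sum of squares: Trial, Subject, residual error.
grand = acc.mean()
ss_trial = n_subj * ((acc.mean(axis=0) - grand) ** 2).sum()
ss_subj = n_trial * ((acc.mean(axis=1) - grand) ** 2).sum()
ss_total = ((acc - grand) ** 2).sum()
ss_error = ss_total - ss_trial - ss_subj

df_trial = n_trial - 1                   # 3
df_error = (n_subj - 1) * (n_trial - 1)  # 63 here; 60 in the paper's mixed design
F = (ss_trial / df_trial) / (ss_error / df_error)
print(f"F({df_trial}, {df_error}) = {F:.2f}")
```

The same sum-of-squares partitioning, extended with the between-group and modality factors, underlies the mixed-design F-ratios reported above.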
These results indicate that textual mnemonics are more strongly associated with the visual representations of the target information than with the textual targets themselves. This demonstrates the picture-superiority effect and suggests that visual representations should be integrated into mnemonics-based learning alongside textual mnemonics to foster long-term memory. To the best of our knowledge, this is the first study to show that learners are more likely to decode textual mnemonics into target images than into target words, as prior studies mainly focused on the recall of textual information vs. visual representations.
Keywords:
Memory, education, technology, learning, decoding.