AIM: ARTIFICIAL INTELLIGENCE FOR MULTIMEDIA GENERATION
HTW Dresden (GERMANY)
About this paper:
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
Media literacy and digital literacy are important skills to make sense of and think critically about media in the digital age. With the rise of generative Artificial Intelligence (AI), teaching not only media and digital literacy but also AI literacy is crucial for navigating the digital landscape.
This work presents a student workshop called “Artificial Intelligence for Multimedia Generation” (AIM). AIM provides a set of Jupyter Notebooks and additional resources. AIM aims at
1) explaining high-level concepts of AI models with practical examples,
2) teaching prompt engineering techniques in a playful and experimental manner, and
3) eliciting discussions on the opportunities and risks of generative AI, e.g. regarding (online) identity theft.
The resources provided for multimedia generation are split into separate Jupyter Notebooks; they are modular, but form a coherent workshop when used in their entirety. Each Notebook deals with a different input and output modality of generative AI models. The covered modalities are
1) text,
2) image,
3) music,
4) voice, and
5) video.
The resources for each modality consist of:
1) a primer on how the model functions on a high level,
2) a Notebook including the code and data to run an AI model that works with the respective modality,
3) different prompt engineering techniques to get the model to generate the expected output (illustrated in the sketch after this list), and
4) a set of questions to start a discussion on the potential opportunities and risks of the mass availability of such models.
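A minimal sketch of such a Notebook cell for the text modality is shown below; the library (Hugging Face transformers), the model (GPT-2), the prompts, and the parameters are illustrative assumptions rather than the exact contents of the AIM Notebooks.

```python
# Illustrative sketch of a text-modality Notebook cell (the actual AIM
# Notebooks may use different models and libraries).
# Assumes `transformers` and `torch` are installed, e.g. via:
#   pip install transformers torch
from transformers import pipeline

# Load a small, freely available text-generation model (placeholder choice).
generator = pipeline("text-generation", model="gpt2")

# A simple prompt engineering experiment: compare a bare prompt with a
# prompt that adds a role and format instructions.
bare_prompt = "Write a superhero name."
engineered_prompt = (
    "You are a comic book author. Invent one superhero name and describe "
    "their superpower in a single sentence.\nName:"
)

for prompt in (bare_prompt, engineered_prompt):
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"], "\n---")
```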
In some cases, the output of one model can be used as an input for another model, or even fed back into the same model. A model that generates images can, for example, receive a textual input and, for refinement, an additional image input, producing an image that aligns more closely with the user’s intent.
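The following sketch illustrates such a chain for the image modality; it assumes the Hugging Face diffusers library and a Stable Diffusion checkpoint, and the model identifier, prompts, and parameters are placeholder choices, not necessarily those used in AIM.

```python
# Sketch of chaining model outputs: text-to-image, then image-to-image
# refinement (illustrative; the AIM Notebooks may use other models).
# Assumes `diffusers`, `transformers`, and `torch` are installed; a GPU is
# strongly recommended for reasonable runtimes.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Text-to-image: generate a first draft from a textual prompt.
text2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)
draft = text2img("a superhero in a red cape, comic book style").images[0]

# 2) Image-to-image: feed the draft back in together with a refined prompt,
#    so the output aligns more closely with the user's intent.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)
refined = img2img(
    prompt="the same superhero at night, dramatic lighting, comic book style",
    image=draft,
    strength=0.6,  # how strongly the input image may be altered
).images[0]

refined.save("superhero_refined.png")
```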
AIM primarily targets students, but the modules can be adapted to different age groups and, potentially, different subtopics. If the target group is, for example, a middle school class (ages 11 to 14), the entire workshop could be held with a specific theme in mind, such as “Create your own superhero story!”, to engage the students further. Each model can then be used to create a piece of media in the students' superhero universe:
1) the superhero’s background story and superpower can be generated with the text model,
2) the appearance of the superhero can be visualized by the image model,
3) the superhero’s theme music can be generated with the music model,
4) the vocal narration of the background story can be told in the participant’s own voice by cloning the participant’s voice and reading the background story via a text-to-speech (TTS) model, and
5) finally, an animated video can be constructed that uses multiple generated images of the superhero as individual frames while playing the generated theme music and the vocal narration in the background (see the sketch after this list).
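The assembly in step 5) could, for instance, be realized as in the following sketch, assuming the generated frames, theme music, and narration already exist as files and that moviepy (version 1.x) is installed; all file names and timings are illustrative.

```python
# Sketch of assembling the final video from generated frames and audio
# (illustrative; assumes moviepy 1.x and that the files below exist).
from moviepy.editor import AudioFileClip, CompositeAudioClip, ImageSequenceClip

frames = [
    "superhero_frame_01.png",
    "superhero_frame_02.png",
    "superhero_frame_03.png",
]  # generated images used as individual frames
narration = AudioFileClip("narration_cloned_voice.mp3")   # TTS output
theme = AudioFileClip("theme_music.mp3").volumex(0.3)     # quieter background

# Spread the frames evenly over the duration of the narration.
clip = ImageSequenceClip(frames, fps=len(frames) / narration.duration)
clip = clip.set_audio(
    CompositeAudioClip([theme.set_duration(narration.duration), narration])
)
clip.write_videofile("superhero_story.mp4", fps=24)
```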
The participants of the workshop may take home their generated media to show them off to friends and family, and they receive access to the Notebooks to experiment further in their own time.
Since AIM builds upon Jupyter Notebooks, the workshop can be deployed on local servers or via cloud hosting services. The free tier of Google Colab is recommended, as the only requirement for usage is a free Google account per participant. The workshop materials are accessible via the following GitHub repository: https://github.com/plc-dev/AIM/tree/main.
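On Google Colab, a typical first cell of such a Notebook might install the dependencies and check for a GPU runtime, as in the following sketch; the package list is illustrative, and each AIM Notebook declares its own requirements.

```python
# Illustrative first cell of a Notebook on the free tier of Google Colab.
# The `!pip` syntax is specific to Jupyter/IPython environments.
!pip install -q transformers diffusers accelerate

import torch

# Colab's free tier usually offers a GPU runtime, which speeds up the image,
# music, voice, and video models considerably.
print("GPU available:", torch.cuda.is_available())
```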
Keywords:
Experiential Learning, Open Educational Resource, Artificial Intelligence, AI Literacy, Media Literacy, Prompt Engineering, Jupyter Notebook, e-learning.