3:20pm - 3:42pm HOW CAN AI SUPPORT THE CREATION OF NOVEL IDEAS IN PRODUCT DESIGN
Victoria Catherine Hamilton, Ross Brisco, Hilary Grierson
University of Strathclyde, United Kingdom
The rise of Artificial Intelligence (AI) provides an exciting opportunity in many fields and aspects of life. In the field of design, one area to be explored is how AI could be used to support a designer to develop more novel ideas at the idea generation stage of the product development journey. In this paper, we explore how AI can be utilised to support a traditional 6-3-5 creative design methods workshop, and whether it increases the novelty of the concepts. In our workshop, students were tasked with using the 6-3-5 method to generate ideas for a product which could make life easier for an arthritis sufferer undertaking tasks in the kitchen. Some of the students were advised they could use ChatGPT to support idea creation; some were advised they could use Google; and the remaining students were advised they could not use any additional support for idea generation, and were treated as the control group. A feedback survey was distributed among the participants to gather their thoughts on whether the use of AI/Google had assisted them in applying the design method to generate more novel and creative concepts. Further analysis, focused on novelty, was then conducted on the outputs of the 6-3-5 to assess whether the novelty of the concepts in the AI/Google groups was greater than that of the control group. The results of the workshop indicated that the ideas generated with AI support were more novel than those without, and that students utilising AI became more relaxed in their approach to idea generation, relying on AI before fully exhausting their own ideas. Interestingly, the helpfulness of AI was less appreciated by the more novice designers than by the more experienced designers. In this paper we discuss how AI could be used by educators to support the teaching and application of more creative design methods such as 6-3-5.
3:42pm - 4:04pm EXPLORING THE SYNERGY OF AI GENERATIVE FILL IN PHOTOSHOP AND THE CREATIVE DESIGN PROCESS UTILISING INFORMAL LEARNING
Abigail Batley, Richard Glithro
Bournemouth University, United Kingdom
This paper examines the emerging use of AI generative fill techniques in Adobe Photoshop, coupled with informal learning situations, to enhance the creation of product posters for student design exhibitions. By leveraging the capabilities of AI, designers can streamline their creative workflows, allowing for more efficient and innovative design outcomes. The aim of this paper is to examine the benefits of AI generative fill in comparison to traditional manual methods for new graduates exhibiting at their first design show, and to gauge the influence of informal learning settings in supporting designers' adoption of AI-driven design techniques. The findings of this research demonstrate a paradigm shift in the creative process, as AI generative fill in Photoshop emerges as a powerful tool for designers seeking efficiency, inspiration, and novel artistic directions. The findings also show how informal learning settings have played a vital role in nurturing new designers' adoption of AI-driven design techniques.
4:04pm - 4:26pm DREAMWORLDS: A CASE STUDY PRESENTING THE POTENTIAL OF TEXT-TO-IMAGE AI IN PRODUCT DESIGN EDUCATION
Emily Elizabeth Brook, Christopher Hanley, Ian Campbell Cole, Suzannah Hayes, Craig Mutch
Nottingham Trent University, United Kingdom
This paper presents a two-part case study on the "Dreamworlds" project conducted with first-year BA Product Design students. It explores the integration of Generative AI tools within a five-week design project, focusing on its role in speculative world-building and subsequent toy design. Part One involved collaborative exploration and creation of speculative worlds in teams of 3–4 students over two weeks. Leveraging Text-to-Image AI, students produced a 5-minute video presenting their visions, showcasing AI-generated visuals that enhanced artistic direction. Part Two shifted focus to designing toys for children aged 4–5, using the speculative worlds from Part One as inspiration. Unlike Part One, Part Two was carried out individually, emphasizing consideration of materials, safety, and cultural sensitivity. This case study contributes to the discourse on integrating AI in design education, offering insights into its roles in world-building and practical design. The "Dreamworlds" project serves as a practical example of AI application in both speculative and practical design education.
4:26pm - 4:48pm A COMPARISON OF ARTIFICIAL INTELLIGENCE IMAGE GENERATION TOOLS IN PRODUCT DESIGN
Sam Dhami, Ross Brisco
Department of Design, Manufacturing and Engineering Management, University of Strathclyde, United Kingdom
Artificial intelligence (AI) image generators have seen a significant increase in sophistication and public accessibility in recent years, and are now capable of creating photorealistic and complex images from a line of text. A potential application for these image generators is in the concept generation phase of product design projects. Successful implementation of AI text-to-image generators in concept generation could prove to be a cost- and time-saving application for companies and designers. Therefore, the aim of this paper is to investigate the integration of AI into product design and education. A literature review was conducted to gain a general understanding of what AI is and how AI image generators function. An experiment was carried out which used three different image generators: Stable Diffusion, DALL·E 2, and Midjourney. Three images of dining tables were produced by each AI text-to-image generator and inserted into a weighting and rating matrix to be rated as concepts, alongside three real dining tables from IKEA. Within the matrix were four design specifications to rate the concepts against: aesthetics, performance, size, and safety. The matrix was sent out to product design students and graduates to be completed anonymously. The highest scoring concept was one from IKEA, followed by one generated by DALL·E 2. Based on the results of the experiment, it was concluded that AI image generators are not yet a viable alternative for concept generation in product design, but could be a useful tool to spark new ideas for designers during the concept generation phase.
4:48pm - 5:10pm VISUALISING SPECULATIVE MATERIALS: USING TEXT-TO-IMAGE PROMPTING TO ELABORATE LIVINGNESS AS A DESIGNED MATERIAL QUALITY
Ali Cankat Alan1, Owain Pedgley2
1Istanbul Technical University, Turkiye; 2Middle East Technical University, Turkiye
The democratisation of generative AI (GenAI) has led to the emergence of novel paradigms in design. The varying capabilities of GenAI tools, spanning single and multiple modalities, have allowed designers to integrate them efficiently into their workflows. GenAI is used at various points in the design process, such as research, ideation, visualisation, and reporting. Public GenAI tools are typically driven by prompts: the instructional input and descriptive data from which a GenAI model starts working. Whilst prompts can be in different mediums such as text, images, and video, the most common on the web is text, which is processed by large language models working on natural language. This ability to tell a GenAI what is needed is called prompting or prompt engineering: a developable skill of carefully crafting sentences and descriptive keywords to return a high-quality result. Designers are already prompting, and in an environment where new GenAI models are being developed and made public every day, design students at various levels of their education have started using the power of GenAI for their daily design tasks.
One area of great potential is text-to-image modelling, which opens the opportunity for GenAI to act as a fast visualisation tool. This paper presents an implementation of one of the most popular text-to-image GenAI models, Midjourney, as a part of an academic research through design (RTD) process. Midjourney is used as a visualisation tool for outcomes in a design fiction biodesign workshop focused on investigating new future cohabitation possibilities with living materials. In the workshop, design fiction was authored and initially communicated using narratives. Midjourney was then employed as a means to transform verbal storytelling into a visual medium that could more readily provoke design discussion based around feedback, plausibility and design iteration. The narratives were recorded with a voice recorder, analysed through CAQDAS, and converted into GenAI prompts by carefully selecting descriptive words and phrases tied to each participant’s storyworld.
The success of the Midjourney implementation lies in its ability to bridge between the abstractness of fiction and the tangibility of material, as well as to visually contextualise design proposals in a future setting. Using GenAI, it was possible to quickly generate visual interpretations of living materials as boundary objects to provoke discussions on the merits and possibilities of livingness as a material quality. The results highlighted two critical takeaways: 1) in terms of design fiction, text-to-image GenAI models yield unexplored potentials for visualising narrative-based design outcomes and diegeses in a broader sense; and 2) in terms of materials for design practice and education, such models can help ease the communication of performative and experiential qualities of newly developed materials or new material proposals amongst key stakeholders.