Develop and deliver

Overview

This page mostly showcases work where I'm exploring ideas and visualising complex system processes. It also includes samples where I used generative AI as part of my process, such as storyboarding.

Sketching: 2-day design sprint on how to transition into online course delivery

At UX Playground, we did a design sprint to explore approaches for transitioning an in-person UX course to an online format. During the first session, I enjoyed sketching concepts focused on community interaction and how to promote the new course, such as design hackathons and community challenges. In a later group voting session, a common theme emerged around offering a variety of activities that foster interaction within the community like collaborative projects and learning sessions.

Sketches on how to grow and engage the UX Playground community

My sketches presented in Miro for voting. I explored ways to engage the UX Playground community while also promoting the new course offering, such as community challenges and hackathons.

Storyboarding: cake baking as an example to explain the UX process

For the development of UX Playground's UX course, the goal was to create content and activities that teach UX design to non-design professionals. For one of the introduction sessions, I developed a storyboard using the example of a baker running a cake shop. I chose this example because it had previously helped me learn about the UX process when I was a front-end developer.

Storyboard for a course activity

A storyboard I created in Adobe XD for a Project UX course activity. The storyboard describes how a baker runs her own cake shop, an example I feel is relatable for non-designers learning about the UX process.

Storyboard used in a session

A sprint session of the UX Design course in Miro, where the storyboard was used for an activity.

Storyboarding: using generative AI (Midjourney) to create images for storytelling

For an article series on multisensory design, I utilised generative AI (Midjourney) to create storyboard images. I learned that matching characteristics like style, characters, and scenes to our vision required a very detailed and structured prompt. Through experimentation, I found prompts needed to be specific so the AI could better interpret the output I wanted. Personally, it felt like I was the product owner and the AI the designer, trying its best to understand my instructions and create results that matched them.

Storyboarding scenario

A scenario describing how a design concept may work for a multisensory article focused on soundscapes in urban environments.

Example prompt for step 3 of the scenario

An example prompt for step 3 of the scenario. I structured my prompt using a framework I learnt from Manon Dave during his presentation at Mindvalley AI Summit 2024 (https://www.mindvalley.com/summit/ai). This helped me clarify what output I wanted Midjourney to create whilst also considering whether my prompt matched the scenario step.

My process using Midjourney

A flow diagram showing my process for creating images with Midjourney. Depending on the results, it can take many tries before I get an image that matches what I was aiming for, or a set of images with a consistent style.

Generating images with Midjourney

Example images generated by Midjourney. I review each result and iterate depending on whether it matches my prompt or needs adjustment, e.g. zooming out to capture more of the background, using the editor's inpainting feature to regenerate specific areas of the image, or creating additional variations of one result.