How to get started with genAI prototyping

Authors

Juliana Gómez, Sr Designer
Nico Almanzar, Sr Designer
Lennert Decuypere, AI Lead & Innovation Consultant


In case you’ve just perfected the craft of prompt writing, OpenAI has news for you. Dall-E, their alternative to Midjourney and Adobe’s Firefly, will no longer need artificially constructed prompts. Instead, “natural” text descriptions will get the job done for you, thanks to a symbiosis between ChatGPT and Dall-E 3.

With that, generative AI plays a more vital part than ever in any innovator’s or business designer’s process.

Farewell, 'prompt engineers'?

The continued democratization of generative AI tools has revolutionized the way we ideate, prototype, and test new concepts, but that doesn’t necessarily make them intuitive to use. Similarly, picking the right tool for your specific task can be a challenge amid this mushrooming of new tools.

In this blog, we’ll share tips and tricks from our design teams and their experience with text-to-image prompts. We’ll guide you through all you need to know to get started and unlock the potential of genAI when making prototypes.

Midjourney vs Dall-E vs Firefly

1. Firefly for products, Midjourney for people, Photoshop Beta for editing

So where should you start?

If anything, OpenAI’s latest move has won a battle, but the war is not over yet. Only time will tell which tool(s) will be best for prototyping. In the meantime, our designers swear by Firefly for product shots. It was released by Adobe with (product) designers in mind and trained on a publicly available dataset. It is great for product visuals and packaging that is not too “crazy”, and has won over our hearts, as long as it doesn’t need to portray humans (for now).

But as long as a hand has four fingers and a thumb, we equally swear by Midjourney. This tool, hosted on Discord, was built with art creation in mind. Hence, it is great for illustrations, concept art, and realistic pictures of people. It can be more creative as well – inventive even, to some extent.

But regardless of your starting point, Photoshop Beta is our go-to to finish off prototypes (e.g. putting the McDonald’s logo on the seaweed packaging below), or to create a bunch of generative fills in order to upgrade your prototype to print quality.

Fictitious product packaging, developed with AI by Board of Innovation

2. Be concise

Dall-E 3 is yet to be released to the general public, but for now most tools perform best when your prompts are concise. For Midjourney, our current rule of thumb is: anything written beyond roughly 40 words is likely to be ignored, beyond 60 words highly likely, and beyond 80 words almost certain to be cut off.
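By way of illustration, here is a (hypothetical) prompt of our own that stays comfortably within that limit:

/imagine Minimalist shampoo bottle standing on a marble bathroom shelf, soft morning light, pastel colors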

3. What you see is what you get

Being concise shouldn’t keep you from using descriptive language though. Use adjectives and adverbs to add nuance and emotion to your output. You can describe the color, shape, size, texture, and other characteristics of the object or scene that you want to create.
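To illustrate the difference, compare a bare prompt with a more descriptive one (both are our own, hypothetical examples):

/imagine a toothpaste tube

/imagine a matte, pastel-green toothpaste tube with embossed silver lettering, standing upright on white ceramic, soft studio lighting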

Just keep in mind that when prompting, the algorithm will break down your prompt in the most non-contextual way possible. Dall-E 3 has yet to prove itself here. When asked for “a top-down view of a bowl of baby food”, for example, the AI may well return the Anne Geddes-like visuals below, trying to make sense of how each of the elements from your prompt may come together. Similarly, good luck prompting “people having drinks on the house”.

Generative AI prototype
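When this happens, rephrasing the ambiguous term usually helps. A hypothetical rewrite of the prompt above:

/imagine Top-down photo of a small bowl filled with puréed food for infants, on a wooden table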

The opposite can be the case as well. Use a double colon (::) in Midjourney to have terms interpreted separately. One use case where this comes in handy is when you want to increase or decrease the importance of certain terms in your prompt.

A ‘space ship’ will take you to the world of Star Wars and Star Trek, while ‘space:: ship’ (mind the space) will consider each word separately, generating a ship sailing through space.
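Written out as commands, the two prompts look like this; in our experience, appending a number after the double colon (e.g. space::2 ship) also gives that part of the prompt extra weight:

/imagine space ship

/imagine space:: ship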

4. Use creative tags (in Midjourney)

While it can look a bit overwhelming, adding a couple of tags will make a big difference. We have compiled a simple cheat sheet that will help you understand how to use the ones we find most relevant for Midjourney:

  • Aspect ratio – --ar <width:height>: Use it to give your image a specific shape. 1:1 is a square and is the default ratio in all the generators, whilst 16:9 is a wide horizontal rectangle.
  • Chaos – --chaos <number 0-100>: Use it to change how varied and unpredictable your results are, where 0 is the default value, giving very similar results, and 100 produces completely different images.
  • Tile – --tile: Use it when you want to create a seamless pattern that you can repeat across a surface.
  • Negative – --no <terms you don’t want to appear in your results>: Use it to avoid typography, branding, or any specific elements that keep creeping in from previous prompts.

 

(Don’t forget to always leave a space between the tag and the value you input, e.g. --ar 9:16. You’ll find a combined example below.)
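Putting it all together, here is an illustrative prompt of our own making (the seaweed pattern is hypothetical):

/imagine Seamless pattern of seaweed leaves, flat illustration style --tile --ar 1:1 --chaos 20 --no text, logos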

5. Generate results in bulk (in Midjourney)

Depending on what tool you’re using, you can expect the AI to return one to four visual outputs per prompt.

Still, you can write prompts in Midjourney that allow you to quickly generate variations of a prompt in one single command. List the options you’d like to see inside curly brackets, separated by commas, like the example below.

/imagine Frontal view of a {white, grey, blue} pill in a {paper, plastic, aluminum} pack.

Midjourney will then combine the above items in every way possible, running 3 colors x 3 materials = 9 separate jobs, each returning a grid of 4 images, for 36 results in total.
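For instance, the first two of those nine expanded prompts would read:

/imagine Frontal view of a white pill in a paper pack.

/imagine Frontal view of a white pill in a plastic pack.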

Feeling empowered by now?

So farewell, ‘prompt engineers’?

It’s safe to say that prompt engineering, as most of us have known it over the last year, is about to change. But as long as there are good and bad designs, there will always be a need for (prompt) designers, in whatever form, pushing the limits of what can be achieved.

They can assist you in handling a wider range of possible user inputs and outputs, a fairly new challenge due to the open-ended nature of AI. They can also help you tackle bias-related challenges in AI, making mock-ups more inclusive. And last but not least, they can help you stand out by actively prompting against the soon-to-be (nearly boring) predictability of genAI prototyping tools.

Join us at our virtual Autonomous Innovation Summit to discover how AI is changing the way we innovate, operate and design, and how businesses can transform to thrive in this autonomous world.

VIRTUAL SUMMIT: Autonomous Innovation, June 5 & 6