Finding microbiology images that visually describe what happens at the cellular level is crucial for scientific education and communication. Many biological processes occur at the molecular level, and visualizing them through imagery can make complex scientific concepts, such as precision medicine, more accessible to students, researchers and the general public. It facilitates effective communication of scientific findings and fosters a better understanding of the molecular world, a goal many government health agencies share.
But finding microbiological images can be a challenge. Stock image agencies offer only a limited repository of suitably specific images for purchase, and agencies such as the National Institutes of Health can provide images from their microscopy labs. However, these options can be costly. With inflation dramatically shrinking public health budgets over the past decade, the potential of AI-generated images looks very promising. Using generative AI images in conjunction with traditional digital design tools can revolutionize the way public health organizations educate and communicate with their audiences about how microbiology and precision medicine work.
Text-to-image deep learning models like Midjourney and OpenAI’s DALL-E 2 are promising new tools for image generation in an educational healthcare setting. First introduced by OpenAI in April 2022, DALL-E 2 is an AI tool that has gained popularity for generating novel photorealistic images or artwork from textual input. DALL-E 2 has been trained on hundreds of millions of existing text-image pairs from the internet, and its generative capabilities are powerful. Midjourney’s AI-powered image generation platform is another example of generative AI that converts natural language prompts into images, enabling the quick creation of high-quality visuals from text descriptions. These outputs can help public health agencies create visual aids and educational materials and enhance presentations.
The use of these tools always begins in the same way: with a prompt. Prompts are the instructions or examples you enter into a generative AI platform to produce a response. They can take different forms: a single sentence, a paragraph, or even a set of multiple examples. In this case, the aim is to craft a prompt that creates images for communicating scientific concepts, like precision medicine.
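To make the idea concrete, here is a minimal sketch of what prompting a text-to-image model can look like programmatically. It assumes the OpenAI Python SDK and a DALL-E model; the prompt text is a hypothetical microbiology description, not one of the prompts used by the team. (Midjourney, by contrast, is typically prompted interactively rather than through a code-level API.)

```python
# A minimal sketch, not the team's actual workflow: prompting a DALL-E model
# through the OpenAI Python SDK. Assumes the OPENAI_API_KEY environment
# variable is set; the prompt text is a hypothetical example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Photorealistic 3D render of an immune cell attacking a cancer cell, "
    "soft lighting, shallow depth of field, scientifically accurate detail"
)

response = client.images.generate(
    model="dall-e-2",   # the text-to-image model discussed above
    prompt=prompt,
    n=1,                # number of image variations to return
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```

In practice, the descriptive detail in the prompt (subject, style, lighting, level of scientific detail) is what most directly shapes the result, which is why prompt crafting gets so much attention in the workflow described here.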
Sometimes even the results created by medical illustrators don’t quite resemble scientifically correct images. Both human illustrators and AI usually need a few tries to get it right, as seen in the early experimentation examples below, created by the PS creative team at NCI.
The illustration above demonstrates the output of three different prompts, rendering generative AI art of a microbiology image at different stages of the process. The vision? To create a scientifically accurate representation. A deeper look into the anatomy of a prompt reveals how a successful image is created.
Is public health entering the era of deep machine learning? AI-generated imagery is an opportunity to narrow educational gaps in general public health and to deliver affordable microbiology imagery to audiences around the world. Generative image models are deep learning algorithms trained on large datasets, which enables them to generate synthetic images that closely resemble what happens at the molecular and cellular levels in our bodies. Use cases for the output range from education to image processing in clinical settings, such as medical diagnostics or accelerated research and development.
Historically, stock image libraries have been an important resource for education initiatives in public health. Even though generative AI is in its early stages, changes are already disrupting this space. Getty Images recently launched a generative AI art tool trained on its content library, and other companies in the field are doing the same. Adobe released its Firefly model, trained on its licensed images, across its Creative Cloud applications. Using the AI capabilities directly within the stock photo sites requires an additional subscription. While in beta, these tools did not compare well to Midjourney on quality and thus were not used in the creation of the images illustrated above. The Firefly tools in the latest releases of Photoshop and Illustrator are used mainly to extend the backgrounds of existing photos or illustrations so they accommodate the multiple crop sizes required by the image creation team’s content management system.
Now is the time to revolutionize the way public health agencies educate their audiences. Using AI to create images for communication and education has the potential to transform how these agencies reach and inform citizens. AI gives us the opportunity to set the scene and render it in a way that makes viewers feel as though they have a front-row seat to the immune system fighting cancer, to cells making proteins or to genetic mutations happening within DNA. It is the vista we create that pulls the viewer into those microscopic worlds.
Charles Rose
Creative Director