This is a series of images I made with Midjourney as editorial art for an article I wrote for the Michigan Municipal League. When I’m making editorial art for an article, I try to keep the style relatively unified to create a sense of continuity throughout the page. Sometimes I play around with extremely specific styles, like the socialist realism in Kristin Caffray’s article on the just transition.
The first set of images is a series of contemplative robots in front of cityscapes. I wanted to illustrate something playful to communicate how technological innovation can be used to make cities better places (in this case, industrialized construction). The style reference here is illustration by Daniel Clowes, which often gets me a good color palette and shading for illustrations, as long as the prompts are relatively straightforward.
The next few images used a few different style references, but I wanted something that was less cartoonish and more abstract.
I noticed that when I used “tile” I got a completely different style, which seems to be a training problem. In parsing my prompt, Midjourney probably threw out my style references because it didn’t have any it could fit into a tiled image. A more sophisticated AI would be able to translate this properly, but instead I get this sort of vector stock-graphics style for the following two images:
The last two, then, are completely different styles I just experimented with for fun. I wanted to make more whimsical representations of the idea of industrialized construction. The train one was particularly difficult. As I’ve mentioned before, it can be challenging to get generative AI image models like Midjourney or DALL-E to render images that include fundamentally illogical or nonsensical components: in this case, a freight train carrying houses, or an assembly line on which houses are built.
It’s interesting to see how it did it, too. The locomotive has the same flower boxes on its side and front that you can spot on the houses, which look like quaint little cottages. Whatever series of filters, diffusions, and gyrations happens inside the artificial intelligence brain, it looks at each “car” of the train and doesn’t differentiate a flower box on the window of a house from the edge of a freight car. There are other examples of weirdness, like the overhead pantograph, even though this appears to be a diesel or perhaps even a steam locomotive.
Or the fact that the smoke appears to be blowing in the wrong direction. But this was the best of dozens of attempts.
The image below is the first one I produced with DALL-E for this series, in an attempt to get a better rendering of a freight train carrying houses. I realized it would be hard to get something that looked like volumetric construction modules, so I figured I’d try for something more playful. The difference between the image above and the one below is that the one above was actually reverse-engineered from the bottom one using /describe in Midjourney. Even with the guidance of that prompt, it still took forever to get something that looked good.
The following image was the second one I produced with DALL-E for this series. Note that it looks completely different. I’ve noticed that DALL-E’s rendering filters, for lack of a more precise term, don’t seem to create images that look quite as clean as Midjourney’s, if you zoom in to look for things like artifacting or weird fuzziness. But the idea is there, and it did a pretty decent job, I thought. I’ve found that DALL-E can sometimes get details right that Midjourney can’t, even if the finished product doesn’t look quite as polished. Midjourney also offers tweaks that operate more like dials or sliders (its parameters), whereas everything with DALL-E is done as text through ChatGPT.
