Creative industries are undergoing a 0 to 1 moment. If you didn't know, now you do. The impact that AI will have and is already having on creativity cannot be overstated. The whole "but AI can't draw hands right" argument will become irrelevant in the near future, way nearer than anyone will be comfortable with. The options are simple: adapt or die.

This actually isn’t new—in every field, things are always being made easier, and the act of creation is being made more accessible to more people. Originally, pixel art was drawn on graph paper and manually converted into code. Classical music was created only by savants with royal patrons. Now, both things are accessible to anyone with a cheap computer. We are comfortable with the shortcuts and innovations that occurred before we entered the field, but we fear and resent developments that undermine our current workflow, even if they have the potential to expedite things or lighten our load.

The fact of the matter is, you don't see a lot of people drawing out their pixel art on grid paper anymore, as the productivity multiplier of a digital approach is too great, and people who never experienced the trials of the prior era have no nostalgia or Stockholm syndrome for less convenient methods. We probably won't see many people creating pixel art in exactly the same way they are currently, even just a few years from now.

All of the new pixel art I posted between October 25 and November 7 was made using AI—specifically, the stable diffusion generator on Lexica. This is all very new to me, and Lexica's algorithm isn't even the most advanced on the market, which just goes to show how powerful these tools really are. The creative process still involved several hours of pushing pixels by hand, as well as more generalized decision-making guided by thousands of hours of personal experience, but as the technology advances, the balance will shift further toward the decision-making and away from the pixel pushing.

My process for creating this image was as follows:

  1. I browsed Lexica until I found something similar enough to the type of art I usually make. While no stable diffusion algorithm is currently capable of producing fully adequate pixel art, searching for terms like "pixel art" yielded results that were the kernel of the ultimate work.
  2. I copied prompts from inspiring and relevant images and generated new images, tweaking the prompt until something usable was produced. If you’d like to know the exact prompt I used for this piece, DM me.
  3. I determined the ideal resolution and cropping for the piece. I compared native resolutions of various pixel art pieces which were similar to my intended end product, and decided to go with 200x200, cropping the image a little below the figure’s breasts as this is a more typical composition for a bust-length work. I knew I wanted to be able to draw little hearts in the eyes, so the resolution needed to be at least big enough to accommodate that detail.
  4. I scaled the image down to the target size and converted it to a limited palette. I brought the image into Photoshop and used "Save for Web & Devices" to expedite the palettization, a trick I've used for many years. I'm sure there are other ways of achieving the same end (see the sketch after this list for one scripted alternative), but this way works well enough and lets me compare how different settings change the result in real time. During this step I experimented briefly with the different dithering settings; I didn't end up using them, but they inspired me to try some creative dithering techniques in the background toward the end of the process. The way Photoshop produced anti-aliasing (AA) from a limited color count likewise inspired me to use creative AA in the piece, something I also don't typically do.
  5. I cleaned up the art by hand. The original image has some wonky eyebrow stuff going on, which I initially considered turning into a headband, but I decided to just use my judgment to place the eyebrows somewhere that made sense. I adjusted the shading on the neck and boobas a bit and rendered parts of the hair that weren't finished. The figure was almost symmetrical but not quite, so I fixed that with some copy and paste action (Oh look, a modern luxury that artists of previous generations probably used to scoff at.)
  6. I put my creative stamp on the piece. This meant adjusting the composition of the background a little and adding creative dithering patterns for texture. I liked the idea of having differently scaled pixel art in the bg, something I wouldn’t have done were it not for the image the AI produced. I added another cloud and gave her a little blush under her eyes to make her extra endearing and not-at-all-terrifying-that-this-is-the-OC-of-an-algorithm.
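
If you'd rather script steps 3 and 4 than click through Photoshop, here's a minimal sketch of that crop/downscale/palettize pass using Pillow. The filenames, crop box, and color count are placeholders rather than the exact values I used.

```python
from PIL import Image

# Load the generated image (filename is a placeholder).
src = Image.open("lexica_output.png").convert("RGB")

# Crop to a bust-length composition; the crop box here is a made-up example
# (left, upper, right, lower), not the exact framing used for the piece.
bust = src.crop((0, 0, src.width, int(src.height * 0.6)))

# Downscale to the target pixel-art resolution.
small = bust.resize((200, 200), Image.Resampling.BICUBIC)

# Reduce to a limited palette, roughly what "Save for Web & Devices" does.
# Swap Dither.NONE for Dither.FLOYDSTEINBERG to preview dithered versions.
paletted = small.quantize(colors=32, dither=Image.Dither.NONE)

# Save the base image for hand cleanup, plus a chunky preview scaled back up
# with nearest-neighbor so the pixels stay crisp.
paletted.save("base_for_cleanup.png")
paletted.resize((800, 800), Image.Resampling.NEAREST).save("preview_4x.png")
```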

The process took me about six hours in total, significantly less time than it would have taken to create this from scratch. Looking back, I probably should have gone a little higher res than I did—there really aren't enough pixels for subtle variations in line weight, so I was forced to spend time touching up the linework and AA for consistency. 200x200 is relatively large for pixel art, but if a single pixel makes this much of a difference, the canvas probably should have been larger. If I had gone higher res, I could have gotten away with being a little sloppier overall and would have saved even more time.

Not only would it have taken me around twice as long to create this image from nothing, but I would never have made this exact image if it weren't for the AI.

The procedure for creating these was similar, but I generated a separate image for each item as well as for the background. Going into this, I knew that any time savings would be minimal, since this type of thing usually only takes me around an hour per sprite, and it does take some time to generate and select the ideal initial images. However, I do think the process was more straightforward and I'm happy with the end result. I started with the larger sprites, but I think I prefer the smaller ones—once I made the initial set it was trivial to make them smaller.
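
For what it's worth, the "make them smaller" step is just an integer nearest-neighbor downscale. A rough Pillow sketch, with placeholder filenames and assuming the small sprites are an even fraction of the large ones:

```python
from PIL import Image

def shrink_sprite(path: str, factor: int = 2) -> Image.Image:
    """Shrink a finished sprite by an integer factor.

    Nearest-neighbor sampling keeps edges crisp instead of blurring them,
    which is what you want for pixel art.
    """
    sprite = Image.open(path)
    return sprite.resize((sprite.width // factor, sprite.height // factor),
                         Image.Resampling.NEAREST)

# Placeholder filenames; the larger sprites are assumed to be drawn at
# exactly twice the target size.
shrink_sprite("item_sprite_32px.png", factor=2).save("item_sprite_16px.png")
```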

As expected, the process for creating this isometric island was the most involved out of the three pieces, but there are many elements that I would not have approached in the same way without the image I generated, and I do think the workflow was more streamlined than it would have been otherwise.

Beauty filters allow more people to appear beautiful. Meme formats allow more people to make jokes. YouTube allows more people to make their own "TV" shows. Steam allows more people to make and distribute games. Etc. If you're someone who was already able to do any of these things before they became accessible, it makes sense to be upset. As demand for a thing increases, the supply of that thing gets split between more and more providers, because a single provider cannot possibly satisfy the entire demand. Even if you believe the quality of your product is better, most consumers are not that discerning and have become accustomed to valuing "more" rather than "better" or "different."

In every field there is a divide between the innovator and the technician. The innovator figures out how to do something, and the technicians replicate the process to satisfy the growing needs of consumers. As the creative process becomes more and more autonomous, the technician's role will increasingly be filled by the AI itself rather than by the artists—and previously non-artists—who are now able to create thanks to stable diffusion and similar advances. A small percentage of capable creatives will continue to be employed, and their individual voices will determine what art looks like on the whole. You can already see this in the hyperpop music scene—intensely individual experiences and perspectives being expressed through cutting-edge techniques. The sounds themselves may be uncanny valley (at least if you're not used to them), but the messages are very human. Still, it's only a matter of time before even the most cutting-edge genres are easily replicated by AI. (Highly recommend fishmonger and boneyard aka fearmonger by underscores (it's the new wave of the future!) and 1000 gecs by 100 gecs—perfect soundtracks for the current cyberpunk dystopia/upcoming apocalypse, lmao right?)

Both what is produced and what is enjoyed will shift to align with what AI is capable of and what it excels at. While people will grow to enjoy AI art more, there will still be demand for things that AI cannot make well enough. This means that the few skills AI cannot master will be the ones that continue to be valued—pixel art is currently among them, but I don't know how long that will remain the case. People are working on pixel art models as we speak, and they're getting close; it basically works already.

Because stable diffusion creates images from text by gradually adding noise and then learning to remove it, these algorithms are particularly good at capturing noisy things, like clouds.
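
For the curious, the "adding noise" half of that process is simple enough to sketch. This toy NumPy version of the forward noising step is purely illustrative—it is not Lexica's or Stable Diffusion's actual code, which works on compressed latents rather than raw pixels.

```python
import numpy as np

def add_noise(x0: np.ndarray, alpha_bar: float, rng=np.random.default_rng(0)):
    """Blend a clean image x0 (values in [0, 1]) with Gaussian noise.

    alpha_bar near 1.0 means barely noised; near 0.0 means almost pure noise.
    The model is trained to predict and undo this noise step by step, which is
    part of why soft, grainy textures like clouds come out so convincingly.
    """
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# A flat gray "image" pushed most of the way toward pure noise.
noisy = add_noise(np.full((64, 64, 3), 0.5), alpha_bar=0.1)
```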

There is no point in being scared of what we, individually, have no control over, but there is value in being aware and in choosing to see things as opportunities for growth and change. There will always be a distinction between high and low art, and, just as there are people willing to pay exponentially more for a hand-knitted sweater than for one made in a fast-fashion sweatshop, there will always be people who value the act of making something by hand. Don't give up (yet).

(Here's a picture of Naruto saying "never give up" that I generated in two seconds using stable diffusion—one thing it's not great at yet is proper English.)