Making clay magnet faces

I bought a 3D printer a while ago and started using it to make some fun things.

And then I had my first fail, and I kind of got gun-shy. I didn't really go back to using it.


But during that time, I had this idea of printing these zombie face magnets, and I thought that they'd be really cool.



Later, after the printer failed, I went back to these magnets and looked at them and thought, “you know, it's cool that they're 3D printed, but they don't have to be. These could easily be sculpted”.


My little zombie friend watches on. He’s very encouraging.




I have all these interesting colors of clay, and I have this cool metal cutter that you can use to cut out ovals, and I thought that might make a good starting size. Then I could just see how much variety I could get from these faces, experimenting with some being kind of horizontal and some being sort of long, vertical faces.


And then I can just play with the features and see how much variety I can get there. Give them really simple mouths. But I could play with this other stuff like bags under their eyes or like mustaches or eyebrows. I could give them really crazy noses.

So it started out as sort of this exercise in minimalist design. Like, how much personality can I get out of these guys with the simplest shapes?

Sometimes when I'm in the mood to make things, I get overwhelmed by all the choices and the possibilities. And so to combat that, I like to give myself a really small box to work within.

So this was fun being able to make a variety of characters from very simple colors and very simple shapes.

You can watch the whole studio vlog on my YouTube channel. I’ve got six studio vlogs up already.
https://www.youtube.com/daspetey

I’ve also got some of these little magnet faces for sale up on my Etsy shop, if you’re interested in decorating your fridge. https://www.etsy.com/shop/daspetey

Eating Ghosts (ADAM Music Project, featuring No More Kings)

A while ago, my friend Adam asked Neil and me to be part of his new Pac-Man inspired song, Eating Ghosts. Adam has a project called the A.D.A.M. Music Project where he collaborates with a bunch of really cool artists on songs inspired by video games. So when he asked Neil and me to be part of a Pac-Man one, we were psyched!

But the really fun part is, immediately after release, the song got picked up on one of Spotify’s huge editorial playlists! https://open.spotify.com/playlist/37i9dQZF1DX23YPJntYMnh?si=c25c38cec0014fc3

It’s not very often that I sing on songs that I didn’t write, but honestly, this feels like the kind of song NMK would’ve thought of anyway. When Adam pitched me the idea, I was like, “oh man, why didn’t I think of that?”

After we recorded the song, Adam wanted a fun lyric video to debut it with. So he asked me to come up with one. I tried to reference the game vibe as much as I could for the video. You can check it out here: https://www.youtube.com/watch?v=TXmAHPGDbM8


Adam asked me to sing on another song too. This time it’s about Yoshi! I’ll post about that one soon.

Ink scape

A while ago, I made some fun ink wash pieces on watercolor paper. They were essentially abstract pieces where I dropped ink onto the paper and let it form random blobs. Then I went over it with white ink to create concentric circle patterns. I did a series of about six of these.

Later I re-discovered the pieces and thought it would be fun to give them some motion. So I brought them into Disco Diffusion and used them as the initial seed image for the AI to create an animation from.

The Matrix and AI

I’ve always loved the Matrix. The first movie has a special place in my heart. I saw it in theaters when it came out, knowing absolutely nothing about it. I was blown away.

So naturally as AI tools started becoming more and more accessible, I decided to see what AI would do with the Matrix.

still frame from the matrix

Specifically, I rendered out 1500 frames from the movie as stills, and trained an AI on that dataset. The imagery it produced wasn’t that interesting. So I tried a few more AI approaches.
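If you want to try something similar, pulling evenly spaced stills out of a movie file only takes a few lines of Python with OpenCV. This is just a generic sketch, not my exact setup; the paths and the frame count are placeholders:

```python
import os

def sample_indices(total_frames, n_samples):
    """Evenly spaced frame indices across the whole movie."""
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

def extract_frames(video_path, out_dir, n_samples=1500):
    """Save n_samples stills from video_path as PNGs."""
    # OpenCV imported inside the function so sample_indices works without it
    import cv2
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for n, idx in enumerate(sample_indices(total, n_samples)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to that frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{n:04d}.png"), frame)
    cap.release()
```

Spacing the samples evenly means the dataset covers the whole movie instead of clustering in one scene.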

AI generated image of the matrix

I’ve had this problem before when training AI models on large datasets of non-similar imagery. It sort of breaks. It doesn't really know what kind of imagery I’m looking for, so the results are messy.

A similar thing can happen when the dataset is too small. It picks one or two images and keeps repeating them. Neither of those outcomes is what I want.

AI generated morph

The next tool I tried was an AI model called Looking Glass. Looking Glass takes an image, or a group of images, as input and then generates variations of those from its pre-trained datasets. The stuff it creates can be pretty out there.

Looking Glass AI generated image of the Matrix

Aaaand that wasn’t much better. You can tell what it’s trying to do, but it keeps missing the mark.

That’s ok, I’ve got a few more AI tricks up my sleeve. Next is Disco Diffusion.

Disco Diffusion AI generated image of the Matrix

Now we’re getting weird! Disco Diffusion also has a mode to create video by moving through an image and continuously putting it through the algorithm. So I fed it a still frame and let it do its thing.

The Matrix has you
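That zoom-and-rediffuse idea can be sketched as a simple loop. To be clear, this is a conceptual sketch, not Disco Diffusion's actual code; the `diffuse` hook is a stand-in for whatever model you'd call, and the zoom is plain numpy center-cropping:

```python
import numpy as np

def zoom_in(img, factor=0.98):
    """Crop the center of an (H, W, C) array by `factor` -- a cheap 2D zoom."""
    h, w = img.shape[:2]
    nh, nw = int(h * factor), int(w * factor)
    top, left = (h - nh) // 2, (w - nw) // 2
    return img[top:top + nh, left:left + nw]

def animate(seed_img, steps, diffuse=lambda x: x):
    """Repeatedly zoom and re-process; `diffuse` is a placeholder hook."""
    frames = [seed_img]
    for _ in range(steps):
        frames.append(diffuse(zoom_in(frames[-1])))
    return frames
```

Each pass crops in slightly and re-processes the result, which is what gives these animations their endless-dive feel.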

This idea has potential. I definitely want to experiment with more still frames, and see how weird things can get. But this one takes the longest. Usually I have to leave it overnight and check it in the morning. So I had one more AI tool to try before bed: Midjourney.

Midjourney AI generated image of the Matrix

At first, Midjourney was giving me results that were “too good”. Or rather, too close to the source material. It started to look like the Matrix, just re-cast.

So I decided to add some other terms to the prompt. “Retro mod illustration” got me this:

Midjourney AI generated image of the Matrix

I kind of love this direction! This kind of output makes me feel like exploring my own illustration in this style. There’s a lot more I think I can do with this concept of sending still frames from my favorite movies through AI. And there’s a lot of different styles I can explore with the help of AI. This kind of stuff really excites me.

Maybe Fight Club next?

Mushroom People

I don’t know what started my recent fascination with painting portraits of people with mushrooms growing out of their heads, but it’s a thing now. I’ve filled a few sketchbook pages with them, and I’ve even done a few acrylic paintings.

Since my recent addiction to using AI to inspire new art pieces, I’ve also started training AI models on this stuff, to see what new imagery it can come up with.

In addition to the weird stuff above, I used a few other AI tools to give me some variations on my images. The results were varied, and disturbing. I’m really falling in love with the process of sending my art through AI models. It really feels like collaborating with an alien version of myself.

The next step for me is to take my favorites from the AI output, and sketch my own versions. Then I can work those up into paintings and send them back through the AI to see what it does with those.

I’m also trying to make sure I keep this project separate from my Cabbagehead project. I want to make sure each one has its own feel.

Making more Mayan-inspired art, with the help of AI

I've been working a lot on my Mayan-inspired art project called Lightbearer lately. And recently I started using AI to help me come up with ideas and variations on imagery.

This batch of four marker pieces was based on some of the output from my AI exploration, mixed with a little Mayan art reference.

I’ve been working on this series for a long time now, and the stuff I’m making ranges pretty widely in how much it references Mayan art. Additionally, I’ve recently been fascinated with using AI in my process. I use a variety of AI tools, in a variety of ways.

Here’s some of the output from one session working with AI.

These images were generated using DALL·E mini, which is a text-to-image generator. These types of generators take a while to dial in a look that you want. For me, I’m just looking for it to give me new ideas. I don’t need the AI to give me a final art image.

So once I get some fun, usable stuff from the AI, I immediately start filling a few sketchbook pages with sketches. When I’m doing these, I’m not concerned with replicating the AI output. I’m trying to “collage” the best bits of the output into new single images.


The whole process is super fun for me. I love seeing what the AI makes. It feels a little like fishing. Each session, I’m just trying to catch a few good fish. And of course I love sketching, and honestly, I always feel like I should be sketching more than I do.


So that’s it! Please let me know if you have a method for using AI or other generative art methods in your own art making!


How I use AI in my art

Recently I've become addicted to using AI in my illustration work. I love starting an image with an AI generated piece and seeing what I can make of it. My newest favorite procedure is this:

I start a session in Artbreeder. Artbreeder is great for jump-starting the creative process for me. I can quickly generate some unique, inspiring imagery and then easily produce variations of that image. The variations can be subtle shifts, or completely new imagery.

Once I have a few pieces that I like from my Artbreeder session, I'll bring those into Looking Glass AI. Looking Glass is a Google Colab notebook that is free to play around with. I pay for Colab Pro so that I can have access to faster machines, but the procedure works just fine on the free plan.

I feed my Artbreeder images into Looking Glass one at a time. Then I generate some wild variations. I slowly add more of the Artbreeder imagery to the ingesting folder and let Looking Glass play around some more.

Usually while this stuff is cooking, I'll hop over to Prosepainter, which is by the creator of Artbreeder. Prosepainter lets you upload an image and then choose a section of the image, or the whole image, to be affected by a text prompt. The AI will slowly morph the input image towards its translation of the text. You can get some pretty wild surreal stuff this way.

I've been using it mostly to nudge the imagery more towards classic artists that I love, like Mucha or Klimt, or even Bosch. Or sometimes I'll start to see new things emerge in the imagery, and I'll tailor the text prompt to push more towards that.

I find adding "a painting of" in front of the text gives me more interesting results than the photo-realistic stuff that normally comes out.

Once I've done this whole process, I'm left with a folder full of really cool unique imagery for the day. I'll comb through that stuff and set aside some pieces to work on. Those get imported into Procreate on my iPad, and then I start painting.

My most recent adventure started as usual in Artbreeder, and moved into Looking Glass and Prosepainter, as I normally do. As I pushed the image further, it started to look like a Medusa head. So I asked Prosepainter to give me an image of Medusa. I loved what it produced, so I took that into Procreate to refine it.

While on my iPad, I started using some of my image manipulation apps and ended up with some fun abstract imagery. I wanted to add a simple white-line character to the resulting image. Then I saw a sort of fire shape in the dark areas. So I played with that illustration a bit, and brought it back into one of the apps for this final look.

Quite a weird journey to end up with this illustration, but I really love the final piece. And I especially love this way of working. It continues to surprise and delight me.

Let me know if you use this process to create anything fun!

Mr B coloring pages

A while ago, I made a set of Mr B postcards. 10 little postcard prints of a little Mr B flying around with his flower friends. Soon after, I turned that postcard set into a tiny Mr B video game. Just a silly little interactive toy, really. You fly around with Mr B and play drums, or make music on a tiny flower sequencer. Silly stuff.

The first Mr B postcard set

Mr B plays the drums in his first video game.

Then I started working on an update to the game, and it meant more art. So I started working on what is starting to become a second Mr B postcard set.

As the second postcard set started to unfold, I realized I had enough art to do a cute little Mr B coloring book, and maybe an activity book with mazes and word searches. You know, like those old-school fun pads we had as kids.

So as I assemble that book, I want to give some pages away here for free. I’m hoping that you color some of these pages and share your creations with me! You can certainly tag me in anything that you post. I’d love to see a bunch of creative Mr B pages!

Just right-click/save the images below! You can actually get these, and five more free coloring pages if you sign up to my email list.

A.I., Machine Learning, and zombies

Ever since I got my first Commodore VIC-20 (and then later the Commodore 64), I’ve been addicted to using the computer to make art. And while my college years were spent studying traditional painting and drawing, as soon as I graduated and moved to Los Angeles, I dove headfirst into using the computer to animate and paint.

Recently, the field of A.I. and Machine Learning has made leaps and bounds, finally making these tools available to artists. I first started playing around with a web-based app called Pix2Pix, which let you sketch something and the computer would “flesh” it out.

My initial results were pure nightmare fuel.




Not too long after, I discovered a site called Artbreeder, which gave access to a pre-trained set of models. Mostly you could make your photos look like Van Gogh, or any other artist whose work the A.I. had been trained on. But I wanted to be able to train my own model. I wanted the computer to make new work based on my entire body of digital paintings.

That’s when I found RunwayML.

RunwayML’s aim was to make the training of new A.I. models accessible to artists, which is exactly what I was wishing for. And their site made the whole process super painless to hop into. Soon I was training models on my amoeba characters, some digital faces, and now, zombies.

My first experiment in the land of the A.I. undead was to grab all the photos of zombies I could find from The Walking Dead. I trained a model on them, and it started giving me some pretty creepy, but awesome, output. Soon I realized that I would have to fix some of its attempts and re-train it to include the newly altered imagery.

This idea turned out to work extremely well.


Then I decided to start re-painting the images to be a little sillier, more in my style. Those results were just as weird.

But a few days ago, I realized I had enough imagery of my silly, squiggly zombies to try training a model on those. And boy am I glad I did! Right away the computer started making things that approximated my wide-eyed, confused zombies. I just had to keep training it.

But then something went wrong. In Machine Learning, there is a measurement used to describe how close the computer's generated images are to the source imagery. This is called the FID score, or Fréchet Inception Distance. Rather than comparing raw pixels, it compares the statistics of the generated images with the statistics of the training dataset inside a neural network's feature space; lower means closer.
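For the curious, the standard FID formula compares the mean and covariance of feature vectors from the real and generated image sets. This is a generic sketch of that formula, not RunwayML's internal code; in real use the features come from an Inception network, but any two feature arrays illustrate the math:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two (N, D) sets of image features."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny spurious imaginary parts; drop them
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Comparing a feature set against itself gives a score near zero, and the score grows as the two distributions drift apart.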

And somewhere in my over-excitement, I had over-trained my zombie model.

So my current goal is to take the output from the stuff that worked and clean it up. Re-paint it all and feed it back to the computer, like I did with the Walking Dead zombies. My hope is that with a little time and care, I can train a little robot Petey to make very good silly squiggly zombies. Just like his dad.