Another week, another challenge. It’s my turn to come up with the theme for this week.
Theme
This week’s theme is masks. I would love to see what kinds of cool, beautiful, or just plain weird masks you can come up with.
Rules
- Follow the community's rules above all else
- One comment and image per user
- Embed image directly in the post (no external link)
- Workflow/Prompt sharing encouraged (we're all here for fun)
- At the end of the week each post will be scored according to the following grid:

| Prize | Points |
|---|---|
| Most upvoted | +3 points |
| Second most upvoted | +1 point |
| Theme is clear | +1 point |
| OP's favorite (me, this week) | +1 point |
| Most original | +1 point |
| Last entry (to compensate for less time to vote) | +1 point |
| Prompt and workflow included | +1 point |

- Posts that are ex aequo will both get the points
- Winner gets to pick next theme! Good luck everyone and have fun!
Past entries
- Dieselpunk
- Goosebump Book
- Deep Space Wonders
- Fairy Tales
- A New Sport
- Monsters are Back to School
- War and Peace
- Distant lands
- Unreal Cartoons
- Sustainable Ecumenopolis
Above image made in Midjourney with the prompt:
a woman wearing a mask made from scrimshaw, intricate designs, tendrils, pagan
Good luck and have fun!
Sorry for the low quality, I've found no other way to upload animations directly to Lemmy yet.
Edit: Here is a link to a sharing portal with a higher-resolution GIF:
Gifyu
The ComfyUI workflow is also embedded in this picture; you will have to install several custom extensions to make this work:
This is quite an interesting workflow, as you can generate relatively long animations.
You take the motions of your character from a video. For this one I googled "dancing girl" and took one of the first results I found:
Link to YouTube video
You can extract single images from the video into ComfyUI. For this one I skipped the first 500 frames and took 150 frames to generate the animation. The single images are scaled to a resolution of 512x512. This gives me an initial set of pictures to work with:
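For anyone who prefers to prepare the frames outside ComfyUI, the same load/skip/resize step looks roughly like this with OpenCV. This is only a sketch of the idea, not the actual node setup, and extract_frames is a made-up helper name:

```python
import cv2

def extract_frames(path, skip=500, count=150, size=(512, 512)):
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, skip)      # skip the first 500 frames
    frames = []
    while len(frames) < count:                  # take the next 150 frames
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, size))  # scale each frame to 512x512
    cap.release()
    return frames

frames = extract_frames("dancing_girl.mp4")
```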
Via the OpenPose preprocessor you can get the poses for every single image:
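For reference, the standalone controlnet_aux package exposes the same OpenPose preprocessor outside ComfyUI. A minimal sketch, assuming controlnet_aux is installed and reusing the frames list from above:

```python
import cv2
from PIL import Image
from controlnet_aux import OpenposeDetector

# Load the annotator weights used by the OpenPose preprocessor
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# One pose image per extracted frame (OpenCV frames are BGR, so convert first)
pose_images = [detector(Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB))) for f in frames]
```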
These poses can be fed to the OpenPose ControlNet to get the correct pose for every single frame of the animation. Now we have the following problem: we are all set with the poses, but we also need a set of latent images to go through the KSampler. The solution is to generate a single 512x512 latent image and blend it with every single VAE-encoded picture of the video to get an empty latent image for every frame:
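In plain PyTorch the blend step amounts to something like the sketch below. The 0.5 blend factor is a guess on my part, and encode_frame stands in for whatever VAE encode step you use (the VAE Encode node in ComfyUI); it should return a (1, 4, 64, 64) latent for a 512x512 frame:

```python
import torch

def make_frame_latents(frames, encode_frame, blend=0.5, seed=0):
    g = torch.Generator().manual_seed(seed)
    base = torch.randn((1, 4, 64, 64), generator=g)   # one shared 512x512 latent
    # Blend the shared latent with the encoded latent of every single frame
    blended = [blend * base + (1.0 - blend) * encode_frame(f) for f in frames]
    return torch.cat(blended, dim=0)                  # (num_frames, 4, 64, 64) for the KSampler
```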
We get a nice set of empty latents for the sampler:
Then we let the KSampler, together with the AnimateDiff nodes and the ControlNet, do its magic, and we get a set of images for our animation. (The number of possible images seems to be limited by your system memory: I had no problem with 100, 150, 200, or 250 images and have not tested higher numbers yet, but I could not load the full video.)
The last step is to put everything together with the Video Combine node. You can set the frame rate here; 30 FPS seems to produce acceptable results:
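If you want to do this last step outside ComfyUI, writing the rendered frames back out at 30 FPS is only a few lines with OpenCV. Again just a sketch; the output file name and codec are placeholders:

```python
import cv2

def combine_frames(frames, out_path="animation.mp4", fps=30):
    # frames: list of BGR uint8 arrays, all the same size
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(f)
    writer.release()
```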
That’s awesome work! You’re getting better and better at this :)
It's too bad the embedding doesn't seem to work so well; maybe someone else has a solution for this?
I've tried to implement LoRAs in the workflow and a face detailer to strengthen the LoRA effect. The results are quite interesting (the low quality comes from the webm format):
Gwen Tennyson Lora
Link To “High Res” GIF
Trump
Link To “High Res” GIF
Buscemi
The workflow is embedded in this picture (the image is pre face detailer).
Wow, that looks quite consistent! It’s so weird to see Trump happy… and in shape…
You might want to put the webm files behind spoilers, they take up a lot of space in the feed if you just want to scroll through. At least they do in my browser (Firefox).
Done!
It's all a bit trial and error right now. This animation took about 20 minutes on my machine. I would love to do some more tests with different models and embeddings or even LoRAs, but unfortunately my time for this is somewhat limited.
I love doing the contests to test new things out :-)
Visions for the future: if you could get stable output for the background and the actors (maybe LoRAs?), you could "play out" your own scenes and transform them via Stable Diffusion into something great. I'm thinking of epic fight scenes, or even short animation films.
This whole Stable Diffusion thing is extremely interesting and, in my opinion, a game changer like the introduction of the mobile phone.
I’m glad you’re enjoying the contests, your contributions are always welcome :)
Though you might want to consider making a post of your own; your work deserves a lot more exposure than just a comment.
The idea of making your own consistent scenes sounds quite impressive, but it's a bit out of my league. Like you, I have limited time to invest in this hobby, so I'll stick to my images :)
Great work as always! It’s always interesting to see your workflow and I loved the end result.