ComfyUI Face ID (Reddit)

Notes on ComfyUI Face ID, collected from Reddit threads.

I'm doing some tests with ComfyUI. One input is the face (the redhead woman) and the other is only the head position (the one from Tron: Evolution). It uses the face position and angle from the darker image and draws the redhead woman in that location. There are discussions about this on Reddit, but I never saw an "official" manual that works for everybody.

If you want more face likeness, try detailing the face using Impact Pack, but use the old MMDet model, because the new Ultralytics model is too realistic.

I don't find ComfyUI faster; I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT.

I just released version 4. The only way to keep the code open and free is by sponsoring its development.

Yeah, installing insightface is the hardest part.

Hi, I am trying to perform face swaps on animal characters in children's storybooks. For pictures with only one character, I could do the swap with IPAdapter FaceID models, but I am wondering how to do it with multiple characters in the picture.

It really depends on what you're trying to accomplish; there are many models to choose from depending on what you want to do.

Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model. To use the IP-Adapter plus face model to copy a face, go to the ControlNet section and upload a headshot image.

I'm hoping to use InstantID as part of an inpainting process to change the face of an already existing image, but can't seem to figure it out.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

I have learned a few things about ComfyUI in the last two months. Recently, BOOLEAN was added to ComfyUI, and Impact Pack was updated to use it.

Each node needs a Load Image of the face you want to swap.

Yeah, what I like to do with ComfyUI is crank up the weight, like 0.8 even, but not let the IP adapter start until very late.
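The ControlNet settings quoted above (preprocessor "ip-adapter_face_id_plus", model "ip-adapter-faceid-plus_sd15") can also be set programmatically. A minimal sketch of an Automatic1111 txt2img API payload, assuming the ControlNet extension's `alwayson_scripts` interface; the exact field names vary between versions, and the weight/start values are illustrative, following the "crank up the weight but start late" tip:

```python
# Sketch of an A1111 txt2img API payload enabling a FaceID ControlNet unit.
# Field names assume the ControlNet extension's API shape and may differ
# between versions; weight and guidance_start are illustrative values.
def build_payload(prompt: str, face_b64: str) -> dict:
    return {
        "prompt": prompt,
        "steps": 30,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "ip-adapter_face_id_plus",    # preprocessor
                    "model": "ip-adapter-faceid-plus_sd15",  # ControlNet model
                    "image": face_b64,                       # headshot, base64
                    "weight": 0.8,                           # cranked-up weight
                    "guidance_start": 0.7,                   # start very late
                    "guidance_end": 1.0,
                }]
            }
        },
    }

payload = build_payload("portrait photo of a woman", "<base64 headshot>")
unit = payload["alwayson_scripts"]["controlnet"]["args"][0]
print(unit["module"], unit["weight"])
```

The same dictionary would then be POSTed to the WebUI's txt2img endpoint by whatever client you use.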
It seems to produce fairly decent results in the original SDXL output, but once it gets to upscaling and face detailing, things start looking less distinct again.

Could anyone recommend the most effective way to do a quick face swap on an MP4 video? It doesn't necessarily have to be with ComfyUI; I'm open to any tools or methods that offer good quality and reasonable speed.

See, this is another big problem with IP-Adapter (and me): it's totally unclear what it's all for and what it should be used for. Where can I learn about face fix, hands fix, legs fix, and overall body enhancement?

Run the WebUI.

I installed InstantID for ComfyUI and tried the example workflows to understand how to use it. The multi-ID one is used to create an image with two people starting from two faces. Can anyone help me understand which type of image should be inserted in the third input node? Thanks to those who will answer.

Hello everyone. Since people keep asking, here is my full workflow and node system for ComfyUI. First I used Cinema 4D with the Sound Effector MoGraph to create the animation; there are many tutorials online on how to set it up.

I've been using a ComfyUI workflow, but I've run into issues that I haven't been able to resolve, even with ChatGPT's help. I have tried prompts like 'man with dog face', 'man with lion nose', etc., but it generates images of a man with a dog, or just a dog.

The "FACE_MODEL" output from the ReActor node can be used with the Save Face Model node to create an insightface model; that can then be used as a ReActor input instead of an image. We do not encourage stolen content or unauthorized face swaps (deepfakes).
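For the MP4 face-swap question above, one common route is to split the video into frames, run a per-frame swap (for example a ReActor batch in ComfyUI), and re-encode. A sketch that only builds the ffmpeg commands; the paths and the 30 fps rate are placeholder assumptions:

```python
# Build the two ffmpeg invocations around a per-frame face swap step:
# 1) extract frames as numbered PNGs, 2) re-encode the swapped frames.
def ffmpeg_commands(video, frames_dir, out, fps=30):
    extract = ["ffmpeg", "-i", video,
               "-vf", f"fps={fps}", f"{frames_dir}/%05d.png"]
    encode = ["ffmpeg", "-framerate", str(fps),
              "-i", f"{frames_dir}/%05d.png",
              "-c:v", "libx264", "-pix_fmt", "yuv420p", out]
    return extract, encode

extract_cmd, encode_cmd = ffmpeg_commands("input.mp4", "frames", "swapped.mp4")
print(" ".join(extract_cmd))
```

Between the two commands you would run the swap of your choice over `frames/*.png`; audio would need to be copied over separately.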
What I want to do is this: I have an image of one real person and want to make full-body images with the same face as the original image.

Now you can try it out too! TripoSR is a state-of-the-art open-source model for fast 3D reconstruction from a single image, collaboratively developed by Tripo AI and Stability AI. TripoSR was just released, and I felt I had to create a node for ComfyUI so I can experiment with it. There are lots of discussions on the web about it. Fingers crossed.

I've used InstantID with much smaller faces, probably in the 300-400px range, and it worked great. The smaller faces become, the worse they get, but this depends a lot on the model and the prompt too, so your results will vary.

Important ControlNet settings: Enable: Yes; Preprocessor: ip-adapter_clip_sd15; Model: ip-adapter-plus-face_sd15.

Step 4 (optional): Inpaint to add back face detail.

I have Matteo's workflow for combining the two face IPAdapters with the two KSamplers, but I can't figure out at what point, and in which location, to add OpenPose to the workflow.

Here's FaceSwap with ReActor, with a very cool optional extra, which is FaceAnalysis.

Hello, fellows. Is there any other way to do this?

Since prompting is pretty much the core skill required to work with any gen-AI tool, it'll be worthwhile studying that in more detail than ComfyUI, at least to begin with. Thank you and have a great day :)

It's a bit late, but the issue was that you needed to update your ComfyUI version to the latest one. I wonder if any of you happens to know the reason for this. Workflows: https://f.

If that's the case, it gives errors if you give it an image with a close-up face, or without a face. You should upload your desired face image in this ControlNet tab. I know how to use LoRAs and embeddings.
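The 300-400px observation above suggests a simple pre-check: measure the detected face's bounding box before deciding whether an extra inpaint/fix pass is needed. A hypothetical helper; the 300px threshold is an assumption drawn from that comment, not a hard rule:

```python
# bbox is (x1, y1, x2, y2) in pixels; returns True when the face is "too
# small" and likely needs an inpaint/detail pass after generation.
def needs_face_fix(bbox, min_side=300):
    x1, y1, x2, y2 = bbox
    return min(x2 - x1, y2 - y1) < min_side

print(needs_face_fix((100, 100, 450, 480)))  # 350x380 face → False
```

Plugging this between a face detector and a detailer lets a workflow skip the fix step on faces that are already large enough.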
That said, I'm looking for a front-end face swap: something that will inject the face into the mix at the point of the KSampler, so if I prompt for something like freckles, they won't get lost in the swap/upscale, but I've still got my likeness.

For an anime look, I suggest inpainting the face afterward, but you'll want to experiment with the denoise level.

For the Txt2Img and the Face Swap/Detail groups I've used ZavyChromaXL, and I decided to swap out Face ID for InstantID.

I'm using the Detailer (SEGS) from the ComfyUI-Impact-Pack and am encountering a challenge in crowded scenes. Anything smaller, though, and you lose the likeness and details; but then again, small faces in general aren't great without a fix step.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

In my experience, Windows 10/11 users will need at least three components, among them Visual Studio Build Tools and the "ReActor Node for ComfyUI" by Gourieff.

This extension differs from the many already available in that it doesn't use diffusers but instead implements InstantID natively, and it fully integrates with ComfyUI.

Every time I apply it, it screws up the previously generated face and replaces it with a generic one, regardless of Face ID or Face ID Plus. Something is happening when it passes into the upscaling phase that causes the faces to shift toward something more generic.

I want to generate avatar images of people having animal features, preferably using SD1.5.

Yes, I've actually been doing that. I agree wholeheartedly. Yeah, I "stole" (adopted) most of it from some example on inpainting a face.

Example: face number one would be index 0, face 2 would be index 1, face 3 would be index 2.
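The face-index example above is zero-based: face number one is index 0, face two is index 1, and so on. A tiny hypothetical helper to avoid off-by-one mistakes when daisy-chaining one ReActor node per face:

```python
# Convert a 1-based "face number" (as people count faces in an image)
# to the 0-based face index that ReActor expects.
def face_index(face_number):
    if face_number < 1:
        raise ValueError("face numbers start at 1")
    return face_number - 1

print([face_index(n) for n in (1, 2, 3)])  # → [0, 1, 2]
```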
EDIT: I'm sure Matteo, aka Cubiq, who made IPAdapter Plus for ComfyUI, will port this over very soon.

The file name should be ip-adapter-plus-face_sd15.pth.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion. Shitty example, but I did the work for you.

Thanks for all your videos, and your willingness to share your very in-depth knowledge of Comfy/diffusion topics. I would be interested in learning more about how you go about creating your custom nodes, like the one to compare the likeness between two different images that you mentioned in a video a while back, and which you have now made into a node and showed in this video.

Best practice is to use the new Unified Loader FaceID node; then it will load the correct CLIP Vision model etc. for you.

Thanks; it seems you need to download the ControlNet model and the InstantID model.

I'm struggling with face fix, hands fix, legs fix, and overall body enhancement.

The mtb node has face swap, kind of like Roop, but not as good as training with a LoRA.

Is there a way to configure it to focus solely on detailing the largest face in the scene?

After reviewing this new model, it appears we're very close to having a closer face swap from the input image. Exciting times.

Matteo's extensions include ComfyUI IPAdapter Plus, ComfyUI InstantID (Native), ComfyUI Essentials, and ComfyUI FaceAnalysis, not to mention the documentation and video tutorials.

But it's reasonably clean to be used as a learning tool. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

You should give it an image that has the whole head, including a clear face, for it to work.
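The "detail only the largest face" question above boils down to picking the biggest bounding box from the detector's output. A plain-Python sketch, assuming (x1, y1, x2, y2) pixel boxes rather than any node's exact SEGS format:

```python
# Return the bounding box with the largest pixel area; feeding only this
# box to the detailer skips the small background faces in a crowded scene.
def largest_face(bboxes):
    return max(bboxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))

faces = [(10, 10, 60, 70), (100, 50, 300, 280), (400, 400, 440, 450)]
print(largest_face(faces))  # → (100, 50, 300, 280)
```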
On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

If you are going to use an LLM, give it examples of good prompts from Civitai to emulate.

You can't use that model for generations/KSampler; it's still only useful for swapping.

If you have Manager installed, click on it, click "Download Models", then search for ControlNet models and download what you need.

I ran a few different combinations of both at different strengths with a fixed seed, with and without an additional IPAdapter Advanced so that I could compare, and I definitely prefer the results from InstantID.

Discover how to use FaceDetailer, InstantID, and IP-Adapter in ComfyUI for high-quality face swaps.

It's right above: you have to daisy-chain ReActor nodes and input the face indexes.

I guess you are using the newest IPAdapter FaceID.

Version 4.0 of my AP Workflow for ComfyUI.

I don't know what half of the controls on these nodes do, because I didn't find any documentation for them 😯 And while face/full-body inpaints are good and sometimes great with this scheme, hands still come out with polydactyly and/or fused fingers most of the time. Any help would be greatly appreciated.

You can save face models as "safetensors" files (stored in ComfyUI\models\reactor\faces) and load them into ReActor, implementing different scenarios while keeping super-lightweight face models of the faces you use.

So that the underlying model makes the image according to the prompt, and the face is the last thing that is changed.
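Since ReActor's saved face models are just `*.safetensors` files in a folder (per the comment above), picking one by name is a matter of filtering a directory listing. A small sketch over plain filename strings; the default path `ComfyUI\models\reactor\faces` is taken from the comment and will differ for custom installs:

```python
# Given a directory listing, return the names of saved ReActor face models
# (the *.safetensors files), stripped of their extension and sorted.
def list_face_models(filenames):
    suffix = ".safetensors"
    return sorted(name[:-len(suffix)] for name in filenames
                  if name.endswith(suffix))

listing = ["alice.safetensors", "readme.txt", "bob.safetensors"]
print(list_face_models(listing))  # → ['alice', 'bob']
```

In practice you would feed it something like `os.listdir(r"ComfyUI\models\reactor\faces")`.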
Check my ComfyUI Advanced Understanding videos on YouTube, for example part 1 and part 2. I would recommend watching Latent Vision's videos on YouTube; you will be learning from the creator of IPAdapter Plus.

I'm using ComfyUI and have InstantID up and running perfectly in my generation process. I assume it goes before the two face KSamplers, but I can't figure out the hookup.

Native InstantID support for ComfyUI.

Welcome back, everyone (finally)! In this video, we'll show you how to use FaceID v2 with IPAdapter in ComfyUI to create consistent characters.

(The same image takes 5.6 seconds in ComfyUI.) And I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have three hours to burn doing it.

Choose a weight between 0.5 and 1. That determines how closely the generated output face matches its source.

New FaceID model released! Time to see how it works and how it performs. Let us know if you find it useful, and stay tuned for the next post! DISCLAIMER: all images here are generated.

Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt.

I had 5 minutes to toss this together. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

I can manage basic parts like image-to-image, text-to-image, and upscaling.

I've reinstalled pip, and FaceDetailer is working again.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones.

The RPG model doesn't do as well with distant faces as other models like Absolute Reality (which is why I used RPG for this guide, for the next part).

Hello all. It turns out that, while generating photos with the ReActor node, some turn out fine and some turn out extremely blurry. I'm wondering if anyone can help.

You need to use the IPAdapter FaceID node if you want to use Face ID Plus V2.
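In the A1111 WebUI, a LoRA is activated with a `<lora:name:weight>` tag in the prompt text. A sketch that appends the FaceID LoRA named above to a positive prompt; the 0.6 default is an assumption, sitting inside the 0.5-1 weight range the thread suggests:

```python
# Append the FaceID LoRA tag to a positive prompt. The weight is clamped to
# the 0.5-1 range suggested in the thread; 0.6 is an illustrative default.
def with_faceid_lora(prompt, weight=0.6):
    if not 0.5 <= weight <= 1.0:
        raise ValueError("keep the weight between 0.5 and 1")
    return f"{prompt} <lora:ip-adapter-faceid-plus_sd15_lora:{weight}>"

print(with_faceid_lora("photo of a woman, studio light"))
# → photo of a woman, studio light <lora:ip-adapter-faceid-plus_sd15_lora:0.6>
```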
“The training requirements of our approach consists of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1’s 200,000 GPU hours.”

The issue is most probably related to the insightface node.

I've tried IPAdapter Plus Face, InstantID, ReActor, and PuLID, but the result is not the same as the real face images.

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You can't go higher than 512, up to 768, in resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.
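The quoted training budgets can be checked with a line of arithmetic: 24,602 A100-GPU hours against Stable Diffusion 2.1's 200,000 GPU hours.

```python
# Ratio of the two quoted training budgets.
wuerstchen_hours = 24_602
sd21_hours = 200_000
ratio = wuerstchen_hours / sd21_hours
print(f"{ratio:.3f}")  # → 0.123, i.e. roughly 12% of SD 2.1's budget
```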

