ComfyUI batch prompts not working

Welcome to the unofficial ComfyUI subreddit. A lot of people are just discovering this technology and want to show off what they created.

The question: can I create images automatically from a whole list of prompts in ComfyUI, like one can in Automatic1111? Maybe someone even has a workflow to share which accomplishes this. I need to create images from a whole list of prompts that I enter in a text box or that are saved in a file — ideally something like a filename pin that would let me load a folder directly. For comparison, Diffusers allows this by passing multiple prompts to the pipe, which are stacked in a tensor and pushed through the pipeline.

For the next newbie (I am completely new to ComfyUI and SD myself), a few things should be stated up front. First, the Load LoRA Tag node has its own multiline text editor. Second, ComfyUI caches results: if you have a fixed seed in the sampler and don't modify the prompt or any other parameter, after one queue ComfyUI won't generate the image again, because it would be exactly the same as the one already rendered. To test that the server is receiving work, open the browser interface and hit "Queue Prompt" — you should see another "got prompt" in the console and the queue-size counter go up.

Some character-oriented workflows handle batching for you: ensure you have ComfyUI installed and working, create a character (give it a name, upload a face photo), and batch up some prompts. These can enhance short prompts into more detailed, descriptive ones, include a Face Swapper function, and pick the next expression from the Expressions text and the next outfit from the Outfit directory.

I also had the same issue with blending images using Batch Prompt Schedule — nothing in the prompt was followed, nor the substance, etc. Sorry, I should have uploaded troubleshooting data. Thanks. The Preview Chooser node may meet your requirements for picking images out of a batch.
Luckily I found the simplest solution: just link the Load Checkpoint node to Batch Prompt Schedule (FizzNodes), then directly to the KSampler, without any other nodes in between. Heads up: Batch Prompt Schedule does not work with the Python API templates provided on the ComfyUI GitHub, and it processes prompts sequentially, not simultaneously.

Some background: when you click "Queue Prompt" in ComfyUI, it actually sends a POST request with the whole workflow as JSON data to http://127.0.0.1:8188/prompt. If queueing appears to do nothing, one of the reasons is ComfyUI's "caching": it will NOT queue the prompt if nothing changed in the workflow since the previous "Queue Prompt". Is your ComfyUI and all custom nodes updated? If so, the issue could be caused by a custom node.

Other reports and notes: I am running ComfyUI in Colab and started all this 2–3 days ago, so I am pretty new to it — it works for one picture, then stops. The diagram doesn't load into ComfyUI, so I can't test it out; I couldn't decipher it either, but I think I found something that works. I finally found a workflow that does good 3440 × 1440 generations in a single go and was getting it working with IP-Adapter. Hope someone makes a multi-LoRA-loader node. Tired of making one image at a time in ComfyUI? There are videos showing a super easy trick to batch-process your text prompts. The Dynamic Prompts library provides nodes that enable the use of Dynamic Prompts in your ComfyUI. I want to achieve a morphing effect between various prompts within my reference video. And on throughput: how do you increase batch size and batch count in ComfyUI? I want to make 100% use of my GPU and get 1000 images without stopping. After borrowing many ideas and learning ComfyUI — thanks for the help.
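That POST request can be reproduced from a script. Below is a minimal sketch, assuming the default local endpoint mentioned above; the `workflow` dict must be an API-format export of your graph (the body keys shown are the ones the `/prompt` endpoint expects), and `client_id` is just an arbitrary label for this client:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI endpoint

def build_prompt_request(workflow: dict, client_id: str = "batch-script") -> bytes:
    """Wrap an API-format workflow in the JSON body ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST the workflow to the local ComfyUI server and return its JSON reply."""
    req = urllib.request.Request(
        COMFY_URL,
        data=build_prompt_request(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the server caches unchanged workflows, identical repeated submissions may not re-render; vary the seed or prompt between calls.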
For me it has been tough — such a massive learning curve to get my bearings with ComfyUI — but I see the absolute power of node-based generation. Can someone please explain, or provide a picture of, how to connect two positive prompts to a model? 1st prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2), Anime Style, Manga Style, Hand drawn, cinematic, Sharp focus, humorous.

ComfyUI-DynamicPrompts is a custom-nodes library that integrates into your existing ComfyUI install; contribute to adieyal/comfyui-dynamicprompts development on GitHub. It includes most of the original A1111 extension's functionality, including the templating language, I'm Feeling Lucky (downloads prompts from lexica.art), and Magic Prompt (spices up your prompt with modifiers). The regular prompt schedule works on everything with a float or int, like you've observed. Running the combinatorial prompt example JSON workflow yields, for example: 1: Red toy train, 2: Red toy car.

On loading prompts from a file (see prompts/example and the Load Prompts From File node): note that this node loads data in a list format, not as a batch, so it returns images at their original size without normalizing the size.

Scattered notes: sharing to see if it helps out anybody — please let me know; I am completely new to ComfyUI and SD. I deleted the entire thing and started over. It's all installed via the Manager and things are working. I'm working on enabling SAM-HQ and DINO for ComfyUI to easily generate masks automatically, either through automation or prompts — see dnl13/ComfyUI-dnl13-seg. A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, Upscaler, or Face Swapper functions. I'm saying this because other people looking at this post might think the Preview Chooser is broken; I looked at the other issues related to that. In this video, we show the transition between several prompts.
The workflow doesn't care about them and executes every line of prompts, even the "commented" ones. This also happened to me after updating ComfyUI — even the default workflow was not working! (Simply saying "it doesn't work" doesn't let anyone understand anything, though; it could just be a gap in my understanding.) It's fixed now: I updated Comfy again, updated all the nodes, and it started working.

If you look at the ComfyUI examples for Area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.

While AUTOMATIC1111 can generate images based on prompt variations, I haven't found the same possibility in ComfyUI; I wasn't able to loop through all options in a single "Queue Prompt" like I hoped. For the newest ComfyUI I am not sure. One approach: write your prompt(s) in a text file and use the usual tools to run them sequentially; combined with the "Unzip Prompts" node, that will give you lists of positive and negative prompts you can CLIP-encode and KSample.

I copy/pasted from different workflows I found, and I could never find a node that simply had the multiline text editor and nothing for output except STRING (the node in that screenshot titled "Positive Prompt - Model 1"). So I'm here to share my current workflow for switching between prompts.
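Since the workflow executes every line — "commented" or not — one workaround is to strip the comments yourself before the text reaches the encoder. A minimal sketch, assuming the `//` comment convention described in this thread (the helper name is mine, and the inline split is naive — it would also truncate a prompt containing a literal `//`):

```python
def strip_prompt_comments(text: str) -> str:
    """Drop // comment lines (and inline //-tails) from a multiline prompt,
    so notes in the text box never reach the CLIP text encoder."""
    kept = []
    for line in text.splitlines():
        code = line.split("//", 1)[0].rstrip()  # cut everything after //
        if code:                                # skip lines that were all comment
            kept.append(code)
    return "\n".join(kept)
```

Run the prompt box contents through this in a small custom node (or a script) before encoding.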
Example flow attached (needs the Impact and Inspire node packs): prompt_batch.json. Welcome to ComfyUI Studio! In this tutorial, we're showcasing the "Default Prompt Batch" workflow from our Ultimate Portrait Workflow Pack. The simple approach: batch size 1, then queue 10 prompts! I managed to get it working — I just copied the example workflow from GitHub and added onto it.

I liked the ability in MJ to choose an image from the batch and upscale just that image. However, after a recent update, I noticed the "got prompt" process has become extremely slow. The node now has a "text" input, but nothing can be plugged into it. Hi, newbie here — I made this batch face-restore workflow. A new Prompt Enricher function can improve your prompt with the help of GPT-4 or GPT-3.5-Turbo. Still not as convenient as typing a name, unfortunately. Also, in A1111 I was switching between models with Dynamic Prompts to get samples for thumbnails in the Model Helper extension.

Even high-end graphics cards like the NVIDIA GeForce RTX 4090 are susceptible to similar issues. Note, however, that a batch size of 8 with a batch count of 8 will produce 64 images faster than a batch size of 1 with a batch count of 64. For SAM-based mask generation, points_per_batch (int) sets the number of points run simultaneously by the model; higher numbers may be faster but use more GPU memory.

I have a base image that I want to overlay multiple images onto, similar to applying small stickers. Edit: just tried the Dynamic Prompts nodes, but they don't seem to work. The entered prompt name and prompts will be saved in ComfyUI\user\PromptList\prompts.yaml. I am following one of House of Dim's videos on YouTube (link at the bottom) and following along.
See also the ComfyUI node version of the SD Prompt Reader (receyuki/comfyui-prompt-reader-node). Find the "Prompt JSON" node in the "prompt_converters" category in ComfyUI.

As far as I'm concerned, combinatorial mode is broken, because it alphabetizes before it randomizes, so it's only pseudo-random. Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch.

Again on caching: one of the reasons queueing appears to fail is ComfyUI's "caching" — it will NOT queue if nothing in the workflow changed. I have an SDXL checkpoint, video input + depth-map ControlNet, and everything set to XL models, but for some reason Batch Prompt Schedule is not working; it seems it's only taking the first prompt. Here's a tutorial that uses the Inspire pack to batch-process a list of external prompts from a file and run it as a batch: https://youtu.be/xfelqTfnnO8.

Been working the past couple of weeks to transition from Automatic1111 to ComfyUI. I think it works like a charm, but maybe it can get better. The second custom node I installed was the Dynamic Prompts set (adieyal/comfyui-dynamicprompts: ComfyUI custom nodes for Dynamic Prompts on github.com), to use the Random node.

On batch size: when batch size is set to 3, 3 images of the same prompt are returned. For T2I, you can set the batch_size through the Empty Latent Image node, while for I2I you can use Repeat Latent Batch to expand the same latent to a batch size specified by amount. ComfyUI stopped working entirely? I deleted the folder and reinstalled via git clone, but it's still not working.
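If you drive ComfyUI through the API, the T2I batch size described above lives on the Empty Latent Image node, so you can patch it in the API-format workflow before queueing. A small sketch (the node id in the test is hypothetical; `EmptyLatentImage` is the stock class name):

```python
def set_batch_size(workflow: dict, batch_size: int) -> dict:
    """Set batch_size on every EmptyLatentImage node in an API-format
    workflow dict — the T2I equivalent of A1111's batch size."""
    for node in workflow.values():
        if node.get("class_type") == "EmptyLatentImage":
            node["inputs"]["batch_size"] = batch_size
    return workflow
```

Remember the distinction this thread keeps making: this raises batch *size* (images per run, same prompt); queueing the workflow repeatedly raises batch *count*.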
A little late, but here's a tutorial on how to take a whole list of external prompts from a file and run it as a batch; see also "ComfyUI: batch run from command line with API" (July 22, 2023).

My ComfyUI for some reason became unstable, and whatever I typed in my prompt — say, a castle — I would get a NSFW image of a woman. And yes, the variables update at each batch count. Ideally, I'd love to leverage the prompt loaded from the image metadata (optional), but more crucially I'm seeking guidance on how to efficiently batch-load images from a folder for subsequent upscaling. I've put the textual inversions in both A1111's embeddings folder and ComfyUI's, then tested editing the .yaml file to point to either folder (direct path to ComfyUI). Has anyone been able to get Dynamic Prompts working since last week? Whatever was done in early October really messed up Dynamic Prompts. There's also a known bug where the filename is not output for each image from SD Prompt Reader when using the SD Batch Loader node.

My understanding is that with "batch count" you specify how many runs, and with "batch size" how many images are generated simultaneously. But I've often wondered: what if you want to run four different prompts at once within one batch? Can it be done? There was a bug in Dynamic Prompts that did this under AUTOMATIC1111, but I'd like to do it deliberately, in ComfyUI if possible.

The basic trick could easily be applied to your own node: I call mine FormattedLineByIndex, and as inputs it takes fmt, a STRING, and lines, a multiline STRING.
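A hypothetical reconstruction of that FormattedLineByIndex idea as a ComfyUI custom node follows. The `INPUT_TYPES` / `RETURN_TYPES` / `FUNCTION` class interface and the `NODE_CLASS_MAPPINGS` export are ComfyUI's real custom-node conventions; the `index` input and the wrap-around behavior are my assumptions, since the original post only names `fmt` and `lines`:

```python
class FormattedLineByIndex:
    """Pick one line from a multiline string and apply a format to it."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "fmt": ("STRING", {"default": "{}"}),
                "lines": ("STRING", {"multiline": True}),
                "index": ("INT", {"default": 0, "min": 0}),  # assumed input
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("result",)
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, fmt, lines, index):
        # Split the multiline input, skip blanks, and apply the format
        # string to the selected line (wrapping so any index is valid).
        rows = [ln.strip() for ln in lines.splitlines() if ln.strip()]
        return (fmt.format(rows[index % len(rows)]),)


NODE_CLASS_MAPPINGS = {"FormattedLineByIndex": FormattedLineByIndex}
```

Dropped into a file under custom_nodes/, this gives you exactly the "multiline text editor with only a STRING output" node the thread was missing.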
It's not broken and works great — I'm saying this so people don't conclude the ComfyUI batch process is broken. But it always stops working correctly after a while. Perhaps next time you could post your question in the issues rather than in the discussions; I'm not sure why I don't receive notifications for discussions. It's easy to conflate batch count with batch size. You can prove this to yourself by taking your positive and negative prompts: with random prompts, it should re-prompt after each batch, not just when the queue gets to zero. I've started a discussion on the ComfyUI GitHub about this apparent issue; in the meantime, the workaround is to set the batch count to one, have it re-prompt when the queue gets to 0, and manually turn it off when all the batches are completed.

Tip: the latest version of ComfyUI is prone to excessive graphics-memory usage when using multiple FLUX LoRA models, and this issue is not related to the size of the LoRA models.

Hi, I wondered if it is possible to have 4 different prompt conditionings from ImpactWildcardEncode made into a batch size of 4 that is then run by one sampler. Does it need a new sampler node for this? So far I have always had to make 4 new sampler combos to get 4 prompt variations, each running batch size 1. There is currently no way to handle batches of text inputs. Is this achievable?

I tried to use "Text Load Line From File" from the WAS node suite to execute multiple prompts one by one in sequential order. Is there a way to import prompts/settings from a text/CSV file for batch image generation? If I restart everything, it'll sometimes work normally for a few batches and sometimes won't.
Sometimes when you click Queue Prompt, nothing happens, and it seems "Queue Prompt" is not working. I rolled back, but the problem still persisted. However, you can achieve the same result thanks to the ComfyUI API and curl: I have a text file full of prompts, and the question is how to hook things up so that I can read a bunch of files and iterate through them with a batch function.

More symptoms: the wildcard directory does not work when using the custom_nodes folder. I'm trying to get Dynamic Prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encoder as indicated in the diagram from the GitHub page. It generates all images within the batch with the same prompt, even though I'm using this: Red {toy train|toy car}. In the A1111 Dynamic Prompts extension by the same author, that prompt would create two different prompts within the same batch. Currently the process works off a batch of conditionings and schedules for settings like denoise, and loops through the prompts. I've tried to use textual inversions, but I only get the message that they don't exist (so they're ignored). I have an SDXL checkpoint, video input + depth-map ControlNet, and everything set to XL models, but for some reason Batch Prompt Schedule is not working — it seems it's only taking the first prompt. Then I went to ComfyUI and began installing modules. About the batch loader: are you able to use it now? If not, post screenshots and the workflow to the issues so I can determine where the problem lies.
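One way to wire a text file full of prompts to the API is a loop like the sketch below. Assumptions: you saved your workflow with "Save (API Format)", put the placeholder `__PROMPT__` (my naming) in the positive-prompt text field, and keep one prompt per line in prompts.txt; the endpoint is the default local address:

```python
import json
import urllib.request
from pathlib import Path

API_URL = "http://127.0.0.1:8188/prompt"   # default local ComfyUI endpoint
PLACEHOLDER = "__PROMPT__"                 # marker placed in the workflow's text field

def render_workflow(template: str, prompt: str) -> dict:
    """Substitute one prompt line into the saved API-format workflow text.
    json.dumps(...)[1:-1] escapes quotes/backslashes safely for JSON."""
    return json.loads(template.replace(PLACEHOLDER, json.dumps(prompt)[1:-1]))

def run_batch(workflow_file: str, prompts_file: str) -> None:
    template = Path(workflow_file).read_text(encoding="utf-8")
    for line in Path(prompts_file).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue                       # skip blank lines
        body = json.dumps({"prompt": render_workflow(template, line.strip())})
        req = urllib.request.Request(API_URL, data=body.encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)        # queue this prompt and move on
```

The same substitution works from curl in a shell loop; this is just the Python equivalent.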
Video chapter notes:
8:33 [ComfyUI] Adjusting the positive and negative prompts
9:01 [ComfyUI] Test generating (ControlNet working; Dynamic Prompts not working)
9:15 [ComfyUI] For Dynamic Prompts, setting batch SIZE to more than 1 is of no use, since the same seed is used for all the wildcards
9:29 [ComfyUI] For Dynamic Prompts, setting batch COUNT to more than 1

There is a custom node for ComfyUI that integrates the Flux-Prompt-Enhance model, allowing you to enhance your prompts directly within your ComfyUI workflows. To use Prompt Travel in ComfyUI, it is recommended to install the FizzNodes plugin, which provides a convenient feature called Batch Prompt Schedule. It's quite straightforward, but maybe it could be simpler.

Symptoms reported: it'd pick a random selection for the very first generation, then all pictures generated as part of that batch are the same. BatchPromptSchedule is only running the first prompt — I had it working previously, and now, when running a JSON that did go through the scheduled prompts, it will only use the first. Use batch_count instead of batch_size. I've modified the encode method in \ComfyUI_windows_portable\ComfyUI\comfy\samplers.py; the failure is at line 253, in sampling_function: cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, ...). I've tested everything I can think of.

Looking to see if anyone has working examples of BREAK being used in ComfyUI (node-based or prompt-based) — I really love doing wildcard runs. Just wondering: can Comfy run, say, 4 different prompts in the same GPU forward pass? I've got it hooked up in an SDXL flow and I'm bruising my knuckles on SDXL.
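For orientation, Batch Prompt Schedule keys prompts to frame numbers, and prompt travel blends between neighboring keyframes. The sketch below is a conceptual model only — it parses a schedule-like string and computes linear blend weights; FizzNodes' actual syntax and interpolation have more features (pre/post text, expressions) than this:

```python
import re

def parse_schedule(text):
    """Parse schedule text such as '"0": "winter forest", "12": "spring forest"'
    into a sorted list of (frame, prompt) keyframes."""
    pairs = re.findall(r'"(\d+)"\s*:\s*"([^"]*)"', text)
    return sorted((int(f), p) for f, p in pairs)

def prompt_weights(keyframes, frame):
    """Linear weights for the two keyframed prompts surrounding `frame` —
    conceptually what prompt travel does between scheduled prompts."""
    before = [kf for kf in keyframes if kf[0] <= frame]
    after = [kf for kf in keyframes if kf[0] >= frame]
    prev = max(before, default=keyframes[0])   # clamp below the first keyframe
    nxt = min(after, default=keyframes[-1])    # clamp past the last keyframe
    if prev[0] == nxt[0]:
        return ((prev[1], 1.0), (nxt[1], 0.0))
    t = (frame - prev[0]) / (nxt[0] - prev[0])
    return ((prev[1], 1.0 - t), (nxt[1], t))
```

If a run only ever uses the first prompt, checking what the schedule parses to (as here) is a quick way to see whether the schedule text or the sampler wiring is at fault.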
The special sauce, though, is that instead of just rendering fresh, we are latent-slerping/blending between results. To get multiples in the first place, you can either use "batch_size" as part of the latent creation (say, with ComfyUI's `Empty Latent Image` node), or simply execute the prompt multiple times — by smashing the "Queue Prompt" button repeatedly, or by changing the "Batch count" in the "extra options" under the button. The equivalent of "batch size" can be configured in different ways depending on the task.

I gave the cutoff node another shot, using prompts in between my original base prompt. For file-based prompts, the Inspire pack from @ltdrdata has a "Read Prompts from File" node: specify the directories located under ComfyUI-Inspire-Pack/prompts/, and one prompts file can have multiple prompts separated by ---. @frankchieng, I'm not able to reproduce this issue on master.

I think I have a reasonable workflow that allows you to test your prompts and settings. Hello — I just updated my ComfyUI and found that "//" comments are not working anymore. Batching itself works correctly in ComfyUI; the node I used was the Value Schedule. What you'd see is a batch-load of images, with the ability to select images to use in the node network individually, at once (well, sequentially as it goes down the chain). How do you loop batch images back on top of each other in a single queue? The loopback nodes are not working. There's a pull request that will add loop support to ComfyUI, which would make this sort of thing doable without writing custom code.

Continuing the custom node from earlier: it outputs result, a STRING, which is (initially) the first line from lines with the format applied. But it is not reading the file from Google Drive; I've kind of gotten this to work with the "Text Load Line From File" custom node from WAS Suite. Combinatorial prompt: A {red|green|blue} ball — when batch size is set to 1, only 1 image is returned.
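The Inspire-pack file convention above — one file, multiple prompts separated by a `---` line — is easy to parse yourself if you want the same files in your own scripts. A small sketch (the helper name is mine):

```python
import re
from pathlib import Path

def load_prompts(path):
    """Split one prompts file into individual prompts, following the
    convention above: prompts separated by a line containing only ---."""
    text = Path(path).read_text(encoding="utf-8")
    chunks = re.split(r"^\s*---\s*$", text, flags=re.MULTILINE)
    return [c.strip() for c in chunks if c.strip()]
```

Each returned string can then be fed to a text-encode node or queued via the API, one prompt per run.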
I've been experimenting with batch generation; it works fine with the Image Batch and Mask Batch nodes of the WAS_Node_Suite, but I wasn't able to get it working just using the stock nodes. Usually it should only take seconds for ComfyUI to load everything. There is a port of the SD Dynamic Prompts Auto1111 extension to ComfyUI, plus custom nodes for Dynamic Prompts with batch functionality (Psynian/comfyui-dynamicprompts-batch); clone the repository into your ComfyUI custom nodes directory. I mean, come on — not even Blender has a "load batch images, loop over them, and do stuff" node.

For the Prompt JSON node, connect the following inputs: prompt, your main prompt text; negative_prompt, elements to avoid in the image; complexity, a float value between 0.1 and 1.0 to adjust output detail; llm_prompt_type, "One Shot" or "Few Shot"; schema_type, one of JSON, HTML, Key, Attribute.

Your question: I use batch count to generate 100 images at a time, using multiple different tabs for different generations before I go to bed. Even with 4 regions and a global condition, they just combine them all 2 at a time until it becomes a single positive conditioning (ComfyUI & Prompt Travel). Ah, I'm sorry — I was pretty new to ComfyUI and didn't know how to share workflows. I tried the API example: I can read multiple "got prompt" lines in the console, but there is no execution at all; I'm not sure why they seem to be working now. The KSampler simply can't handle a batch of values from the Batch Value Schedule, which is the posted issue. I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images. You can edit the .yaml file to point to either folder (direct path to ComfyUI). We demonstrate how you can create a short video depicting the change of seasons. If you have a workflow that's working in the default UI but not when saved as an API, the most likely possibility is that there's a node relying on extra_pnginfo, though I don't see anything like that here.
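The Dynamic Prompts variants mentioned in this thread use wildcard syntax like `A {red|green|blue} ball`. As a conceptual sketch of the combinatorial mode — not the library's actual implementation — each `{a|b|c}` group multiplies out into one prompt per combination:

```python
import re
from itertools import product

def expand(prompt):
    """Expand every {a|b|c} group combinatorially: one output prompt
    per combination, like Dynamic Prompts' combinatorial generation."""
    groups = re.findall(r"\{([^{}]*)\}", prompt)
    if not groups:
        return [prompt]
    template = re.sub(r"\{[^{}]*\}", "{}", prompt)  # leave slots for format()
    options = [g.split("|") for g in groups]
    return [template.format(*combo) for combo in product(*options)]
```

Random mode would instead pick one option per group per generation — which is why a fixed seed across a batch yields identical picks, as reported above.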
AnimateDiff — I came up with a way using a custom node. Follow the steps below to install the ComfyUI-DynamicPrompts library; the nodes provided are listed in its README. I believe the failure is due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.

On randomization: the prompt shouldn't be revisited during a batch, but it should be revisited and randomized between each batch — unless it's combinatorial. If prompts.yaml is not found, an empty prompts.yaml will be automatically created.