
Stable Diffusion ComfyUI Note 01 - Download And Installation

Last Updated on 2024-08-12 by Clay

What is ComfyUI?

Those who play with Stable Diffusion AI-generated images have likely heard of stable-diffusion-webui. It is a visual interface that supports the Stable Diffusion model framework, allowing users to perform inference with AI models without having to write code or deal with complicated command-line operations. ComfyUI, on the other hand, is a slightly more niche front-end interface, but it has quickly garnered a loyal fan base due to its flexibility and customizability. Essentially, it can be seen as a more advanced version of stable-diffusion-webui, though it is less user-friendly.

I usually use ComfyUI workflows to generate images of characters from the "Trails" series, though once I add LoRA I worry the workflow will end up with so many nodes that they no longer fit on the screen 🙂

However, after testing it myself, I found that ComfyUI delivers more stable and faster inference than stable-diffusion-webui. The seemingly complicated node-based workflow design becomes quite convenient once you are familiar with it, and I got hooked and started exploring a series of advanced operations.

I plan to document my learning process while researching. Today's main goal is to get ComfyUI installed.


Download and Environment Configuration

Before downloading, please note the following:

  1. It is recommended to install ComfyUI in an environment with a GPU, preferably an Nvidia card, since it has the best support. Of course, you can also compute on the CPU, but it will indeed be very slow.
  2. You need to install Python first, then decide whether to set up a Python virtual environment or use Docker. My notes are primarily based on the Linux operating system, so for Windows, you may need to refer to the official guide: ComfyUI Installation Guide.

If you need to install Python from scratch, you can refer to: [Python] Tutorial(1) Download and print "Hello World".

To create a Python virtual environment, you can refer to: [Python] How to Build a Python Virtual Environment in a Folder.

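For reference, creating and activating a virtual environment on Linux usually looks something like this (a minimal sketch; the folder name comfyui-venv is just an example):

# Create a virtual environment in a folder named comfyui-venv
python3 -m venv comfyui-venv

# Activate it; subsequent pip installs then stay inside this environment
source comfyui-venv/bin/activate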

Next, assuming you have set up the Python virtual environment or plan to install directly on the native system, we can proceed to clone the entire ComfyUI project to a desired location:

# Clone the GitHub repo
git clone git@github.com:comfyanonymous/ComfyUI.git

# Install the dependencies
cd ComfyUI
pip3 install -r requirements.txt
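
Depending on your hardware, pip may pull in a PyTorch build that does not match your CUDA setup (or a CPU-only one). In that case you can install a CUDA build of PyTorch explicitly first. A minimal sketch, assuming CUDA 12.1, which matches the 2.3.1+cu121 build shown in my startup log below:

# Install CUDA 12.1 builds of PyTorch (adjust the index URL to your CUDA version)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121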


If no errors occurred, we can now start up ComfyUI.


Run ComfyUI

We can start it with the following command from the command line:

python3 main.py --listen --port 22222


Output:

clay@84f05b2d2173:/workspace/ComfyUI$ python3 main.py --listen --port 22222                                                                                                                   
Total VRAM 7940 MB, total RAM 63994 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /workspace/ComfyUI/web
Traceback (most recent call last):
  File "/workspace/ComfyUI/nodes.py", line 1931, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/workspace/ComfyUI/comfy_extras/nodes_audio.py", line 1, in <module>
    import torchaudio
ModuleNotFoundError: No module named 'torchaudio'

Cannot import /workspace/ComfyUI/comfy_extras/nodes_audio.py module for custom nodes: No module named 'torchaudio'

Import times for custom nodes:
0.0 seconds: /workspace/ComfyUI/custom_nodes/websocket_image_save.py

WARNING: some comfy_extras/ nodes did not import correctly. This may be because they are missing some dependencies.

IMPORT FAILED: nodes_audio.py

This issue might be caused by new missing dependencies added the last time you updated ComfyUI.
Please do a: pip install -r requirements.txt

Starting server

To see the GUI go to: http://0.0.0.0:22222


You can see that ComfyUI detected some audio packages I haven't installed yet. However, this does not affect running the Stable Diffusion model. In fact, I'm also curious why generating images would require torchaudio... but let's not worry about that for now.
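
Incidentally, if you want to clear that warning, installing the missing package directly should be enough (a quick sketch; pick the build that matches your installed PyTorch):

# Install torchaudio so the nodes_audio extras can import
pip3 install torchaudio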

The --listen parameter sets the host to 0.0.0.0. If not set, the service will run on localhost, i.e., 127.0.0.1, and can only be accessed by local users. However, if you are using a remote server, setting --listen allows you to connect to your ComfyUI service via the IP address.

The --port parameter sets the port for the service to open. I usually set it to a five-digit number to avoid conflicts with default ports of various services.
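
Putting the two flags together, the usual patterns look roughly like this (a sketch; 8188 is ComfyUI's default port, and <server-ip> is a placeholder for your machine's address):

# Local-only use: binds to 127.0.0.1 on the default port 8188
python3 main.py

# Remote server: listen on all interfaces and a custom port, then browse to http://<server-ip>:22222
python3 main.py --listen --port 22222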

In short, if you see the message "To see the GUI go to: http://0.0.0.0:22222", you can open your browser and check out how ComfyUI looks.

What you see is the default workflow after installation. In future notes, I will document various ways of working with it, but for now you can click "Queue Prompt" to start your first image generation inference. During inference, each card lights up in turn as it is executed, so you can easily observe how the Stable Diffusion framework runs.

Finally, if you see the generated image in the "Save Image" card on the far right, congratulations! ComfyUI has been successfully installed, and you can start testing the image generation features to your heart's content.

