Deploy Stable Diffusion on JD Cloud GPU: A Complete Step‑by‑Step Guide

This tutorial walks you through installing GPU drivers, CUDA, Python, Anaconda, PyTorch, and the Stable Diffusion WebUI on a JD Cloud GPU instance, then shows how to add essential plugins like LoRA, ControlNet, Jupyter Notebook, and Kohya_ss for advanced AI art generation.

JD Cloud Developers

1. Create GPU Instance

1.1 Choose Instance Configuration

The JD Cloud GPU instance (e.g., Tesla P40 24G) provides the compute power needed for Stable Diffusion.

Recommended configuration: 24 GB GPU memory, 12 CPU cores, 48 GB RAM.

1.2 Configure Security Group

Create a security group and open ports 7860, 7861, 8080, and 8888.
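Once the services are running, you can confirm from an SSH session on the instance that they are actually listening on these ports. A quick sketch (assumes `ss` from iproute2, standard on Ubuntu):

```shell
# List listening TCP sockets and filter for the ports opened above
ss -ltn | grep -E ':(7860|7861|8080|8888)\b' \
  || echo "no services listening yet - start the WebUI/Jupyter first"
```

If nothing matches, the security group is fine but the service itself has not been started yet.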

2. Environment Installation

2.1 Install GPU Driver

Look up the driver version that supports your GPU on NVIDIA’s website (the 510 series covers the Tesla P40), then install it:

apt update
# Install driver version 510
apt install -y nvidia-driver-510
# Verify the installation; reboot first if nvidia-smi reports no devices
nvidia-smi

2.2 Install CUDA

Match the CUDA version to the driver (CUDA 11.6 for driver 510) and install it.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda-repo-ubuntu2004-11-6-local_11.6.2-510.47.03-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-6-local_11.6.2-510.47.03-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-6-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
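The Debian package installs the toolkit under /usr/local/cuda-11.6 but does not put it on the shell’s search path. A sketch of the usual post-install setup (the version directory matches the 11.6 install above):

```shell
# Make the CUDA 11.6 toolchain visible to this shell and to future logins
export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
echo 'export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
# Verify: prints the compiler version once CUDA is installed
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not found - check the install"
```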

2.3 Install Python 3.10

apt install software-properties-common
add-apt-repository ppa:deadsnakes/ppa
apt update
apt install python3.10
python3.10 --version

Configure pip to use a domestic mirror to avoid download timeouts. Save the following as ~/.pip/pip.conf (note that trusted-host takes a hostname, not a URL):

[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
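Equivalently, pip can write this configuration itself via its `pip config` subcommand (available in pip 10 and later), which avoids editing the file by hand:

```shell
# Same settings via pip's config command (writes to the user-level pip.conf)
python3 -m pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
python3 -m pip config set install.trusted-host pypi.tuna.tsinghua.edu.cn
```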

2.4 Install Anaconda

wget https://repo.anaconda.com/archive/Anaconda3-2023.03-1-Linux-x86_64.sh
bash Anaconda3-2023.03-1-Linux-x86_64.sh
source ~/.bashrc   # reload the shell so conda is on PATH
conda create -n python3.10.9 python=3.10.9
conda activate python3.10.9

2.5 Install PyTorch

# Using conda (recommended)
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
# Or using pip
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
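After either install, a one-liner confirms that PyTorch can actually see the GPU (guarded here so it fails soft if the install did not complete):

```shell
# Expect the version string and "True" when driver, CUDA and PyTorch line up
python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available())" \
  || echo "torch not importable - re-run the install step"
```

If this prints False, the usual culprits are a driver/CUDA version mismatch or a CPU-only torch wheel.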

3. Deploy Stable Diffusion WebUI

3.1 Clone Repository

conda activate python3.10.9
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

3.2 Install Dependencies

cd stable-diffusion-webui
pip install -r requirements_versions.txt
pip install -r requirements.txt

3.3 Launch WebUI

python launch.py --listen --enable-insecure-extension-access

The first run downloads the default model and remaining dependencies; if a model fails to download, fetch it manually from Hugging Face and place it in stable-diffusion-webui/models/Stable-diffusion.

3.4 Access UI

Open http://<IP>:7860 in a browser. For public access, set authentication:

python launch.py --listen --enable-insecure-extension-access --gradio-auth username:password
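The launch command above dies with your SSH session. A common way to keep the WebUI running after logout is nohup; the path and credentials below are placeholders carried over from the earlier steps:

```shell
# Run the WebUI detached; logs go to webui.log, PID saved for a clean shutdown
cd ~/stable-diffusion-webui
nohup python launch.py --listen --enable-insecure-extension-access \
  --gradio-auth username:password > webui.log 2>&1 &
echo $! > webui.pid
# Stop it later with: kill "$(cat webui.pid)"
```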

4. Recommended Plugins & Tools

4.1 LoRA (Additional Networks)

Install via Extensions → Install from URL with the repository

https://ghproxy.com/https://github.com/kohya-ss/sd-webui-additional-networks.git

then set the LoRA model folder path in Settings → Additional Networks.

4.2 ControlNet

Install via Extensions → Install from URL with

https://ghproxy.com/https://github.com/Mikubill/sd-webui-controlnet.git

If the automatic download fails, download the required ControlNet model files from Hugging Face manually.

4.3 Jupyter Notebook

By default Jupyter binds only to localhost, so pass --ip=0.0.0.0 to make port 8888 reachable from outside the instance:

jupyter notebook --allow-root --ip=0.0.0.0 --NotebookApp.token='your_token'

Access the notebook at http://<IP>:8888.

4.4 Kohya_ss (Model Training)

Install Docker and NVIDIA Container Toolkit, then clone the repository and adjust ports (map 7860 of Kohya to host 7861 to avoid conflict with WebUI).

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker   # register the NVIDIA runtime with Docker
sudo systemctl restart docker
nvidia-ctk --version
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
# Edit docker-compose.yaml to map 0.0.0.0:7861:7860
docker compose build
docker compose run --service-ports kohya-ss-gui
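The port remapping mentioned in the comment above amounts to a one-line edit in docker-compose.yaml. The exact surrounding keys vary between kohya_ss releases, so treat this excerpt as a sketch:

```yaml
# docker-compose.yaml (excerpt): expose the container's Gradio port 7860
# on host port 7861 so it does not collide with the Stable Diffusion WebUI
services:
  kohya-ss-gui:
    ports:
      - "0.0.0.0:7861:7860"
```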

Download missing model files from HuggingFace if needed.

5. Summary

After completing the installation and plugin configuration, Stable Diffusion runs efficiently on a JD Cloud GPU instance, providing a powerful, extensible platform for AI‑generated artwork built entirely on free, open‑source software.

Tags: Docker · AI art · JD Cloud · Installation Guide · GPU cloud
Written by JD Cloud Developers

JD Cloud Developers (the developer team of JD Technology) is a JD Technology Group platform for technical sharing and communication among AI, cloud computing, IoT, and related developers. It publishes JD product technical information, industry content, and tech event news. Embrace technology and partner with developers to envision the future.
