
Accelerating Diffusion Transformer via Increment-Calibrated Caching with Channel-Aware Singular Value Decomposition

This is the official implementation of the CVPR 2025 paper Accelerating Diffusion Transformer via Increment-Calibrated Caching with Channel-Aware Singular Value Decomposition.
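The core idea, as the title suggests, is to reuse a transformer block's cached output across diffusion steps and correct it with a cheap low-rank update derived from an SVD of the block's weights, driven by the input increment between steps. Below is a toy NumPy sketch of that idea for a single linear layer; all names, shapes, and the exact correction rule are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 128

# A full linear-layer weight and its rank-r SVD factors (illustrative only).
W = rng.standard_normal((d, d)) / np.sqrt(d)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # (d, r): left factor scaled by singular values
B = Vt[:r]             # (r, d): right factor

def full_step(x):
    # Exact layer output: O(d^2) per token.
    return x @ W.T

def calibrated_cached_step(x, x_cached, y_cached):
    # Reuse the cached output and calibrate it with a rank-r
    # projection of the input increment: O(d*r) per token.
    return y_cached + (x - x_cached) @ B.T @ A.T

x0 = rng.standard_normal((4, d))
y0 = full_step(x0)                             # computed once and cached
x1 = x0 + 0.05 * rng.standard_normal((4, d))   # next step's input drifts slightly

y1_approx = calibrated_cached_step(x1, x0, y0)
err_cal = np.linalg.norm(y1_approx - full_step(x1))    # calibrated caching
err_naive = np.linalg.norm(y0 - full_step(x1))         # naive output reuse
```

In this toy setting the rank-r correction costs O(d·r) instead of O(d²) per token, and its error only involves the discarded tail singular values of W, so it is typically far more accurate than reusing the cached output unchanged.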

Quick Start

The instructions below take increment-calibrated caching for DiT as an example.

Setup

Download and set up the repo:

```shell
git clone https://github.com/ccccczzy/icc.git
cd icc/DiT
```

Create the environment and install the required packages:

```shell
conda env create -f environment.yml
conda activate DiT
```

Usage

To generate calibration parameters with SVD:

```shell
python gen_decomp.py --use-wtv False --rank 128
```

To generate calibration parameters with CA-SVD:

```shell
python gen_decomp.py --use-wtv True --wtv-src "mag" --num-steps 50 --num-samples 256 --rank 128 --data-path /path/to/imagenet/train
```

To generate calibration parameters with CD-SVD:

```shell
python gen_decomp.py --use-wtv True --wtv-src "delta_mag" --num-steps 50 --num-samples 256 --rank 128 --data-path /path/to/imagenet/train
```
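The three variants appear to differ in how the SVD truncation is weighted: plain SVD treats every input channel equally, while the `--use-wtv` variants scale channels by a weight vector derived from activation magnitudes (`"mag"`) or activation-increment magnitudes (`"delta_mag"`). Here is a hypothetical NumPy sketch of such a channel-weighted low-rank factorization, inferred from the flag names rather than taken from the repo's code:

```python
import numpy as np

def lowrank_factors(W, r, channel_weight=None):
    """Rank-r factors (A, B) with W ~= A @ B, W of shape (out, in).

    If channel_weight (one positive value per input channel) is given,
    the SVD is taken on W @ diag(w), so truncation error is minimized
    in a channel-weighted sense, then the scaling is undone on B.
    For "delta_mag", the weights would come from activation increments
    |x_t - x_{t-1}| instead of raw activation magnitudes.
    """
    M = W if channel_weight is None else W * channel_weight
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    A = U[:, :r] * S[:r]
    B = Vt[:r]
    if channel_weight is not None:
        B = B / channel_weight  # undo the per-channel scaling
    return A, B

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256))
# Calibration activations with very uneven per-channel scales.
x = rng.standard_normal((1024, 256)) * np.linspace(0.1, 3.0, 256)

w = np.abs(x).mean(axis=0)  # "mag": mean absolute activation per channel
A_p, B_p = lowrank_factors(W, 64)                     # plain SVD
A_c, B_c = lowrank_factors(W, 64, channel_weight=w)   # channel-aware SVD

err_plain = np.linalg.norm(x @ (W - A_p @ B_p).T)
err_ca = np.linalg.norm(x @ (W - A_c @ B_c).T)
```

When channel scales are uneven, the weighted factorization pushes the approximation error onto low-magnitude channels, which typically lowers the output error at the same rank.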

To sample images with the generated calibration parameters (replace N with the number of GPUs):

```shell
torchrun --nnodes=1 --nproc_per_node=N sample_ddp_sp_step.py --num-fid-samples 50_000 --results-dir /path/to/calibration/parameters
```

BibTeX

```bibtex
@misc{chen2025icc,
      title={Accelerating Diffusion Transformer via Increment-Calibrated Caching with Channel-Aware Singular Value Decomposition},
      author={Zhiyuan Chen and Keyi Li and Yifan Jia and Le Ye and Yufei Ma},
      year={2025},
      eprint={2505.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.05829},
}
```

Acknowledgments

This codebase borrows from DiT, PixArt-alpha, and ADM. Thanks to the authors for their wonderful work and codebases!
