COMMITS / modules/sd_hijack_optimizations.py

September 17, 2025
- update sd_hijack_optimizations.py — Changing [CUDA Compute Capability] Version (namemechan)

June 8, 2024
- integrated edits as recommended in the PR #15804 (AUTOMATIC1111)

May 15, 2024
- Replace einops.rearrange with torch native (huchenlei)
May 21, 2023
- Make sub-quadratic the default for MPS (brkirch)

May 8, 2023
- Use fixed size for sub-quadratic chunking on MPS (brkirch)

August 2, 2023

July 13, 2023
- get attention optimizations to work (AUTOMATIC1111)

July 12, 2023
- SDXL support (AUTOMATIC1111)
June 7, 2023
- Merge pull request #11066 from aljungberg/patch-1 (AUTOMATIC1111)

June 6, 2023
- Fix upcast attention dtype error. (Alexander Ljungberg)

June 4, 2023
- Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix (AUTOMATIC1111)
- fix the broken line for #10990 (AUTOMATIC)

June 3, 2023
- torch.cuda.is_available() check for SdOptimizationXformers (Vivek K. Vasishtha)
June 1, 2023
- revert default cross attention optimization to Doggettx (AUTOMATIC)
- revert default cross attention optimization to Doggettx (AUTOMATIC)

May 31, 2023
- rename print_error to report, use it together with package name (AUTOMATIC)

May 29, 2023
- Add & use modules.errors.print_error where currently printing exception info by hand (Aarni Koskela)
May 20, 2023
- Add a couple `from __future__ import annotations`es for Py3.9 compat (Aarni Koskela)

May 19, 2023
- Apply suggestions from code review (AUTOMATIC1111)

May 18, 2023
- fix linter issues (AUTOMATIC)
- make it possible for scripts to add cross attention optimizations (AUTOMATIC)

May 11, 2023
- Autofix Ruff W (not W605) (mostly whitespace) (Aarni Koskela)

May 10, 2023
- ruff auto fixes (AUTOMATIC)
- autofixes from ruff (AUTOMATIC)
April 14, 2023
- Fix for Unet NaNs (brkirch)

March 24, 2023
- Update sd_hijack_optimizations.py (FNSpd)

March 21, 2023
- Update sd_hijack_optimizations.py (FNSpd)

March 10, 2023
- sdp_attnblock_forward hijack (Pam)
- argument to disable memory efficient for sdp (Pam)

March 6, 2023
- scaled dot product attention (Pam)
January 25, 2023
- Add UI setting for upcasting attention to float32 (brkirch)

January 23, 2023
- better support for xformers flash attention on older versions of torch (AUTOMATIC)

January 21, 2023
- add --xformers-flash-attention option & impl (Takuma Mori)
- extra networks UI (AUTOMATIC)

January 6, 2023
- Added license (brkirch)
- Change sub-quad chunk threshold to use percentage (brkirch)
December 27, 2022
- Add Birch-san's sub-quadratic attention implementation (brkirch)

December 19, 2022
- Use other MPS optimization for large q.shape[0] * q.shape[1] (brkirch)

December 10, 2022
- cleanup some unneeded imports for hijack files (AUTOMATIC)
- do not replace entire unet for the resolution hack (AUTOMATIC)

November 23, 2022
- Patch UNet Forward to support resolutions that are not multiples of 64 (Billy Cao)

October 18, 2022
- Remove wrong self reference in CUDA support for invokeai (Cheka)
October 17, 2022
- Update sd_hijack_optimizations.py (C43H66N12O12S2)
- readd xformers attnblock (C43H66N12O12S2)
- delete xformers attnblock (C43H66N12O12S2)

October 11, 2022
- Use apply_hypernetwork function (brkirch)
- Add InvokeAI and lstein to credits, add back CUDA support (brkirch)
- Add check for psutil (brkirch)
- Add cross-attention optimization from InvokeAI (brkirch)
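Several commits in this log concern tensor-layout rewrites in the attention code, most directly "Replace einops.rearrange with torch native". A minimal sketch of what that kind of replacement looks like, with NumPy standing in for torch and with the function name, shapes, and pattern chosen here for illustration rather than taken from the repository's actual code:

```python
import numpy as np

def split_heads(x, h):
    """Native equivalent of the einops pattern 'b n (h d) -> (b h) n d'.

    einops would write: rearrange(x, 'b n (h d) -> (b h) n d', h=h)
    The same layout change is a reshape, an axis transpose, and a reshape.
    """
    b, n, hd = x.shape
    d = hd // h
    # 'b n (h d)' -> 'b n h d' -> 'b h n d' -> '(b h) n d'
    return x.reshape(b, n, h, d).transpose(0, 2, 1, 3).reshape(b * h, n, d)

x = np.arange(2 * 3 * 8, dtype=np.float32).reshape(2, 3, 8)
out = split_heads(x, h=4)
print(out.shape)  # (8, 3, 2)
```

Dropping the einops dependency this way trades the self-documenting pattern string for plain tensor ops, which avoids an import and the pattern-parsing overhead on the hot path.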