
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.


Fix Cache.max_cache_len max value for Hybrid models (#39737)

* fix gemma

* fix min

* fix quant init issue

* fix gemma 3n

* skip quant cache test

* fix modular

* new test for Gemma

* include cyril change

---------

Co-authored-by: Cyril Vallez <[email protected]>
Manuel de Prada Corral committed
c4e20698985887215f7e91a02621265f047af2d7
Parent: 075dbbc
Committed by GitHub <[email protected]> on 7/29/2025, 3:12:50 PM
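The commit above concerns how a hybrid cache reports its `max_cache_len` when a model mixes sliding-window and full-attention layers (as Gemma-family models do). The following is a minimal illustrative sketch, not the actual transformers implementation: the class name `HybridCacheSketch` and its parameters are hypothetical, and it only demonstrates the general idea that the cache's overall capacity should be the maximum per-layer length rather than the minimum.

```python
# Illustrative sketch (hypothetical class, not the real transformers Cache):
# sliding-window layers cap their cache at the window size, while
# full-attention layers can grow to the configured max length, so the
# cache-wide max_cache_len should be the largest per-layer capacity.

class HybridCacheSketch:
    def __init__(self, max_len, sliding_window, layer_types):
        # Per-layer capacity: the window size for sliding layers
        # (never exceeding max_len), the full max_len otherwise.
        self.per_layer_len = [
            min(sliding_window, max_len) if t == "sliding" else max_len
            for t in layer_types
        ]

    @property
    def max_cache_len(self):
        # Report the largest per-layer capacity, not the smallest:
        # taking the minimum would wrongly clamp full-attention layers
        # to the sliding-window size.
        return max(self.per_layer_len)


cache = HybridCacheSketch(
    max_len=4096,
    sliding_window=512,
    layer_types=["sliding", "full", "sliding", "full"],
)
print(cache.max_cache_len)  # -> 4096, not 512
```

Under these assumptions, a min-based computation would have returned 512 here, silently truncating generation on the full-attention layers.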