🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Use real tokenizers if tiny version(s) creation has issue(s) (#22428)
Fix some tiny model creation issues

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Yih-Dar committed
8894b8174253655ca3e750aafe0b2ba35d790d08
Parent: 9b494a1
Committed by GitHub <noreply@github.com>
on 3/29/2023, 2:16:23 PM