vllm.transformers_utils.processor ¶
HashableDict ¶
Bases: dict
A dictionary that can be hashed by lru_cache.
Source code in vllm/transformers_utils/processor.py
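The idea can be sketched in a few lines. This is a hypothetical re-implementation, not vLLM's actual source: a plain `dict` is unhashable, so it cannot be an argument to an `lru_cache`-decorated function; defining `__hash__` over a `frozenset` of the items fixes that (assuming the values themselves are hashable).

```python
from functools import lru_cache


class HashableDict(dict):
    """A dict that can be used as an lru_cache key (illustrative sketch)."""

    def __hash__(self) -> int:
        # Hash an order-insensitive view of the items; values must be hashable.
        return hash(frozenset(self.items()))


@lru_cache
def build(config: HashableDict) -> str:
    # Stand-in for an expensive load keyed on the config dict.
    return f"built with {sorted(config)}"


a = build(HashableDict(size=224, crop=True))
b = build(HashableDict(crop=True, size=224))  # same items, different order: cache hit
```

Because `lru_cache` keys on equality and hash, both calls above resolve to the same cached result.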
HashableList ¶
Bases: list
A list that can be hashed by lru_cache.
Source code in vllm/transformers_utils/processor.py
_transformers_v4_compatibility_import ¶
Some remote code processors still import ChatTemplateLoadKwargs, which in Transformers v4 was a subset of ProcessorChatTemplateKwargs. In Transformers v5 the two were merged into ProcessorChatTemplateKwargs and ChatTemplateLoadKwargs was removed. For backward compatibility, we add a ChatTemplateLoadKwargs alias if it doesn't exist.
This can be removed if HCXVisionForCausalLM is upstreamed to Transformers.
Source code in vllm/transformers_utils/processor.py
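The aliasing pattern can be sketched as follows. To keep the sketch self-contained it uses a `SimpleNamespace` stand-in for `transformers.processing_utils` rather than the real module; the class names mirror the docstring above.

```python
import types

# Stand-in for transformers.processing_utils in v5: ProcessorChatTemplateKwargs
# exists, ChatTemplateLoadKwargs has been removed (names mirror the docstring).
processing_utils = types.SimpleNamespace(ProcessorChatTemplateKwargs=dict)

# Backward-compat shim: only add the alias if the attribute is absent, so a
# Transformers version that still defines ChatTemplateLoadKwargs is untouched.
if not hasattr(processing_utils, "ChatTemplateLoadKwargs"):
    processing_utils.ChatTemplateLoadKwargs = (
        processing_utils.ProcessorChatTemplateKwargs
    )
```

Remote code that does `from transformers.processing_utils import ChatTemplateLoadKwargs` then keeps working against the merged class.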
_transformers_v4_compatibility_init ¶
_transformers_v4_compatibility_init() -> Any
Some remote code processors may define optional_attributes in their ProcessorMixin subclass, and then pass these arbitrary attributes directly to ProcessorMixin.__init__, which is no longer allowed in Transformers v5. For backward compatibility, we intercept these optional attributes and set them on the processor instance before calling the original ProcessorMixin.__init__.
This can be removed if Molmo2ForConditionalGeneration is upstreamed to Transformers.
Source code in vllm/transformers_utils/processor.py
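The interception pattern described above can be sketched with a stand-in `Mixin` in place of the real `ProcessorMixin` (all names here are illustrative, not vLLM's actual patch):

```python
class Mixin:
    # Subclasses list attribute names they want handled outside __init__.
    optional_attributes: list = []

    def __init__(self, **kwargs):
        # Transformers v5 behavior: unknown kwargs are rejected.
        if kwargs:
            raise TypeError(f"unexpected kwargs: {sorted(kwargs)}")


_original_init = Mixin.__init__


def _patched_init(self, **kwargs):
    # Pop each declared optional attribute out of kwargs and set it on the
    # instance, then delegate the remaining kwargs to the original __init__.
    for name in type(self).optional_attributes:
        if name in kwargs:
            setattr(self, name, kwargs.pop(name))
    _original_init(self, **kwargs)


Mixin.__init__ = _patched_init


class RemoteProcessor(Mixin):
    # Mimics a remote code processor relying on the old v4 behavior.
    optional_attributes = ["image_token"]


p = RemoteProcessor(image_token="<image>")  # would raise under plain v5 rules
```

With the patch in place, the optional attribute lands on the instance instead of tripping the strict v5 `__init__`.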
get_feature_extractor ¶
get_feature_extractor(
processor_name: str,
*args: Any,
revision: str | None = None,
trust_remote_code: bool = False,
**kwargs: Any,
)
Load an audio feature extractor for the given model name via HuggingFace.
Source code in vllm/transformers_utils/processor.py
get_image_processor ¶
get_image_processor(
processor_name: str,
*args: Any,
revision: str | None = None,
trust_remote_code: bool = False,
**kwargs: Any,
)
Load an image processor for the given model name via HuggingFace.
Source code in vllm/transformers_utils/processor.py
get_processor ¶
get_processor(
processor_name: str,
*args: Any,
revision: str | None = None,
trust_remote_code: bool = False,
processor_cls: type[_P]
| tuple[type[_P], ...] = ProcessorMixin,
**kwargs: Any,
) -> _P
Load a processor for the given model name via HuggingFace.
Source code in vllm/transformers_utils/processor.py
get_video_processor ¶
get_video_processor(
processor_name: str,
*args: Any,
revision: str | None = None,
trust_remote_code: bool = False,
processor_cls_overrides: type[_V] | None = None,
**kwargs: Any,
)
Load a video processor for the given model name via HuggingFace.