Commits (40)
2b73f0f
feat(evaluation): add VLM-based metrics with litellm and transformers…
davidberenstein1957 Feb 21, 2026
89102cd
fix(evaluation): ARNIQA not in torchmetrics - implement manually
davidberenstein1957 Feb 21, 2026
c5109a6
fix(evaluation): use List-based scores pattern matching Pruna standards
davidberenstein1957 Feb 21, 2026
a39bca4
fix(evaluation): use sync completion instead of async acompletion
davidberenstein1957 Feb 21, 2026
22da2ee
chore(evaluation): remove ARNIQA from VLM PR - has dedicated PR #547
davidberenstein1957 Feb 21, 2026
4f7d31b
feat(evaluation): add structured generation to VLM metrics
davidberenstein1957 Feb 21, 2026
5c58c8d
fix(evaluation): fix linting issues in VLM metrics
davidberenstein1957 Feb 21, 2026
3df89f4
fix(evaluation): fix remaining linting issues
davidberenstein1957 Feb 21, 2026
763270b
fix(evaluation): fix D205 docstring issues in VLM classes
davidberenstein1957 Feb 21, 2026
770cc96
fix(evaluation): fix import sorting in __init__.py
davidberenstein1957 Feb 21, 2026
c7e6eed
fix(evaluation): skip docstring check for metrics_vlm
davidberenstein1957 Feb 21, 2026
2f1b044
fix(evaluation): enhance docstrings for VLM metrics and base classes
davidberenstein1957 Feb 21, 2026
67087e4
feat(evaluation): introduce new VLM metrics and integration tests
davidberenstein1957 Feb 27, 2026
1cf9fce
Delete docs/VLM_METRICS_PROMPT_COMPARISON.md
davidberenstein1957 Feb 27, 2026
c8d313d
feat(metrics): paper docstring fixes, VQA use_probability default, vl…
davidberenstein1957 Mar 5, 2026
bd163c4
feat(metrics): enhance metric classes with update and compute docstrings
davidberenstein1957 Mar 5, 2026
9c534a8
fix(vlm_base): update response_format type hints for clarity
davidberenstein1957 Mar 5, 2026
3cb8893
refactor(vlm_base): simplify response_format check for pydantic usage
davidberenstein1957 Mar 5, 2026
07e38f2
fix(vlm_base): add "json" option to response_format type hints
davidberenstein1957 Mar 5, 2026
ff5f74a
feat(dependencies): add pruna[evaluation] to dev dependencies
davidberenstein1957 Mar 5, 2026
a5c408e
refactor(metrics): improve docstring consistency and formatting acros…
davidberenstein1957 Mar 5, 2026
7d61c60
refactor(metrics): update response formats and improve utility functions
davidberenstein1957 Mar 12, 2026
ae61ef6
refactor(metrics): update collation functions and enhance benchmark t…
davidberenstein1957 Mar 17, 2026
03c5838
refactor(data): update seed parameter handling and add warnings for t…
davidberenstein1957 Mar 19, 2026
03080d2
feat(data): enhance OneIG dataset support and add new benchmarks
davidberenstein1957 Mar 19, 2026
7b01e45
feat(metrics): introduce OneIGTextScoreMetric and enhance TextScoreMe…
davidberenstein1957 Mar 19, 2026
ed98f5e
feat(metrics): add OneIGAlignmentMetric for dependency-aware scoring
davidberenstein1957 Mar 19, 2026
8de57ad
feat(metrics): add OneIG reasoning metric and enhance dataset support
davidberenstein1957 Mar 24, 2026
a2fa784
fix(evaluation): wire GenEval to qa_accuracy with all-or-nothing; ref…
davidberenstein1957 Apr 9, 2026
40b7d4a
refactor(evaluation): drop use_outlines; wire transformers via struct…
davidberenstein1957 Apr 9, 2026
943b148
evaluation: rename vlm_utils, deps, and VLM metric polish
davidberenstein1957 Apr 9, 2026
3df8f8a
evaluation: require VLM model_name, Task vlm_model_name, rename metri…
davidberenstein1957 Apr 9, 2026
76a5287
style(evaluation): ruff import order and format for metrics
davidberenstein1957 Apr 9, 2026
3610037
style(vendor): ruff fixes for oneig_llm2vec
davidberenstein1957 Apr 9, 2026
0df11b3
fix(metrics): handle list text_content; simplify VLM and benchmark tests
davidberenstein1957 Apr 9, 2026
267a027
Enhance LLM2Vec class with improved docstrings and error handling
davidberenstein1957 Apr 9, 2026
3f2ecb6
Enhance Llama model classes with improved docstrings and version checks
davidberenstein1957 Apr 9, 2026
d4d6e8f
Refactor type hints and improve error handling in LLM2Vec and Benchma…
davidberenstein1957 Apr 9, 2026
143d548
Refactor Llama model imports and enhance docstrings for clarity
davidberenstein1957 Apr 9, 2026
7435679
Refactor dataset setup functions and enhance VLM benchmark integration
davidberenstein1957 Apr 9, 2026
2 changes: 1 addition & 1 deletion docs/user_manual/configure.rst
@@ -253,7 +253,7 @@ Underneath you can find the list of all the available datasets.
- ``text: str``
* - Image Generation
- `LAION256 <https://huggingface.co/datasets/nannullna/laion_subset>`_, `OpenImage <https://huggingface.co/datasets/data-is-better-together/open-image-preferences-v1>`_, `COCO <https://huggingface.co/datasets/phiyodr/coco2017>`_, `DrawBench <https://huggingface.co/datasets/sayakpaul/drawbench>`_, `PartiPrompts <https://huggingface.co/datasets/nateraw/parti-prompts>`_, `GenAIBench <https://huggingface.co/datasets/BaiqiL/GenAI-Bench>`_
- - ``image_generation_collate``, ``prompt_collate``
+ - ``image_generation_collate``, ``prompt_with_auxiliaries_collate``
- ``text: str``, ``image: Optional[PIL.Image.Image]``
* - Image Classification
- `ImageNet <https://huggingface.co/datasets/zh-plus/tiny-imagenet>`_, `MNIST <https://huggingface.co/datasets/ylecun/mnist>`_, `CIFAR10 <https://huggingface.co/datasets/uoft-cs/cifar10>`_
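The docs table above renames the second collate for image-generation datasets to `prompt_with_auxiliaries_collate`. Only the name appears in this diff, so the following is a hypothetical sketch of what such a collate might do — split each record into its prompt text and a dict of remaining auxiliary fields; the real Pruna implementation may differ.

```python
from typing import Any


def prompt_with_auxiliaries_collate(
    batch: list[dict[str, Any]],
) -> tuple[list[str], list[dict[str, Any]]]:
    """Hypothetical sketch: separate prompts from auxiliary fields.

    Assumes each record carries its prompt under a "text" key, matching
    the ``text: str`` column in the docs table above.
    """
    prompts = [item["text"] for item in batch]
    # Everything except the prompt text travels along as auxiliaries.
    auxiliaries = [
        {k: v for k, v in item.items() if k != "text"} for item in batch
    ]
    return prompts, auxiliaries
```

A collate with this shape would let metrics that need per-prompt metadata (e.g. GenEval object lists) receive it alongside the raw prompt strings.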
7 changes: 6 additions & 1 deletion pyproject.toml
@@ -157,14 +157,18 @@ dependencies = [
"peft>=0.18.0",
"trl<=0.21.0",
"termcolor==2.3.0",
"realesrgan"
"realesrgan",
]

[project.optional-dependencies]
vllm = [
"vllm>=0.16.0",
"ray",
]
evaluation = [
"outlines>1.2.0,<2.0.0",
"litellm>=1.0.0",
]
stable-fast = [
"xformers>=0.0.30",
"stable-fast-pruna==1.0.8",
@@ -217,6 +221,7 @@ dev = [
"types-PyYAML",
"logbar",
"pytest-xdist>=3.8.0",
"pruna[evaluation]",
]
cpu = []
lmharness = [
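Since the new `[evaluation]` extra (`outlines`, `litellm`) is optional, downstream code typically guards on its presence rather than importing unconditionally. A minimal sketch of such a guard, assuming only the two package names pinned in the extra above:

```python
import importlib.util


def has_evaluation_extras() -> bool:
    """Return True when the optional [evaluation] dependencies are installed.

    Checks for the two packages the extra pins: outlines and litellm.
    find_spec returns None for an absent top-level package without importing it.
    """
    return all(
        importlib.util.find_spec(mod) is not None
        for mod in ("outlines", "litellm")
    )


print(has_evaluation_extras())
```

Users would opt in with `pip install "pruna[evaluation]"`; the dev extra now pulls it in automatically via the `pruna[evaluation]` entry added below.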
24 changes: 22 additions & 2 deletions src/pruna/data/__init__.py
@@ -34,7 +34,13 @@
setup_hps_dataset,
setup_imgedit_dataset,
setup_long_text_bench_dataset,
setup_oneig_anime_stylization_dataset,
setup_oneig_dataset,
setup_oneig_general_object_dataset,
setup_oneig_knowledge_reasoning_dataset,
setup_oneig_multilingualism_dataset,
setup_oneig_portrait_dataset,
setup_oneig_text_rendering_dataset,
setup_parti_prompts_dataset,
)
from pruna.data.datasets.question_answering import setup_polyglot_dataset
@@ -103,19 +109,33 @@
"image_classification_collate",
{"img_size": 224},
),
"DrawBench": (setup_drawbench_dataset, "prompt_collate", {}),
"DrawBench": (setup_drawbench_dataset, "prompt_with_auxiliaries_collate", {}),
"PartiPrompts": (
setup_parti_prompts_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"GenAIBench": (setup_genai_bench_dataset, "prompt_collate", {}),
"GenAIBench": (setup_genai_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GenEval": (setup_geneval_dataset, "prompt_with_auxiliaries_collate", {}),
"HPS": (setup_hps_dataset, "prompt_with_auxiliaries_collate", {}),
"ImgEdit": (setup_imgedit_dataset, "prompt_with_auxiliaries_collate", {}),
"LongTextBench": (setup_long_text_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GEditBench": (setup_gedit_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIG": (setup_oneig_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGAnimeStylization": (
setup_oneig_anime_stylization_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"OneIGGeneralObject": (setup_oneig_general_object_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGKnowledgeReasoning": (
setup_oneig_knowledge_reasoning_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"OneIGMultilingualism": (setup_oneig_multilingualism_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGPortrait": (setup_oneig_portrait_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGTextRendering": (setup_oneig_text_rendering_dataset, "prompt_with_auxiliaries_collate", {}),
"DPG": (setup_dpg_dataset, "prompt_with_auxiliaries_collate", {}),
"TinyIMDB": (setup_tiny_imdb_dataset, "text_generation_collate", {}),
"VBench": (setup_vbench_dataset, "prompt_with_auxiliaries_collate", {}),
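The registry entries above all follow one shape: dataset name mapped to a `(setup_fn, collate_name, kwargs)` triple. How that registry is consumed is not shown in this diff, so here is a self-contained sketch under that assumption; the stand-in `setup_drawbench_dataset` and `resolve` helper are hypothetical, only the triple shape and the DrawBench entry mirror the diff.

```python
from typing import Any, Callable


def setup_drawbench_dataset() -> list[dict[str, Any]]:
    # Stand-in for the real setup function; returns prompt records.
    return [{"text": "A red cube on top of a blue sphere"}]


# Shape mirrors the registry in this diff: name -> (setup_fn, collate name, kwargs).
DATASET_REGISTRY: dict[str, tuple[Callable[..., Any], str, dict[str, Any]]] = {
    "DrawBench": (setup_drawbench_dataset, "prompt_with_auxiliaries_collate", {}),
}


def resolve(name: str) -> tuple[Callable[..., Any], str, dict[str, Any]]:
    """Look up a dataset entry, failing loudly on unknown names."""
    try:
        return DATASET_REGISTRY[name]
    except KeyError:
        raise ValueError(f"Unknown dataset: {name!r}") from None


setup_fn, collate_name, kwargs = resolve("DrawBench")
print(collate_name)  # prompt_with_auxiliaries_collate
```

Keeping the collate as a string key rather than a function reference lets the registry stay import-light until a dataset is actually requested.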