After fine-tuning, there is a problem using it.
#50 opened about 2 hours ago by SalmanFaroz

Bounding boxes in the pre-training data and pre-training tasks (1)
#49 opened 1 day ago by bilibraker

How does the attention_mask contribute to the projector performance? (2)
#45 opened 5 days ago by lucasjin

Getting idefics2 into gguf format for use with llama.cpp and/or ollama? (2)
#43 opened 6 days ago by PaulCapestany

Large value difference when comparing hidden_states with flash attention ON and OFF
#42 opened 8 days ago by Ye27

Fine-tuning Script: QLoRA w/ Flash Attn fails (2)
#41 opened 12 days ago by RonanMcGovern

Use in pipelines? (1)
#40 opened 13 days ago by harpreetsahota

Setting compute_metrics in Trainer() leads to AttributeError (3)
#38 opened 13 days ago by Eyel

[AUTOMATED] Model Memory Requirements
#33 opened 19 days ago by model-sizer-bot

Dedicated Inference Endpoints for Idefics2-8b (5)
#32 opened 20 days ago by zesquirrelnator

How can I deploy idefics2-8b with TensorRT + Triton? (9)
#31 opened 21 days ago by marksuccsmfewercoc

Multi-GPU fine-tuning (14)
#30 opened 21 days ago by matbee

Model is incompatible with Inference Endpoints (2)
#23 opened 27 days ago by sebbyjp

CUDA OOM by simply doing a forward pass on an A6000 (48GB VRAM) (10)
#11 opened 28 days ago by starzmustdie

CUDA out of memory on A100 with 40GB (7)
#8 opened 29 days ago by SkalskiP

Error running idefics2-8b-AWQ (23)
#7 opened 29 days ago by oliverguhr