generation_config.json adds a mapping for the special token '<|im_end|>' so that generation stops when <|im_end|> is encountered. (#13, opened about 8 hours ago by zjyhf)
The tokenizer adds the special token '<|im_end|>' so that generation stops when <|im_end|> is encountered. (#12, opened about 8 hours ago by zjyhf)
About tokens used in this model. (#8, opened 7 days ago by icoicqico, 1 reply)

Multi-lang? (#6, opened 10 days ago by DalyD, 1 reply)
Upload to ollama (#5, opened 11 days ago by nonetrix)

Adding `safetensors` variant of this model (#4, opened 13 days ago by lucataco)

🚩 Report: Legal issue(s) (#3, opened 13 days ago by localfultonextractor, 3 replies)

Should be "Llama 3ChatQA-1.5-70B" (#2, opened 13 days ago by just1moremodel, 3 replies)
Concerns regarding Prompt Format (#1, opened 13 days ago by wolfram, 6 replies)
