Access request FAQ (pinned)
#13 opened 4 months ago by samuelselvan
How to run Llama-3.1-70B-Instruct inference with multi-GPU?
#38 opened 20 days ago by ToukesuD
Slow response: Text validation
#37 opened 24 days ago by GUrubux · 1 comment
Independent evaluation results
#35 opened about 1 month ago by yaronr
Request: DOI
#34 opened about 2 months ago by Howl1226
Inference client to be added to the pipeline
#33 opened about 2 months ago by JulienGuy · 1 comment
Llama 3.1 models continuously unavailable
#32 opened 2 months ago by HugoMartin
Use of "parameters" or "arguments" in chat template
#31 opened 2 months ago
by
mbayser
Update tokenizer_config.json
#30 opened 2 months ago by Rocketknight1
Deploying to dedicated Inference Endpoints
#29 opened 3 months ago by stmackcat
Compute Instance Requirement
#28 opened 3 months ago by iammano
Slow inference / low GPU utilization
#27 opened 3 months ago by hmanju
Pruning
#24 opened 3 months ago by dhivakarsa · 7 comments
Context window size?
#23 opened 3 months ago by JulienGuy · 4 comments
Fix chat_template for tool-calling
#22 opened 3 months ago by ishelaputov · 1 comment
[ToolCalling] Fix chat_template error
#21 opened 3 months ago by ishelaputov · 1 comment
What is a way to verify that the model I am running is performing as expected?
#18 opened 3 months ago by MarkWard0110 · 1 comment
What's up with the MATH Lvl 5 score on HF Open LLM Leaderboard 2?
#16 opened 4 months ago by invalid-access · 1 comment
🚀 LMDeploy supports Llama 3.1 and its tool calling. An example of calling "Wolfram Alpha" to perform complex mathematical calculations can be found here!
#14 opened 4 months ago by vansin
Issue with tokenizer when deploying with TGI
#10 opened 4 months ago by BigBoyAlan · 1 comment
Bug in config.json?
#7 opened 4 months ago by dhruvmullick · 3 comments