It seems like there is a server overload or upstream downtime?

#75
by prithivMLmods - opened

[Screenshot, 2024-08-20: "Mistral 7B Instruct V0.3 - a Hugging Face Space by prithivMLmods" showing the Space erroring out]

I am also facing the same issue

The same here. Maybe the model is being updated.

@harikrishnan99 @bpcanedo

Back to Live 🚀

So you have all learned an important lesson this week: download and run your models locally, or pay for a service.

@Nurb4000
So, APIs and endpoints are for testing and easier deployment, right? `InferenceClient` might be better in this context than downloading a model.

Public ones are OK for testing, but not to be relied on for production.

  • That's what I mean. [ inference as a service ]
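When a shared public endpoint is overloaded (as in this outage), the usual client-side mitigation is to retry with exponential backoff instead of failing on the first error. A minimal sketch of that idea, assuming `call_with_retries` wraps whatever inference call you make; the function name and the `RuntimeError` stand-in for an HTTP 503 are hypothetical, not part of any Hugging Face library:

```python
import time


def call_with_retries(fn, retries=3, base_delay=1.0):
    """Call fn(), retrying on failure with exponential backoff.

    RuntimeError stands in for a transient server error (e.g. HTTP 503
    from an overloaded public endpoint). The last failure is re-raised.
    """
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            # Wait base_delay, 2*base_delay, 4*base_delay, ... between tries
            time.sleep(base_delay * (2 ** attempt))
```

In practice you would wrap the actual request (e.g. a `huggingface_hub.InferenceClient` call) in `fn`, and cap `retries` so a prolonged outage fails fast rather than hanging the app.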
prithivMLmods changed discussion status to closed
