load model from local files, disabling flash_attn, OSError: We couldn't connect to 'https://huggingface.co'
#15 opened by wa99
I saved the model with flash_attn disabled so it can run on CPU (refer to: Fix):
```python
from unittest.mock import patch

import torch
from transformers import AutoModelForCausalLM, AutoProcessor
from transformers.dynamic_module_utils import get_imports

MODEL_HUGGINGFACE_PATH = "microsoft/Florence-2-large"

def fixed_get_imports(filename):
    # From the linked fix: drop the flash_attn import the remote code declares.
    imports = get_imports(filename)
    if "flash_attn" in imports:
        imports.remove("flash_attn")
    return imports

def save_florence_model(destination_path):
    """
    Download and save the Florence-2 model and processor locally.
    Parameters:
    - destination_path (str): Local path to save the model and processor.
    Returns:
    - None
    """
    with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):  # workaround for unnecessary flash_attn requirement
        model = AutoModelForCausalLM.from_pretrained(MODEL_HUGGINGFACE_PATH, attn_implementation="sdpa", trust_remote_code=True)
        processor = AutoProcessor.from_pretrained(MODEL_HUGGINGFACE_PATH, trust_remote_code=True)
    processor.save_pretrained("florence_processor")
    torch.save(model.state_dict(), destination_path)
    config = model.config
    config.save_pretrained("florence_config")
```
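For completeness, I call it with something like the line below (the filename matches the florence2-large.pt in the tree further down):

```python
save_florence_model("florence2-large.pt")
```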
Then I try to load it with:
```python
from unittest.mock import patch

from transformers import AutoModelForCausalLM, AutoProcessor

def load_florence_processor(florence_processor_path):
    processor = AutoProcessor.from_pretrained(florence_processor_path, local_files_only=True)
    return processor

def load_florence_local_model(config_path, model_path):
    with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):  # workaround for unnecessary flash_attn requirement
        model = AutoModelForCausalLM.from_pretrained(model_path, config=config_path, local_files_only=True)
    return model
```
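Rereading this, load_florence_processor never passes trust_remote_code=True, which presumably explains the [y/N] prompt shown below. A variant like this should silence the prompt, though I would expect the same OSError either way, since the custom code file isn't on disk:

```python
processor = AutoProcessor.from_pretrained(florence_processor_path, trust_remote_code=True, local_files_only=True)
```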
It looks like I have all the important files:
```
.
├── florence2-large.pt
├── florence_config
│   └── config.json
└── florence_processor
    ├── added_tokens.json
    ├── merges.txt
    ├── preprocessor_config.json
    ├── special_tokens_map.json
    ├── tokenizer_config.json
    ├── tokenizer.json
    └── vocab.json
```
Then I run the script:
```python
from utils import load_florence_processor, load_florence_local_model

FLORENCE_PROCESSOR_PATH = "florence_processor"
FLORENCE_CONFIG_PATH = "florence_config"
FLORENCE_MODEL_PATH = "florence_config"

if __name__ == '__main__':
    processor = load_florence_processor(FLORENCE_PROCESSOR_PATH)
    model = load_florence_local_model(FLORENCE_CONFIG_PATH, FLORENCE_MODEL_PATH)
```
This gives:
```
The repository for florence_processor contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/florence_processor.
You can avoid this prompt in future by passing the argument `trust_remote_code=True`.

Do you wish to run the custom code? [y/N]
```
and this error:
```
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like microsoft/Florence-2-large is not the path to a directory containing a file named processing_florence2.py.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
How do you all load Hugging Face models locally? I hit the same problem with BLIP2, and the fix there was essentially random: changing the model checkpoint path and putting it in a folder made it work for BLIP2, but the same trick doesn't work for Florence.
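For reference, my understanding of the usual fully-offline pattern is roughly the sketch below. It is only a sketch: "florence2-large-local" is a hypothetical directory name, I am assuming save_pretrained() also copies the remote-code .py files (modeling_florence2.py, processing_florence2.py) into that directory, and I reuse the fixed_get_imports patch from above since flash_attn still isn't installed:

```python
from unittest.mock import patch

from transformers import AutoModelForCausalLM, AutoProcessor

LOCAL_DIR = "florence2-large-local"  # hypothetical path

def snapshot_florence():
    # One-time, online: download everything and write a full snapshot to LOCAL_DIR.
    with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
        model = AutoModelForCausalLM.from_pretrained(
            "microsoft/Florence-2-large", attn_implementation="sdpa", trust_remote_code=True
        )
        processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
    model.save_pretrained(LOCAL_DIR)      # config + weights (and, I believe, the custom modeling code)
    processor.save_pretrained(LOCAL_DIR)  # processor config + tokenizer files

def load_florence_offline():
    # Later, offline: everything should resolve inside LOCAL_DIR.
    with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
        model = AutoModelForCausalLM.from_pretrained(
            LOCAL_DIR, attn_implementation="sdpa", trust_remote_code=True, local_files_only=True
        )
        processor = AutoProcessor.from_pretrained(LOCAL_DIR, trust_remote_code=True, local_files_only=True)
    return model, processor
```

Comparing this with my layout above, florence_processor contains no processing_florence2.py, which is exactly the file the OSError says it can't find, and florence_config (where FLORENCE_MODEL_PATH points) holds only config.json with no weights, so maybe that is where my approach goes wrong.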