repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 25,105 | closed | fix get_keys_to_not_convert() to return correct modules for full precision inference | # What does this PR do?
This PR fixes `get_keys_to_not_convert()` and returns the correct modules that should not be quantized for numerical stability reasons.
For GPT2 and bloom, both the current version (1st output) and the new version (2nd output) work well.
GPT2
```
>>> filtered_module_names_old is ['transformer.wte', 'lm_head', 'lm_head'] # both are correct, duplicated though
>>> filtered_module_names is ['lm_head', 'transformer.wte'] # both are correct
```
bloom
```
>>> filtered_module_names_old is ['lm_head', 'transformer.word_embeddings', 'lm_head'] # both are correct, duplicated though
>>> filtered_module_names is ['transformer.word_embeddings', 'lm_head'] # both are correct
```
But for ChatGLM and MPT (and possibly others), this PR finds the correct modules while the current code doesn't.
MPT
```
>>> filtered_module_names_old is ['transformer.wte', 'transformer.output_wte', 'transformer'] # this is wrong
>>> filtered_module_names is ['transformer.output_wte', 'transformer.wte'] # this is correct
```
ChatGLM2
```
>>> filtered_module_names_old is ['transformer'] # this is wrong
>>> filtered_module_names is ['transformer.output_layer'] # this is correct
``` | 07-26-2023 07:26:45 | 07-26-2023 07:26:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @younesbelkada for a first look<|||||>@younesbelkada Looks like you have fixed this in #25047 two days ago. Great work. My bad for not using the latest code.
<|||||>Alright. I am diving deeper into the code.
Could you try this code:
```python
from transformers import AutoConfig, MptForCausalLM
from transformers import AutoModelForCausalLM # many people use this for convenience
from transformers.utils.bitsandbytes import get_keys_to_not_convert
from accelerate import init_empty_weights
config = AutoConfig.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
print(get_keys_to_not_convert(model))
>>> [] # this is because there is a problem checking "if (not has_tied_params) and is_base_model" for remote code.
with init_empty_weights():
    model = MptForCausalLM(config)
>>> AttributeError: 'MPTConfig' object has no attribute 'hidden_size'
print(get_keys_to_not_convert(model))
```
See my [other issue about checking tied parameters](https://github.com/huggingface/accelerate/issues/1761)
<|||||>if you comment out the following lines:
```python
if (not has_tied_params) and is_base_model:
    return []
```
you will get the following:
```
>> ['transformer.norm_f']
```
But the new fix still gets:
```
>> ['transformer.wte']
```
So maybe it is good practice to utilize get_output_embeddings() both for finding tied parameters and for checking the lm_head?
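For illustration, here is a minimal sketch of what I mean (the helper name and the exact return format of `find_tied_parameters` are my assumptions here, not the actual implementation):
```python
from accelerate.utils import find_tied_parameters

def modules_to_keep_in_fp32(model):
    # Hypothetical sketch: rely on get_output_embeddings() instead of guessing
    # the last module, and on accelerate's find_tied_parameters() instead of
    # comparing tensors by identity.
    keep = set()
    output_embeddings = model.get_output_embeddings()
    if output_embeddings is not None:
        for name, module in model.named_modules():
            if module is output_embeddings:
                keep.add(name)
    for group in find_tied_parameters(model):  # assumed: groups of parameter names
        for param_name in group:
            keep.add(param_name.rsplit(".", 1)[0])  # strip the ".weight" suffix
    return sorted(keep)
```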
@sgugger @younesbelkada any comments? See again [here](https://github.com/huggingface/accelerate/issues/1761)
<|||||>Sure, I am happy to add the test.
Actually I have done tests similar to yours above, for the following models:
```python
model_names = [
"mosaicml/mpt-7b",
"mosaicml/mpt-7b-storywriter",
"mosaicml/mpt-7b-8k-chat",
"THUDM/chatglm2-6b",
"THUDM/chatglm-6b",
"lmsys/longchat-7b-16k",
"lmsys/fastchat-t5-3b-v1.0",
"TheBloke/koala-13B-HF",
"WizardLM/WizardLM-13B-V1.0",
"OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"nomic-ai/gpt4all-13b-snoozy",
"WizardLM/WizardLM-13B-V1.0",
"stabilityai/FreeWilly2",
"OpenAssistant/llama2-13b-orca-8k-3319",
"JosephusCheung/Guanaco",
"lmsys/vicuna-7b-v1.3",
"baichuan-inc/Baichuan-7B",
"openlm-research/open_llama_3b",
"EleutherAI/gpt-j-6b",
"facebook/opt-1.3b",
"tiiuae/falcon-7b",
"tiiuae/falcon-40b-instruct",
```
Maybe too many. I will stick to your choice of models. <|||||>>
Added the test. Would you kindly review, @younesbelkada?
<|||||>>
Thanks a lot. If there are any problems, please let me know. <|||||>Ok thanks, I will keep you updated<|||||>Indeed there is no `output_embeddings` for `AutoModelForSequenceClassification`.
We can make the model pass the test by adding
```python
if output_embeddings is None:
    output_embeddings = list(model.modules())[-1]
```
Do you think we can add this?
One more thing I am thinking about: is it important to keep the classification head (not originally an `lm_head`) in full precision, given that the matrix is not that large (and thus the quantization error might not be that large either)? Also, is it even useful to quantize classification models to 8 bits in the first place, as they are usually not that large?
<|||||>Makes sense. A minimal change is much safer. Will update the code.<|||||>Thanks very much @ranchlai !!<|||||>@younesbelkada You are very welcome. I have tried to keep the change minimal and fix the MPT problem. Please check if that's what you mean. <|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25105). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,104 | open | Inconsistent Rotation Base for Dynamic NTK Scaling RoPE | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.27
- Python version: 3.8.0
- PyTorch version (GPU?): 2.0.0+cu117 (True)
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Inconsistent problem
There is a subtle rotation inconsistency in the base factor of the DynamicNTKRope implemented in [transformers 4.31.0](https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/models/llama/modeling_llama.py#L147)
Suppose we have a decoder model, like LLaMA-1, that utilizes DynamicNTKRope for interpolation and we want to evaluate it using perplexity. In any layer of this decoder model, after the key_states and query_states are computed from the hidden features, they are then rotated based on a fixed seq_len, which is the context length.
However, while generating token by token beyond its maximum trained length at the inference stage, the LLM usually reuses the previously **cached keys**, which were rotated based on factors associated with the previous seq_len. As the seq_len keeps increasing, each cached key is rotated with respect to a different base, and consequently an inconsistency between the keys and queries arises.
### Expected behavior
I have conducted some experiments on the inconsistency and edited the code that applies the rotation to the keys and queries so that the rotation base stays consistent [here](https://github.com/NormXU/Consistent-DynamicNTKRoPE/blob/main/scale_rope/consistent_rope_for_llama_patch.py). Please check the [repo](https://github.com/NormXU/Consistent-DynamicNTKRoPE) for further details.
While I haven't tested if a consistent rotation will benefit perplexity or downstream tasks in any dataset or language model, I believe that, from a mathematical perspective, keeping consistency in the rotation base could potentially enhance the language model's ability to learn relative positions more effectively. My intuition suggests that this consistency might offer advantages in capturing relative position information. | 07-26-2023 07:15:39 | 07-26-2023 07:15:39 | cc @ArthurZucker who works more on this (kind of) model(s).
However, @NormXU, could you provide a short but self-contained code snippet to demonstrate the `inconsistency` you mentioned? Thanks.<|||||>Sure
The inconsistency happens here:
https://github.com/huggingface/transformers/blob/a5cc30d72ae2dc19af534e4b35c986cc28db1275/src/transformers/models/llama/modeling_llama.py#L314-L330
While the LLM generates token by token beyond its maximum trained length at the inference stage, RoPE is first applied to the key_states based on cos and sin w.r.t. `kv_seq_len`, and the rotated key_states are then cached.
Then, when we come to the next token, RoPE is applied to the key_states based on cos and sin w.r.t. `kv_seq_len + 1`. Since DynamicNTKScale has a rotation base that depends on seq_len:
https://github.com/huggingface/transformers/blob/a5cc30d72ae2dc19af534e4b35c986cc28db1275/src/transformers/models/llama/modeling_llama.py#L156-L163
Therefore, we have an inconsistency among the cached `key_states` themselves, and between the `key_states` and the `query_states`.
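A minimal sketch of why the base drifts (simplified from the dynamic-NTK rescaling; the numbers are only illustrative):
```python
def dynamic_ntk_base(seq_len, base=10000.0, dim=128, max_pos=2048, scaling_factor=1.0):
    # Once seq_len exceeds the trained context length, the effective RoPE base
    # is rescaled as a function of seq_len (simplified from modeling_llama.py).
    if seq_len <= max_pos:
        return base
    return base * ((scaling_factor * seq_len / max_pos) - (scaling_factor - 1)) ** (dim / (dim - 2))

# A key cached at step 3000 was rotated with one base, while the query (and any
# newly rotated key) at step 3001 uses a slightly different one:
print(dynamic_ntk_base(3000), dynamic_ntk_base(3001))
```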
<|||||>Actually @gante is the king of dynamic ROPE so will let him handle this! 🤗 <|||||>Hey @NormXU 👋
I agree with the consistency issue you pointed out. However, our users' biggest concern (and ours, by extension) is empirical results, regardless of correctness.
If you can share with us a few benchmarks where we see the change has positive benefits (and little to no downsides), I'll be more than happy to include it!<|||||>@gante Of course.
How about the perplexity experiments I did in my repo [link](https://github.com/NormXU/Consistent-DynamicNTKRoPE),
The way we currently compute perplexity already corresponds to keeping the rotation base consistent. Therefore, to bridge this gap in rotation base between perplexity evaluation and inference with DynamicNTKScale, I modified the code for how the rotary embedding is applied to keys and queries and ran simple experiments on LLaMA-1-7B.
After modification, the perplexity is computed in this way:
![image](https://github.com/huggingface/transformers/assets/33339685/b7936531-a59a-4792-bbbd-624600c72e40)
$K(\alpha(x))$ means, key_states is rotated by a rotation matrix whose base $\alpha$ is a function of sequence length.
Then, I compare the perplexity and the results are shown as below
![image](https://github.com/huggingface/transformers/assets/33339685/88a7888c-542a-400e-99c3-4873003aeb43)
This shows the perplexity values on Llama1-7B, a 2k max sequence length model; values above 12.0 are cut off for conciseness;
**Vanilla:** RoPE w/o any interpolation;
**NTK:** DynamicNTK when scale=1;
**Consistent DynamicNTK:** keep the rotation base between keys consistent; this is how we currently calculate perplexity;
**Inconsistent DynamicNTK:** keep the rotation base between keys inconsistent w.r.t. context length;
Can this experiment convincingly demonstrate that a consistent DynamicNTK can achieve better perplexity in long context than an inconsistent DynamicNTK?
Besides, could you please give me any advice on what benchmark I need to test this on? I have access to 8 x A100, enabling me to conduct many experiments quickly.
<|||||>@NormXU I see, that makes sense!
A final question, before I make a decision: have you measured throughput vs the original dynamic scaling? Since we need to reapply RoPE to the cached values, it should introduce slowdowns. The decision to include the technique in `transformers` depends on the extent of the slowdown :) |
transformers | 25,103 | open | Beam search generation with len(eos_token_id) > 1 throws exceptions | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
- Python version: 3.10.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: true
- Using distributed or parallel set-up in script?: false
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
~~~
import transformers
from transformers import GenerationConfig
import torch
name = "gpt2"
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
model = transformers.AutoModelForCausalLM.from_pretrained(name)
gc = GenerationConfig(
    max_new_tokens=40,
    eos_token_id=tokenizer.encode(" black white green red brown blue yellow purple pink orange"),
    pad_token_id=tokenizer.eos_token_id,
    num_beams=3,
)
input_ids = tokenizer.encode("Hello, I have 3 cats, one of them is colored", return_tensors="pt")
output = model.generate(input_ids, generation_config=gc)
tokenizer.decode(output[0])
~~~
### Expected behavior
This simple beam search example should work but is throwing this exception:
~~~
File /usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:2985, in GenerationMixin.beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2982 next_tokens = next_tokens % vocab_size
2984 # stateless
-> 2985 beam_outputs = beam_scorer.process(
2986 input_ids,
2987 next_token_scores,
2988 next_tokens,
2989 next_indices,
2990 pad_token_id=pad_token_id,
2991 eos_token_id=eos_token_id,
2992 beam_indices=beam_indices,
2993 )
2995 beam_scores = beam_outputs["next_beam_scores"]
2996 beam_next_tokens = beam_outputs["next_beam_tokens"]
File /usr/local/lib/python3.10/site-packages/transformers/generation/beam_search.py:297, in BeamSearchScorer.process(self, input_ids, next_scores, next_tokens, next_indices, pad_token_id, eos_token_id, beam_indices, group_index)
294 break
296 if beam_idx < self.group_size:
--> 297 raise ValueError(
298 f"At most {self.group_size} tokens in {next_tokens[batch_idx]} can be equal to `eos_token_id:"
299 f" {eos_token_id}`. Make sure {next_tokens[batch_idx]} are corrected."
300 )
302 # Check if we are done so that we can save a pad step if all(done)
303 self._done[batch_group_idx] = self._done[batch_group_idx] or self._beam_hyps[batch_group_idx].is_done(
304 next_scores[batch_idx].max().item(), cur_len
305 )
ValueError: At most 3 tokens in tensor([ 2266, 11398, 4171, 4077, 11, 10912]) can be equal to `eos_token_id: [2042, 2330, 4077, 2266, 7586, 4171, 7872, 14032, 11398, 10912]`. Make sure tensor([ 2266, 11398, 4171, 4077, 11, 10912]) are corrected.
~~~
I think there is a bug in the check `if beam_idx < self.group_size` as it doesn't take into account that there could be more than 1 eos_token_id and each beam may select more than 1 eos token after the topk.
I will be happy to work on this | 07-26-2023 06:06:05 | 07-26-2023 06:06:05 | I think maybe the point of the following line was to make sure there are enough tokens that are not eos:
[`generation/utils.py:2976`](https://github.com/huggingface/transformers/blame/main/src/transformers/generation/utils.py#L3072)
~~~
# Sample 2 next tokens for each beam (so we have some spare tokens and match output of beam search)
next_token_scores, next_tokens = torch.topk(
next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
)
~~~
Maybe instead of sampling only 2 it should sample `1 + len(eos_token_id)` tokens from each beam.
If this is the case, Ill be happy to open a pr<|||||>cc our generation expert @gante <|||||>@yonigottesman exactly, we need to get the `topk` for `max(2, 1+len(eos_token_id)) * num_beams`, such that we guarantee that we have at least 1 non-eos token per beam.
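For illustration, a rough sketch of what that change could look like (hypothetical, not the exact merged patch):
```python
# inside beam_search(), when selecting candidate tokens:
n_eos_tokens = len(eos_token_id) if eos_token_id is not None else 0
n_tokens_to_keep = max(2, 1 + n_eos_tokens) * num_beams  # guarantee >= 1 non-eos candidate per beam
next_token_scores, next_tokens = torch.topk(
    next_token_scores, n_tokens_to_keep, dim=1, largest=True, sorted=True
)
```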
> If this is the case, Ill be happy to open a pr
I'll gladly take your offer 💛 <|||||>Awesome! I'll open a PR.
why `max(2, 1+len(eos_token_id))` and not just `1+len(eos_token_id)`? I mean, if `len(eos_token_id)==0` isn't it fine to select just 1 token per beam?<|||||>@yonigottesman it probably is fine, but I'd rather err on the safe side in case there are subtle changes when `len(eos_token_id)==0` -- `max(2, 1+len(eos_token_id))` ensures the logic for that case sees no change.
These regressions are hard to track and consume a lot of time, playing defensively helps :)<|||||>@gante I agree,
opened #25115 |
transformers | 25,102 | closed | documentation for llama2 models | # What does this PR do?
Adds some documentation for 'f' models.
Fixes #25090
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@ArthurZucker
** I'm not sure if this is even the right way to include the documentation, but let me know if there's something more to add. | 07-26-2023 03:10:26 | 07-26-2023 03:10:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>changes incorporated! |
transformers | 25,101 | closed | fix tied_params for meta tensor | # What does this PR do?
This PR fixes the retrieval of `tied_params` when we load a model with accelerate. The issue was that we were not able to retrieve the tied parameters by using the `id` function to check equality between meta `tensor`s. Instead, we use `find_tied_parameters` from the accelerate library.
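For illustration, a minimal sketch of the replacement in action (the printed tied group is indicative only):
```python
from accelerate import init_empty_weights
from accelerate.utils import find_tied_parameters
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("gpt2")
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# Parameters live on the meta device here, so identity/pointer-style checks
# are unreliable; ask accelerate for the tied groups instead.
print(model.lm_head.weight.device)   # meta
print(find_tied_parameters(model))   # e.g. [['transformer.wte.weight', 'lm_head.weight']]
```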
Solves [#1772](https://github.com/huggingface/accelerate/issues/1772) | 07-25-2023 21:07:07 | 07-25-2023 21:07:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,100 | open | Trainer/accelerate crashes when loading checkpoint using FSDP: sync_module_states ValueError | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use `run_clm.py` (https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) to train a large model using the HuggingFace Trainer, use FSDP and save checkpoints. For example:
```
torchrun --nproc_per_node=4 --master_port=XXXXX experiments/run_clm.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset_name openwebtext \
--streaming \
--per_device_train_batch_size 16 \
--gradient_accumulation_steps 1 \
--do_train \
--max_steps 1000 \
--output_dir output_dir/ \
--block_size 512 \
--save_steps 10 \
--save_total_limit 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap "LlamaDecoderLayer" \
--tf32 True \
--bf16 True \
--gradient_checkpointing \
```
2. Kill training after a checkpoint has been saved. Then, resume training from the checkpoint with the `resume_from_checkpoint` training argument.
3. Observed behavior: crashes when loading checkpoint model:
```
Traceback (most recent call last):
File ".../run_clm.py", line 638, in <module>
main()
File ".../run_clm.py", line 584, in main
main()
File ".../run_clm.py", line 584, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
main()
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File ".../run_clm.py", line 584, in main
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 79, in load_fsdp_model
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
raise ValueError(
ValueError: Set the `sync_module_states` flag to `True` so that model states are synced across processes when initializing FSDP object
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 45997 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 45998) of binary: miniconda3.../bin/python
```
### Expected behavior
Expected behavior: can resume training from checkpoint using FSDP. | 07-25-2023 20:47:40 | 07-25-2023 20:47:40 | Hello, as the error mentions, `sync_module_states` needs to be True. For this please use the accelerate launcher with the related accelerate config as specified in the docs here: https://huggingface.co/docs/transformers/main_classes/trainer#using-accelerate-launcher-with-trainer<|||||>Thank you, I will try that. Is it no longer supported to use the Trainer without accelerate launch, such as with torchrun? Loading from checkpoints worked fine for me before I updated to the latest versions.<|||||>@pacman100 I tried using accelerate launch and setting `sync_module_states` to True, but I still cannot load checkpoints, but with a different error this time:
```
[INFO|trainer.py:2020] 2023-07-25 16:53:23,504 >> Loading model from weights/llama-2/debug/checkpoint-10/.
Traceback (most recent call last):
Traceback (most recent call last):
File ".../run_clm.py", line 638, in <module>
File ".../run_clm.py", line 638, in <module>
Traceback (most recent call last):
File ".../run_clm.py", line 638, in <module>
main()main()
main()
File ".../run_clm.py", line 584, in main
File ".../run_clm.py", line 584, in main
File ".../run_clm.py", line 584, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^^^^^^^^^^^^^^^ ^^ ^^ ^^ ^^ ^^ ^^ ^ ^^ ^^ ^^ ^^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^^^^^^^^^^^^^^^^^^^
^^^^ File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^ File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 1528, in train
self._load_from_checkpoint(resume_from_checkpoint)self._load_from_checkpoint(resume_from_checkpoint)
self._load_from_checkpoint(resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
File "miniconda3.../lib/python3.11/site-packages/transformers/trainer.py", line 2055, in _load_from_checkpoint
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
load_fsdp_model(self.accelerator.state.fsdp_plugin, self.accelerator, model, resume_from_checkpoint)
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 73, in load_fsdp_model
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 73, in load_fsdp_model
File "miniconda3.../lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 73, in load_fsdp_model
with FSDP.state_dict_type(
File "miniconda3.../lib/python3.11/contextlib.py", line 144, in __exit__
with FSDP.state_dict_type(
File "miniconda3.../lib/python3.11/contextlib.py", line 144, in __exit__
with FSDP.state_dict_type(
File "miniconda3.../lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
next(self.gen)
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 720, in state_dict_type
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 720, in state_dict_type
next(self.gen)
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 720, in state_dict_type
FullyShardedDataParallel.set_state_dict_type(
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 608, in set_state_dict_type
FullyShardedDataParallel.set_state_dict_type(
FullyShardedDataParallel.set_state_dict_type(
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 608, in set_state_dict_type
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 608, in set_state_dict_type
state_dict_config_type = _state_dict_type_to_config[state_dict_type]
state_dict_config_type = _state_dict_type_to_config[state_dict_type]
~~~~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ^ ^ ^ ^ ^ ^ ^~^~^~^~^~^~^~^~^~^~^~
~~~~~~KeyError~: ~None~
~~~~~~^^^^^^^^^^^^^^^^^
KeyError: None
state_dict_config_type = _state_dict_type_to_config[state_dict_type]
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: None
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 109772 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 109773) of binary: miniconda3.../bin/python
Traceback (most recent call last):
File "miniconda3.../bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "miniconda3.../lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
args.func(args)
File "miniconda3.../lib/python3.11/site-packages/accelerate/commands/launch.py", line 966, in launch_command
multi_gpu_launcher(args)
File "miniconda3.../lib/python3.11/site-packages/accelerate/commands/launch.py", line 646, in multi_gpu_launcher
distrib_run.run(args)
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "miniconda3.../lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```
Here is the accelerate config I am using:
```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: true
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
I'm really not sure what is going on, any help would be greatly appreciated!<|||||>Update: it seems that using `fsdp_state_dict_type=SHARDED_STATE_DICT` fixes this issue. Not sure if it is expected behavior to not work with `FULL_STATE_DICT`.<|||||>Hello, can you share the outputs of the saved checkpoint? This seems to be an issue with PyTorch. <|||||>I experienced a similar problem, where using `SHARDED_STATE_DICT` instead of `FULL_STATE_DICT` allowed me to load the model when resuming from checkpoint. <|||||>Hi @chenchenygu @pacman100, I am doing the same thing as you; could you share your Accelerate training script?
Here is my training script:
```
accelerate launch --config_file training_scripts/accelerate_config.json train.py \
--model_name_or_path ./pre-trained-model/huggyllama/llama-7b \
--train_file datasets/yesno_task/datatsets/train_pruning.json \
--validation_file datasets/yesno_task/datatsets/valid.json \
--bf16 True \
--output_dir model/test \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "epoch" \
--save_total_limit 5 \
--learning_rate 1e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--max_length 512 \
--gradient_checkpointing True
```
I get the following warning:
```
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead
warnings.warn(
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead
warnings.warn(
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead
warnings.warn(
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/transformers/training_args.py:1531: FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
{'loss': 2.2814, 'learning_rate': 1e-05, 'epoch': 0.62}
2023-08-02 11:29:12,413 _dedup_tensors.py:44 INFO p:MainProcess t:MainThread: Duplicate keys to remove: {}
2023-08-02 11:29:52,569 _dedup_tensors.py:44 INFO p:MainProcess t:MainThread: Duplicate keys to remove: {}
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
if tensor.storage().size() != tensor.numel():
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
if tensor.storage().size() != tensor.numel():
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
if tensor.storage().size() != tensor.numel():
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/checkpoint/filesystem.py:157: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
```
However, my training script seems to work, but I want to resolve the warning above.
Thank you for your help!!<|||||>Update: hit OOM in the training process!
```
/root/zhuxuekai/anaconda3/envs/py39/lib/python3.9/site-packages/torch/distributed/fsdp/_state_dict_utils.py:312: UserWarning: Failed to clone() tensor with name _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight on rank 0. This may mean that this state_dict entry could point to invalid memory regions after returning from state_dict() call if this parameter is managed by FSDP. Please check clone implementation of _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight. Error: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 47.54 GiB total capacity; 45.40 GiB already allocated; 25.62 MiB free; 46.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
The major error seems to be **Please check clone implementation of _fsdp_wrapped_module.model.layers.30.mlp.down_proj.weight. Error: CUDA out of memory.**<|||||>@Xuekai-Zhu I also ran into that issue, the workaround that worked for me is https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3
but this is a hacky fix, i think the ultimate fix that is needed is https://github.com/pytorch/pytorch/issues/98823#issuecomment-1504812144
but this requires changes to the accelerate/trainer code<|||||>> @Xuekai-Zhu i also ran into that issue, the workaround that worked for me is https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 but this is a hacky fix, i think the ultimate fix that is needed is [pytorch/pytorch#98823 (comment)](https://github.com/pytorch/pytorch/issues/98823#issuecomment-1504812144) but this requires changes to the accelerate/trainer code
@chenchenygu Huge thanks to you!!! https://discuss.pytorch.org/t/fsdp-failed-to-save-model-checkpoints/178232/3 also works for me<|||||>Hello, with PR https://github.com/huggingface/transformers/pull/24926, this should be resolved, i.e., by default it will use `FULL_STATE_DICT` with cpu_offload on rank_0 only. |
transformers | 25,099 | open | Missing '1' token for the Donut Processor checkpoints | ### System Info
`transformers==4.31.0`
`python=3.9`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Apparently, all Donut tokenizer checkpoints are missing the `'1'` token and only have `'▁1'` available. As a result, the Donut tokenizer is unable to tokenize any `1` that is preceded by a character.
Reproduction:
```
from transformers import DonutProcessor
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
print(processor.decode(processor.tokenizer('A1')['input_ids']))
<s> A<unk></s>
print(processor.decode(processor.tokenizer('A 1')['input_ids']))
<s> A 1</s>
print(processor.decode(processor.tokenizer('A2')['input_ids']))
<s> A2</s>
```
For Document AI, having digits and characters without separation is a pretty common pattern for fields that are IDs
### Expected behavior
For each digit in [0] + [2,.., 9], the donut tokenizer has both the digit as a standalone token and the digit with a preceding blank.
For instance:
```
print(processor.tokenizer.get_vocab().get('2'))
35934
print(processor.tokenizer.get_vocab().get('▁2'))
3822
```
However, this is not true for the digit `1` which only has the token with a preceding blank:
```
print(processor.tokenizer.get_vocab().get('1'))
None
print(processor.tokenizer.get_vocab().get('▁1'))
1314
```
Note that this behavior is not present for the tokenizer mentioned as a pretrained base on the [donut github repo](https://github.com/clovaai/donut/blob/master/donut/model.py#L159C9-L161C10):
```
self.tokenizer = XLMRobertaTokenizer.from_pretrained(
"hyunwoongko/asian-bart-ecjk" if not name_or_path else name_or_path
)
``` | 07-25-2023 20:43:36 | 07-25-2023 20:43:36 | Not sure this is something wrong in Transformers, sounds like it's the model repos that are missing something. cc @younesbelkada <|||||>> Note that this behavior is not present for the tokenizer mentioned as a pretrained base on the [donut github repo](https://github.com/clovaai/donut/blob/master/donut/model.py#L159C9-L161C10)
I tried the original one and still get the same result, see below.
Could you provide us a code snippet that could reproduce the original behavior? Thanks
```python
from transformers import XLMRobertaTokenizer
t2 = XLMRobertaTokenizer.from_pretrained("hyunwoongko/asian-bart-ecjk")
print(t2.decode(t2('A1')['input_ids']))
# A<unk></s>
```<|||||>Oups, I thought I did try that. Then I'll open an issue on the Clova repo and they'll hopefully update the checkpoints in the future<|||||>Actually, I tried using AutoTokenizer which returns a BartTokenizer and not a Roberta Tokenizer. Using a BartTokenizer seemingly fixes the issue.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("hyunwoongko/asian-bart-ecjk")
print(tokenizer.decode(tokenizer('A1')['input_ids']))
# ▁A 1 </s> en_XX
print(tokenizer.decode(tokenizer('A 1')['input_ids']))
# ▁A ▁1 </s> en_XX
```
This is still a problem on their end since they are indeed using a RobertaTokenizer, but I thought it was worth mentioning here too. |
transformers | 25,098 | closed | Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/decision_transformer | Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-25-2023 20:43:18 | 07-25-2023 20:43:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,097 | closed | Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/visual_bert | [//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some time.
Note: if you make any changes to this PR yourself, they will take precedence over the rebase.
---
[//]: # (dependabot-end)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-25-2023 20:43:05 | 07-25-2023 20:43:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,096 | closed | Bump certifi from 2022.12.7 to 2023.7.22 in /examples/research_projects/lxmert | Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.12.7 to 2023.7.22.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/certifi/python-certifi/commit/8fb96ed81f71e7097ed11bc4d9b19afd7ea5c909"><code>8fb96ed</code></a> 2023.07.22</li>
<li><a href="https://github.com/certifi/python-certifi/commit/afe77220e0eaa722593fc5d294213ff5275d1b40"><code>afe7722</code></a> Bump actions/setup-python from 4.6.1 to 4.7.0 (<a href="https://redirect.github.com/certifi/python-certifi/issues/230">#230</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2038739ad56abec7aaddfa90ad2ce6b3ed7f5c7b"><code>2038739</code></a> Bump dessant/lock-threads from 3.0.0 to 4.0.1 (<a href="https://redirect.github.com/certifi/python-certifi/issues/229">#229</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/44df761f4c09d19f32b3cc09208a739043a5e25b"><code>44df761</code></a> Hash pin Actions and enable dependabot (<a href="https://redirect.github.com/certifi/python-certifi/issues/228">#228</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/8b3d7bae85bbc87c9181cc1d39548db3d31627f0"><code>8b3d7ba</code></a> 2023.05.07</li>
<li><a href="https://github.com/certifi/python-certifi/commit/53da2405b1af430f6bafa21ba45d8dd8dfc726b8"><code>53da240</code></a> ci: Add Python 3.12-dev to the testing (<a href="https://redirect.github.com/certifi/python-certifi/issues/224">#224</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c2fc3b1f64d6946f1057971ee897ea828ae848d8"><code>c2fc3b1</code></a> Create a Security Policy (<a href="https://redirect.github.com/certifi/python-certifi/issues/222">#222</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/c211ef482a01aff5f1bc92c4128bfa0c955f4a01"><code>c211ef4</code></a> Set up permissions to github workflows (<a href="https://redirect.github.com/certifi/python-certifi/issues/218">#218</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/2087de5d0aa1d472145fc1dbdfece3fe652bbac5"><code>2087de5</code></a> Don't let deprecation warning fail CI (<a href="https://redirect.github.com/certifi/python-certifi/issues/219">#219</a>)</li>
<li><a href="https://github.com/certifi/python-certifi/commit/e0b9fc5c8f52ac8c300da502e5760ce3d41429ec"><code>e0b9fc5</code></a> remove paragraphs about 1024-bit roots from README</li>
<li>Additional commits viewable in <a href="https://github.com/certifi/python-certifi/compare/2022.12.07...2023.07.22">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=certifi&package-manager=pip&previous-version=2022.12.7&new-version=2023.7.22)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-25-2023 20:42:39 | 07-25-2023 20:42:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,095 | open | Migrate Trainer from `Repository` to `upload_folder` | # What does this PR do?
This PR migrates the internals of the push-to-Hub strategy inside the Trainer to the upload methods instead of the old git approach. The goal is to benefit from the improvements in terms of speed (for instance `upload_folder` is faster than `Repository` on Colab) and to avoid some weird sync errors we have seen happen in the past.
This migration breaks a few things. I'm listing them for the sake of completeness but I don't think those are important breaking changes.
1. the return of `Trainer.push_to_hub` will change: in the blocking case it's the URL of the repo instead of the URL of the commit. In the non-blocking case, it's the Future object returned by the `huggingface_hub` library (instead of a tuple of commit address and job in progress) — a sketch of the non-blocking flow follows this list
2. before the migration, it was instant to cancel the push in progress at the end of training, to then push the final version to the Hub (but we also had some weird bugs). After the migration, we will have to wait for the last push to be completed before pushing the final version of the model.
3. the `output_dir` of the `Trainer` won't be a git repository which is an exact clone of the model repo anymore.
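A minimal sketch of the non-blocking upload flow this relies on (repo and folder names are placeholders; the exact keyword usage is an assumption, not the Trainer's internal code):
```python
from huggingface_hub import upload_folder

future = upload_folder(
    repo_id="user/my-model",
    folder_path="output_dir",
    commit_message="Training in progress",
    run_as_future=True,   # returns a Future instead of blocking
)
print(future.done())      # the upload runs in the background
future.result()           # block until it finishes (e.g. before the final push)
```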
Note that I chose to push checkpoint jobs in two `upload_folder`: one for the model and one for the optimizer state (roughly). This way I can cancel the second one if it hasn't started when at the end of training. | 07-25-2023 19:23:28 | 07-25-2023 19:23:28 | Note that the Hub tests are failing because `upload_folder` seems very slow on moon-staging (with 503 errors and multiple retries for each push). Might be worth putting everything in blocking mode for those tests (though we won't test the fact that jobs properly run in the background then)<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25095). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,094 | closed | Add bloom flax | # What does this PR do?
Adds the Flax BLOOM model, superseding #18022 (where force pushing to Younes' branch after rebase closed the PR branch and created a new one) | 07-25-2023 18:35:46 | 07-25-2023 18:35:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the clinical review! Merging 👍 |
transformers | 25,093 | open | Add mask2former fp16 support | # What does this PR do?
Add float16 support to mask2former module and derivatives
Some operations and conversions were fixed in order to propagate the chosen user _dtype_ (ex: float32, float16) for input and embeddings. In this way, we can pass in fp16 tensors for inference.
## Who can review?
Similar PRs were reviewed by: @alaradirik, @amyeroberts, @LysandreJik
| 07-25-2023 18:25:15 | 07-25-2023 18:25:15 | Hi @pedrohml, thanks for opening this PR!
Could you add some tests for these models e.g. [like this one for ViT](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320)<|||||>@amyeroberts thanks for review. do you mind to double check new changes ?<|||||>@pedrohml Thanks for iterating and adding some tests. The tests added should match the pattern of the ones [I linked to](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320), rather than call integration tests that check logit values. There are two reasons for this:
* An independent test enables us to select as small as possible checkpoint / config to keep the CI runs as fast as possible
* It might not be robust, as we can expect some differences in the output logits. Out of interest, did you run the slow tests to confirm these pass?
<|||||>> @pedrohml Thanks for iterating and adding some tests. The tests added should match the pattern of the ones [I linked to](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/tests/models/vit/test_modeling_vit.py#L320), rather than call integration tests that check logit values. There are two reasons for this:
>
> * An independent test enables us to select as small as possible checkpoint / config to keep the CI runs as fast as possible
> * It might not be robust, as we can expect some differences in the output logits. Out of interest, did you run the slow tests to confirm these pass?
@amyeroberts
Thanks for the heads up. I managed to simplify the fp16 tests. Also, I was able to run the tests locally on GPU to confirm they all go green. This helped me fix a mistake in the OneFormer workaround.
I'd appreciate it if you could review again. Feel free to add anything else.<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25093). All of your documentation changes will be reflected on that endpoint.
transformers | 25,092 | closed | fix idefics vision config | # What does this PR do?
As discussed offline, this should refactor the vision config args.
We need to address similar changes as https://huggingface.co/HuggingFaceM4/tiny-random-idefics/discussions/2 to make it work with other checkpoints
cc @stas00
| 07-25-2023 18:13:51 | 07-25-2023 18:13:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25092). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,091 | closed | Hotfix for failing `MusicgenForConditionalGeneration` tests | # What does this PR do?
An exceptional case (for `MusicgenForConditionalGeneration`) is not detected by the CI triggered in #24927.
The test is currently failing on `main`, see
https://app.circleci.com/pipelines/github/huggingface/transformers/68971/workflows/503866ea-e425-4817-9849-52c6e7fd2865/jobs/863722 | 07-25-2023 18:00:12 | 07-25-2023 18:00:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,090 | closed | Need documentation for understanding the difference between `7B` and `7Bf` | Unclear what the tradeoffs are between using eg `7B` and `7Bf` as an argument for `model_size` in the `convert_llama_weights_to_hf.py` script:
https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/llama/convert_llama_weights_to_hf.py#L65 | 07-25-2023 17:49:17 | 07-25-2023 17:49:17 | cc @ArthurZucker <|||||>Sure, we can add a bit of documentation, mentioning that this is specific to llama2 official release 😉 |
transformers | 25,089 | closed | Move common image processing methods to BaseImageProcessor | # What does this PR do?
Moves out the `rescale` and `normalize` methods of the image processors to the BaseImageProcessor class, which all the image processors inherit from.
Reason for moving `rescale` and `normalize`:
* Standard image processing steps
* Used by (almost) all image processors
* Few cases when the logic is model specific e.g. [ViVit](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/vivit/image_processing_vivit.py#L196).
Reason for not moving other methods:
* Many require model specific preparation before calling the transforms function. For example, `resize` has different logic across many models e.g. [1](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/beit/image_processing_beit.py#L142), [2](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/detr/image_processing_detr.py#L865), [3](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/vilt/image_processing_vilt.py#L196)
* Some methods aren't universal and don't make sense to add to all image processors, e.g. [reduce_label](https://github.com/huggingface/transformers/blob/f9cc333805c47665c6afee8b5867931e54abe0c6/src/transformers/models/beit/image_processing_beit.py#L235)
* Some will be moved in future, e.g. `center_crop`, but this requires a bit more work outside the scope of this PR
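As a rough sketch, a model-specific image processor can then rely on the shared implementations instead of redefining them (a toy subclass, not an actual model's processor):

```python
import numpy as np
from transformers.image_processing_utils import BaseImageProcessor


class ToyImageProcessor(BaseImageProcessor):
    """Toy processor that only defines the model-specific flow and reuses the shared steps."""

    def preprocess(self, images, **kwargs):
        images = [np.asarray(image, dtype=np.float32) for image in images]
        # `rescale` and `normalize` are inherited from BaseImageProcessor
        images = [self.rescale(image, scale=1 / 255) for image in images]
        images = [self.normalize(image, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) for image in images]
        return {"pixel_values": images}


processor = ToyImageProcessor()
batch = processor.preprocess([np.zeros((32, 32, 3), dtype=np.uint8)])
```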
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 07-25-2023 17:21:37 | 07-25-2023 17:21:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,088 | open | [`resize_embedding`] Introduce `pad_to_multiple_of` and guidance | # What does this PR do?
Fixes #22312.
After internal discussions, it appears that adding this to `tokenizers` is not really feasible (nor is it desirable).
However, what we can do is by default resize the embedding layer to the nearest size that is optimal for the dtype of the model [following this](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc).
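A minimal sketch of what that could look like from the user side (the argument name is taken from the PR title; the padded size below is just the arithmetic under that assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["<my_new_token>"])  # vocab size becomes 50258

# pad the embedding matrix up to the next multiple of 64 for tensor-core friendly shapes
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
print(model.get_input_embeddings().weight.shape[0])  # 50304 = 786 * 64
```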
Motivations:
- the `_get_resized_embeddings` is not exposed, and thus making this automatic can be a big silent win.
- if properly documented, should not really have issues.
Cons:
- it is not backward compatible, so some kind of `config.optimise_resize` might be needed?
- it is hidden and people might not really get why tokenizer.vocab_size will be different than the model's embedding dimension. | 07-25-2023 17:09:42 | 07-25-2023 17:09:42 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25088). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,087 | closed | Edit err message and comment in `test_model_is_small` | # What does this PR do?
Tiny error message and comment update to bring into alignment with 1M param max mentioned in https://github.com/huggingface/transformers/pull/24824#issue-1804886584
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 07-25-2023 16:09:57 | 07-25-2023 16:09:57 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25087). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,086 | open | Generate: return `past_key_values` | # What does this PR do?
WIP | 07-25-2023 15:29:13 | 07-25-2023 15:29:13 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25086). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,085 | closed | [`TF`] Also apply patch to support left padding | # What does this PR do?
Fixes red test on main GPTj test equivalence. Follows #24979 | 07-25-2023 14:52:28 | 07-25-2023 14:52:28 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25085). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,084 | closed | AttributeError: 'GenerationConfig' object has no attribute 'task_to_id' | ### System Info
I am following the [Audio course](https://huggingface.co/learn/audio-course/chapter5/asr_models#longform-transcription-and-timestamps) and tried to perform translation using the automatic speech recognition pipeline but got a weird error.
Code:
```
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model='arbml/whisper-largev2-ar', device=0)
res = asr(
audio_file_path,
max_new_tokens=256,
generate_kwargs={"task": "translate"},
chunk_length_s=30,
batch_size=8,
)
```
Error:
`AttributeError: 'GenerationConfig' object has no attribute 'task_to_id'`
This was using Colab free tier on T4
transformers version:
```
import transformers
transformers.__version__
>>> '4.31.0'
```
This error arises when using `generate_kwargs={"task": "translate"}` or `generate_kwargs={"task": "transcribe"}`
Tagging @Narsil to help with pipeline issues.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model='arbml/whisper-largev2-ar', device=0)
res = asr(
audio_file_path,
max_new_tokens=256,
generate_kwargs={"task": "translate"},
chunk_length_s=30,
batch_size=8,
)
### Expected behavior
Should return a python `dict` with key named `text` that holds the English text. | 07-25-2023 14:52:27 | 07-25-2023 14:52:27 | Also cc @sanchit-gandhi since it comes from the audio course.<|||||>Can you link the full stacktrace if possible ? This might help us narrow it down faster.<|||||>+1 on the full stack-trace. It might require an update to your generation config since this is a fine-tuned checkpoint and the API was updated to take the `task`/`language` as arguments rather than as from the config's `forced_decoder_ids` (see https://github.com/huggingface/transformers/issues/21878#issuecomment-1451902363 for details)<|||||>@sanchit-gandhi
@Narsil
Here's a colab notebook to reproduce the error
https://colab.research.google.com/drive/1kLjKWZSKmvPwBqnaN-NJxy6Hv4gG5oDJ?usp=sharing<|||||>Thanks for the notebook @AmgadHasan! The generation config for this model is indeed missing, meaning it is created automatically from the config in the call to `.generate`, and is only populated with some basic information:
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model='arbml/whisper-largev2-ar')
print(asr.model.generation_config)
```
**Print Output:**
```
GenerationConfig {
"_from_model_config": true,
"begin_suppress_tokens": [
220,
50257
],
"bos_token_id": 50257,
"decoder_start_token_id": 50258,
"eos_token_id": 50257,
"max_length": 448,
"pad_token_id": 50257,
"transformers_version": "4.32.0.dev0",
"use_cache": false
}
```
If we compare this to the most recent generation config, i.e. the one for [Whisper large-v2](https://huggingface.co/openai/whisper-large-v2/blob/1f66457e6e36eeb6d89078882a39003e55c330b8/generation_config.json#L216-L219), we see that the generation config is missing both the language and task token id mappings:
```
GenerationConfig {
"begin_suppress_tokens": [
220,
50257
],
...
"task_to_id": {
"transcribe": 50359,
"translate": 50358
},
...
}
```
These language/task token mappings are used in the call to `.generate` to get the correct language/task token ids respectively:
https://github.com/huggingface/transformers/blob/a1c4954d25ca030c85319dd78395a4eff816e852/src/transformers/models/whisper/modeling_whisper.py#L1691
Since using the language/task arguments as input to the `.generate` method was added with the update to the generation config, these are new features that only work with the updated generation config.
Probably what we can do here @ArthurZucker is throw an error when the user tries to call `.generate` and passes the language/task arguments but the generation config is missing the language/task token ids mapping? Happy to open a PR to fix this
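Something along these lines (purely a sketch of the suggested guard, not the actual implementation; names and message wording are made up):

```python
def check_task_supported(generation_config, task):
    # sketch: fail early instead of hitting an AttributeError deep inside generate
    if task is not None and not hasattr(generation_config, "task_to_id"):
        raise ValueError(
            f"Cannot use task='{task}': this checkpoint's generation config has no `task_to_id` mapping. "
            "Update the generation config from a recent openai/whisper checkpoint or drop the `task` argument."
        )
```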
A quick fix for this issue @AmgadHasan is updating the generation config for the model checkpoint (as per my previous comment)<|||||>> Thanks for the notebook @AmgadHasan! The generation config for this model is indeed missing, meaning it is created automatically from the config in the call to `.generate`, and is only populated with some basic information:
Thanks @sanchit-gandhi ! This solved the issue.<|||||>The simplest way of updating the generation config is as follows:
```python
from transformers import GenerationConfig
MODEL_ID = "arbml/whisper-largev2-ar" # set to your model id on the Hub
MULTILINGUAL = True # set True for multilingual models, False for English-only
if MULTILINGUAL:
generation_config = GenerationConfig.from_pretrained("openai/whisper-large-v2")
else:
generation_config = GenerationConfig.from_pretrained("openai/whisper-medium.en")
generation_config.push_to_hub(MODEL_ID)
``` |
transformers | 25,083 | closed | update `use_auth_token` -> `token` | # What does this PR do?
Fix #25008
@sgugger This should be ready for a review. There are still a few places that use `use_auth_token`, but only during the calls to relevant methods.
I will change those calls to use `token` before merge. | 07-25-2023 14:30:39 | 07-25-2023 14:30:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>In some training example script, (for example, `examples/tensorflow/contrastive-image-text/run_clip.py`) there are
```python
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `huggingface-cli login` (necessary to use this script "
"with private models)."
)
},
)
```
Do you have any comment on how we should do with them?
Change everything to use `token` but still need to accept `use_auth_token`?
|
transformers | 25,082 | closed | LLaMA Tokenizer does not compute word_ids | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.1.0.dev20230515+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The LLaMA tokenizer does not compute word_ids; all the tokens have the id `0`. This function is useful for custom decoding strategies as it allows the user to know to which word in the sentence a token belongs.
```python
from transformers import AutoTokenizer
sentence = "This is a test"
gpt_neox_tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
llama_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")
t5_tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(gpt_neox_tokenizer([sentence], add_special_tokens=False).word_ids())
print(llama_tokenizer([sentence], add_special_tokens=False).word_ids())
print(gpt2_tokenizer([sentence], add_special_tokens=False).word_ids())
print(t5_tokenizer([sentence], add_special_tokens=False).word_ids())
print(bert_tokenizer([sentence], add_special_tokens=False).word_ids())
```
Output
```python
[0, 1, 2, 3]
[0, 0, 0, 0]
[0, 1, 2, 3]
[0, 1, 2, 2, 3]
[0, 1, 2, 3]
```
### Expected behavior
```python
print(llama_tokenizer([sentence], add_special_tokens=False).word_ids())
```
The expected output is
```python
[0, 1, 2, 3]
``` | 07-25-2023 14:25:53 | 07-25-2023 14:25:53 | Hey! Thanks for reporting this. This seems to be related to the backend tokenizer, so most probably the `tokenizers` library. cc @Narsil if you have a quick idea, otherwise, I'll investigate<|||||>Oh it's simple.
It's just that what you call "word" doesn't exist for this tokenizer...
A lot of tokenizers use whitespace splitting before processing text, but llama does not.
So the other tokenizers listed here see "This test is" as `"This"+ "test" + "is"` while llama will see exactly "This test is".
So llama thinks its one "word". So outputting `[0, 0, 0]` is technically correct, even if not too terribly useful.
The best recommendation I can make is using `offsets` instead which will give you the offsets from which each tokens comes from in the original sentence.
This is the only general way that will work on any tokenizer to recover where a given token is from. And you can compute words as you'd like by some other means if you really want to use words (I usually advise to stay away from words, it creates a lot of unecessary issues when looked under enough scrutiny)
```python
from transformers import pipeline, AutoTokenizer
llama_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
sentence = "This is a test"
model_inputs = llama_tokenizer(sentence, add_special_tokens=False, return_offsets_mapping=True)
for (input_id, (start, stop)) in zip(model_inputs["input_ids"], model_inputs["offset_mapping"]):
print(f"{input_id} {start}-{stop} {sentence[start:stop]}")
```
<|||||>Thank you @Narsil for the great explanation! |
transformers | 25,081 | open | WIP [`split_special_tokens`] Add support for `split_special_tokens` argument to encode | # What does this PR do?
Argument name is totally debatable. Will also require a pull request in `tokenizers`.
The goal is to be able to simply activate and de-activate the special token splitting. Feature was asked in #22490, and is required for some production type cases, where users pass inputs and we don't want them to be able to hack them | 07-25-2023 14:02:58 | 07-25-2023 14:02:58 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25081). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,080 | open | fix bug : add global_step when call self.lr_scheduler.step | # What does this PR do?
Fixes [Forget to put the global-step in lr scheduler in train.py #25079](https://github.com/huggingface/transformers/issues/25079)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 13:02:52 | 07-25-2023 13:02:52 | |
transformers | 25,079 | open | Forget to put the global-step in lr scheduler in train.py | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
We fine-tuned the llama model using trainer.py and accelerated the training process using DeepSpeed. We chose the cosine function as the type for the lr scheduler.
### Expected behavior
In the process of training an LLM using `train()` in trainer.py, we observed that after reloading the model from a checkpoint with DeepSpeed ZeRO-3, the learning rate scheduler did not resume from the previous training's learning rate. Instead, there were gaps in the learning rate progression. We used an lr scheduler of the cosine type, which is not provided by DeepSpeed; instead, we observed that the trainer used the [lr_scheduler](https://github.com/pytorch/pytorch/blob/main/torch/optim/lr_scheduler.py#L124) defined in torch.
| 07-25-2023 13:01:28 | 07-25-2023 13:01:28 | cc @pacman100 <|||||>Hello, a minimal reproducer for this please. The checkpointing test loads back the correct lr: https://github.com/huggingface/accelerate/actions/runs/5651936442/job/15310803049#step:5:2259<|||||>> Hello, a minimal reproducer for this please. The checkpointing test loads back the correct lr: https://github.com/huggingface/accelerate/actions/runs/5651936442/job/15310803049#step:5:2259
Hello, the difference between your job and ours lies in the **different lr_scheduler type**. We use the cosine lr_scheduler (with `--lr_scheduler_type cosine`) instead of setting `"scheduler"` in `deepspeed_config.json`.
What we see in wandb is that `train/learning_rate` always restarts from zero with warmup when we resume training from checkpoints.
![image](https://github.com/huggingface/transformers/assets/9065640/e928d3df-7194-456e-9652-cfd405b30aa3)
As we go through the code (correct us if we are wrong), the reason is that DeepSpeed doesn't support the cosine lr_scheduler, so Transformers uses the [pytorch native lr_scheduler](https://github.com/pytorch/pytorch/blob/e18d53e2df44bccd7231cdf3dad6ea1255221bd4/torch/optim/lr_scheduler.py#L124), which maintains a self-incrementing variable `self.last_epoch` if we don't pass `epoch` when calling `step()`.
We will provide a minimal reproducer later.
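In the meantime, a minimal sketch of the behaviour described above, with toy components (step counts and lr values are illustrative):

```python
import torch
from transformers import get_cosine_schedule_with_warmup


def build(lr=2e-4):
    model = torch.nn.Linear(2, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)
    return optimizer, scheduler


# first run: train 300 steps, then "checkpoint"
optimizer, scheduler = build()
for _ in range(300):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # somewhere on the cosine curve

# restart: everything is rebuilt; unless the scheduler state is restored or it is
# fast-forwarded to the saved global step, last_epoch is back at 0 and warmup starts over
optimizer, resumed = build()
print(resumed.get_last_lr())  # back at the very start of warmup
```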
<|||||>We used [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) to reproduce this bug and here is our startup script and DeepSpeed configuration.
**`startup script`**
```
torchrun --nproc_per_node=8 --master_port=29600 train.py \
--model_name_or_path '/nvme/models/llama-13b/' \
--data_path ./alpaca_data.json \
--bf16 True \
--output_dir './chpt' \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--logging_steps 1 \
--learning_rate 2e-4 \
--warmup_ratio 0.1 \
--tf32 True \
--lr_scheduler_type 'cosine' \
--adam_beta2 0.95 \
--deepspeed 'deepspeed.zero3.json' \
--save_strategy 'steps' \
--save_steps 10
```
`deepspeed config`
```
{
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 5,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false,
"flops_profiler": {
"enabled": true,
"profile_step": 5,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": "logs/deepspeed_flops.log"
}
}
```
**before restart**
![20230726-125554](https://github.com/huggingface/transformers/assets/82354186/4045a5c4-0473-479c-80ec-a1941817f0b4)
![image](https://github.com/huggingface/transformers/assets/82354186/1353067f-7d6a-47a2-8d4d-d1c4d0c0a700)
**after restart**
![new](https://github.com/huggingface/transformers/assets/82354186/87d599f2-3ea8-4b7c-aa0d-0a6c7e15fd25)
**wandb**
![image](https://github.com/huggingface/transformers/assets/82354186/ce025b1b-2248-42cc-ab78-037ac8d5114b)
We save a checkpoint every ten steps, and when we restart the training from the previous checkpoint, we observe that the learning rate after the restart is different from the learning rate before the restart. This indicates that the LR (learning rate) scheduler's state is not restored when resuming.
|
transformers | 25,078 | closed | replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size` in readme of multiple-choice task | ### What does this PR do?
replace `per_gpu_eval_batch_size` with `per_device_eval_batch_size` in the readme of the multiple-choice task, as the `per_gpu_*` training arguments are deprecated.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 12:00:21 | 07-25-2023 12:00:21 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25078). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,077 | open | [`PEFT`] Peft integration alternative design | # What does this PR do?
From the offline discussion + the comments from @patrickvonplaten in https://github.com/huggingface/transformers/pull/24827#issuecomment-1641750464 I propose a new design for tightly integrating PEFT into transformers.
This integration enables loading any PEFT adapter that is saved locally or on the Hub directly into transformers, without dispatching the entire model creation process to PEFT as introduced in #24827.
This would also enable an easier pipeline integration (a one-liner to load adapter weights) | EDIT: pipeline should work out of the box
Let's constrain this integration to a few PEFT methods only, for simplicity, and redirect users to PEFT for advanced features (e.g. merge and unload) and advanced PEFT methods (adaptation prompt, prompt learning).
Current API:
## Load a model with an adapter locally or from the Hub:
```python
import torch
from transformers import AutoModelForCausalLM, OPTForCausalLM
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(adapter_model_id)
print(model)
# directly on from_pretrained
model = OPTForCausalLM.from_pretrained(adapter_model_id)
print(model)
```
## Load and attach adapter to an existing model
```python
from transformers import AutoModelForCausalLM
# with load_adapter
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(adapter_model_id)
print(model)
# 8-bit + multi-GPU compatibility
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="balanced")
model.load_adapter(adapter_model_id)
print(model)
print(set(model.hf_device_map.values()))
_ = model(torch.LongTensor([[0, 1, 2, 3]]).to(0))
```
## Attach an adapter, iteratively enable / disable adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig
model_id = "facebook/opt-350m"
adapter_model_id = "ybelkada/opt-350m-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")
model = AutoModelForCausalLM.from_pretrained(model_id)
peft_config = PeftConfig.from_pretrained(adapter_model_id)
# To get random weights
peft_config.init_lora_weights = False
model.add_adapter(peft_config)
print(model)
model.disable_adapters()
output_disabled = model.generate(**inputs)
print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to this sub. I'm looking for a good place to
model.enable_adapters()
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
>>> Hello, MMMMMMMM
```
## Add multiple adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig, LoraConfig
model_id = "facebook/opt-350m"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
init_lora_weights=False
)
model.add_adapter(lora_config, adapter_name="adapter_1")
# attach new adapter with same config
model.add_adapter(lora_config, adapter_name="adapter_2")
model.set_adapter("adapter_1")
output_disabled = model.generate(**inputs)
print(tokenizer.decode(output_disabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to this sub. I'm looking for a good place to
model.set_adapter("adapter_2")
output_enabled = model.generate(**inputs)
print(tokenizer.decode(output_enabled[0], skip_special_tokens=True))
>>> Hello, I'm a newbie to the game. I'm looking for a good way to
```
## Save adapters
```python
from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer
from peft import PeftConfig, LoraConfig
model_id = "facebook/opt-350m"
# directly on from_pretrained
model = AutoModelForCausalLM.from_pretrained(model_id)
lora_config = LoraConfig(
target_modules=["q_proj", "k_proj"],
)
model.add_adapter(lora_config)
... # train here
model.save_pretrained(save_dir)
# you can either load it back with transformers or PEFT
from peft import AutoPeftModelForCausalLM
model = AutoPeftModelForCausalLM.from_pretrained(save_dir)
# or
model = AutoModelForCausalLM.from_pretrained(save_dir)
```
## Train adapters using Trainer
Check this gist: https://gist.github.com/younesbelkada/cdda6e4abcb09e58f6324d75e0d88862
This PR is on par with: https://github.com/huggingface/peft/pull/749
Features to support:
- [x] loading PEFT adapters
- [x] using multiple adapters
- [x] deal with models loaded with accelerate
- [x] Loading directly from `from_pretrained`
- [ ] Merging adapter weights - to not support
- [ ] Unload adapter weights - to not support
- [x] Training with BC with expected PEFT checkpoints format (do we really want to support training? Shall we just redirect users to load a classic `PeftModel` if they want to train a model?)
- [x] What about `save_pretrained` ?
Features to **not** support:
- [x] disabling adapters
- [x] prompt tuning / prompt learning methods
TODOs:
- [ ] docs
- [x] tests
cc @sgugger @patrickvonplaten @BenjaminBossan @pacman100 | 07-25-2023 11:12:34 | 07-25-2023 11:12:34 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25077). All of your documentation changes will be reflected on that endpoint.<|||||>Mmmm, but know we can't do `model = AutoModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")` despite the checkpoint having all the correct info to load the model (compared to the alternative PR). Can we still add the necessary code in `from_pretrained` to have the load take one line instead of two?
Apart from that, the design looks great to me!<|||||>At the moment it looks like loading multiple adapters is supported, could we maybe add a couple of examples when multiple adapters are loaded?<|||||>@patrickvonplaten
> At the moment it looks like loading multiple adapters is supported, could we maybe add a couple of examples when multiple adapters are loaded?
Modified the PR description to expose all the possible things that users can do currently<|||||>I think that now all core components are available, before adding the tests and the documentation, I would love to have a round of review @patrickvonplaten @sgugger @pacman100 @BenjaminBossan
I have updated the PR description to detail all possible things users can perform with this integration. Note that there is also a Trainer support and is fully BC with PeftModels
EDIT: I don't know why the CI is currently failing :/ <|||||>Ah, and also this PR should close https://github.com/huggingface/transformers/pull/24750 as pipeline should work out of the box now if you inject adapters or pass a path to an adapter file |
transformers | 25,076 | closed | Evaluation resulting in "RuntimeError: Tensors must be contiguous". | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: fp16
- use_cpu: False
- num_processes: 7
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help?
@sgugger
Apologies in advance if this is not a bug and instead a fault of my code. Either way, I think there's likely room to either fix a bug or provide significant improvements to the documentation.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provide a minimum viable example:
```
import torch
from torch import nn
from transformers import AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset
from transformers import TrainingArguments, Trainer
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, input_ids, attention_mask, labels):
"""
Given inputs to a language model, returns a reward at the last sequence position (no normalization).
Input is the output of a tokenizer.
Output is float for size batch_size.
"""
outputs = self.language_model(input_ids = input_ids, attention_mask = attention_mask)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
loss = torch.nn.functional.cross_entropy(reward, labels.half())
return {"output": reward, "loss": loss}
# Model
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
# Tokenizer
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
# Dataset:
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # Load a datasetdict with ['train', 'test'] with features input_ids, attention_mask, labels
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Metric
def compute_metrics(eval_preds):
rewards, labels = eval_preds
return nn.MSELoss(rewards, labels)
# Training -- training a probe on last unembed layer
for param in reward_model.parameters():
param.requires_grad = False
for param in reward_model.fc.parameters():
param.requires_grad = True
args = TrainingArguments("test-trainer",
evaluation_strategy="steps",
eval_steps = 50,
num_train_epochs = 3,
per_device_train_batch_size = 4,
per_device_eval_batch_size = 4,
remove_unused_columns = False,
logging_strategy = "steps",
logging_steps = 3,
fp16=True
)
trainer = Trainer( # probably using cross entropy loss
reward_model,
args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["test"],
data_collator=data_collator,
tokenizer=tokenizer,
# compute_metrics=compute_metrics,
)
trainer.train()
```
```
Traceback (most recent call last):
File "reward-model-v3-trainer.py", line 166, in <module>
trainer.train()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1901, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2226, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2934, in evaluate
output = eval_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 3147, in evaluation_loop
logits = self.accelerator.gather_for_metrics((logits))
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/accelerator.py", line 2012, in gather_for_metrics
tensor = self.gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/accelerator.py", line 1985, in gather
return gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 289, in gather
return _gpu_gather(tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 269, in _gpu_gather
return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 128, in recursively_apply
return func(data, *args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/utils/operations.py", line 266, in _gpu_gather_one
torch.distributed.all_gather(output_tensors, tensor)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1451, in wrapper
return func(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2448, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be contiguous
```
### Expected behavior
I would expect the evaluate function to work much like the example provided in the [documentation](https://huggingface.co/learn/nlp-course/chapter3/3?fw=pt#evaluation). | 07-25-2023 11:07:28 | 07-25-2023 11:07:28 | You should debug the code to find which tensor PyTorch complains about and make sure you add the `contiguous` as requested.
But we could also make sure to add it in `gather` directly @muellerzr <|||||>@notrichardren try running again, installing accelerate via `pip install git+https://github.com/huggingface/accelerate` please, it should work with the proposed solution :) <|||||>Thanks a ton -- it seems to work now!
I had to make a few changes to my codebase that were bugs on my side (DeepSpeed zero stage 3 and modifications to compute_metrics) but otherwise, it works and the original error doesn't pop up. |
transformers | 25,075 | closed | Set `TF32` flag for PyTorch cuDNN backend | # What does this PR do?
Set the `TF32` flag for both PyTorch's CUDA and cuDNN backend.
Currently, the `TrainingArguments` parser only sets the `TF32` flag for the CUDA backend. The user can manually pass `--tf32 False` on the command line, but `torch.backends.cudnn.allow_tf32` would remain `True` during training. There are also some test cases in which we manually set `torch.backends.cuda.matmul.allow_tf32 = False`.
NOTE: The default value of `torch.backends.cudnn.allow_tf32` for the cuDNN backend is `True` (was added 3 years ago).
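For reference, a minimal sketch of keeping the two backends in sync, which is roughly what the change amounts to:

```python
import torch

def set_tf32(enabled: bool):
    # apply the --tf32 value to both backends instead of only the CUDA matmul one
    torch.backends.cuda.matmul.allow_tf32 = enabled
    torch.backends.cudnn.allow_tf32 = enabled

set_tf32(False)  # e.g. what `--tf32 False` should imply for both backends
```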
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 11:05:15 | 07-25-2023 11:05:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,074 | closed | Fix: repeat per sample for SAM image embeddings | # What does this PR do?
Use `repeat_interleave`, rather than `repeat`, to avoid misalignment of the batch dimension. We repeat each image `point_batch_size` times.
Note that the semantics of `torch.repeat` differ from those of `tf` and `np`.
ref:
- https://github.com/facebookresearch/segment-anything/blob/6fdee8f2727f4506cfbbe553e23b895e27956588/segment_anything/modeling/mask_decoder.py#L126
- https://www.tensorflow.org/api_docs/python/tf/repeat
- https://numpy.org/doc/stable/reference/generated/numpy.repeat.html
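To illustrate the semantic difference on the batch dimension (toy tensors, not the actual SAM embeddings):

```python
import torch

image_embeddings = torch.arange(4).reshape(2, 2)  # toy per-image rows, batch_size=2
point_batch_size = 3

# Tensor.repeat tiles the whole batch: img0, img1, img0, img1, img0, img1
tiled = image_embeddings.repeat(point_batch_size, 1)

# repeat_interleave repeats each sample consecutively: img0 x3, then img1 x3
per_sample = image_embeddings.repeat_interleave(point_batch_size, dim=0)

print(tiled[:, 0].tolist())       # [0, 2, 0, 2, 0, 2]
print(per_sample[:, 0].tolist())  # [0, 0, 0, 2, 2, 2]
```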
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 10:55:33 | 07-25-2023 10:55:33 | cc @younesbelkada since you added the model.<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25074). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,073 | open | Slow Tokenizer adds whitespace after special token | ### System Info
Python 3.10.6
Transformers 4.31.0
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
import transformers
tokenizer = AutoTokenizer.from_pretrained(
"../models/llama-2-7b",
use_fast=False,
)
txt="this is one sentence." + tokenizer.eos_token + "this is another sentence." + tokenizer.eos_token + "this is the third sentence." + tokenizer.eos_token
txt_encoded = tokenizer.encode(txt, add_special_tokens=False)
txt_encoded_decoded = tokenizer.decode(txt_encoded)
txt_encoded_decoded_spaces_false = tokenizer.decode(txt_encoded, spaces_between_special_tokens=False)
print(transformers.__version__)
print(tokenizer.__class__)
print(f"INPUT:\n{txt}\n")
print(f"ROUNDTRIP:\n{txt_encoded_decoded}\n")
print(f"ROUNDTRIP w/ spaces_between_special_tokens=F:\n{txt_encoded_decoded}\n")
```
**Output**:
```
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
4.31.0
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>
INPUT:
this is one sentence.</s>this is another sentence.</s>this is the third sentence.</s>
ROUNDTRIP:
this is one sentence.</s> this is another sentence.</s> this is the third sentence.</s>
ROUNDTRIP w/ spaces_between_special_tokens=F:
this is one sentence.</s> this is another sentence.</s> this is the third sentence.</s>
```
### Expected behavior
`txt == txt_encoded_decoded`
I expect `text` to be the same as `decode(encode(text))`, however a whitespace is added after each special token (`</s>`). From what I saw in previous issues, `spaces_between_special_tokens=F` should change that but it does not, whitespaces are still there.
What am I missing?
Thank you for your help and apologies in advance, this issue seems to come up quite often and I spent quite some time going through issues in this repo but nothing solved it for me.
| 07-25-2023 10:07:20 | 07-25-2023 10:07:20 | Hey! My first suggestion would be to not use the legacy behaviour by setting `legacy = False` when you initialize the tokenizer.
Second, the `txt_encoded == txt_encoded_decoded` assumption is not always true for all tokenizers. In this case, the decoding adds an extra space, maybe because it is based on the previous legacy behaviour. Will investigate<|||||>> My first suggestion would be to not use the legacy behaviour by setting `legacy = False` when you initialize the tokenizer.
thanks! I tried that though and it did not change the output<|||||>Ok, the same issue exists with the fast version, but the problem is with the encoding that adds extra spaces between the special tokens.... It's a mess haha<|||||>@ArthurZucker
Sorry, I can't understand when and why we need to set `legacy=False`. Could you explain?
I run the code as follows:
```python
txt = "one more thing" + "<s>" + "traditionally" + "<s>"
tokenizer1 = LlamaTokenizer.from_pretrained(
"./resources/models/llama-2-7b-hf", legacy=True, use_fast=False
)
tokenizer2 = LlamaTokenizer.from_pretrained(
"./resources/models/llama-2-7b-hf", legacy=False, use_fast=False
)
t1 = tokenizer1.tokenize(txt)
t2 = tokenizer2.tokenize(txt)
```
Then I got:
```
t1:['▁one', '▁more', '▁thing', '<s>', '▁tradition', 'ally', '<s>']
t2:['▁one', '▁more', '▁thing', '<s>', 'tradition', 'ally', '<s>']
```
A word starting with `▁` usually marks the start of a new word (compare `▁more` with `ally`).
Even though we don't add a space before "traditionally", it is still treated as the start of a new word.
So it seems `tokenizer2`'s behaviour is the meaningful one?
<|||||>No, words starting with `_` mean that those words have a space before them, so the token is `_tradition`, while `tradition` is a different token. If you read the documentation that points to PR #24565, there is a similar example.
What's important to understand is the concept of `added tokens`.
Most often, sentencepiece tokenizers have a vocabulary, but some tokens are added afterwards. This happens with T5, for example. In `transformers`, we do not modify the underlying sentencepiece object, but we still support adding tokens.
Now imagine that `thin` is part of the sentencepiece vocab, but not `_thin`. If `thin` appears inside a word like `thinking`, it will be tokenized as [`_`, `thin`, `king`], not [`_`, `thin`, `_king`]. The same applies to any tokens that are originally part of the sentencepiece model.
In `transformers`, all `special tokens` are, in a way, added to the vocabulary, so we want to reproduce that behaviour and not add an extra space.
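If it helps, you can inspect which tokens fall into that added/special bucket for a given checkpoint. A small sketch (using `huggyllama/llama-7b` purely as an example checkpoint):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)
print(tok.all_special_tokens)  # the tokens transformers treats as special
print(tok.get_added_vocab())   # tokens layered on top of the underlying sentencepiece vocab
```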
PS: please refrain from asking something pretty much unrelated. If you have a question (not a bug) feel free to post it on [the discussion forum](https://discuss.huggingface.co/)
|
transformers | 25,072 | closed | [Docs] fix rope_scaling doc string | The `rope_scaling` supports *two* scaling strategies not three. | 07-25-2023 09:31:57 | 07-25-2023 09:31:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,071 | open | evaluation_strategy and eval_steps does not work | ### System Info
- `transformers` version: 4.27.1
- Platform: Linux-4.15.0-200-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Train an LM using run_clm.py
with parameters like below:
--model_name_or_path gpt2 \
--output_dir ${out_model_dir} \
--train_file ${train_file} \
--validation_file ${validation_file} \
--validation_split_percentage 5 \
--block_size 512 \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--do_train --do_eval --line_by_line \
--evaluation_strategy steps \
--eval_steps 100
but no eval result printed when training.
### Expected behavior
print eval result every eval_steps | 07-25-2023 09:18:39 | 07-25-2023 09:18:39 | Please provide a reproducer we can execute. We do not have your `train_file` and `validation_file`. |
transformers | 25,070 | closed | Param grad None despite model training with requires_grad=True | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to call the HubertForCTC class and fine-tune the Hubert-base model on Librispeech 100h using CTC loss. I notice that during training, the .grad values of both a) the model parameters (i.e. self.hubert.parameters) and b) the output layer parameters (self.lm_head.parameters) are always None (even after several backprop updates), even though requires_grad is True for all of these parameters. More confusingly, the loss is decreasing normally and the WER is improving. Could someone explain why? Unless I am missing something, shouldn't the .grad value be set after backpropagation?
FYIM I have followed the Huggingface blog on fine-tuning Wav2Vec2 and adapted it for Hubert. I provide my [train.py](https://gist.github.com/Remorax/9a68143c56f2457969a3ab6a4b360d90) and my [config file](https://gist.github.com/Remorax/2cde160f4fd87166ada46796746b9c6f) here.
Steps to reproduce:
1. Replace lines 1234-1245 of [modelling_hubert.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/hubert/modeling_hubert.py) with this snippet (adds some print statements):
```
outputs = self.hubert(
input_values,
attention_mask=attention_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
print (f"No. of params: {len([p for p in list(self.hubert.parameters())])}")
print (f"No. of params with grad updated: {len([p for p in list(self.hubert.parameters()) if p.grad])}")
print (f"No. of params with requires grad updated: {len([p for p in list(self.hubert.parameters()) if p.requires_grad])}")
hidden_states = outputs[0]
hidden_states = self.dropout(hidden_states)
logits = self.lm_head(hidden_states)
print (f"No. of params with grad updated in LM Head: {len([p for p in list(self.lm_head.parameters()) if p.grad])}")
print (f"No. of params with requires grad updated in LM Head: {len([p for p in list(self.lm_head.parameters()) if p.requires_grad])}")
```
2. Download [train.py](https://gist.github.com/Remorax/9a68143c56f2457969a3ab6a4b360d90) and [config.json](https://gist.github.com/Remorax/2cde160f4fd87166ada46796746b9c6f), and call train.py as follows:
```
model_name="facebook/hubert-base-ls960"
prefix="results/hubert_debug"
config_path="<path_to_config>"
rm -rf ${DIR}/${prefix}
python3 train.py \
--model_name $model_name --save_prefix ${prefix} \
--num_workers 24 --language "en" \
--trainer_config $config_path
```
The output I always get is:
No. of params: 211
No. of params with grad updated: 0
No. of params with requires grad updated: 211
No. of params with grad updated in LM Head: 0
No. of params with requires grad updated in LM Head: 2
### Expected behavior
Since I am fine-tuning all parameters, ideally, I should get:
No. of params: 211
No. of params with grad updated: 211
No. of params with requires grad updated: 211
No. of params with grad updated in LM Head: 2
No. of params with requires grad updated in LM Head: 2 | 07-25-2023 08:11:14 | 07-25-2023 08:11:14 | cc @sanchit-gandhi since this is an audio model.<|||||>Hey @Remorax - the print statements you've added are in the forward pass of the model, where the gradients will always be zero. The gradients are only computed in the back prop, which is triggered once the forward pass is completed. Once the back propagation is complete, the gradients are used to compute the parameter updates (using the optimiser), and the parameter updates applied to the parameters. The gradients and parameters updates are then set to zero, and we go again:
https://github.com/huggingface/transformers/blob/2fac3422389d5b4284482f036409222b3beba822/src/transformers/trainer.py#L1892C38-L1892C38
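To make that lifecycle concrete, here is a minimal, standalone PyTorch sketch (a plain `nn.Linear` on random data, nothing Hubert-specific, purely for illustration):
```python
import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

loss = nn.functional.mse_loss(model(x), y)  # forward pass: .grad has not been touched yet
print(model.weight.grad)                    # None

loss.backward()                             # backward pass populates .grad
print(model.weight.grad is not None)        # True

optimizer.step()                            # gradients are consumed to update the parameters
optimizer.zero_grad(set_to_none=True)       # ...and then reset before the next step
print(model.weight.grad)                    # None again
```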
This is why the gradients are always zero for the print statements you've added (they're reset after each parameter update step). If the gradients were truly zero for every training step, then we'd never make any parameter updates, and the train loss would stay constant. The fact that your loss is decreasing normally means that the gradients are being computed and parameter updates applied to the params. Hope that explains it.<|||||>Thanks so much, that helps! Yes I suspected it was training anyway but didn't know that .grad was reset at every backprop step - thought it would be visible in the forward pass as well.
Thanks for clarifying! |
transformers | 25,069 | open | Fast and normal tokenizer generate different output when handling consecutive spaces. | ### System Info
Transformer version: 4.29.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```python
import transformers
fast = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=True, cache_dir='hf_cache')
normal = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False, cache_dir='hf_cache')
print(fast.__class__, fast('a b'))
print(normal.__class__, normal('a b'))
```
output:
```text
<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> {'input_ids': [1, 260, 31822, 31822, 31822, 284], 'token_type_ids': [0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1]}
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> {'input_ids': [1, 260, 284], 'attention_mask': [1, 1, 1]}
```
### Expected behavior
Same output | 07-25-2023 06:29:46 | 07-25-2023 06:29:46 | cc @ArthurZucker <|||||>This seems to be pretty close to #24918.
I don't know how they converted the fast tokenizer but it seems wrong. I suggest you open an issue at https://hf.co/openlm-research/open_llama_7b/discussions, as if you use the `huggyllama/llama-7b` model, you will not have this issue:
```python
import transformers
>>> fast = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=True, cache_dir='hf_cache')
>>> normal = transformers.AutoTokenizer.from_pretrained('openlm-research/open_llama_7b', use_fast=False, cache_dir='hf_cache')
>>> print(fast.__class__, fast('a b'))
<class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'> {'input_ids': [1, 263, 1678, 289], 'attention_mask': [1, 1, 1, 1]}
>>> print(normal.__class__, normal('a b'))
<class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'> {'input_ids': [1, 263, 1678, 289], 'attention_mask': [1, 1, 1, 1]}
``` |
transformers | 25,068 | open | change vocab_size in deberta-v2 default config | # What does this PR do?
It's a tiny issue.
The default vocab size in the DeBERTa-v2 config does not match the DeBERTa-v2 checkpoints on the Hugging Face Hub.
I changed 128100 to 128001.
Thanks.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 06:27:11 | 07-25-2023 06:27:11 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25068). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,067 | closed | Fix broken link in README_hd.md | # What does this PR do?
In line 457, the link was pointing to https://huggingface.co/ instead of https://huggingface.co/docs/transformers/index#supported. I have fixed that in this PR.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-25-2023 03:36:22 | 07-25-2023 03:36:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,066 | closed | fix: add TOC anchor link | # What does this PR do?
There are duplicate [Requirements] titles in perf_infer_gpu_one.md (lines 51 and 117), which causes an error when jumping to a section from the table of contents, so this PR adds an anchor link for each title.
Fixes #25028
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @stevhliu
| 07-25-2023 02:59:19 | 07-25-2023 02:59:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,065 | open | llama2 training has nan | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am training Llama-2-7b with the Hugging Face Trainer. I find that NaNs occur in the **forward** pass when training with batch size > 1; with batch size = 1 there is no error.
I dug into it and found that the **nan** occurs in layer 31's input layer norm, which is caused by an **inf** in layer 30's MLP forward after the post layer norm, and this **inf** may come from huge values in the hidden states. However, this doesn't explain why LLaMA-1, and Llama-2 with batch size = 1, work fine, since they also have huge outliers in the hidden states.
The code I use is like this:
The dataset format is: `write a xxx. ###sentences: xxxx`
The checkpoint of `meta-llama/Llama-2-7b-hf` and original meta checkpoint converted by transformers code are both tried.
```
trainer = transformers.Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
args=transformers.TrainingArguments(
per_device_train_batch_size=args.micro_batch,
gradient_accumulation_steps=args.gradient_accumulation_steps,
warmup_ratio=args.warmup_ratio,
num_train_epochs=args.num_epoch,
learning_rate=3e-4,
fp16=True,
logging_steps=args.log_steps,
logging_first_step=True, # convenient
evaluation_strategy="no",
save_strategy=args.save_strategy,
eval_steps=None,
save_steps=args.save_steps,
output_dir=args.output_path,
load_best_model_at_end= False,
ddp_find_unused_parameters=False if ddp else None,
report_to="wandb" if args.wandb else [],
ignore_data_skip=args.ignore_data_skip,
),
data_collator=PROMPT.data_collator()
)
model.config.use_cache = False
if list(pathlib.Path(args.output_path).glob("checkpoint-*")):
trainer.train(resume_from_checkpoint=True)
else:
trainer.train()
trainer.save_state()
model.save_pretrained(args.output_path)
```
### Expected behavior
The training should has no nan in forward, thus loss will be normal in backward | 07-25-2023 01:37:14 | 07-25-2023 01:37:14 | What are you setting as your `pad_token`? I was getting `nan` while running batched inference (see post [here](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)) and am wondering if this might be related to your issue?<|||||>I set tokenizer.pad_token_id = 0, which is the same as llama1 (:<|||||>I have the same question!<|||||>Me too!<|||||>Hey! Thanks all for reporting this. I would suggest using the following:
```python
if attention_mask is not None:
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
raise ValueError(
f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
)
attn_weights = attn_weights + attention_mask
dtype_min = torch.tensor(
torch.finfo(attn_weights.dtype).min, device=attn_weights.device, dtype=attn_weights.dtype
)
attn_weights = torch.max(attn_weights, dtype_min)
```
This was removed from:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L343-L348
because it is not used in the original model. This should help. Otherwise adding
```python
# clamp inf values to enable fp16 training
if hidden_states.dtype == torch.float16:
max_dtype = torch.finfo(hidden_states.dtype).max
clamp_value = torch.where(torch.isinf(hidden_states).any(), max_dtype - 1000, max_dtype)
hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
```
will also help mitigate this
<|||||>Another fix is to train with bfloat16 |
transformers | 25,064 | closed | Feature/forward | null | 07-25-2023 01:06:35 | 07-25-2023 01:06:35 | |
transformers | 25,063 | closed | model.generate does not work when using a AlbertModel | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
Apologies in advance if this is not a bug and instead a fault of my code.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
## Step 1: Loading AI4Bharat/indic-bert model and tokenizer
```
# Load model directly
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert", src_lang="en_XX",tgt_lang="mr_IN")
model = AlbertModel.from_pretrained("ai4bharat/indic-bert", max_length = context_length)
```
## Step 2: Loading the dataset using datasets
```
from datasets import load_dataset
dataset = load_dataset("anujsahani01/English-Marathi", split = 'test[:20]')
```
## Step 3: Tokenizing the input data
```
# generate output
model_inputs = tokenizer(data['english'], text_target = data['marathi'], max_length= max_length, padding = 'max_length',
truncation=True, return_tensors = 'pt').to(device)
labels = model_inputs['labels']
source = model_inputs['input_ids']
```
## Step 4: Using .generate to make predictions
```
preds = model.generate(**model_inputs, max_new_tokens = max_length)
```
### Expected behavior
Hey folks,
I was comparing different multilingual language models on basis of different evaluation metrics. Was not able to generate outputs using indic-bert
## Model
```ai4bharat/indic-bert```
## Error Message
```
TypeError: The current model class (AlbertModel) is not compatible with `.generate()`, as it doesn't have a language model head.
```
Any of your inputs will be highly appreciated.
Thank You ! | 07-25-2023 00:20:12 | 07-25-2023 00:20:12 | I don't know what more you want me to say. As the error clearly states, this is not a model that supports `.generate`. ALBERT was trained on a masked language modeling objective so none of its variants are compatible with `.generate()`. |
transformers | 25,062 | open | GPTQ integration | # What does this PR do?
This PR adds the possibility to perform GPTQ quantization on transformers models using the [optimum](https://github.com/huggingface/optimum) library. The backend relies on the [auto_gptq](https://github.com/PanQiWei/AutoGPTQ) library, where we use the `GPTQ` and `QuantLinear` classes.
Here's the related [PR](https://github.com/huggingface/optimum/pull/1216) on optimum side. This PR can only be merged after the optimum one.
### Quantize model
Unlike `bitsandbytes`, it is not feasible to quantize the weights right after loading them in order to reduce memory consumption. This is why we need to first load the entire model and then quantize it (all done in `from_pretrained`).
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = GPTQConfig(bits=4, dataset = "c4", tokenizer=tokenizer, group_size=128, desc_act=False)
# works also with device_map (cpu offload works but not disk offload)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, quantization_config=config)
```
### Save hf model
We save the `quantization_config` in `model.config`
```py
# move to the device you want to save model (needed if you used device_map before)
model.to(device)
quantized_model_folder = 'opt-125m-quantized_hf'
model.save_pretrained(quantized_model_folder)
```
### Load quantized weights
If the `model.config` has a `quantization_config` key, we will replace the layers of the model and load the quantized weights.
```py
quantized_model_from_saved = AutoModelForCausalLM.from_pretrained(quantized_model_folder,
device_map = "auto")
```
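As a quick smoke test after loading, something like the following can be run (a sketch reusing the `tokenizer` and `quantized_model_from_saved` names from the snippets above; the prompt is arbitrary):
```py
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(quantized_model_from_saved.device)
generated = quantized_model_from_saved.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```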
TODO:
- [ ] merge optimum PR first
- [x] doc | 07-24-2023 22:48:42 | 07-24-2023 22:48:42 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25062). All of your documentation changes will be reflected on that endpoint.<|||||>> 2- Would it make sense to slowly "deprecate" the args load_in_4bit and load_in_8bit as it might lead to confusion for users (because technically you can do 4bit / 8bit with GPTQ as well) - not sure about the right approach here
Yes, it would make sense to deprecate these args as we will add more quantization methods in the future. It will confuse the user to have load_4_bit and load_8_bit only for bitsandbytes quantization.
> 3- Note that I am not sure GPTQ works out of the box for vision / multimodal models, and would be better / safer if we just support text models for now (it would break at the tokenizer init call). How can we have a safety checker that effectively checks if the model is a text model?
We should definitely limit it to text model for now. For now, it works for decoder or encoder model but not for more complex architecture with multiple transformers block like decoder-encoder model. I will need to check if it can be easily extended. <|||||>> Just to confirm, does all the bnb slow tests pass with this PR? 🙏
Yes ! Both gptq and bnb slow tests passed
> Left few clarification and nits on the documentation. I think it is worth it to add a check to raise an error or warning if the model is not a pure text model. I think the easiest would be to check if self.main_input_name contains input_ids.
Added
> What do you think of adding an extra dependency list, [similarly as agents](https://github.com/huggingface/transformers/blob/main/setup.py#L410) that adds all the required packages to play with various supported quantization schemes (bitsandbytes, optimum, auto-gptq, accelerate)
Yes, we can do that after the release of optimum ! |
transformers | 25,061 | open | Generation refactor: new interface, new classes. | ## Summary
This PR introduces an updated `generate()` interface for the Huggingface Transformers library. The update focuses on enhancing extensibility and enabling the explicit declaration of the generation strategy, while ensuring backward compatibility.
## Detailed Description
### Introducing the "generation_strategy" Argument
In the existing `generate()` function, a user must pass arguments such as `num_beams=5, do_sample=True` to choose a specific generation strategy (in this case, beam sampling). This approach can be somewhat confusing, especially when aiming for a specific decoding strategy. For instance, one might assume `do_sample=False` by default. However, when a user changes the model, and the new model has `do_sample=True` as the default, the intended generation method also inadvertently changes. See a [previous PR](https://github.com/huggingface/transformers/pull/22473) for a scenario where this happened.
This PR proposes a new parameter, `generation_strategy`, within the `generate()` function. This addition allows the user to pass a string (`greedy`, `beam_search`, `beam_sample`, ...) to explicitly choose the intended generation method. Alternatively, instead of a string, the user can pass a custom GenerationStrategy object as the parameter (more on this later). If the provided parameters are not compatible with the requested strategy, an Exception is raised, alerting the user to the discrepancy. This update does not modify the default behaviour of the `generate()` function, nor does it break compatibility. To this end, I locally executed the generation tests, and they all pass with the same warnings (edit: I see they are not passing in CircleCI, I will investigate later).
### Enhancing Extensibility of Generation Strategies
While the Huggingface Transformers library is well-regarded for its extensibility, particularly regarding model innovations and implementations, the generation interface has lacked this quality to some degree.
Implementing a new generation strategy, like tweaking the Beam Search code, can be challenging. The associated code resides deep inside the `GenerationMixin`, a class that users cannot subclass. Additionally, there's no option to pass a custom BeamScorer to `generate()`.
A potential workaround is subclassing the model and overriding the `generate()` method. However, this requires rewriting a substantial amount of code from `generate()`, with a complex network of dependencies within `GenerationMixin` that isn't clear to interact with. Thus, enhancing the extensibility and making the generation part more "hack-friendly" was an important motivation for this PR.
### Proposed Changes
With these considerations in mind, the PR proposes a new abstract class, `GenerationStrategy` (or alternatively `Decoder`, naming can be discussed), which defines a common interface for implementing any `GenerationStrategy` variant. Concrete strategies are referred to as "Decoders", such as the "BeamSearchDecoder".
All existing strategies have been refactored into their respective `GenerationStrategy` class. This approach ensures `generate()` is agnostic to the decoding strategy and that each strategy checks its parameters and the generation config independently.
Subsequently, the `generate()` function has been refactored to use the new classes. Facade methods like `beam_search()`, which merely instantiate and call the new Decoders, have been retained in `generation/utils` for backwards compatibility.
With this change, now it is possible to elegantly create a custom GenerationStrategy or subclass an existing strategy, and just pass the customized object to `generate()`. This will allow the emerging research in generation strategies to use HF (right now, you can see in the literature that fairseq is more common).
### New use case examples
```python
# selecting strategy with a string
outputs = model.generate(input_ids=input_ids, generation_strategy='greedy')
# using a custom strategy
decoder = CustomBeamSearch()
outputs = model.generate(input_ids=input_ids, generation_strategy=decoder)
```
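To make the extensibility point concrete, below is a rough sketch of what subclassing could look like under this proposal. The `BeamSearchDecoder` class, the `finalize_step` hook, and the import path are hypothetical: they follow the classes described above but are not an existing `transformers` API.
```python
# Hypothetical sketch of the proposed interface; none of these names exist in released transformers.
from transformers.generation import BeamSearchDecoder

class VerboseBeamSearch(BeamSearchDecoder):
    """Toy subclass that logs the best beam score at every decoding step."""

    def finalize_step(self, step, beam_scores, **kwargs):
        # Illustrative hook name: report progress, then defer to the stock behaviour.
        print(f"step {step}: best beam score = {beam_scores.max().item():.3f}")
        return super().finalize_step(step, beam_scores, **kwargs)

decoder = VerboseBeamSearch(num_beams=4)
outputs = model.generate(input_ids=input_ids, generation_strategy=decoder)
```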
### Remaining Work
The proposed code in this PR currently lacks docstrings for the new classes, as it would be more appropriate to add these after finalizing the naming conventions and other details through discussion in this thread.
Additionally, the PR introduces changes to the library LazyImport init files, and feedback on best practices for working with Lazy imports would be greatly appreciated (as I don't have any experience). New tests to validate these changes will be added once the code receives some feedback.
Looking forward to your valuable feedback to improve this PR further.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I see @gante @sgugger @patrickvonplaten @thomwolf very active in the git history for generation commits.
| 07-24-2023 22:46:14 | 07-24-2023 22:46:14 | Hi @manueldeprada 👋
Thank you for the proposal, but this PR adds further parameterization to `generate()`, and we want to go in the opposite direction whenever possible -- `.generate()` already has too many possible arguments.
Related to this PR: we are internally discussing how to refactor `.generate()` -- isolating the generation strategies like you did here is indeed one of the candidates. However, until we settle on a complete plan, we will not be considering refactors like this one :)
Bear with us, we also want to make `.generate()` more extensible! <|||||>Thanks for your reply, @gante!
It's great to hear that a comprehensive rethink of `generate()` is in progress. In this PR, my aim was to be minimally invasive while preserving backward compatibility. However, I am eager to see a thorough refactor, even if it entails introducing breaking changes. As someone who has worked extensively with generation strategies, I have a few observations that might be useful for the internal discussions:
- The current system of arguments is highly complex and can lead to confusion when selecting a generation strategy. Moreover, the situation is complicated further by models setting their own `generation_config`, as defaults can inadvertently change between models. My suggestion is to streamline `generate()` to have two primary arguments: `generation_method` and `generation_config`. Default behaviors could be as follows:
- If no method or configuration is passed, the function reverts to the model's default.
- If a method is specified, a valid configuration should also be provided. Users could create a new valid `generation_config`, or if they want to override certain parameters from the model's defaults, they can retrieve the model's default configuration (perhaps using `model.generation_config`), modify the desired parameters, and then pass it to `generate()`.
- Besides strings, the `generation_method` could accept custom objects, similar to what I proposed in this PR.
- I think `beam_search()` and similar methods from `generation.utils` should be deprecated and replaced with extensible classes, as demonstrated in this PR.
- The feature needs consistent and clear naming. Terms such as generators, decoders, generation adapters, or generation strategies could be used. It's crucial to establish a convention and stick to it.
- Isolating the strategies could foster an ecosystem where novel strategies and implementations could be shared via Spaces, similar to the current practice with metrics.
- The isolation of strategies could also make the refactoring of individual strategies much simpler. My work with `beam_search()` indicates that it could also benefit from a rethink.
- For instance, completely separating `beam_search` from `group_beam_search` could simplify and streamline the code, making it easier to extend. This could also open up possibilities for full vectorization of `beam_search` along the batch dimension (removing the `for` loops inside `BeamSearchScorer`).
- There is also important details that are hidden in obscure corners in the implementation. For example, all the `early_stopping` related info, and how it relates to `length_penalty`, is not clear in the documentation.
- I'm curious about the progress of the discussions on the future of `generate`. Do you have an estimated timeline for these changes? Given the growing body of literature on decoding strategies ([example 1](https://github.com/wouterkool/stochastic-beam-search), [example 2](https://github.com/rycolab/cpsbs), [example 3](https://github.com/Roxot/mbr-nmt))—mostly developed on Meta's Fairseq—I believe easy generation extensibility in Huggingface would attract these advancements to the platform.
To provide some context, my experience is primarily derived from porting example 1 to Huggingface Transformers. This necessitated forking the entire transformers library, which is not an ideal approach. Given that the reimagining of `generate()` will likely take some time, I plan to publish my changes as a separate companion package in the interim. Please keep us updated on the discussion, and let me know if I can assist further with the refactor! :hugs: |
transformers | 25,060 | open | LlaVa model in transformers | ### Feature request
Support to Llava model in transformers? https://github.com/haotian-liu/LLaVA Similar to InstructBlip w/ connector module between image embeddings and LLM's
### Motivation
Llava performs really well on MLLM-related tasks, and having it in Hugging Face would make it easier for folks to try out InstructBlip vs. Llava, since it mostly uses the same image encoder embeddings (EVA, ViT, or CLIP) and foundation models (T5, Vicuna, or Llama-2). Code maintenance and ease of integration would also be straightforward.
### Your contribution
I can definitely help with a PR or tag along with folks in hugging face to make it happen | 07-24-2023 21:41:07 | 07-24-2023 21:41:07 | Hi @RajeshRadha Thank you for the feature request.
As @ArthurZucker mentioned to me, the repo has reached 4K stars and 300 forks, so it seems this is quite popular.
Will leave our core maintainers @amyeroberts and @sgugger to see if this qualifies the model to be in `transformers` or we still prefer to have it first on the Hub. <|||||>Given the popularity and performance of the model, I think it'd be a good addition into `transformers` :)
@RajeshRadha if you'd like to add the model, feel free to open a PR and tag @ArthurZucker and myself for review. <|||||>Just for reference, before the model got so popular, #22848 and #23849 were opened!
|
transformers | 25,059 | closed | Fix `token` in auto classes | # What does this PR do?
Fix `token` in auto classes.
Fix #25008
We (me) should start to:
- add (some) explicit arguments to `from_pretrained` for auto classes
- (probably) clean-up the usage of `use_auth_token` even internally
But let's do this in separate PR(s), I promise. | 07-24-2023 18:31:31 | 07-24-2023 18:31:31 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25059). All of your documentation changes will be reflected on that endpoint.<|||||>Good catch. I guess I have to deal with the internal usages of those `kwargs`<|||||>Yes, ideally we want to only use `token` everywhere now :-) Happy to help if you need some! |
transformers | 25,058 | closed | Fix last models for common tests that are too big. | # What does this PR do?
This PR fixes the last batch of models that are too big for common tests. After it, only two classes override the common test to skip it:
- timm-backbone
- layoutlmv2
In the first case, it's because there is no timm model small enough to work (I did switch to the smallest resnet though), and in the second case detectron2 does not let us configure a smaller backbone (I did try with the constants it exposes, but they don't seem to have any effect). | 07-24-2023 18:27:13 | 07-24-2023 18:27:13 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 25,057 | closed | fix deepspeed load best model at end when the model gets sharded | # What does this PR do?
1. Fixes https://github.com/huggingface/transformers/issues/25027 | 07-24-2023 17:41:23 | 07-24-2023 17:41:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,056 | closed | [`IDEFICS`] Fix idefics ci | # What does this PR do?
Attempt to fix IDEFICS failing CI as discussed offline @stas00 - more info coming soon
original PR: https://github.com/huggingface/transformers/pull/24796
| 07-24-2023 16:48:20 | 07-24-2023 16:48:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25056). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,055 | closed | [`RWKV`] Add note in doc on `RwkvStoppingCriteria` | # What does this PR do?
Given the usage of RWKV, we decided to go with a tip in the doc rather than changing `generate` and adding `RwkvStoppingCriteria` to the library. Addresses #23852 | 07-24-2023 16:47:31 | 07-24-2023 16:47:31 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 25,054 | closed | removed_unused_columns = True deletes dataset for custom loss func | Feel free to see the[ second comment](https://github.com/huggingface/transformers/issues/25054#issuecomment-1648263249) directly. The original issue was on confusion for the training dataset seemingly not working -- I narrowed it down to the ```removed_unused_columns = True``` dataset deleting an entire dataset, which contradicts the description in the documentation.
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: fp16
- use_cpu: False
- num_processes: 7
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: GPU
- Using distributed or parallel set-up in script?: parallel
### Who can help?
@sgugger
I sincerely believe this is a bug in Transformers, though my apologies if it turns out to be my own code.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset, Dataset, DatasetDict
from functools import partial
from transformers import TrainingArguments, Trainer
import numpy as np
import evaluate
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, **args):
outputs = self.language_model(**args)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
return reward
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
for param in reward_model.parameters(): # all the requires grads are false
param.requires_grad = False
for param in reward_model.fc.parameters(): # except the last layer
param.requires_grad = True
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # datasetdict with train/test split that has columns 'input_ids', 'attention_mask', 'labels'
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
class LabelPopTrainer(Trainer):
def compute_loss(self,model,inputs, return_outputs=False):
labels = inputs.pop("labels")
outputs = model(**inputs).flatten()
loss = torch.nn.functional.cross_entropy(outputs, labels.half())
return (loss, outputs) if return_outputs else loss
args = TrainingArguments("test-trainer",
num_train_epochs = 3,
per_device_train_batch_size = 4,
logging_strategy = "steps",
logging_steps = 3,
fp16=True
)
trainer = LabelPopTrainer(
reward_model,
args,
train_dataset=tokenized_dataset["train"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("trained_model")
```
### Expected behavior
I get the error that “Invalid key: 222838 is out of bounds for size 0” which is due to the train_dataloader __getitem__ function not working. However, tokenized_dataset[“train”] (which is the one passed to the trainer) is:
```
Dataset({
features: ['labels', 'input_ids', 'attention_mask'],
num_rows: 224104
})
```
I would expect the dataset, therefore, to work when run with ```accelerate launch ___.py``` Below is the more complete error:
```
Traceback (most recent call last):
File "reward-model-v3-mve-2.py", line 67, in <module>
trainer.train()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/accelerate/data_loader.py", line 384, in __iter__
current_batch = next(dataloader_iter)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2796, in __getitems__
batch = self.__getitem__(keys)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2792, in __getitem__
return self._getitem(key)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2776, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 202710 is out of bounds for size 0
``` | 07-24-2023 16:37:48 | 07-24-2023 16:37:48 | I did some further investigation and found that this dataset seems to work _if_ and only if you set remove_unused_columns = False in the training arguments.
**It seems very odd that "remove_unused_columns = True" would result in the entire dataset being wiped, given that the features in the dataset are ['labels', 'input_ids', 'attention_mask'].** This on its own seems like a bug rather than a feature and seems like it could be a point of confusion (especially without an appropriate error message for the user). It seems to also directly contradict the [documentation description](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.remove_unused_columns) of the function.
If you'd like to modify remove_unused_columns to reflect this, it may be good to keep the issue open to address this -- otherwise, feel free to close it.<|||||>`remove_unused_columns=True` will remove the keys in the dataset that are not accepted by the models, which is decided by looking at the model forward signature. Your `RewardModel` does not have any argument names (it takes `*args`), that's why there is this bug. If you unpack that `*args` to use names like `labels`, `input_ids`, `attention_mask`, the issue will disappear.<|||||>Ah, I see. Thank you very much for clarifying this, this makes a lot of sense. I appreciate it!<|||||>No problem! We should probably make a note of it in the documentation. |
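For anyone landing on this thread later, here is a minimal sketch of the signature fix suggested above; the argument names simply mirror this reproducer's dataset columns and are only an example:
```python
from torch import nn

class RewardModel(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.language_model = model
        self.fc = nn.Linear(self.language_model.config.hidden_size, 1)

    # Named arguments let the Trainer match the forward signature against the
    # dataset columns, so remove_unused_columns no longer drops everything.
    def forward(self, input_ids=None, attention_mask=None, labels=None):
        # `labels` is popped in compute_loss before the model is called, so it is unused here;
        # listing the column names explicitly is what keeps them from being removed.
        outputs = self.language_model(input_ids=input_ids, attention_mask=attention_mask)
        reward = self.fc(outputs.last_hidden_state).squeeze(-1)  # (batch_size, seq_len)
        return reward[:, -1]  # reward at the last sequence position
```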
transformers | 25,053 | closed | [ `PreTrainedTokenizerFast`] Keep properties from fast tokenizer | # What does this PR do?
Starts a potential fix for #24179; it ended up being related to #24441: calling it modifies the values of the underlying tokenizer but never changes anything on the surface, so we will probably add some kind of warning in the documentation.
TL;DR: adds the possibility of initializing a `PreTrainedTokenizerFast` from a `tokenizers.Tokenizer`, keeping the `padding` and `truncation` information.
| 07-24-2023 16:35:40 | 07-24-2023 16:35:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, currently everything is ignored |
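A short sketch of the behaviour this PR targets (the checkpoint name and padding settings are only examples): configure padding and truncation on a raw `tokenizers.Tokenizer`, then wrap it; with the change described here those settings are meant to be carried over instead of being ignored.
```python
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

raw_tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
raw_tokenizer.enable_padding(pad_token="[PAD]", length=16)
raw_tokenizer.enable_truncation(max_length=16)

# Wrap the already-configured tokenizer; the padding/truncation settings above
# should now be picked up instead of being silently dropped.
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=raw_tokenizer, pad_token="[PAD]")
```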
transformers | 25,052 | closed | MaskFormer - enable return_dict in order to compile | # What does this PR do?
It was previously not possible to use `torch.compile` on MaskFormer, as some modules always returned a dataclass instead of a tuple.
This PR adds a `return_dict` argument to these modules, which defaults to `True` to maintain the previous behaviour.
Tested with:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor,AutoModelForInstanceSegmentation
checkpoint = "facebook/maskformer-swin-base-ade"
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForInstanceSegmentation.from_pretrained(checkpoint).to("cuda")
processed_input = processor([image, image], return_tensors='pt').to(device="cuda")
compiled_model = torch.compile(model, fullgraph=True)
with torch.no_grad():
compiled_model(**processed_input, return_dict=False)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 07-24-2023 15:39:59 | 07-24-2023 15:39:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @merveenoyan who highlighted the compiling issue 🙏 <|||||>@sgugger Yep! I'll update that<|||||>@sgugger Most of the custom tests aren't easy to remove, unfortunately. As MaskFormer combines hidden states from different modules -- encoder, transformer decoder and pixel module decoder -- which are defined in nested arguments in the config e.g. `model.config.decoder_config.num_hidden_layers` it breaks a lot of assumptions.
I've removed `test_training` and made `_prepare_for_class` and `test_output_attentions` match the common equivalent. I've also updated some of the config values to make sure they're small by default. <|||||>Thanks a lot!<|||||>Also, I wasn't able to enable the FX tests. There's an issue when the model is symbolically traced, where the output shapes are slightly different (off by one). Checking the outputs from the compiled model using `torch.compile` this doesn't occur, so I'm leaving this for a future PR :) |
transformers | 25,051 | open | Add ViTMatte | # What does this PR do?
This PR adds the ViTMatte model, an elegant approach to image matting, entirely relying on the Vision Transformer backbone doing the heavy work, with a lightweight head on top.
Here's a Colab notebook showcasing inference: https://colab.research.google.com/drive/1pWTn3Iur-NR2xUIyDE31dBgng_hXjSsn?usp=sharing.
Fixes #25040. | 07-24-2023 14:51:36 | 07-24-2023 14:51:36 | |
transformers | 25,050 | open | Trainer does not properly read "label" column. | ### System Info
When I have a HuggingFace dataset with 'input_ids', 'attention_mask', and 'labels', the current HuggingFace Trainer does not properly read 'labels' and separate the labels from the inputs in the forward pass.
I also tried a 'label' (as opposed to 'labels') column and it does not work with this either -- it fails to separate the label from the rest of the forward pass.
This contrasts with the Trainer demo, where the dataset has the tokenized inputs as well as a 'label' feature column.
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The HuggingFace Accelerate config I'm using has fp16 enabled. Here is the code to replicate:
```
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel, LlamaTokenizer, DataCollatorWithPadding
import pandas as pd
from datasets import load_dataset, Dataset, DatasetDict
from functools import partial
from transformers import TrainingArguments, Trainer
import numpy as np
import evaluate
class RewardModel(nn.Module):
def __init__(self, model):
super().__init__()
self.language_model = model
self.fc = nn.Linear(self.language_model.config.hidden_size, 1)
def forward(self, **args):
outputs = self.language_model(**args)
last_hidden_state = outputs.last_hidden_state
reward = self.fc(last_hidden_state) # (batch_size, seq_len, 1)
reward = reward.squeeze(-1) # (batch_size, seq_len)
reward = reward[:,-1] # takes reward at last seq pos (batch_size)
return reward
pretrained_model_name = "decapoda-research/llama-7b-hf"
model = AutoModel.from_pretrained(pretrained_model_name)
reward_model = RewardModel(model)
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token='[PAD]'
tokenized_dataset = load_dataset('notrichardren/hh-rlhf-tf') # has columns 'input_ids', 'attention_mask', 'labels'
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
for param in reward_model.parameters(): # all the requires grads are false
param.requires_grad = False
for param in reward_model.fc.parameters(): # except the last layer
param.requires_grad = True
training_args = TrainingArguments("test-trainer",
num_train_epochs = 250,
per_device_train_batch_size = 1,
remove_unused_columns=False,
fp16=True
)
trainer = Trainer( # probably using cross entropy loss
reward_model,
training_args,
train_dataset=tokenized_dataset["train"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model("trained_model")
```
I receive a TypeError on the forward pass, where **forward() got an unexpected keyword argument 'labels'**
```
File "minimum-viable-example.py", line 19, in forward
return forward_call(*args, **kwargs)
File "minimum-viable-example.py", line 19, in forward
tr_loss_step = self.training_step(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
return inner_training_loop(
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
return forward_call(*args, **kwargs)
File "minimum-viable-example.py", line 19, in forward
outputs = model(**inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
outputs = self.language_model(**args)
ret_val = func(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1735, in forward
tr_loss_step = self.training_step(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
outputs = self.language_model(**args)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
loss = self.compute_loss(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = self.language_model(**args)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
loss = self.compute_loss(model, inputs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
return forward_call(*args, **kwargs)
return forward_call(*args, **kwargs)
File "/admin/home-notrichardren/.local/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
TypeError: forward() got an unexpected keyword argument 'labels'
return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'labels'loss = self.module(*inputs, **kwargs)
```
### Expected behavior
I would expect the 'labels' column (I tried both 'label' and 'labels') to not be put in the forward pass, but instead to be used in the loss function. | 07-24-2023 14:42:51 | 07-24-2023 14:42:51 | You are using a model which does not accept `labels` in its forward pass: `AutoModel` gives you the base model, it is only suitable to extract the last hidden state generated by the model, not for training.<|||||>By default, am I supposed to pass a model to Trainer that is able to accept "labels" in its forward pass?
I wasn't too sure how Trainer worked, but based on the documentation, my impression was that "labels" was supposed to be a separate column (that is then removed) before all of the args are passed into the forward pass. Afterward, the loss calculation would happen where the "labels" are compared to model predictions.
@sgugger |
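One way around this (a sketch, not from the thread) is to keep the bare `AutoModel` backbone and move the loss computation into a `Trainer` subclass, so the model itself never receives a `labels` kwarg. The MSE target below is an illustrative choice only.
```python
import torch
from transformers import Trainer


class RewardTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Pop the labels before the forward pass, since the reward model
        # does not accept a `labels` argument.
        labels = inputs.pop("labels").float()
        rewards = model(**inputs)  # (batch_size,)
        loss = torch.nn.functional.mse_loss(rewards, labels)
        return (loss, rewards) if return_outputs else loss
```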
transformers | 25,049 | closed | Better error message when signal is not supported on OS | # What does this PR do?
This PR adds a `try`/`except` block around the logic asking the user whether they want to execute the code on a distant repo when they don't set `trust_remote_code=True` to give a helpful error message.
Fixes #25029 | 07-24-2023 14:17:52 | 07-24-2023 14:17:52 | _The documentation is not available anymore as the PR was closed or merged._ |
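A rough sketch of the kind of guard this PR describes (function names and messages are illustrative, not the actual implementation): `signal.SIGALRM` does not exist on Windows, so the resulting `AttributeError` is caught and turned into an actionable message.
```python
import signal


def _raise_timeout(signum, frame):
    raise TimeoutError("No answer to the trust_remote_code prompt.")


def ask_about_remote_code(prompt, timeout=15):
    try:
        # SIGALRM is only available on POSIX systems; this line raises
        # AttributeError on Windows.
        signal.signal(signal.SIGALRM, _raise_timeout)
        signal.alarm(timeout)
        answer = input(prompt)
        signal.alarm(0)
        return answer
    except AttributeError:
        raise ValueError(
            "This OS does not support SIGALRM, so the interactive prompt cannot be used. "
            "Pass `trust_remote_code=True` to allow loading the remote code directly."
        ) from None
```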
transformers | 25,048 | closed | [DOCS] add docstrings to TypicalLogitsWarper | # What does this PR do?
Added some doc string to TypicalLogitsWarper with some examples as well.
@gante let me know if there's anything else that should be added to or removed from the docs.
Fixes #24783
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 07-24-2023 14:17:35 | 07-24-2023 14:17:35 | @akshayamadhuri you probably need to run `make fixup` on your terminal and then commit the changes to make our CI happy :D |
transformers | 25,047 | closed | [`8bit`] Fix 8bit corner case with Blip2 8bit | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/25011
Fixes https://github.com/huggingface/transformers/issues/25026
https://github.com/huggingface/transformers/pull/24095/ introduced a new check for retrieving the list of modules that should not be quantized (e.g. the LM head). While it works perfectly fine for text models, when using `model.named_children()` the last element of that list is the entire `language_model` module for `Blip2` models. This led to the entire language model not being converted to 8bit by the `replace_bnb_linear` method, so an 8bit bnb weight was forced onto a plain `nn.Linear` module, hence the error.
The fix is to use `model.named_parameters()` to correctly get the last parameter (usually the lm_head) rather than the last child module.
cc @sgugger
Will mark as ready for review once the slow tests are green
| 07-24-2023 14:09:48 | 07-24-2023 14:09:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Tests are green, merging! |
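A schematic illustration of the difference described above, using a Blip2 checkpoint as an example (the particular checkpoint is incidental):
```python
from transformers import Blip2ForConditionalGeneration

model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Last *child module*: for Blip2 this is the whole `language_model` submodule,
# so every linear layer inside it would be skipped by the 8-bit conversion.
last_child_name, _ = list(model.named_children())[-1]

# Last *parameter*: this points at the final weight (typically the lm_head),
# which is the only piece that should stay in full precision.
last_param_name, _ = list(model.named_parameters())[-1]
```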
transformers | 25,046 | closed | [DOCS] add example NoBadWordsLogitsProcessor | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
See #24783 .
Add example to NoBadWordsLogitsProcessor.
Some analysis [here](https://github.com/SoyGema/contrib_schema) .
Kudos to @nablabits
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-24-2023 14:09:22 | 07-24-2023 14:09:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(@SoyGema did a minor edit on the PR header: the word "fixes" before an issue number on a PR automatically closes the issue when the PR is merged) |
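A small usage sketch of the processor being documented, through the `bad_words_ids` argument of `generate`; this is not the exact example added in the PR, and the banned words are arbitrary.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("In a hole in the ground there lived a", return_tensors="pt")

# Tokenize the banned words with a prefix space so the ids match the way the
# words appear in the middle of a sentence.
banned_tokenizer = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
bad_words_ids = banned_tokenizer(["hobbit", "dragon"], add_special_tokens=False).input_ids

outputs = model.generate(**inputs, bad_words_ids=bad_words_ids, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```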
transformers | 25,045 | closed | Fix some bugs for two stage training of deformable detr | Hello @amyeroberts:
There are some issues encountered when training with the two-stage method using Deformable DETR, for which I have made modifications. | 07-24-2023 13:59:25 | 07-24-2023 13:59:25 | @jypjypjypjyp Thanks for opening this PR! Could you: share a minimal code snippet in the PR which triggers the errors on main and which this PR resolves? <|||||>> @jypjypjypjyp Thanks for opening this PR! Could you: share a minimal code snippet in the PR which triggers the errors on main and which this PR resolves?
Hello @amyeroberts:
Using the original code might not result in errors, but there are apparent mistakes in several places in the source code. Let me explain my modifications:
1. Line 2002: In the original code, 'enc_outputs' is stored in outputs, but the computation of loss uses **outputs_loss**, which means enc_outputs will never be utilized.
2. Line 2267: Due to the first point, 'enc_outputs' will never be used, so this problem will never arise. However, if the first issue is remedied, this problem is revealed. Throughout the entire code, 'class_labels' is used as a key rather than 'labels'.
3. Line 2270: This piece of code is derived from the original Deformable Detr code, and there is no log functionality in our implementation, rendering this code meaningless.
4. Line 1972: This segment of code involves the calculation of the auxiliary loss; if the original code is used, an error will be thrown during the calculation of the auxiliary loss because the shape of the tensor passed in is incorrect.
In my usage, the modified code can be correctly trained. It's possible that my understanding is incorrect, and I hope you can verify these issues.<|||||>@jypjypjypjyp Thank you for explaining so clearly and with such detail, it really helps :)
All of these changes make sense. My only request is that we add a test to make sure the model can train when `config.two_stage` is `True` e.g. somethlng like `test_training_two_stage`, similar to [test_training](https://github.com/huggingface/transformers/blob/03f98f96836477f6f5b86957d3ce98778cad5d94/tests/test_modeling_common.py#L508). <|||||>@amyeroberts Sorry, I'm not very familiar with the test portion of the code. Could you help me complete it, or explain it in more detail? I'm unsure about what specifically I should do.<|||||>@jypjypjypjyp Of course :)
All models are tested to make sure that they can be trained i.e. do a forwards / backwards pass. Most models' tests are implemented in [test_modeling_common.py](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/test_modeling_common.py#L4).
By default, Deformable Detr has [`two_stage` set to False](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/src/transformers/models/deformable_detr/configuration_deformable_detr.py#L184C1-L184C1), and only the [default model config value](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/models/deformable_detr/test_modeling_deformable_detr.py#L136) was used during testing. This is why the issues with two stage training were never uncovered.
Each model has its own test module for its model logic e.g. [test_modeling_deformable_detr.py](https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/tests/models/deformable_detr/test_modeling_deformable_detr.py). Here we can add model specific tests which aren't covered by the tests in test_modeling_common.py.
I'm suggesting that we add another test to specifically test two-stage training in `DeformableDetrModelTest` e.g. something like:
```python
class DeformableDetrModelTest:
...
def test_two_stage_training(self):
model_class = DeformableDetrForObjectDetection
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.return_dict = True
config.two_stage = True
model = model_class(config)
model.to(torch_device)
model.train()
inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
loss = model(**inputs).loss
loss.backward()
```
<|||||>@amyeroberts Hello, Thank you for your tutorial! I have added a test_two_stage_training and discovered another issue - the feature dimension of get_proposal_pos_embed is fixed. I made modifications to address this (which also involved modifying the code in deta when using 'make fix-copies'). Please review again.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,044 | closed | compute_loss in trainer failing to label shift for PEFT model when label smoothing enabled. | # What does this PR do?
When training LlamaForCausalLM with PEFT enabled, I noticed that the compute_loss function of the trainer with label_smoothing enabled was not shifting the labels. Further investigation found that `unwrap_model(model)._get_name()` returned "PeftModelForCausalLM", and the `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` dict doesn't have that, so the label shift was not happening.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I believe this PR would be a fit for @sgugger review.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-24-2023 13:49:10 | 07-24-2023 13:49:10 | No we cannot add this like that, as this is not a model that lives in Transformers.
cc @younesbelkada <|||||>@njbrake thanks for the PR, can you elaborate more on the label shifting issue?
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L832 should be automatically shifting the labels if you pass a dataset with labels <|||||>@sgugger I think your comment makes sense. This issue is showing up at https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2680
In that case, it seems to make sense to me that in this context, the trainer should see the model that is sitting "behind" the PEFT model, so that it would see that the PEFT model was the LLama architecture. i'm not sure on whether that means a change is required in the PEFT library, the trainer code, or in my code (or maybe in all three 😄 )<|||||>@younesbelkada Since i'm using the label smoother, the trainer API pops out the labels via https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2669 before calling the forward function.<|||||>I don't think we should unwrap the model from its peft container, as it would trigger bug when saving (Younes will confirm!) so the best fix is probably to add an or clause in the specific test in the Trainer.<|||||>Thanks for clarifying @njbrake that makes sense, I second what @sgugger said, a better fix would be to add a proper check to see if the model is and instance of peft model on the appropriate line, otherwise users might encounter unexpected behaviours (e.g. when saving the model)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
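One possible shape of the check being discussed, assuming `model` is the (possibly PEFT-wrapped) model seen inside `Trainer.compute_loss`; this is a sketch only and the merged fix may look different.
```python
from transformers.modeling_utils import unwrap_model
from transformers.models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES

unwrapped = unwrap_model(model)
if unwrapped.__class__.__name__ == "PeftModelForCausalLM":
    # Keep the PEFT wrapper (needed for saving), but still shift the labels
    # as for any causal LM.
    shift_labels = True
else:
    shift_labels = unwrapped._get_name() in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.values()
```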
transformers | 25,043 | open | Can't Reproduce GLUE scores using official BERT implementation | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.0-1036-aws-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As stated at https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification, the scores don't match the official scores from the paper and the GLUE benchmark. My scores match the Hugging Face benchmark but are lower than the official implementation on some tasks like CoLA. How did this happen and how can we avoid it? Looking forward to your help!
<img width="1018" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/5f368343-0619-4084-9039-39683f772d3f">
### Expected behavior
<img width="550" alt="image" src="https://github.com/huggingface/transformers/assets/60613238/b6cc374a-5e2d-409d-ac9d-3b73a7ced4fd">
| 07-24-2023 13:45:40 | 07-24-2023 13:45:40 | The results you will get are very sensitive to the seed used.<|||||>Hi @sgugger , so it means that as long as I get the same results as on the huggingface README, my model is nearly identical to the paper, right?
Also a follow-up: in our paper we shall also include this score. Do you remember anyone ever reproduced the scores in the paper using huggingface?<|||||>I remember getting the same results on Cola or even better results on Cola with a different seed (but then some of the other results were different). I don't remember which seed however :sweat_smile: <|||||>Ugh that's embarassing 😅 these original authors are too good at these kind of tricks.
Anyway, thanks for the instant help! I'll get back to this thread if I get identical or even better results.
<|||||>@sgugger Sorry for putting this up after closing the issue. I'm writing to ask whether you know of anyone who successfully reproduced pre-training the RoBERTa model from scratch and obtained scores as good as the ones listed at https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification. Specifically, I'm interested in how they used the [`run_mlm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) script.
Looking forward to your reply!<|||||>`run_mlm` is just an example, it does not reproduce RoBERTa pretraining. |
transformers | 25,042 | closed | Generate - add beam indices output in contrained beam search | # What does this PR do?
Fixes #25000
This PR adds the `beam_indices` output to constrained beam search, just like in the other beam methods. The changes are (mostly) copy-paste from beam search, and they pipe `beam_indices` all the way to the output.
Script for reproducibility (didn't work before, works after these changes)
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to('cuda')
prompt = """Tell me some about Canada"""
input_tokenized_info = tokenizer(prompt, return_tensors="pt")
input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info[ 'attention_mask']
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
force_words = ["Canada"]
force_words_ids = tokenizer(force_words, add_special_tokens=False).input_ids
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask,num_beams =4,max_new_tokens=10, return_dict_in_generate=True,output_scores=True, force_words_ids=force_words_ids)
print(outputs.beam_indices)
# Before: `None`
# After: `tensor([[ 0, 1, 1, 1, 0, 0, 2, 3, 3, 1, -1, -1, -1, -1, -1]], device='cuda:0')`
``` | 07-24-2023 13:40:45 | 07-24-2023 13:40:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,041 | open | LlamaTokenizerFast report vocab_size info not correct | ### System Info
Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map=device_map,
    trust_remote_code=True
)
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    base_model,
    padding_side="right"
)
# this returns a LlamaTokenizerFast tokenizer
print(f"token class, tokens vocab: {len(tokenizer.get_vocab())}\n{tokenizer}")
```
This prints **token class, tokens vocab: 32001**, but the repr reports the wrong size:
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000** (mistake), model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False)
### Expected behavior
With Vicuna or WizardLM-13B, LlamaTokenizerFast reports vocab_size=32000, but it should be 32001 because a PAD token was added by Vicuna:
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000,** model_max_length=2048, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False)
| 07-24-2023 13:28:37 | 07-24-2023 13:28:37 | cc @ArthurZucker <|||||>Hey! Sorry, the formatting of the issue is a bit strange I do not understand the problem.
Could you try to reformulate the issue with a minimal reproducing script using a model path? <|||||>OK , base_model=/models/WizardLM-13B-V1.0-Merged
```python
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map=device_map,
    trust_remote_code=True
)
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    base_model,
    padding_side="right"
)
print(f"token class, tokens vocab: {len(tokenizer.get_vocab())}\n{tokenizer}")
print(tokenizer.get_vocab())
print(model)
```
Output:
**token class, tokens vocab: 32001**
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', **vocab_size=32000,**
Original model:
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(**32001**, 5120, padding_idx=0)
<|||||>So, 1. you still have not provided a link to a valid model on the hub. `WizardLM-13B-V1.0-Merged` does not seem to exist as it is probably a local folder on your machine. 2. if it is on the hub, feel free to open an issue on the corresponding repository, as this does not seem to be an issue with transformers.
The llama2 models have the correct dimension. <|||||>https://huggingface.co/WizardLM/WizardLM-13B-V1.0 is the WizardLM-13B V1.0 diff weight, which can be used.
Alpaca can also be used for testing.
The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at "chavinlo/alpaca-native" uses tokenizer.add_special_tokens({'pad_token': '[PAD]'}) and hence the model's vocab size is set to 32001.<|||||>I am really sorry but I don't understand the problem <|||||>With https://huggingface.co/WizardLM/WizardLM-13B-V1.0 or the Alpaca model, the vocab size is 32001,
but LlamaTokenizerFast reports vocab_size=32000 (output of `print(tokenizer)`):
token class ,tokens vocab :32001
LlamaTokenizerFast(name_or_path='/models/WizardLM-13B-V1.0-Merged', vocab_size=32000,<|||||>I just ran:
```python
>>> from transformers import LlamaTokenizerFast
>>> tok = LlamaTokenizerFast.from_pretrained("WizardLM/WizardLM-13B-V1.0")
>>> len(tok)
32001
```
Even if you want to force the vocab_size using :
```python
>>> tok = LlamaTokenizerFast.from_pretrained("WizardLM/WizardLM-13B-V1.0", vocab_size=32000)
```
this will not work. The fast tokenizer is initialized from the json file, and the `vocab_size` is not an argument that is used. If you don't want the padding token just set it to None and remove it from the `added_tokens_encoder` and `added_tokens_decoder`.
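In other words, the number in the repr is the base vocabulary only, while added tokens such as `[PAD]` are counted by `len(tokenizer)`. A quick check against the same checkpoint:
```python
from transformers import LlamaTokenizerFast

tok = LlamaTokenizerFast.from_pretrained("WizardLM/WizardLM-13B-V1.0")

print(tok.vocab_size)        # 32000 -> base vocabulary, the number shown in the repr
print(len(tok))              # 32001 -> base vocabulary + added tokens such as "[PAD]"
print(len(tok.get_vocab()))  # 32001 as well
```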
|
transformers | 25,040 | open | Add ViTMatte model | ### Model description
ViTMatte is a recently released model for alpha matting on images i.e. background removal.
The model accepts an input image and trimap (manually labelled grayscale image outlining the rough border of the foreground object) and predicts the alpha mate for each pixel.
It introduces a series of small adaptations to the ViT architecture - selective global attention + window attention; adding convolutional blocks between transformers blocks - to reduce computational complexity and enhancing the high-frequency information passed through the network.
At the time of publishing, ViTMatte showed SOTA performance on Distinctions-646 and strong performance (> Mattformer) on Composition-1K.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Github: https://github.com/hustvl/ViTMatte
Paper: https://arxiv.org/pdf/2305.15272.pdf
Demo: https://colab.research.google.com/drive/1Dc2qoJueNZQyrTU19sIcrPyRDmvuMTF3?usp=sharing | 07-24-2023 13:17:36 | 07-24-2023 13:17:36 | |
transformers | 25,039 | closed | Add test when downloading from gated repo | Following PR after https://github.com/huggingface/transformers/pull/25034.
Goal is to add a test to check that accessing a gated repo raises a custom message for the user. Cannot be merged yet as the server returns "RepoNotFound" when user is not authenticated, even when it's a public gated repo. Once server implementation is fixed ([see internal PR](https://github.com/huggingface/moon-landing/pull/7106)), this PR should pass.
Until then, no need to review/try to fix it.
**Note:** a workaround would be to have a token for a user in the CI but that would require a secret in GH which would not work in PRs. Let's keep it simple and have a test only for unauthenticated users. | 07-24-2023 13:17:21 | 07-24-2023 13:17:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger The fix is merged and deployed server-side so the test can now be added in `transformers` :) |
transformers | 25,038 | closed | Add dispatch_batches to training arguments | # What does this PR do?
This PR adds the option to set `Accelerator`'s `dispatch_batches` argument through the `TrainingArguments`. This is needed in cases such as https://github.com/huggingface/transformers/issues/24999, when `dispatch_batches=False` is needed on the streaming dataset
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/24999
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 07-24-2023 13:01:07 | 07-24-2023 13:01:07 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25038). All of your documentation changes will be reflected on that endpoint. |
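A quick sketch of the intended usage with a streaming `IterableDataset`, assuming the flag is exposed under this name as proposed in the PR:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    max_steps=1_000,
    dispatch_batches=False,  # each process draws its own batches instead of rank 0 dispatching them
)
```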
transformers | 25,037 | closed | Add offload support to Bark | # What does this PR do?
Bark is a TTS model recently added by #24086.
It is made of 4 main sub-models, which are called sequentially during generation (`BarkModel.generate`).
This PR thus propose a simple, yet effective, handmade offloading of sub-models.
```python
from transformers import BarkModel, BarkProcessor
# no need to load the model onto GPU, it will be done inside the generate function
model = BarkModel.from_pretrained("suno/bark")
processor = BarkProcessor.from_pretrained("suno/bark")
# works if a GPU is available. Throws a warning if not, and uses default behavior.
device = "cuda"
# input must be put onto the right device, i.e onto the GPU.
input = processor("Hey, it's a test").to(device)
# one simple additional argument
output = model.generate(**input, offload_to_cpu = True)
# output is loaded onto GPU as well
```
With this PR, GPU footprint is around 60% lower, while being less than 10% slower, based on a benchmark I've done and that will be shared soon (`batch_size = 1` on a single TITAN RTX).
TODO:
- [x] write tests
## Who can review?
Hey @sanchit-gandhi and @amyeroberts, I'm tagging you because you were the ones reviewing #24086 and because the changes are really small!
I'm wondering whether the code I've written conforms to the transformer paradigm and whether I need to raise additional warnings or errors in extreme cases!
Many thanks!
| 07-24-2023 12:56:42 | 07-24-2023 12:56:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Why not leverage Accelerate here as is done in Diffusers pipeline? (cc @patrickvonplaten )<|||||>Hi @sgugger, thank you for taking part!
Diffusers pipeline is using [`accelerate's cpu_offload`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L146) which itself is using [`accelerate's dispatch model`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L146).
The only issue here is that accelerate's cpu offload is done whenever the forward pass of the model is called. Here, some sub-models are auto-regressive so it would (if I'm not wrong) offload/onload over and over again while forwarding during the `generate` call. This would be sub-optimal and time-consuming, and would remove the benefits of my version of `offload_to_cpu`.
BTW, I'm open to suggestions on how to best make it work with accelerate, if it's possible! @muellerzr (hi !) or @sgugger , do you have some ideas?
<|||||>No Diffusers uses [`cpu_offload_with_hook`](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/big_modeling.py#L194) which gives you a hook to pass along to the next model in the pipeline. This way you can have the auto-regressive model in the middle be called several times and only when we go to the next model is it offloaded back to the CPU, which looks like what you are doing here in this PR.<|||||>Nice, it's indeed what I'm doing @sgugger , many thanks for your help! I'll adapt the PR<|||||>Here an example: https://github.com/huggingface/diffusers/blob/fa356bd4da2593c2b91f76c1f63b6238249ec001/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L225<|||||>@sgugger , thanks for the quick review! I've applied your last nit comments.
<|||||>Hi @sgugger and @sanchit-gandhi , again thanks for the review!
I've applied your last nit comments.
Don't hesitate to reach out to me if I need to refactor or improve something! |
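For context, a sketch of the hook-chaining pattern referred to above, assuming `model` is a loaded `BarkModel`; the sub-module names are illustrative. Each sub-model is moved to the GPU when it runs and is offloaded back to the CPU only once the next one in the chain is called.
```python
import torch
from accelerate import cpu_offload_with_hook

device = torch.device("cuda")

hook = None
# Illustrative sub-model names; the actual BarkModel attributes may differ.
for sub_model in (model.semantic, model.coarse_acoustics, model.fine_acoustics, model.codec_model):
    _, hook = cpu_offload_with_hook(sub_model, device, prev_module_hook=hook)
```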
transformers | 25,036 | open | [BUG] RecursionError: maximum recursion depth exceeded when loading LLaMA-1 but success loading LLaMA-2 | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Running the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) script for [SFT](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/trainer_sft.py)
When loading LLaMA-1, got this error:
```
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1155, in unk_token_id
return self.convert_tokens_to_ids(self.unk_token)
RecursionError: maximum recursion depth exceeded
```
It seems to go into an infinite loop, while loading LLaMA-2 succeeds.
### Expected behavior
Successfully loading LLaMA-1 and LLaMA-2 | 07-24-2023 11:22:01 | 07-24-2023 11:22:01 | I set a small recursion limit to 50 and I now got this traceback for loading LLaMA-1:
```
File "/mnt/home//Open-Assistant/model/model_training/trainer_sft.py", line 481, in <module>
main()
File "/mnt/home//Open-Assistant/model/model_training/trainer_sft.py", line 333, in main
tokenizer = get_tokenizer(training_conf)
File "/mnt/home//Open-Assistant/model/model_training/utils/utils.py", line 214, in get_tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(tokenizer_name, cache_dir=conf.cache_dir)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
return cls._from_pretrained(
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 126, in __init__
self.update_post_processor()
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 136, in update_post_processor
bos_token_id = self.bos_token_id
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1136, in bos_token_id
return self.convert_tokens_to_ids(self.bos_token)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
return self._convert_token_to_id_with_added_voc(tokens)
File "/mnt/data/conda/envs/oasst/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
return self.unk_token_id
...
```
Until it exceeded the limit.<|||||>FYI, I can load LLaMA-1 using `transformers==4.28.1`<|||||>Hey! Thanks for reporting, this is very similar to #22415, and #22414.
Having a minimal reproducer would be better, I cannot reproduce this out-of-the-box, have no idea what `tokenizer_config` they are using or what not.
From the traceback, I created a minimal reproducer:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b", unk_token = "<endoftext>", bos_token="<endoftext>", eos_token="<endoftext>")
```
In this case, the `<endoftext>` token does not exist, and since there are a few issues with adding tokens when initializing, cf #23909 after calling `super().__init__()` the token is still not part of the vocab.
It works with `transformers==4.28.1`, because the tokenizer did not have the `self.update_post_processor()`.
There are no real quick fixes apart from downgrading for now, but I'll probably either remove the call in the init, or fixing token addition will make sure the token is properly added after calling `super().__init__()`. <|||||>@ArthurZucker
To reproduce:
`tokenizer_config.json`:
```json
{"bos_token": "", "eos_token": "", "model_max_length": 1000000000000000019884624838656, "tokenizer_class": "LlamaTokenizer", "unk_token": ""}
```
This is an old version of LLaMA-1, I think it is from [here](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main)(need to edit `LLaMATokenizer` to `LlamaTokenizer`). |
transformers | 25,035 | closed | fix(integrations): store serialized `TrainingArgs` to `wandb.config` without sanitization. | Allows resuming training runs when reusing the wandb config.
# What does this PR do?
Currently the `WandbLogger` uses the `to_sanitized_dict()` method in `TrainingArguments` to serialize the training hyperparameters. This converts nested objects and `NoneType` objects to `str` for safe serialization. However, using the stored config when resuming a training run leads to issues while initializing the `TrainingArguments` from the `wandb.run.config`. This PR fixes this by instead using the `to_dict` method to serialize the `TrainingArguments`. The resulting dictionary can be stored in the `wandb.config` and reused to resume training runs.
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
- trainer: @sgugger , @pacman100
| 07-24-2023 10:31:07 | 07-24-2023 10:31:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,034 | closed | Support GatedRepoError + use raise from | (PR started after comment from @osanseviero [on slack](https://huggingface.slack.com/archives/C03V11RNS7P/p1690185858721759?thread_ts=1689956871.406059&cid=C03V11RNS7P) -private link)
This PR adds 2 things:
- raise a more specific error in case of `GatedRepoError` when downloading a file. `GatedRepoError` is a subclass of `RepoNotFoundError` in which the repo is actually found but the user doesn't have access to it (the inheritance is there for backward compatibility)
- when raising an EnvironmentError in `utils/hub.py` I think it's best to use Python's `raise ... from ...` syntax. This makes debugging much easier for both users and maintainers.
At the moment `GatedRepoError` is triggered only if token is passed but a [PR is moon-landing](https://github.com/huggingface/moon-landing/pull/7106) (private link) is opened to also trigger a gated repo error for unauthenticated users.
**Note:** there might be some tests to adapt and I'm willing to do it once the logic is approved
(EDIT: I just checked and in the lowest version of `huggingface_hub` that is supported (0.14.1), GatedRepoError [already exists](https://github.com/huggingface/huggingface_hub/blob/v0.14.1/src/huggingface_hub/utils/_errors.py#L108) so no import issue to worry about) | 07-24-2023 10:06:26 | 07-24-2023 10:06:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger thanks for the review :)
I made the requested change and updated another error message as well to use `token` instead of `use_auth_token` (the error message for generic `RepoNotFoundError`) |
transformers | 25,033 | closed | [`logging.py`] set default `stderr` path if `None` | # What does this PR do?
Attempt to fix #24047
Thanks to contributors, seems like the issue [is known](https://github.com/pyinstaller/pyinstaller/issues/7334#issuecomment-1357447176).
Though it is not entirely related to `transformers`, adding a safeguard seems like a good practice | 07-24-2023 09:43:13 | 07-24-2023 09:43:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Monkey patching globals like `sys.stdout` and `sys.stderr` is more of an emergency hack than a fix that libraries should be using. The proper way to handle Python windowed mode is to guard any direct usage of `sys.stderr` in `if sys.stderr is not None` blocks.
```python
sys.stderr.write("hello") # This is broken
# Any of these are fine
if sys.stderr:
sys.stderr.write("hello")
print("hello", file=sys.stderr)
```
> Though it is not entirely related to transformers, adding a safeguard seems like a good practice
It is related to any library that uses `sys.stderr` directly. |
transformers | 25,032 | open | Allow resuming of logging to WANDB with the Trainer | ### Feature request
As far as I can tell, it is currently not possible to resume a training run and continue logging to the same run on WANDB when using the Trainer. The reason is that WANDB would [require](https://github.com/wandb/wandb/blob/a32638cb1c6ab775e9ed431d9a9b4b8a30685453/wandb/sdk/wandb_init.py#L1036-L1053) you to set `resume=True` and a run `id` in `wandb.init` (or env `WANDB_RUN_ID`) for this to work.
The Trainer currently does not allow for these options as far as I can see
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L726-L737
###
### Motivation
Make it easy to resume logging to wandb without any code changes, directly through a config or CLI (TrainingArguments).
### Your contribution
I can work on this. I would update TrainingArguments to add two new arguments:
- `wandb_resume`
- `wandb_id` (which backs off to the environment variable `WANDB_RUN_ID`)
which would then be passed to the `wandb` integration Callback as part of `args`
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L684
which can then be used in `init`
https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/src/transformers/integrations.py#L725-L737
| 07-24-2023 09:30:15 | 07-24-2023 09:30:15 | cc @parambharat who has done a lot of work on the Wandb integration<|||||>Hi @BramVanroy , Thanks for bringing this up. If I understood the issue correctly, you want to resume a wandb run while resuming training. There are two solutions I can think of that don't require changing `TrainingArguments`. Can you try these instead ?
1. Set the env vars `WANDB_RESUME="must"` and `WANDB_RUN_ID=<run_id_you_want_to_resume>` before running your training script. wandb should read these env vars upon initialization and should resume your training run. [W&B Env Vars Reference](https://docs.wandb.ai/guides/track/environment-variables)
2. Initialize a `run` with `wandb.init(resume="must", id=<run_id_you_want_to_resume>)` before initializing the `Trainer`. The trainer will [not initialize a new run](https://github.com/huggingface/transformers/blob/8f1f0bf50f402881c0aa53b18f21736a151adf5b/src/transformers/integrations.py#L733C16-L733C38) if a run already exists. Here's a [colab](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/wandb_hf_example.ipynb#scrollTo=RK5HRy1JQ0yX) example of this from the wandb examples repo.<|||||>@parambharat Oh, I was not aware of the resume environment variable! That would make life indeed much easier in combination with the wandb run ID!
A run's ID is the alphanumeric part of the URL, right? So in this example
> https://wandb.ai/username/projectname/runs/phup4zp1/
it would be `phup4zp1`?
If it's that easy then I am a happy man! <|||||>@BramVanroy , Yes, that's the `run_id` 👍 |
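Putting the two suggestions together, a minimal sketch using the example run URL from this thread (either option on its own is enough):
```python
import os

import wandb

# Option 1: environment variables picked up by the WandbCallback.
os.environ["WANDB_RESUME"] = "must"
os.environ["WANDB_RUN_ID"] = "phup4zp1"  # the alphanumeric id from the run URL

# Option 2: initialize the run explicitly before constructing the Trainer;
# the callback reuses an existing run instead of starting a new one.
run = wandb.init(project="projectname", id="phup4zp1", resume="must")
```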
transformers | 25,031 | closed | Fix doctest | # What does this PR do?
Fix a doctest (that is recently added): precision issue | 07-24-2023 09:27:28 | 07-24-2023 09:27:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,030 | closed | [`generate`] Only warn users if the `generation_config`'s `max_length` is set to the default value | # What does this PR do?
Updates the condition for raising a warning in generate to warn users if the `generation_config`'s `max_length` is set to the default value. | 07-24-2023 09:15:50 | 07-24-2023 09:15:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger as discussed on Slack with @ArthurZucker: `max_new_tokens` can't be used to set a default `generation_config` that works well out of the box -- `max_length` does. As such, let's adapt the warning to enable us (and the users) to set good default arguments. |
transformers | 25,029 | closed | Bug in autotokenizer some open source models such as falcon. | ### System Info
I want to get the sentence difference and use autotokenizer a falcon model 40B and 7B but usually, I receive an attribute error bug: model 'signal' has no attribute 'SIGALRM'.
![image](https://github.com/huggingface/transformers/assets/23569519/75d671dd-8b57-4447-acf2-5c093b79eddd)
@younesbelkada @ArthurZucker @Rocketknight1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b-instruct")
```
![image](https://github.com/huggingface/transformers/assets/23569519/e53a5a08-459e-4dde-bb95-ac151a018a33)
### Expected behavior
Provide a method to get word embeddings using sentence-transformers or AutoTokenizer with open-source models such as Falcon or Llama 2. | 07-24-2023 07:37:22 | 07-24-2023 07:37:22 | Hey! Could you provide a full trace so we know where the error comes from?
I cannot reproduce the error with the script you provided. Make sure you are using the latest version of transformers too! <|||||>Thanks.
![image](https://github.com/huggingface/transformers/assets/23569519/4921397e-8ae2-4cfe-925e-8415b5ab66af)
<|||||>It seems that [SIGALRM is not available on Windows](https://stackoverflow.com/questions/52779920/why-is-signal-sigalrm-not-working-in-python-on-windows). But I am not very familiar with this, so I will let @sgugger answer!
As per the [contribution guidelines](https://github.com/huggingface/transformers/blob/c9a82be592ca305180a7ab6a36e884bca1d426b8/CONTRIBUTING.md), could you provide the platform you are running this on? <|||||>
- which version of python are you using?
- which version of transformers are you using?
- are you on linux or windows? <|||||>You can avoid this error by passing `trust_remote_code=True` (which is needed for this model). We will make the error message clearer when signal is not available. |
transformers | 25,028 | closed | [docs] duplicate table of contents in perf_infer_gpu_one.mdx | ## Description
There are duplicate titles [**Requirements**] in `perf_infer_gpu_one.md` at lines 51 and 117, which causes an error when jumping to sections from the table of contents.
## Document / Language
`perf_infer_gpu_one.md` / en
## Suggestion
line 51
As is :
```### Requirements```
To be :
```### Requirements [[requirements-for-fp4-mixedprecision-inference]]```
line 117
As is :
```### Requirements```
To be :
```### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]```
Please let me know if I missed something in the guidelines.
Thank you in advance for your attention to it! | 07-24-2023 05:44:47 | 07-24-2023 05:44:47 | I can't see this on the main branch, maybe this was fixed by the recent refactor of this guide?<|||||>Good catch! I don't think this [doc](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one) was refactored in the recent PR (it was just moved), and I'm getting the same error on the `main` branch.
Would you like to open a PR with your proposed fix? 🤗 <|||||>Thanks for your feedback. I opened PR https://github.com/huggingface/transformers/pull/25066! |
transformers | 25,027 | closed | Llama-2 7B fine-tuning with DeepSpeed: OOM error when loading the best model at the end with `load_best_model_at_end` set to True. | ### System Info
Hi Community!
I am using `run_clm.py` with `deepspeed` to fine-tune Llama-2 7B on a `g5.12xlarge` EC2 instance (4 GPUs, 96 GB total GPU memory, 48 vCPUs with 192 GB RAM).
* Transformers version: 4.28.1
* DeepSpeed version: 0.10.0 (latest)
* Instance: `g5.12xlarge` EC2 instance (4 GPUs, 96 GB total GPU memory, 48 vCPUs with 192 GB RAM).
* DeepSpeed file config: [ds_config.pdf](https://github.com/huggingface/transformers/files/12140984/ds_config.pdf)
* Invoking command: `cmd = /opt/conda/bin/python3.10 -u -m deepspeed.launcher.launch --world_info=<OMIT_AS_NON_IMPORTANT> --master_addr=<OMIT_AS_NON_IMPORTANT> --master_port=<OMIT_AS_NON_IMPORTANT>--enable_each_rank_log=None run_clm.py --deepspeed ds_config.json --model_name_or_path /tmp --train_file /opt/ml/input/data/train --do_train --output_dir /opt/ml/model --num_train_epochs 1 --gradient_accumulation_steps 4 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_steps 10 --warmup_ratio 0.1 --learning_rate 6e-06 --weight_decay 0.2 --seed 10 --max_input_length -1 --validation_split_ratio 0.1 --train_data_split_seed 0 --max_steps -1 --early_stopping_patience 3 --early_stopping_threshold 0.0 --adam_beta1 0.9 --adam_beta2 0.999 --max_grad_norm 1.0 --label_smoothing_factor 0.0 --logging_strategy steps --save_strategy steps --save_steps 10 --dataloader_num_workers 0 --lr_scheduler_type constant_with_warmup --warmup_steps 0 --evaluation_strategy steps --eval_steps 10 --bf16 --instruction_tuned --gradient_checkpointing --save_total_limit 1`
The model trains successfully until the very last step of loading the best model. Because the `load_best_model_at_end` argument is True, when trainer.py uses the DeepSpeed engine to load the model, it goes OOM.
### Who can help?
@sgugger
@pacman100
@ArthurZucker and @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
During the stage of loading the best model, I noticed that the DeepSpeed initialization log that normally appears at the beginning of training was printed again. I then checked and found this [line](https://github.com/huggingface/transformers/blob/04ab5605fbb4ef207b10bf2772d88c53fc242e83/src/transformers/trainer.py#L2184) in trainer.py, so I suspect the OOM is due to an unnecessary call to the DeepSpeed init function (as also indicated by the comment above the code).
Next, I used the latest version of Transformers, `4.31.0`, since it no longer uses the DeepSpeed init to load the best model ([line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2107) and [deepspeed loading function](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L371)). Then I hit a Llama 2 configuration bug; see below. I don't know why, when loading the best model, this [line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2107) for DeepSpeed is not triggered but [this line](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L2168) is.
```
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:1934] 2023-07-24 02:37:09,873 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:2093] 2023-07-24 02:37:09,889 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.4037604331970215).
[INFO|trainer.py:2093] 2023-07-24 02:37:09,889 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.4037604331970215).
Traceback (most recent call last):
File "/opt/ml/code/run_clm.py", line 229, in <module>
main()
File "/opt/ml/code/run_clm.py", line 178, in main
train_result = trainer.train() # load model/optimizer/scheduler states
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1944, in _inner_training_loop
self._load_best_model()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2168, in _load_best_model
load_result = load_sharded_checkpoint(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 431, in load_sharded_checkpoint
model.load_state_dict(state_dict, strict=False)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
#011size mismatch for model.layers.24.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.24.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.25.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.26.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.27.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.28.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.29.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.30.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.31.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for lm_head.weight: copying a param with shape torch.Size([32004, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
[2023-07-24 02:37:16,892] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 164
[2023-07-24 02:37:20,147] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 165
Traceback (most recent call last):
File "/opt/ml/code/run_clm.py", line 229, in <module>
main()
File "/opt/ml/code/run_clm.py", line 178, in main
train_result = trainer.train() # load model/optimizer/scheduler states
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1944, in _inner_training_loop
self._load_best_model()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2168, in _load_best_model
load_result = load_sharded_checkpoint(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 431, in load_sharded_checkpoint
model.load_state_dict(state_dict, strict=False)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
#011size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([32004, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.0.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.1.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.2.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.3.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.4.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.5.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.6.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.7.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.8.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.9.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.10.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.11.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.12.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.13.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.14.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.15.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.16.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.17.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.18.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.19.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.20.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.21.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.22.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.q_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.k_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.v_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.self_attn.o_proj.weight: copying a param with shape torch.Size([4096, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.gate_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.up_proj.weight: copying a param with shape torch.Size([11008, 4096]) from checkpoint, the shape in current model is torch.Size([0]).
#011size mismatch for model.layers.23.mlp.down_proj.weight: copying a param with shape torch.Size([4096, 11008]) from checkpoint, the shape in current model is torch.Size([0]).
```
Then I thought that, while I am waiting for HF to fix the configuration issue with Llama 2, I can use the [latest code for loading the best model](https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/deepspeed.py#L371) from `transformers 4.31.0` and apply it to the code with `transformers 4.28.1`.
Thus I disabled `load_best_model_at_end` and tried to load the checkpoint after `Trainer.train()` with the following code.
```
import glob

train_result = trainer.train()
checkpoint_dirs = sorted(glob.glob("/opt/ml/model/checkpoint-*"))
checkpoint_path = checkpoint_dirs[0]  # only one checkpoint dir exists because save_total_limit is 1
# Reload the remaining checkpoint into the wrapped DeepSpeed engine; optimizer and
# LR scheduler states are skipped since the model is only used for inference.
load_path, _ = trainer.model_wrapped.load_checkpoint(
    checkpoint_path, load_optimizer_states=False, load_lr_scheduler_states=False
)
trainer.save_model()
```
I hit OOM when I specified `load_optimizer_states` and `load_lr_scheduler_states` as True. Then I reasoned that, since the saved model is only used for evaluation/inference rather than for resuming training from the checkpoint, I don't need the optimizer and LR scheduler states. However, when I set them to False, I still hit the error.
Please advise what you think about this issue. Thanks!
### Expected behavior
I expect the best model to be loaded without an OOM error, since the model trains successfully before hitting the final saving step. | 07-24-2023 02:38:24 | 07-24-2023 02:38:24 | Hello, I'm able to run the following minimal example without any issues:
```
export WANDB_DISABLED="true"
export CUDA_VISIBLE_DEVICES="0,1"
cd transformers
deepspeed --num_nodes 1 --num_gpus 2 --master_port 10999 /home/sourab/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --max_train_samples 30 --max_eval_samples 10 --block_size 512 --overwrite_output_dir --gradient_checkpointing --save_strategy "steps" --evaluation_strategy "steps" --eval_steps 10 --save_steps 10 --load_best_model_at_end --output_dir /tmp/test-clm --deepspeed /home/sourab/transformers/tests/deepspeed/ds_config_zero3.json
```
output:
```
2023-07-24 10:39:47,947] [INFO] [config.py:950:print_user_config] json = {
"fp16": {
"enabled": false,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": false
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 5e-05,
"betas": [0.9, 0.999],
"eps": 1e-08,
"weight_decay": 0.0
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 5e-05,
"warmup_num_steps": 0
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1.000000e+09,
"reduce_bucket_size": 5.898240e+05,
"stage3_prefetch_bucket_size": 5.308416e+05,
"stage3_param_persistence_threshold": 7.680000e+03,
"stage3_max_live_parameters": 1.000000e+09,
"stage3_max_reuse_distance": 1.000000e+09,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"steps_per_print": inf,
"train_batch_size": 2,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": false
}
[INFO|trainer.py:1682] 2023-07-24 10:39:47,947 >> ***** Running training *****
[INFO|trainer.py:1683] 2023-07-24 10:39:47,947 >> Num examples = 30
[INFO|trainer.py:1684] 2023-07-24 10:39:47,947 >> Num Epochs = 3
[INFO|trainer.py:1685] 2023-07-24 10:39:47,947 >> Instantaneous batch size per device = 1
[INFO|trainer.py:1688] 2023-07-24 10:39:47,947 >> Total train batch size (w. parallel, distributed & accumulation) = 2
[INFO|trainer.py:1689] 2023-07-24 10:39:47,947 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1690] 2023-07-24 10:39:47,947 >> Total optimization steps = 45
[INFO|trainer.py:1691] 2023-07-24 10:39:47,947 >> Number of trainable parameters = 124,439,808
0%| | 0/45 [00:00<?, ?it/s][WARNING|logging.py:295] 2023-07-24 10:39:48,027 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
[WARNING|logging.py:295] 2023-07-24 10:39:48,027 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
22%|████████████████████ | 10/45 [00:05<00:15, 2.27it/s][INFO|trainer.py:3081] 2023-07-24 10:39:53,150 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-07-24 10:39:53,150 >> Num examples = 10
[INFO|trainer.py:3086] 2023-07-24 10:39:53,151 >> Batch size = 1
{'eval_loss': 3.356262683868408, 'eval_accuracy': 0.3947162426614481, 'eval_runtime': 0.5527, 'eval_samples_per_second': 18.092, 'eval_steps_per_second': 9.046, 'epoch': 0.67}
22%|████████████████████ | 10/45 [00:05<00:15, 2.27it/s[INFO|trainer.py:2807] 2023-07-24 10:39:53,991 >> Saving model checkpoint to /tmp/test-clm/checkpoint-10
[INFO|configuration_utils.py:458] 2023-07-24 10:39:53,991 >> Configuration saved in /tmp/test-clm/checkpoint-10/config.json
[INFO|configuration_utils.py:379] 2023-07-24 10:39:53,992 >> Configuration saved in /tmp/test-clm/checkpoint-10/generation_config.json
[INFO|modeling_utils.py:1855] 2023-07-24 10:39:54,649 >> Model weights saved in /tmp/test-clm/checkpoint-10/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:39:54,650 >> tokenizer config file saved in /tmp/test-clm/checkpoint-10/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:39:54,650 >> Special tokens file saved in /tmp/test-clm/checkpoint-10/special_tokens_map.json
[2023-07-24 10:39:54,735] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-07-24 10:39:54,738] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-07-24 10:39:54,738] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:39:54,744] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:39:54,744] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-24 10:39:57,379] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-24 10:39:57,379] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-07-24 10:39:57,386] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!
44%|████████████████████████████████████████ | 20/45 [00:13<00:12, 2.07it/s][INFO|trainer.py:3081] 2023-07-24 10:40:01,597 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-07-24 10:40:01,598 >> Num examples = 10
[INFO|trainer.py:3086] 2023-07-24 10:40:01,598 >> Batch size = 1
{'eval_loss': 3.3019282817840576, 'eval_accuracy': 0.40371819960861055, 'eval_runtime': 0.3621, 'eval_samples_per_second': 27.618, 'eval_steps_per_second': 13.809, 'epoch': 1.33}
44%|████████████████████████████████████████ | 20/45 [00:14<00:12, 2.07it/s[INFO|trainer.py:2807] 2023-07-24 10:40:02,302 >> Saving model checkpoint to /tmp/test-clm/checkpoint-20
[INFO|configuration_utils.py:458] 2023-07-24 10:40:02,303 >> Configuration saved in /tmp/test-clm/checkpoint-20/config.json
[INFO|configuration_utils.py:379] 2023-07-24 10:40:02,303 >> Configuration saved in /tmp/test-clm/checkpoint-20/generation_config.json
[INFO|modeling_utils.py:1855] 2023-07-24 10:40:02,971 >> Model weights saved in /tmp/test-clm/checkpoint-20/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:02,971 >> tokenizer config file saved in /tmp/test-clm/checkpoint-20/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:02,972 >> Special tokens file saved in /tmp/test-clm/checkpoint-20/special_tokens_map.json
[2023-07-24 10:40:03,063] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step20 is about to be saved!
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-07-24 10:40:03,066] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-07-24 10:40:03,066] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:40:03,080] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:40:03,081] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-24 10:40:06,196] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-24 10:40:06,197] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-20/global_step20/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-07-24 10:40:06,204] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step20 is ready now!
67%|████████████████████████████████████████████████████████████ | 30/45 [00:22<00:07, 2.01it/s][INFO|trainer.py:3081] 2023-07-24 10:40:10,531 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-07-24 10:40:10,531 >> Num examples = 10
[INFO|trainer.py:3086] 2023-07-24 10:40:10,531 >> Batch size = 1
{'eval_loss': 3.2902770042419434, 'eval_accuracy': 0.40332681017612526, 'eval_runtime': 0.4135, 'eval_samples_per_second': 24.186, 'eval_steps_per_second': 12.093, 'epoch': 2.0}
67%|████████████████████████████████████████████████████████████ | 30/45 [00:22<00:07, 2.01it/s[INFO|trainer.py:2807] 2023-07-24 10:40:11,199 >> Saving model checkpoint to /tmp/test-clm/checkpoint-30
[INFO|configuration_utils.py:458] 2023-07-24 10:40:11,200 >> Configuration saved in /tmp/test-clm/checkpoint-30/config.json
[INFO|configuration_utils.py:379] 2023-07-24 10:40:11,200 >> Configuration saved in /tmp/test-clm/checkpoint-30/generation_config.json
[INFO|modeling_utils.py:1855] 2023-07-24 10:40:12,098 >> Model weights saved in /tmp/test-clm/checkpoint-30/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:12,098 >> tokenizer config file saved in /tmp/test-clm/checkpoint-30/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:12,098 >> Special tokens file saved in /tmp/test-clm/checkpoint-30/special_tokens_map.json
[2023-07-24 10:40:12,188] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step30 is about to be saved!
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-07-24 10:40:12,191] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-07-24 10:40:12,191] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:40:12,197] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:40:12,198] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-24 10:40:15,492] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-24 10:40:15,492] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-07-24 10:40:15,499] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step30 is ready now!
89%|████████████████████████████████████████████████████████████████████████████████ | 40/45 [00:31<00:02, 2.02it/s][INFO|trainer.py:3081] 2023-07-24 10:40:19,832 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-07-24 10:40:19,832 >> Num examples = 10
[INFO|trainer.py:3086] 2023-07-24 10:40:19,832 >> Batch size = 1
{'eval_loss': 3.3038055896759033, 'eval_accuracy': 0.40136986301369865, 'eval_runtime': 0.4144, 'eval_samples_per_second': 24.13, 'eval_steps_per_second': 12.065, 'epoch': 2.67}
89%|████████████████████████████████████████████████████████████████████████████████ | 40/45 [00:32<00:02, 2.02it/s[INFO|trainer.py:2807] 2023-07-24 10:40:20,497 >> Saving model checkpoint to /tmp/test-clm/checkpoint-40
[INFO|configuration_utils.py:458] 2023-07-24 10:40:20,497 >> Configuration saved in /tmp/test-clm/checkpoint-40/config.json
[INFO|configuration_utils.py:379] 2023-07-24 10:40:20,498 >> Configuration saved in /tmp/test-clm/checkpoint-40/generation_config.json
[INFO|modeling_utils.py:1855] 2023-07-24 10:40:21,169 >> Model weights saved in /tmp/test-clm/checkpoint-40/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:21,169 >> tokenizer config file saved in /tmp/test-clm/checkpoint-40/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:21,169 >> Special tokens file saved in /tmp/test-clm/checkpoint-40/special_tokens_map.json
[2023-07-24 10:40:21,259] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step40 is about to be saved!
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-07-24 10:40:21,262] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-07-24 10:40:21,262] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:40:21,268] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:40:21,268] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-24 10:40:23,964] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-24 10:40:23,964] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /tmp/test-clm/checkpoint-40/global_step40/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-07-24 10:40:23,971] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step40 is ready now!
100%|██████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:38<00:00, 1.37it/s][INFO|trainer.py:1930] 2023-07-24 10:40:26,063 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:2089] 2023-07-24 10:40:26,063 >> Loading best model from /tmp/test-clm/checkpoint-30 (score: 3.2902770042419434).
[INFO|deepspeed.py:381] 2023-07-24 10:40:26,063 >> Attempting to resume from /tmp/test-clm/checkpoint-30
[2023-07-24 10:40:26,073] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:40:26,077] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:40:26,078] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-24 10:40:26,082] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-24 10:40:26,086] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-24 10:40:26,479] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /tmp/test-clm/checkpoint-30/global_step30/zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-24 10:40:26,479] [INFO] [engine.py:2865:_get_all_zero_checkpoint_state_dicts] successfully read 2 ZeRO state_dicts for rank 0
[2023-07-24 10:40:26,605] [INFO] [engine.py:2815:_load_zero_checkpoint] loading 2 zero partition checkpoints for rank 0
{'train_runtime': 38.7307, 'train_samples_per_second': 2.324, 'train_steps_per_second': 1.162, 'train_loss': 3.3458041720920138, 'epoch': 3.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:38<00:00, 1.16it/s]
[INFO|trainer.py:2807] 2023-07-24 10:40:26,966 >> Saving model checkpoint to /tmp/test-clm
[INFO|configuration_utils.py:458] 2023-07-24 10:40:26,967 >> Configuration saved in /tmp/test-clm/config.json
[INFO|configuration_utils.py:379] 2023-07-24 10:40:26,967 >> Configuration saved in /tmp/test-clm/generation_config.json
[INFO|modeling_utils.py:1855] 2023-07-24 10:40:28,333 >> Model weights saved in /tmp/test-clm/pytorch_model.bin
[INFO|tokenization_utils_base.py:2210] 2023-07-24 10:40:28,333 >> tokenizer config file saved in /tmp/test-clm/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-24 10:40:28,333 >> Special tokens file saved in /tmp/test-clm/special_tokens_map.json
***** train metrics *****
epoch = 3.0
train_loss = 3.3458
train_runtime = 0:00:38.73
train_samples = 30
train_samples_per_second = 2.324
train_steps_per_second = 1.162
07/24/2023 10:40:28 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:3081] 2023-07-24 10:40:28,418 >> ***** Running Evaluation *****
[INFO|trainer.py:3083] 2023-07-24 10:40:28,418 >> Num examples = 10
[INFO|trainer.py:3086] 2023-07-24 10:40:28,418 >> Batch size = 1
100%|████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15.77it/s]
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.4033
eval_loss = 3.2903
eval_runtime = 0:00:00.38
eval_samples = 10
eval_samples_per_second = 26.017
eval_steps_per_second = 13.009
perplexity = 26.8503
[2023-07-24 10:40:30,989] [INFO] [launch.py:347:main] Process 1140775 exits successfully.
[2023-07-24 10:40:31,991] [INFO] [launch.py:347:main] Process 1140774 exits successfully.
```
<|||||>Thanks @pacman100! Which model are you using in above example? previously I am also able to successfully run with GPT-Neo models (a relative small model) but hit issue with large models like Falcon 7B and IIama 2 7B on g5.12xlarge.<|||||>Hello @Neo9061, above PR https://github.com/huggingface/transformers/pull/25057 should fix this, please confirm the same.<|||||>Thanks @pacman100 for the quick fix! Just for my understanding, any insight why I used the code from transformers 4.31.0 (shown as below) and still hit the OOM error? I mean for my previous investigation. (for context details, plz see my post above. THX!)
In the meantime, I am testing your fix above. Will update in this thread.
```
import glob  # needed for the checkpoint glob below

train_result = trainer.train()
checkpoint_dirs = sorted(glob.glob(f"/opt/ml/model/checkpoint-*"))
checkpoint_path = checkpoint_dirs[0] # this is because I set total_save_limit as 1
load_path, _ = trainer.model_wrapped.load_checkpoint(
checkpoint_path, load_optimizer_states=False, load_lr_scheduler_states=False
)
trainer.save_model()
```<|||||>Hello, see this issue: https://github.com/huggingface/accelerate/issues/1707<|||||>Hi sorry for a probably unrelated problem here.
If I want to save the model in fp16, what should I do? I know fp16 (AMP) is a way of accelerating training and saving memory in some cases, but the saved parameters are still fp32.
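Concretely, something like the sketch below is what I have in mind (just an illustration of the idea — I'm not sure it is the recommended way, or how it interacts with ZeRO-3 partitioned weights):
```python
# rough sketch: cast the trained model to fp16 and save it, so the checkpoint on
# disk is half precision, similar to the released Llama weights.
# `trainer` is the Trainer from the training run above; the output path is hypothetical.
model = trainer.model                         # underlying HF model after training (fp32)
model = model.half()                          # cast parameters to torch.float16
model.save_pretrained("/opt/ml/model/fp16")   # saved weights are now fp16

# later, load it directly in half precision for faster inference:
# model = AutoModelForCausalLM.from_pretrained("/opt/ml/model/fp16", torch_dtype=torch.float16)
```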
I just want to do something similar to the Llama model, whose parameters are stored in fp16, so that inference is faster.<|||||>Hi @pacman100 I still see the error using your branch of transformers. See log below. Please let me know if there is anything you want me to provide. THX!
Second thought: for evaluation/inference purposes, I don't need the optimizer and LR scheduler states. Is there a way to skip saving those and save some memory?
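What I am imagining is something along these lines (rough sketch only — I don't know whether the Trainer/DeepSpeed integration supports skipping the optimizer shards during checkpointing):
```python
# after training, keep only the consolidated model weights and skip the per-rank
# optimizer/scheduler shards that save_checkpoint() writes
trainer.save_model("/opt/ml/model/final")  # model weights/config only

# or go through the DeepSpeed engine directly and dump 16-bit weights
# (assuming trainer.model_wrapped is the DeepSpeed engine here)
trainer.model_wrapped.save_16bit_model("/opt/ml/model/final", "pytorch_model.bin")
```
Anyway, here is the full log from the run on your branch: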
```
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] bfloat16_enabled ............. True
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_parallel_write_pipeline False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_tag_validation_enabled True
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] checkpoint_tag_validation_fail False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f9090172bf0>
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] communication_data_type ...... None
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] curriculum_enabled_legacy .... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] curriculum_params_legacy ..... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] data_efficiency_enabled ...... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dataloader_drop_last ......... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] disable_allgather ............ False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dump_state ................... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] dynamic_loss_scale_args ...... None
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_enabled ........... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_gas_boundary_resolution 1
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_layer_name ........ bert.encoder.layer
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_layer_num ......... 0
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_max_iter .......... 100
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_stability ......... 1e-06
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_tol ............... 0.01
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] eigenvalue_verbose ........... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] elasticity_enabled ........... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_auto_cast ............... None
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_enabled ................. False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] fp16_master_weights_and_gradients False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] global_rank .................. 0
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] grad_accum_dtype ............. None
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_accumulation_steps .. 2
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_clipping ............ 1.0
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] gradient_predivide_factor .... 1.0
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] initial_dynamic_scale ........ 1
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] load_universal_checkpoint .... False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] loss_scale ................... 1.0
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] memory_breakdown ............. False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] mics_hierarchial_params_gather False
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] mics_shard_size .............. -1
[2023-07-25 00:30:36,502] [INFO] [config.py:964:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_legacy_fusion ...... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_name ............... adamw
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] optimizer_params ............. {'lr': 6e-06, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.2}
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pld_enabled .................. False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] pld_params ................... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] prescale_gradients ........... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] scheduler_name ............... WarmupLR
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 6e-06, 'warmup_num_steps': 2}
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] sparse_attention ............. None
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] sparse_gradients_enabled ..... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] steps_per_print .............. inf
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] train_batch_size ............. 16
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] train_micro_batch_size_per_gpu 2
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] use_node_local_storage ....... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] wall_clock_breakdown ......... False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] world_size ................... 4
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_allow_untested_optimizer False
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=16777216 allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='cpu', nvme_path=None, buffer_count=5, buffer_size=100,000,000, max_in_cpu=1,000,000,000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='cpu', nvme_path=None, buffer_count=4, pin_memory=False, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=15099494 param_persistence_threshold=40960 model_persistence_threshold=sys.maxsize max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_enabled ................. True
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_force_ds_cpu_optimizer .. True
[2023-07-25 00:30:36,503] [INFO] [config.py:964:print] zero_optimization_stage ...... 3
[2023-07-25 00:30:36,503] [INFO] [config.py:950:print_user_config] json = {
"fp16": {
"enabled": false,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 12,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 6e-06,
"betas": [0.9, 0.999],
"eps": 1e-08,
"weight_decay": 0.2
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 6e-06,
"warmup_num_steps": 2
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": false
},
"offload_param": {
"device": "cpu",
"pin_memory": false
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1.000000e+09,
"reduce_bucket_size": 1.677722e+07,
"stage3_prefetch_bucket_size": 1.509949e+07,
"stage3_param_persistence_threshold": 4.096000e+04,
"stage3_max_live_parameters": 1.000000e+09,
"stage3_max_reuse_distance": 1.000000e+09,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": 2,
"gradient_clipping": 1.0,
"steps_per_print": inf,
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 2,
"wall_clock_breakdown": false
}
[INFO|trainer.py:1682] 2023-07-25 00:30:36,503 >> ***** Running training *****
[INFO|trainer.py:1683] 2023-07-25 00:30:36,503 >> Num examples = 180
[INFO|trainer.py:1684] 2023-07-25 00:30:36,503 >> Num Epochs = 1
[INFO|trainer.py:1685] 2023-07-25 00:30:36,504 >> Instantaneous batch size per device = 2
[INFO|trainer.py:1688] 2023-07-25 00:30:36,504 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1689] 2023-07-25 00:30:36,504 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1690] 2023-07-25 00:30:36,504 >> Total optimization steps = 11
[INFO|trainer.py:1682] 2023-07-25 00:30:36,503 >> ***** Running training *****
[INFO|trainer.py:1683] 2023-07-25 00:30:36,503 >> Num examples = 180
[INFO|trainer.py:1684] 2023-07-25 00:30:36,503 >> Num Epochs = 1
[INFO|trainer.py:1685] 2023-07-25 00:30:36,504 >> Instantaneous batch size per device = 2
[INFO|trainer.py:1688] 2023-07-25 00:30:36,504 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1689] 2023-07-25 00:30:36,504 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1690] 2023-07-25 00:30:36,504 >> Total optimization steps = 11
[INFO|trainer.py:1691] 2023-07-25 00:30:36,505 >> Number of trainable parameters = 6,738,448,384
[INFO|trainer.py:1691] 2023-07-25 00:30:36,505 >> Number of trainable parameters = 6,738,448,384
0%| | 0/11 [00:00<?, ?it/s]
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
[WARNING|logging.py:280] 2023-07-25 00:30:36,510 >> You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
[WARNING|logging.py:280] 2023-07-25 00:30:36,510 >> You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
07/25/2023 00:31:11 - INFO - __main__ - !!!!!!At this step throughput is 0.45318892143877243
9%|▉ | 1/11 [00:35<05:53, 35.31s/it]
07/25/2023 00:31:42 - INFO - __main__ - !!!!!!At this step throughput is 0.47042510136622717
18%|█▊ | 2/11 [01:05<04:51, 32.37s/it]
07/25/2023 00:32:13 - INFO - __main__ - !!!!!!At this step throughput is 0.47886025282245415
27%|██▋ | 3/11 [01:36<04:14, 31.84s/it]
07/25/2023 00:32:44 - INFO - __main__ - !!!!!!At this step throughput is 0.4844130442539049
36%|███▋ | 4/11 [02:07<03:40, 31.47s/it]
07/25/2023 00:33:15 - INFO - __main__ - !!!!!!At this step throughput is 0.4884299545826904
45%|████▌ | 5/11 [02:38<03:07, 31.24s/it]
07/25/2023 00:33:45 - INFO - __main__ - !!!!!!At this step throughput is 0.4916091094101314
55%|█████▍ | 6/11 [03:09<02:35, 31.02s/it]
07/25/2023 00:34:17 - INFO - __main__ - !!!!!!At this step throughput is 0.49364129923765976
64%|██████▎ | 7/11 [03:41<02:05, 31.42s/it]
07/25/2023 00:34:48 - INFO - __main__ - !!!!!!At this step throughput is 0.4954246781847558
73%|███████▎ | 8/11 [04:12<01:33, 31.16s/it]
07/25/2023 00:35:18 - INFO - __main__ - !!!!!!At this step throughput is 0.4971914292369494
82%|████████▏ | 9/11 [04:41<01:01, 30.68s/it]
07/25/2023 00:35:48 - INFO - __main__ - !!!!!!At this step throughput is 0.49877618579058647
91%|█████████ | 10/11 [05:11<00:30, 30.55s/it]
{'loss': 1.7188, 'learning_rate': 6e-06, 'epoch': 0.87}
91%|█████████ | 10/11 [05:11<00:30, 30.55s/it]
[INFO|trainer.py:3080] 2023-07-25 00:35:48,400 >> ***** Running Evaluation *****
[INFO|trainer.py:3080] 2023-07-25 00:35:48,400 >> ***** Running Evaluation *****
[INFO|trainer.py:3082] 2023-07-25 00:35:48,400 >> Num examples = 20
[INFO|trainer.py:3085] 2023-07-25 00:35:48,400 >> Batch size = 8
[INFO|trainer.py:3082] 2023-07-25 00:35:48,400 >> Num examples = 20
[INFO|trainer.py:3085] 2023-07-25 00:35:48,400 >> Batch size = 8
0%| | 0/1 [00:00<?, ?it/s]#033[A
#033[A
{'eval_loss': 1.104188323020935, 'eval_runtime': 3.1127, 'eval_samples_per_second': 6.425, 'eval_steps_per_second': 0.321, 'epoch': 0.87}
91%|█████████ | 10/11 [05:15<00:30, 30.55s/it]
#015100%|██████████| 1/1 [00:00<00:00, 1080.45it/s]
#033[A
#033[A
[INFO|trainer.py:2806] 2023-07-25 00:36:03,394 >> Saving model checkpoint to /opt/ml/model/checkpoint-10
[INFO|trainer.py:2806] 2023-07-25 00:36:03,394 >> Saving model checkpoint to /opt/ml/model/checkpoint-10
[INFO|configuration_utils.py:458] 2023-07-25 00:36:03,394 >> Configuration saved in /opt/ml/model/checkpoint-10/config.json
[INFO|configuration_utils.py:458] 2023-07-25 00:36:03,394 >> Configuration saved in /opt/ml/model/checkpoint-10/config.json
[INFO|configuration_utils.py:379] 2023-07-25 00:36:03,395 >> Configuration saved in /opt/ml/model/checkpoint-10/generation_config.json
[INFO|configuration_utils.py:379] 2023-07-25 00:36:03,395 >> Configuration saved in /opt/ml/model/checkpoint-10/generation_config.json
[INFO|modeling_utils.py:1863] 2023-07-25 00:36:15,055 >> The model is bigger than the maximum size per checkpoint (10GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at /opt/ml/model/checkpoint-10/pytorch_model.bin.index.json.
[INFO|modeling_utils.py:1863] 2023-07-25 00:36:15,055 >> The model is bigger than the maximum size per checkpoint (10GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at /opt/ml/model/checkpoint-10/pytorch_model.bin.index.json.
[INFO|tokenization_utils_base.py:2210] 2023-07-25 00:36:15,055 >> tokenizer config file saved in /opt/ml/model/checkpoint-10/tokenizer_config.json
[INFO|tokenization_utils_base.py:2210] 2023-07-25 00:36:15,055 >> tokenizer config file saved in /opt/ml/model/checkpoint-10/tokenizer_config.json
[INFO|tokenization_utils_base.py:2217] 2023-07-25 00:36:15,055 >> Special tokens file saved in /opt/ml/model/checkpoint-10/special_tokens_map.json
[INFO|tokenization_utils_base.py:2217] 2023-07-25 00:36:15,055 >> Special tokens file saved in /opt/ml/model/checkpoint-10/special_tokens_map.json
[2023-07-25 00:36:15,659] [INFO] [logging.py:96:log_dist] [Rank 0] [Torch] Checkpoint global_step10 is about to be saved!
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1802: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
[2023-07-25 00:36:15,675] [INFO] [logging.py:96:log_dist] [Rank 0] Saving model checkpoint: /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt
[2023-07-25 00:36:15,675] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-25 00:36:15,689] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-25 00:36:15,689] [INFO] [torch_checkpoint_engine.py:21:save] [Torch] Saving /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-25 00:37:16,991] [INFO] [torch_checkpoint_engine.py:23:save] [Torch] Saved /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt.
[2023-07-25 00:37:16,992] [INFO] [engine.py:3285:_save_zero_checkpoint] zero checkpoint saved /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
[2023-07-25 00:37:17,699] [INFO] [torch_checkpoint_engine.py:33:commit] [Torch] Checkpoint global_step10 is ready now!
07/25/2023 00:37:49 - INFO - __main__ - !!!!!!At this step throughput is 0.49004957528181253
100%|██████████| 11/11 [07:12<00:00, 58.13s/it]
[INFO|trainer.py:1930] 2023-07-25 00:37:49,056 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:1930] 2023-07-25 00:37:49,056 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:2089] 2023-07-25 00:37:49,058 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.104188323020935).
[INFO|trainer.py:2089] 2023-07-25 00:37:49,058 >> Loading best model from /opt/ml/model/checkpoint-10 (score: 1.104188323020935).
[INFO|deepspeed.py:381] 2023-07-25 00:37:49,060 >> Attempting to resume from /opt/ml/model/checkpoint-10
[INFO|deepspeed.py:381] 2023-07-25 00:37:49,060 >> Attempting to resume from /opt/ml/model/checkpoint-10
[2023-07-25 00:37:49,109] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-25 00:37:49,143] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-25 00:37:49,151] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt...
[2023-07-25 00:37:49,161] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /opt/ml/model/checkpoint-10/global_step10/zero_pp_rank_0_mp_rank_00_model_states.pt.
[2023-07-25 00:37:49,180] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /opt/ml/model/checkpoint-10/global_step10/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt...
[2023-07-25 00:38:05,103] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 230
[2023-07-25 00:38:08,243] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 231
[2023-07-25 00:38:08,243] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 232
[2023-07-25 00:38:11,500] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 233
```<|||||>Second thought: how can I get away with load the best model using Trainer and implement it outside of Trainer? like this line in clm.py https://github.com/philschmid/huggingface-llama-2-samples/blob/18838c203285e7eefa2169e5413db4b8e8013a02/training/scripts/run_clm.py#L238<|||||>Hi @pacman100 gentle bump on above issue to see if there is anything I can provide to let you better root cause. THX a lot!<|||||>> Hello, see this issue: https://github.com/huggingface/accelerate/issues/1707
As mentioned, this is the issue and isn't related to DeepSpeed integration. Please follow up with the DeepSpeed team |
transformers | 25,026 | closed | load_in_8bit=True broken with new transformers | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Script based upon: https://huggingface.co/Salesforce/blip2-flan-t5-xl
```python
import requests
from PIL import Image
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
from transformers import Blip2Processor, Blip2ForConditionalGeneration
gpu_id = 0
device_map = {"": gpu_id}
blip_model = blip_processor = 'Salesforce/blip2-flan-t5-xl'
prompt = 'an image of'
device = 'cuda'
load_half = False
import torch
with torch.no_grad():
context_class_cast = torch.autocast
with context_class_cast(device):
processor = Blip2Processor.from_pretrained(blip_processor,
load_in_8bit=True,
device_map=device_map)
model = Blip2ForConditionalGeneration.from_pretrained(blip_model,
load_in_8bit=True,
device_map=device_map)
inputs = processor(image, prompt, return_tensors="pt")
output = model.generate(**inputs)
caption: str = processor.decode(output[0], skip_special_tokens=True)
print(caption)
```
output:
```
/home/jon/miniconda3/envs/h2ogpt/bin/python3.10 /home/jon/h2ogpt/checkblip2_mine.py
Console output is saving to: /home/jon/h2ogpt/pycharm.log
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so...
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/jon/miniconda3/envs/h2ogpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/opt/clang+llvm-4.0.0-x86_64-linux-gnu-ubuntu-16.04/lib'), PosixPath('/opt/rstudio-1.0.136/bin'), PosixPath('/usr/lib/jvm/default-java/jre/lib/amd64/server'), PosixPath('/home/jon/lib'), PosixPath('/usr/local/cuda/extras/CUPTI/lib64')}
warn(msg)
Loading checkpoint shards: 100%|██████████| 2/2 [00:07<00:00, 3.88s/it]
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:318: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
```
The caption comes back empty with transformers==4.31.0, but is correct with all older transformers releases from 4.30.2 backwards.
### Expected behavior
With all other dependencies kept fixed, just do:
```
pip install transformers==4.30.2
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting transformers==4.30.2
Downloading transformers-4.30.2-py3-none-any.whl (7.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.2/7.2 MB 55.8 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (3.12.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.16.4)
Requirement already satisfied: numpy>=1.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (1.23.5)
Requirement already satisfied: packaging>=20.0 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (23.1)
Requirement already satisfied: pyyaml>=5.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (2023.6.3)
Requirement already satisfied: requests in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.13.3)
Requirement already satisfied: safetensors>=0.3.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (0.3.1)
Requirement already satisfied: tqdm>=4.27 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from transformers==4.30.2) (4.65.0)
Requirement already satisfied: fsspec in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.2) (2023.6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.14.1->transformers==4.30.2) (4.7.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages (from requests->transformers==4.30.2) (2023.5.7)
Installing collected packages: transformers
Attempting uninstall: transformers
Found existing installation: transformers 4.31.0
Uninstalling transformers-4.31.0:
Successfully uninstalled transformers-4.31.0
Successfully installed transformers-4.30.2
(h2ogpt) jon@pseudotensor:~/h2ogpt$
```
Then re-run script and get:
```
/home/jon/miniconda3/envs/h2ogpt/bin/python3.10 /home/jon/h2ogpt/checkblip2_mine.py
Console output is saving to: /home/jon/h2ogpt/pycharm.log
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda121.so...
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/jon/miniconda3/envs/h2ogpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/lib/jvm/default-java/jre/lib/amd64/server'), PosixPath('/opt/rstudio-1.0.136/bin'), PosixPath('/opt/clang+llvm-4.0.0-x86_64-linux-gnu-ubuntu-16.04/lib'), PosixPath('/home/jon/lib'), PosixPath('/usr/local/cuda/extras/CUPTI/lib64')}
warn(msg)
Loading checkpoint shards: 100%|██████████| 2/2 [00:08<00:00, 4.04s/it]
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:318: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/generation/utils.py:1353: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
a woman and her dog on the beach
```
i.e. returns the caption: `a woman and her dog on the beach`
I also tried the latest bitsandbytes==0.41.0, and it has the same effect as the older 0.39.0.
I also tried the newer accelerate==0.21.0, and it has the same effect as the older 0.20.3.
I also tried latest transformers 4.32.0.dev0 and same effect. | 07-23-2023 17:29:20 | 07-23-2023 17:29:20 | cc @younesbelkada <|||||>Hi @pseudotensor
Thanks for the issue and the clean reproducer, I can confirm the issue persists on the current main branch of transformers and does not occur with #25047 - once that PR is merged your issue will be fixed
Also, there is no need to specify `load_in_8bit=True` and `device_map="auto"` in the `Blip2Processor.from_pretrained` method
```python
import requests
from PIL import Image
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
from transformers import Blip2Processor, Blip2ForConditionalGeneration
gpu_id = 0
device_map = {"": gpu_id}
blip_model = blip_processor = 'Salesforce/blip2-flan-t5-xl'
prompt = 'an image of'
device = 'cuda'
load_half = False
import torch
with torch.no_grad():
context_class_cast = torch.autocast
with context_class_cast(device):
processor = Blip2Processor.from_pretrained(blip_processor)
model = Blip2ForConditionalGeneration.from_pretrained(blip_model,
load_in_8bit=True,
device_map=device_map)
inputs = processor(image, prompt, return_tensors="pt")
output = model.generate(**inputs)
caption: str = processor.decode(output[0], skip_special_tokens=True)
print(caption)
``` |
transformers | 25,025 | open | KeyError: 'eval_loss' when doing LukeForMaskedLM | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: **Yes**
- Using distributed or parallel set-up in script?: **No**
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm using [run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) to evaluate some models on a custom dataset (one text per line). With some models (BERT, XLM-RoBERTa, DeBERTa) I managed to run the evaluation successfully, and even with LUKE I can start the evaluation, but after all the evaluation steps on [studio-ousia/mluke-base-lite](https://huggingface.co/studio-ousia/mluke-base-lite) I got the following:
```
Traceback (most recent call last):
File "/cfs/home/u021274/higo/./language-modeling/run_mlm.py", line 658, in <module>
main()
File "/cfs/home/u021274/higo/./language-modeling/run_mlm.py", line 629, in main
perplexity = math.exp(metrics["eval_loss"])
KeyError: 'eval_loss'
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /cfs/home/u021274/higo/./language-modeling/run_mlm.py:658 in <module> │
│ │
│ 655 │
│ 656 │
│ 657 if __name__ == "__main__": │
│ ❱ 658 │ main() │
│ 659 │
│ │
│ /cfs/home/u021274/higo/./language-modeling/run_mlm.py:629 in main │
│ │
│ 626 │ │ max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is n │
│ 627 │ │ metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) │
│ 628 │ │ try: │
│ ❱ 629 │ │ │ perplexity = math.exp(metrics["eval_loss"]) │
│ 630 │ │ except OverflowError: │
│ 631 │ │ │ perplexity = float("inf") │
│ 632 │ │ metrics["perplexity"] = perplexity │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'eval_loss'
```
To do that, I entered the following command:
```
CUDA_VISIBLE_DEVICES=5 python3 ./language-modeling/run_mlm.py \
--model_name_or_path studio-ousia/mluke-base-lite \
--validation_file ./data/test_data.txt \
--max_seq_length 512 \
--line_by_line True \
--do_eval \
--output_dir ./other_models/mluke-base-lite \
--fp16 True
```
The configuration above worked with all the other models mentioned above; I'm only having this issue with LUKE.
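For what it's worth, a guard like the following (a hypothetical tweak to run_mlm.py, untested — it only avoids the crash, it doesn't explain the missing loss) should at least let the script finish:
```python
# hypothetical guard around the existing perplexity computation in run_mlm.py
if "eval_loss" in metrics:
    try:
        perplexity = math.exp(metrics["eval_loss"])
    except OverflowError:
        perplexity = float("inf")
    metrics["perplexity"] = perplexity
else:
    logger.warning("The model did not return an eval loss; skipping perplexity.")
```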
### Expected behavior
Evaluation of mluke-base-lite on a given text dataset. | 07-23-2023 16:25:49 | 07-23-2023 16:25:49 | This example might need adjustments to be used on Luke, as the model has an API that is slightly different from BERT. |
transformers | 25,024 | closed | Weights of BlipModel are not initialized from the model checkpoint | ### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.15.0-76-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.15
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada @ArthurZucker @amyeroberts @ydshieh
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from PIL import Image
import requests
from transformers import AutoProcessor, BlipModel
model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base")
processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)
```
### Expected behavior
The code snippet is an example from https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipProcessor.
The warning that I get is:
Some weights of BlipModel were not initialized from the model checkpoint at Salesforce/blip-image-captioning-base and are newly initialized: ['text_model.encoder.layer.10.crossattention.output.dense.weight', 'text_model.encoder.layer.4.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.intermediate.dense.bias', 'text_model.encoder.layer.1.attention.self.value.bias', 'text_model.encoder.layer.5.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.attention.output.dense.bias', 'text_model.encoder.layer.1.crossattention.self.key.weight', 'text_model.encoder.layer.5.crossattention.self.key.bias', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.attention.self.value.weight', 'text_model.encoder.layer.8.attention.self.key.weight', 'text_model.encoder.layer.9.crossattention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.bias', 'text_model.encoder.layer.8.output.LayerNorm.bias', ...
It seems that the model weights are being initialised anew as there's some error with loading the pre-trained weights. Please guide me in solving this issue. | 07-23-2023 14:51:33 | 07-23-2023 14:51:33 | Also cc @ydshieh who was just discussing this internally :-)<|||||>Hi @Vibhu04
Thanks for the issue,
indeed there is a problem with `BlipModel` classes. Note that BlipModel would stand for the "pre-trained" versions of Blip to extract raw logits / hidden states from text and vision input. That class has been copied from CLIPModel class and needs a careful refactoring to be able to reproduce the correct pre-trained Blip models: https://github.com/salesforce/BLIP/blob/main/models/blip_pretrain.py#L112-L136 .
Even after the refactoring one would need to convert the pre-trained BLIP weights as they are different from existing weights on the Hub + they contain additional modules.
I can put that on my TODO but cannot give an accurate ETA, for now if you want to use Blip as a model to retrieve hidden states and logits, I would advise you to use `BlipForConditionalGeneration`<|||||>Hi @younesbelkada, thanks a lot for your prompt reply. I actually want to compute the image-text similarity score given an input image and a text, and I was hoping I could use `BlipModel` for that. Would there be a way of achieving this using `BlipForConditionalGeneration`? If not, is there any other `Blip` model class that I could use for this purpose?
Thanks a lot. <|||||>Thanks for your reply @Vibhu04
For computing image and text similarity score, I would advise you to use the ITM (image text matching) models: https://huggingface.co/Salesforce/blip-itm-base-coco
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
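# Added note (not part of the original reply): `itm_scores` is assumed to hold two logits
# per image-text pair (no-match / match), so a softmax would turn them into a match probability:
itm_probability = itm_scores.softmax(dim=-1)[:, 1]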
```<|||||>Hi @younesbelkada, thank you so much. If I may, I just have one last question: is there a lighter variant (i.e. fewer parameters) of the model that you mentioned? Thanks a lot.<|||||>Hi @Vibhu04
Thanks a lot, hm, to the best of my knowledge the smallest model of that kind is: https://huggingface.co/Salesforce/blip-itm-base-coco - however you can run it in half-precision to reduce its memory footprint by a factor of 2:
```python
import requests
from PIL import Image
import torch
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco", torch_dtype=torch.bfloat16)
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt").to(torch.bfloat16)
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```<|||||>Thank you so much for your help @younesbelkada! |
transformers | 25,023 | open | 🌐 [i18n-KO] Translated `tokenizer_summary.md` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `<tokenizer_summary>.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 가짜연구소 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
@sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> | 07-23-2023 09:20:13 | 07-23-2023 09:20:13 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25023). All of your documentation changes will be reflected on that endpoint.<|||||>I left a review but forgot to hit submit .. ㅠ<|||||>> I left a review but forgot to hit submit .. ㅠ
Haha, luckily I'm revising the translation right now, so I caught it!<|||||>> Nayeon, I always appreciate how you translate into plain, easy-to-understand language! On top of that, this document let me take a full tour of tokenizers, which was really informative 😊
>
> While reviewing, I left a few suggested edits related to the glossary. Please take a look!
It's been a while since I last worked on a translation, so there are quite a few glossary-related fixes.. Thanks for the thorough review!!
transformers | 25,022 | closed | Incorrect padding_side Setting as 'left' in Llama Family Model | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-1041-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.2
- Safetensors version: 0.3.1
### Who can help?
text models: @ArthurZucker and @younesbelkada generate: @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When utilizing the Llama Family Model for batch generation, an issue arises due to the lack of a padding token. To clarify, the original model uses pad_id = -1, implying the absence of a padding token. This logic is infeasible for our scenario.
Here is our proposed solution:
Firstly, a padding token should be added using the command tokenizer.add_special_tokens({"pad_token":"<pad>"}), following which the token embedding must be resized accordingly. It's essential to also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx). This ensures that encoding the padding token outputs zeros. Therefore, passing it during initialization is recommended.
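A minimal sketch of what this could look like in practice (the checkpoint name and the pad token string are only illustrative, and `padding_side="right"` anticipates the point made under "Expected behavior" below):
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Illustrative checkpoint; any Llama-family model should behave the same way.
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", padding_side="right")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))        # make room for the new pad token
model.config.pad_token_id = tokenizer.pad_token_id   # keep the model config in sync

batch = tokenizer(["Hello", "A much longer prompt"], padding=True, return_tensors="pt")
```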
### Expected behavior
Another important aspect is setting the padding_side to 'right'. This is crucial for correct padding direction. | 07-23-2023 09:16:04 | 07-23-2023 09:16:04 | Hey! Indeed, as it was written in the documentation a padding token is required. Seems that by default the padding side is set to `left`. We cannot update the `tokenization` file (for backward compatibility reasons) but we can update the tokenizers online to make sure they use `padding_side = right` by default. <|||||>> Hey! Indeed, as it was written in the documentation a padding token is required. Seems that by default the padding side is set to `left`. We cannot update the `tokenization` file (for backward compatibility reasons) but we can update the tokenizers online to make sure they use `padding_side = right` by default.
Great, it would be nice to update the default padding_side of those models.
transformers | 25,021 | open | fp16 DDP training in 4.31.0 | ### System Info
pytorch 1.13.1
transformers==4.31.0
### Who can help?
Hi @sgugger ,
I used the 4.31.0 to train a Llama model with LoRA. I observe some problems with --fp16 training and I'm not sure if it is a bug in Trainer.py:
My model is like:
```
class MyModel(nn.Module):
def __init__(self, model_name):
super().__init__()
self.model_name = model_name
self.base_model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
self.base_model = get_peft_model(self.base_model, lora_config)
self.other_modules = nn.Linear(4096, 4096)
```
I used the Trainer to train the model with the following command line:
`torchrun --nproc_per_node=4 main.py --max_steps 100000 --fp16`
I find the model's gradients (in self.optimizer in the Trainer) are not fp16 but fp32. Is it correct?
Also, I find that no gradient_scaling is conducted during training since self.do_grad_scaling is always False (because self.sharded_ddp is None and args.half_precision_backend will be always "auto"). The current trainer.py will not correctly set up args.half_precision_backend and scaler if self.sharded_ddp is None. Are these observations expected? I'm a little confused why setting up args.half_precision_backend and scaler require sharded_ddp. As a result, I've found that during the training process, I often encounter the loss becoming NaN. I'm not sure whether it is because no gradient_scaling is conducted and half_precision_backend is not correctly set up during training.
Following are my grad_norm (before grad_clipping) with and without --fp16. (My base model here is "JackFram/llama-160m" for debugging) **The results are significantly different.**
Without --fp16:
step 1: grad_norm=0.059
Step 5: grad_norm=0.054
Step 10: grad_norm=0.048
Step 15: grad_norm=0.050
Step 20: grad_norm=0.050
With --fp16:
Step 1: grad_norm = nan
Step 5: grad_norm = 129.88
Step 10: grad_norm=126.98
Step 15: grad_norm=149.58
Step 20: grad_norm=80.7
```
def compute_grad_norm(optimizer): # the function to compute grad_norm
total_norm = 0.0
for group in optimizer.param_groups:
for param in group['params']:
if param.grad is not None:
param_norm = param.grad.data.norm(2)
total_norm += param_norm.item() ** 2
total_norm = torch.sqrt(torch.tensor(total_norm))
return total_norm
```
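As the replies below point out, these --fp16 numbers are norms of *scaled* gradients; a rough sketch of how they could be unscaled before the comparison, assuming the `torch.cuda.amp.GradScaler` used by the training loop is accessible (the `scaler` handle here is hypothetical):
```python
import torch

def compute_unscaled_grad_norm(optimizer, scaler=None):
    # Divide out the loss scale so fp16 norms are comparable to an fp32/bf16 run.
    scale = scaler.get_scale() if scaler is not None else 1.0
    total_norm = 0.0
    for group in optimizer.param_groups:
        for param in group["params"]:
            if param.grad is not None:
                param_norm = (param.grad.detach() / scale).norm(2)
                total_norm += param_norm.item() ** 2
    return torch.sqrt(torch.tensor(total_norm))
```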
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Expected behavior
do_grad_scaling=True when --fp16 is enabled; rarely confronting loss becoming nan | 07-23-2023 07:58:35 | 07-23-2023 07:58:35 | Similar problem here. When using fp16 to train Llama2 with LoRA, always nan. Bf16 works well, but official Llama2 uses fp16.<|||||>Oh, I understand! It doesn't use the cuda.amp but uses accelerator to automatically handle the loss scale, right?
And the reason why fp16's gradient is large is that they're scaled gradients.
However, it is still strange that gradients in the optimizer are fp32. Are they designed so for scaling? I'm sorry that I'm not very familiar with accelerator<|||||>This is **mixed precision** training. The gradients are computed in float16 but converted to float32 to do the optimizer update as we can't do the update in low precision. As for properly computing the gradient norm, you need to unscale the gradients first to compare them to a training without mixed precision (or bfloat16 as the training in bfloat16 does not require gradient scaling).<|||||>> This is **mixed precision** training. The gradients are computed in float16 but converted to float32 to do the optimizer update as we can't do the update in low precision. As for properly computing the gradient norm, you need to unscale the gradients first to compare them to a training without mixed precision (or bfloat16 as the training in bfloat16 does not require gradient scaling).
Many thanks for your answering. If I use --fp16 during training, what other arguments should I add for this? I am confused that using --fp16 to tune Llama2 always meets nan, but --bf16 works well.<|||||>I also observed the same thing that mixed precision training of llama-7b is very frequently resulting in nan losses. The issue does not exist for 13b for me. As you say, bfloat16 is more stable. I dont think there is anything wrong in the code base, rather some strange peculiarity with the 7b weights? Curious if someone has some insights on that.<|||||>> I also observed the same thing that mixed precision training of llama-7b is very frequently resulting in nan losses. The issue does not exist for 13b for me. As you say, bfloat16 is more stable. I dont think there is anything wrong in the code base, rather some strange peculiarity with the 7b weights? Curious if someone has some insights on that.
That's an interesting finding. I haven't tried the 13b llama-2 yet.<|||||>I've run into something even more confusing! When I torchrun/deepspeed fp16 train glm or baichuan with 1 or 2 GPUs, the loss is OK. But when I use more than 2 GPUs, like 3, the loss will overflow until training fails! My GPUs are V100, and I have tried different versions of transformers/torch/deepspeed<|||||>> I've run into something even more confusing! When I torchrun/deepspeed fp16 train glm or baichuan with 1 or 2 GPUs, the loss is OK. But when I use more than 2 GPUs, like 3, the loss will overflow until training fails! My GPUs are V100, and I have tried different versions of transformers/torch/deepspeed
I observed similar results. When using more GPUs or larger gradient accumulation steps, the result doesn't become better (as expected) but often becomes worse (using fp16 on V100).<|||||>So why? What is the difference between 2 GPUs and 3 (or more) GPUs during training that causes the unexpected result? P.S.: I used to run the same training code on A100 with AMP fp16, which is OK.
transformers | 25,020 | open | GenerationMixin: model_kwargs not passed to model in assisted decoding | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("gpt2")
assist = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("The first rule of fight", return_tensors='pt')
outputs = model.generate(**inputs, token_type_ids=torch.tensor([[0,0,0,0,0]], dtype=torch.long))
print(tokenizer.decode(outputs[0]))
# output: The first rule of fight!!!!!!!!!!!!!!!
outputs = model.generate(**inputs, token_type_ids=torch.tensor([[0,0,0,0,0]], dtype=torch.long), num_beams=1, assistant_model=assist)
print(tokenizer.decode(outputs[0]))
# output: The first rule of fight-or-flight is to be prepared for the enemy. If you are
```
### Expected behavior
I would expect the outputs to be the same for the assisted generation as for the regular generation, as the token_type_ids is being passed into generate in both cases. It is expected that the `generate` method passes extra arguments to the model via its `prepare_inputs_for_generation` method.
In fact, the assisted generation does not forward the `model_kwargs` to the model as the other generation methods do. | 07-22-2023 22:25:29 | 07-22-2023 22:25:29 | I'm happy to have a go at fixing this if a maintainer is willing to support.<|||||>@sinking-point thank you for spotting it! Yes, I'd be very happy to support you in fixing this :D <|||||>@gante No problem, thank you for offering to support.
I've come up against a problem. This is from the GPT2 `prepare_inputs_for_generation` method, but I imagine it's the same for many other models:
```python
# only last token for inputs_ids if past is defined in kwargs
if past_key_values:
input_ids = input_ids[:, -1].unsqueeze(-1)
if token_type_ids is not None:
token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
```
It assumes that if past_key_values is given, you only need the last token. In assisted generation, this is not the case, as multiple candidate tokens go in one pass.
Arguably, this is a bug in the implementation of `prepare_inputs_for_generation`. It would be better to only cut off as many tokens as we have past_key_values. E.g. with 20 past_key_values and 25 tokens given, it should take the last 5 tokens.
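For concreteness, a rough sketch of that alternative slicing (a hypothetical helper, not an actual patch; the cache shape assumed here is GPT2-style `past_key_values`):
```python
def slice_uncached_tokens(input_ids, past_key_values, token_type_ids=None):
    # Sketch only: keep every token not yet covered by the cache, instead of
    # hard-coding "last token only" as the current implementation does.
    if past_key_values:
        past_length = past_key_values[0][0].shape[2]  # sequence length already cached
        input_ids = input_ids[:, past_length:]
        if token_type_ids is not None:
            token_type_ids = token_type_ids[:, past_length:]
    return input_ids, token_type_ids
```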
I have 2 options:
1. Fix `prepare_inputs_for_generation` in all models. This seems like it could be a lot of work, so I'm not sure I can take that on alone.
2. Modify the output from `prepare_inputs_for_generation` in `assisted_decoding` to correct the `input_ids`. This would be easier, but it removes control of this process from the models. It also may be insufficient, as models may create other kwargs in `prepare_inputs_for_generation` to match the shape of `input_ids`.
What do you think?<|||||>I propose to implement a `prepare_inputs_for_assisted_generation` method in `GenerationMixin`.
It will call the `prepare_inputs_for_generation` method and modify the `input_ids` in the returned dict to the correct number of candidate tokens.
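A rough sketch of what such a default could look like (the method name comes from the proposal above; the `candidate_length` argument and the slicing are illustrative only):
```python
# Hypothetical default on GenerationMixin, overridable per model:
def prepare_inputs_for_assisted_generation(self, input_ids, candidate_length, **model_kwargs):
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
    # Re-expand the usual single-token slice so all candidate tokens are scored in one pass.
    model_inputs["input_ids"] = input_ids[:, -(candidate_length + 1) :]
    return model_inputs
```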
Models can then override this if they need to implement custom logic.<|||||>Hey @sinking-point 👋 I appreciate your bias for action with #25135, but I'd like to propose a different route. A route that would benefit us all in the long run and implies a shorter PR :)
With your solution in #25135, we have a new function to maintain. From experience, different models will eventually make us add conditional branches to accumulate all expected input flavors -> it will be a burden (and a mess) in the long run 😞
You mentioned an alternative plan, fixing the existing `prepare_inputs_for_generation` to detect how many new tokens there are. In the long run, this is a much better route -- no additional maintenance burden and may unblock future applications with similar problems. However, fixing all models takes a very long time (and may not be the best use of our time, as some models are not used with assisted generation). So... let's modify it for the model you are using now, and raise an exception with instructions regarding how to enable other models :) I've successfully used this strategy in the past, e.g. [here](https://github.com/huggingface/transformers/blob/dd9d45b6ecc0861847e21d461187711331a56138/src/transformers/generation/utils.py#L545).
Would you be up for it?<|||||>Hi @gante . Thanks for taking a look, but I don't agree with your assessment here.
The issue with your suggestion is it would break assisted generation for models it currently works with. This would be a regression of functionality, and could break people's code.
The `prepare_inputs_for_assisted_generation` default implementation is intended to work for most, but not necessarily all models. If a new model is added that it doesn't work with, the model can override this method (as with `prepare_inputs_for_generation`). This avoids the need for adding conditional branches to the default implementation.<|||||>> The issue with your suggestion is it would break assisted generation for models it currently works with. This would be a regression of functionality, and could break people's code.
How so? input ids length = Attention mask length - past KV length, which would be true in all generation methods.
<|||||>Maybe I'm misunderstanding you, but isn't your suggestion to:
1. Make `prepare_inputs_for_generation` compatible with assisted generation in the models I need only
2. Raise an exception when anyone tries to use assisted generation with other models?
Currently, most models work with assisted generation. After implementing your suggestion, they would not.<|||||>I see, you are correct, if we change the code to use `prepare_inputs_for_generation` instead of manual input preparation, then the models that don't update this function will fail with assisted generation because the function only prepares one token at a time. In other words, we have to update them all.
Still, I'm very biased toward updating them all, it is a much wiser long-term solution and it is not that much more work -- all variations of assisted generation/speculative decoding will need it. It is more work to you (if you still want to implement it), but this sort of choice is critical to ensure we can keep maintaining `transformers` 🤗 <|||||>I don't want to go through 170+ models and fix them manually one by one.
I'm hoping they're similar enough that I can script it. I'll give that a go.<|||||>If I'm honest though, I still disagree with you that this is a more maintainable approach.
The reason this repetitive effort is necessary is that the logic is reapeated for every model rather than being implemented in the mixin.
If the logic in my PR needs to be changed, you just have to change it once, in one place (the mixin). Your concern regarding an eventual need for conditional branches is addressed by the ability of models to override the function, implementing their own logic only if they need to rather than every single time.
If I change all the `prepare_inputs_for_generation` functions individually and then the logic needs to be changed again, someone will have to go through and update all the models again.
If we're optimising for future dev time, we should focus on hoisting logic from the models to the mixin when the opportunity presents itself, in my opinion.
Is there anyone who can chime in to give a third opinion?<|||||>> The reason this repetitive effort is necessary is that the logic is reapeated for every model rather than being implemented in the mixin.
The reason the logic is repeated is a core principle of our design philosophy -- https://huggingface.co/docs/transformers/philosophy. This philosophy is one of the reasons `transformers` is so successful.
You are saying that we can code the wrapper once in the mixin and then overwrite it on a per-model basis... so pretty much the same as updating `prepare_inputs_for_generation`, but with extra steps and additional encapsulation. This is precisely why I want to avoid going this route.
As the main developer and maintainer of everything `generate`-related, I can assure you your suggestion is worse in the long run. Generalist functions containing functionality that is strongly model-dependent are the main reason why `generate` is so hard to develop at the moment, their complexity grows very quickly.
To wrap up: if we end up going in this direction, there has to be a much stronger reason than saving an hour or two of work.
> Is there anyone who can chime in to give a third opinion?
Feel free to ping others, but ultimately it's me who you have to convince :)<|||||>I hope I didn't come across as trying to undermine your authority. I just find that when there's a disagreement between two people, a third perspective can help to form a better consensus. If you agree, you would know better than me who to tag.
> You are saying that we can code the wrapper once in the mixin and then overwrite it on a per-model basis... so pretty much the same as updating prepare_inputs_for_generation, but with extra steps and additional encapsulation. This is precisely why I want to avoid going this route.
It's not the same. With my solution, in most cases the default implementation would suffice and there would be no need to override it. In fact, as it stands the tests pass for all models - none of them need to override the method. I'm just saying that in the event that, as you fear, you would have to add a conditional branch to the default implementation, you could instead override it in the model.
I don't think we have any fundamental disagreement on design philosophy. At the extreme end of the spectrum, you could do away with `GenerationUtils` and implement it all in every model. I think we can agree that to take the 'repeat yourself' philosophy to that extent is impractical. All we disagree on is where to draw the line.
That said, since you're the one who will have to deal with the consequences of whatever approach we take, I'm willing to defer to your preference.<|||||>> I hope I didn't come across as trying to undermine your authority. I just find that when there's a disagreement between two people, a third perspective can help to form a better consensus.
Not interpreted as so 🤗 We are internally aligned that `generate` consists of too many nested calls and that adding generalist functions on model-dependent parts is a recipe for chaos, hence my assertive comment. I hope this doesn't come across as downplaying your comments and suggestions -- since we bear the load of maintenance, sometimes we have to say no to seemingly good suggestions, using our past experience as a guide.
> All we disagree on is where to draw the line.
Precisely :)
> That said, since you're the one who will have to deal with the consequences of whatever approach we take, I'm willing to defer to your preference.
Thank you for being understanding 🤗 Let me know if I can help in any way! |
transformers | 25,019 | closed | Add saffu language model to the transformers library | # What does this PR do?
We upload a language model called SAFFU (Self-Attention Feed-Forward Unit) to the Hugging Face Transformers library, with the aim of improving the computational efficiency of computing the attention distribution.
| 07-22-2023 15:18:30 | 07-22-2023 15:18:30 | |
transformers | 25,018 | closed | Fix typo in LlamaTokenizerFast docstring example | # What does this PR do?
Fix typo in LlamaTokenizerFast docstring example
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-22-2023 14:45:32 | 07-22-2023 14:45:32 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25018). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,017 | open | 🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `add_tensorflow_model.md` file of the documentation to Korean 😄
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. 위 체크가 모두 완료된 뒤에만 OSSCA 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- Team OSSCA, may you please review this PR? -->
@wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-22-2023 14:18:38 | 07-22-2023 14:18:38 | Please do not open and close four different PRs for the same file. You can update the original PR.<|||||>It looks like the careful translation and revisions produced an excellent result! It reads smoothly overall~
transformers | 25,016 | closed | 🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `add_tensorflow_model.md` file of the documentation to Korean 😄
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
@keonju2
<!-- 1. 위 체크가 모두 완료된 뒤에만 OSSCA 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- Team OSSCA, may you please review this PR? -->
<!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-22-2023 13:49:41 | 07-22-2023 13:49:41 | |
transformers | 25,015 | closed | 🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean | <!-- PR의 제목은 "🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean" 으로 부탁드립니다 -->
# What does this PR do?
Translated the `your_file.md` file of the documentation to Korean 😄
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
@keonju2
<!-- 1. 위 체크가 모두 완료된 뒤에만 OSSCA 팀원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- Team OSSCA, may you please review this PR? -->
<!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-22-2023 13:48:48 | 07-22-2023 13:48:48 | |
transformers | 25,014 | closed | When I use the command "python convert_llama_weights_to_hf.py --input_dir /xxx/llama/ --model_size 70B --output_dir /xxxx/Llama-2-70b-chat-hf" killed | ### System Info
Hi @ArthurZucker, @younesbelkada,
When I use the command "python convert_llama_weights_to_hf.py --input_dir /xxx/llama/ --model_size 70B --output_dir /xxxx/Llama-2-70b-chat-hf", the following error occurred:
<img width="1342" alt="image" src="https://github.com/huggingface/transformers/assets/44772254/58946c17-e65d-4d56-b675-361ff0832576">
Additional Information: Memory Size:116GB
I noticed: Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
I would like to ask: is there a parameter I can set so that the script uses disk instead of memory to complete the task? I can live with it running slower.
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python convert_llama_weights_to_hf.py --input_dir /home/xxxx/llama/ --model_size 70B --output_dir /home/xxxx/llama/Llama-2-70b-chat-hf
### Expected behavior
it can execute successfully | 07-22-2023 12:38:34 | 07-22-2023 12:38:34 | Hi @cm-liushaodong
The llama-70B model weights file is at least 140GB in half precision; sadly, I think you need an instance with at least that much CPU memory to download the weights and load them in CPU memory. Maybe @ArthurZucker can confirm as he has used that script<|||||>Yes, as [the documentation mentions](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py#L52):
```python
Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
```
`killed` means you probably had an OOM issue. If you are on Linux you can check with `dmesg -T| grep -E -i -B100 'killed process'` <|||||>Thanks all, I used a machine with more memory and the conversion succeeded.
transformers | 25,013 | closed | [i18n-<languageCode>] Translating docs to <languageName>czech republic | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| 07-22-2023 10:15:38 | 07-22-2023 10:15:38 | @Denyweeeed I would like to contribute to the Telugu language translation<|||||>Please properly fill the template when opening such issues. |
transformers | 25,012 | closed | [check_config_docstrings.py] improve diagnostics | It's difficult to know what to do when one gets an error:
```
python utils/check_config_docstrings.py
Traceback (most recent call last):
File "utils/check_config_docstrings.py", line 92, in <module>
check_config_docstrings_have_checkpoints()
File "utils/check_config_docstrings.py", line 88, in check_config_docstrings_have_checkpoints
raise ValueError(f"The following configurations don't contain any valid checkpoint:\n{message}")
ValueError: The following configurations don't contain any valid checkpoint:
IdeficsConfig
Exited with code exit status 1
```
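For context, what the check wants is at least one Hub checkpoint link inside the configuration docstring; a hedged illustration of the pattern it expects (class and checkpoint names are only examples):
```python
from transformers import PretrainedConfig

class ToyConfig(PretrainedConfig):
    r"""
    Docstrings like this satisfy the check because they contain a checkpoint link:
    [HuggingFaceM4/idefics-9b](https://huggingface.co/HuggingFaceM4/idefics-9b)
    """
```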
After figuring out what it wants, proposing a better assert message. | 07-22-2023 07:25:00 | 07-22-2023 07:25:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,011 | closed | Transformers 4.31.0 Runtime error trying to load model saved as 8bit on HF fails | ### System Info
Transformers 4.31.0
Python 3.10.6
Linux and Windows (Same issue on both)
Bitsandbytes 0.39.1 to 0.40x
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi!
I've just found I'm now getting the following error when trying to load a model I've saved as 8bit on the huggingface.co website:
The error is:
```
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/user/.local/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Traceback (most recent call last):
File "/home/user/testing/blip2_testing.py", line 12, in <module>
model = Blip2ForConditionalGeneration.from_pretrained(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
) = cls._load_pretrained_model(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3260, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 725, in _load_state_dict_into_meta_model
set_module_quantized_tensor_to_device(
File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/bitsandbytes.py", line 116, in set_module_quantized_tensor_to_device
new_value = nn.Parameter(new_value, requires_grad=old_value.requires_grad)
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/parameter.py", line 36, in __new__
return torch.Tensor._make_subclass(cls, data, requires_grad)
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
A minimal python script that I was testing:
```
from PIL import Image
import requests
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(
"Mediocreatmybest/blip2-opt-2.7b_8bit",
load_in_8bit=True,
)
model = Blip2ForConditionalGeneration.from_pretrained(
"Mediocreatmybest/blip2-opt-2.7b_8bit",
load_in_8bit=True,
)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
url1 = "http://images.cocodataset.org/val2017/000000039769.jpg"
image1 = Image.open(requests.get(url, stream=True).raw)
batch = [image, image1]
inputs = processor(images=batch, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, min_new_tokens=8, max_new_tokens=30)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
If I try loading a model that hasn't been saved in 8bit, for example the original Salesforce/blip2-opt-2.7b this then loads without issue and runs fine, from what I've been able to test it seems to be the 8bit saved model only.
Dropping the Transformers version back to 4.30.2 and the script runs fine without error.
### Expected behavior
Using Transformers version 4.30.2 the above example script runs normally and outputs the described text.
Updating to Transformers 4.31.0 the above example script fails when trying to use an 8bit saved model such as Mediocreatmybest/blip2-opt-2.7b_8bit.
Using Transformers 4.31.0 on the original model works correctly when passing load_in_8bit=True. | 07-22-2023 06:25:30 | 07-22-2023 06:25:30 | cc @younesbelkada <|||||>Hi @mediocreatmybest
Thanks for the very clean reproducer, I managed to repro the issue and propose a fix in https://github.com/huggingface/transformers/pull/25047
The fix should be now live on the main branch of transformers and you should be able to use that by installing transformers from source. <|||||>Champion :) thanks! 👍 |
transformers | 25,010 | open | 🌐 [i18n-KO] Translated `philosophy.md` to Korean | # What does this PR do?
Translated the `philosophy.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
@sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-22-2023 06:08:30 | 07-22-2023 06:08:30 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_25010). All of your documentation changes will be reflected on that endpoint. |
transformers | 25,009 | closed | uploading saffu language model to the transformers library | # What does this PR do?
In this PR, we upload a new language model **SAFFU** (Self-Attention-Feed-Forward-Unit) to the HuggingFace transformers library. This model is efficient to use and fast to train, as we derive a highly efficient way of computing the self-attention matrix with an explicit mathematical solution.
| 07-22-2023 05:52:45 | 07-22-2023 05:52:45 | Need to fix some issues of the code |
transformers | 25,008 | closed | [bug] `token` not supported in `AutoModel` |
I see `use_auth_token` is already deprecated and will be replaced by `token`,
https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/modeling_utils.py#L2196-L2204
But `AutoModel` doesn't accept the new argument `token` yet. Especially in https://github.com/huggingface/transformers/blob/b257c46a075419c09e5ce5c5aa39bc346ecdb9a5/src/transformers/configuration_utils.py#L631-L638
Directly calling `LlamaForCausalLM` instead of `AutoModelForCausalLM` is a temporary workaround to get rid of the deprecation warning.
### Reproduction
```python3
transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',use_auth_token='XXX')
transformers.LlamaForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',token='XXX')
```
### Expected behavior
`AutoModel` should support new `token` argument. | 07-22-2023 00:41:28 | 07-22-2023 00:41:28 | Could you let us know which version of transformers you are using? I just tried this on the main branch and it seems to work fine.<|||||>@sgugger
```
$ pip freeze | grep transformers
transformers @ git+https://github.com/huggingface/transformers.git@b08f41e62a41632195cb986fcc41d428a5bf1d56
```
Error Log for `token`
```
>>> transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf', token='■■■■■■')
Traceback (most recent call last):
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/utils/hub.py", line 417, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1195, in hf_hub_download
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1541, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 291, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=■■■■■■)
Repository Not Found for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 461, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 983, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/utils/hub.py", line 433, in cached_file
raise EnvironmentError(
OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Deprecation Log for `use_auth_token`
```
>>> transformers.AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf',use_auth_token='■■■■■■')
[2023-07-24 15:51:53,994] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/home/renpang/miniconda3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:2197: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.04it/s]
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```<|||||>I can reproduce (in some way): the line
```bash
> /transformers/src/transformers/utils/hub.py(418)cached_file()
-> resolved_file = hf_hub_download(
```
doesn't get the token when passing `token` to auto model's `from_pretrained`.
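A rough sketch of the kind of mapping such a fix needs (a hypothetical helper, not necessarily what the merged PR does):
```python
def _resolve_token_kwargs(kwargs):
    # Sketch only: map the new `token` kwarg onto the legacy `use_auth_token` path
    # before the config/model download is triggered.
    token = kwargs.pop("token", None)
    if token is not None:
        if kwargs.get("use_auth_token") is not None:
            raise ValueError("`token` and `use_auth_token` are both specified; please pass only one.")
        kwargs["use_auth_token"] = token
    return kwargs
```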
@sgugger I can take a look if you are ok with this.<|||||>By all means, thanks!<|||||>@ain-soph
A fix #25083 is merged into `main` branch 🤗
Thank you for reporting again. |
transformers | 25,007 | open | [ROCM] GFX906 gpu doesn't work when GFX900 gpu is also in the system | ### System Info
System:
* ROCM 5.6
* torch-2.1.0.dev20230721+rocm5.6
* GFX900 GPU (MI25) (HIP device 2)
* GFX906 GPU (MI50) (HIP device 1)
* GFX1030 GPU (rx6800xt) (HIP device 0)
* transformers @b257c46a075419c09e5ce5c5aa39bc346ecdb9a5
* Linux 6.4.3 with AMDGPU p2p activated
companion bug against rocm: https://github.com/RadeonOpenCompute/ROCm/issues/2328
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Have GFX900 and GFX906 gpu in system
run the following script:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import argparse
if __name__ == "__main__":
parser = argparse.ArgumentParser("Transformers llm testing script")
parser.add_argument('--tokenizer', '-t', help="tokenizer to use")
parser.add_argument('--model', '-m', required=True, help="model to use")
parser.add_argument('--device', '-d', default="cpu", help="device to use")
parser.add_argument('--prompt', '-p', default="Today was a long day ", help="the promt to generate from")
args = parser.parse_args()
if args.device != 'cpu':
dtype = torch.bfloat16
else:
dtype = torch.float32
if args.tokenizer is None:
tokenizer = AutoTokenizer.from_pretrained(args.model, padding_side='left')
else:
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(args.model, low_cpu_mem_usage=True, torch_dtype=dtype).to(args.device)
model.eval()
input_ids = tokenizer(args.prompt, return_tensors="pt").input_ids.to(args.device)
attention_mask = torch.ones(input_ids.shape, device=args.device, requires_grad=False)
outputs = model.generate(input_ids, attention_mask=attention_mask, do_sample=True, temperature=1)
response_decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
response = response_decoded[0]
print(response)
```
I used bloom-6b for the model, however this doesn't matter. If the device is set to the GFX1030 GPU everything works. If the device is set to the GFX900 GPU everything works; if the device is set to the GFX906 the script fails with:
```
Traceback (most recent call last):
File "/home/philipp/machine-lerning/Transformersplayground/janachat/test-simple.py", line 29, in <module>
outputs = model.generate(input_ids, attention_mask=attention_mask, do_sample=True, temperature=1)
File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1563, in generate
return self.sample(
File "/home/philipp/machine-lerning/Transformersplayground/venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 2665, in sample
next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0)
RuntimeError: CUDA driver error: 303
```
Running with `export AMD_LOG_LEVEL=8` reveals that ROCm appears to try to launch a GFX900 kernel on the GFX906:
```
:1:devprogram.cpp :1873: 1265234165 us: 21877: [tid:0x7f3bf549b740] Error: The program ISA amdgcn-amd-amdhsa--gfx900:xnack- is not compatible with the device ISA amdgcn-amd-amdhsa--gfx906:sramecc+:xnack-Error: create kernel metadata map using COMgr
Error: Cannot Find Global Var Sizes
Error: Cannot create kernels.
```
Indeed, removing the GFX900 GPU from the system makes the GFX906 work.
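A possible workaround I have not fully verified is to hide the GFX900 from the HIP runtime before torch is imported (a sketch; the device indices follow the enumeration listed above, where the GFX900 is HIP device 2):
```python
import os

# Hide HIP device 2 (the GFX900/MI25) so only the GFX1030 and GFX906 are enumerated.
# This must be set before torch initializes the HIP runtime.
os.environ["HIP_VISIBLE_DEVICES"] = "0,1"

import torch
print(torch.cuda.device_count())  # should now report 2 devices
```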
### Expected behavior
GFX906 gpu should work in all instances | 07-22-2023 00:15:58 | 07-22-2023 00:15:58 | This seems more of an issue for PyTorch and AMD? We only use those libraries in Transformers.<|||||>this is correct, this issue has since been tracked down here https://github.com/ROCmSoftwarePlatform/rocBLAS/issues/1346#issuecomment-1646942417
I will keep this bug open until it is resolved, so that anyone else experiencing the same issue is redirected to the underlying report. |
transformers | 25,006 | open | RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.120+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0+cpu (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to make a Sarcasm detector with Lightning in this Kaggle [notebook](https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch).
When I start the training, I get this error:
`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`
This is my LightningModule:
```python
class SarcasmTagger(pl.LightningModule):
    def __init__(
        self,
        model_name: str,
        n_classes: int,
        n_training_steps=None,
        n_warmup_steps=None
    ):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name, return_dict=True)
        #self.bert = BertForSequenceClassification.from_pretrained(model_name, return_dict=True)
        self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes)
        self.n_training_steps = n_training_steps
        self.n_warmup_steps = n_warmup_steps

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        #print(outputs)
        logits = self.classifier(outputs.pooler_output)
        return logits

    def shared_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        label = batch["label"].view(-1, 1)
        logits = self(input_ids=input_ids, attention_mask=attention_mask)
        loss = nn.functional.cross_entropy(logits, label)
        return logits, loss, label

    def training_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("train_loss", loss, prog_bar=True, logger=True)
        return {"loss": loss, "predictions": logits, "label": label}

    def validation_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("val_loss", loss, prog_bar=True, logger=True)
        return loss

    def test_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("test_loss", loss, prog_bar=True, logger=True)
        return loss

    def configure_optimizers(self):
        optimizer = AdamW(self.parameters(), lr=2e-5)
        scheduler = get_linear_schedule_with_warmup(
            optimizer,
            num_warmup_steps=self.n_warmup_steps,
            num_training_steps=self.n_training_steps
        )
        return dict(
            optimizer=optimizer,
            lr_scheduler=dict(
                scheduler=scheduler,
                interval='step')
        )
```
What is the problem here? I'm lost.
Thanks!
### Expected behavior
Execute the training without errors. | 07-21-2023 20:59:23 | 07-21-2023 20:59:23 | Even setting the model to train model with:
`self.bert.train()`
I get the same error.<|||||>Hey 👋🏻
We need a minimal reproducing script in order to help: you are using an external library `pytorch lightning` and I have no idea what is happening inside. `transformers` also has a trainer class, we can help if a bug is related to it, but in this case I cannot ping anyone. <|||||>Hi there!
Is it possible to check my Kaggle notebook ([https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch](https://www.kaggle.com/code/luizclaudioandrade/nlp-with-pytorch))?
The notebook uses a [dataset](https://www.kaggle.com/datasets/rmisra/news-headlines-dataset-for-sarcasm-detection) of headlines, each tagged as sarcastic or not (1 or 0). I'm loading it into a pandas data frame and building a dataset from the encoded headlines.
The dataset:
```python
class SarcasticHeadlineDataset(Dataset):
    def __init__(
        self,
        data: pd.DataFrame,
        tokenizer: BertTokenizer,
        max_token_len: int,
    ):
        self.tokenizer = tokenizer
        self.data = data
        self.max_token_len = max_token_len

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index: int):
        data_row = self.data.iloc[index]
        headline = data_row.headline
        label = data_row.is_sarcastic
        encoding = self.tokenizer.encode_plus(
            headline,
            padding='max_length',
            max_length=self.max_token_len,
            return_tensors='pt',
        )
        return dict(
            headline=headline,
            input_ids=encoding["input_ids"].flatten(),
            attention_mask=encoding["attention_mask"].flatten(),
            label=torch.tensor(label, dtype=torch.float)
        )
```
The datamodule:
```python
class SarcasticHeadlineDataModule(pl.LightningDataModule):
    def __init__(
        self,
        train_df: pd.DataFrame,
        test_df: pd.DataFrame,
        tokenizer: BertTokenizer,
        batch_size: int = 8,
        max_token_len: int = 128,
        num_workers: int = 4
    ):
        super().__init__()
        self.batch_size = batch_size
        self.train_df = train_df
        self.test_df = test_df
        self.tokenizer = tokenizer
        self.max_token_len = max_token_len
        self.num_workers = num_workers

    def setup(self, stage=None):
        self.train_dataset = SarcasticHeadlineDataset(
            self.train_df,
            self.tokenizer,
            self.max_token_len
        )
        self.test_dataset = SarcasticHeadlineDataset(
            self.test_df,
            self.tokenizer,
            self.max_token_len
        )

    def train_dataloader(self):
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            shuffle=True,
            num_workers=self.num_workers,
            pin_memory=True,
            persistent_workers=True,
        )

    def val_dataloader(self):
        return DataLoader(
            self.test_dataset,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            pin_memory=True,
            persistent_workers=True,
        )

    def test_dataloader(self):
        return DataLoader(
            self.test_dataset,
            batch_size=self.batch_size,
            num_workers=self.num_workers,
            pin_memory=True,
            persistent_workers=True,
        )
```
Some parameters for the trainer:
```python
chekpoint_dir = os.path.join(OUTPUT_DIR, 'checkpoints')

checkpoint_callback = ModelCheckpoint(
    dirpath=chekpoint_dir,
    filename="best-checkpoint",
    save_top_k=1,
    verbose=True,
    monitor="val_loss",
    mode="min"
)

early_stopping_callback = EarlyStopping(monitor='val_loss', patience=2)
logger = CSVLogger(OUTPUT_DIR, name='lightning_logs')
```
The trainer:
```python
trainer = pl.Trainer(
    accelerator='auto',
    devices='auto',
    strategy='auto',
    callbacks=[checkpoint_callback, early_stopping_callback],
    max_epochs=N_EPOCHS,
    logger=logger,
)
```
The LightningModule:
```python
class SarcasmTagger(pl.LightningModule):
    def __init__(
        self,
        model_name: str,
        n_classes: int,
        n_training_steps=None,
        n_warmup_steps=None
    ):
        super().__init__()
        self.save_hyperparameters()
        self.bert = BertModel.from_pretrained(model_name, return_dict=True)
        self.bert.train()
        self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes)
        self.n_training_steps = n_training_steps
        self.n_warmup_steps = n_warmup_steps

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        #print(outputs)
        logits = self.classifier(outputs.pooler_output)
        return logits

    def shared_step(self, batch, batch_idx):
        input_ids = batch["input_ids"]
        attention_mask = batch["attention_mask"]
        label = batch["label"].view(-1, 1)
        logits = self(input_ids=input_ids, attention_mask=attention_mask)
        loss = nn.functional.cross_entropy(logits, label)
        return logits, loss, label

    def training_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("train_loss", loss, prog_bar=True, logger=True)
        return {"loss": loss, "predictions": logits, "label": label}

    def validation_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("val_loss", loss, prog_bar=True, logger=True)
        return loss

    def test_step(self, batch, batch_idx):
        logits, loss, label = self.shared_step(batch, batch_idx)
        self.log("test_loss", loss, prog_bar=True, logger=True)
        return loss

    def configure_optimizers(self):
        optimizer = AdamW(self.parameters(), lr=2e-5)
        scheduler = get_linear_schedule_with_warmup(
            optimizer,
            num_warmup_steps=self.n_warmup_steps,
            num_training_steps=self.n_training_steps
        )
        return dict(
            optimizer=optimizer,
            lr_scheduler=dict(
                scheduler=scheduler,
                interval='step')
        )
```
And the actual training part:
```python
checkpoint_file = os.path.join(chekpoint_dir, 'best-checkpoint.ckpt')

if os.path.isfile(checkpoint_file):
    print('Resuming training from previous checkpoint...')
    trainer.fit(
        sarcasm_tagger,
        datamodule=data_module,
        ckpt_path=checkpoint_file
    )
else:
    print('Starting training from scratch...')
    trainer.fit(
        sarcasm_tagger,
        datamodule=data_module
    )
```
The problem doesn't seem to be in the trainer. I think it is related to the model, since the error means the tensors reaching backpropagation don't have a gradient function set. But as I'm not calling detach anywhere in the code and I'm setting the model to training mode, I'm lost...
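One way I can think of to narrow it down (a rough sketch; `batch` stands for any single batch pulled from the train dataloader) is to check whether the logits still carry a `grad_fn` before the loss is computed:
```python
# Sketch: run one batch through the model outside the Trainer and inspect the graph.
# `sarcasm_tagger` is the SarcasmTagger instance; `batch` = next(iter(data_module.train_dataloader())).
sarcasm_tagger.train()
logits = sarcasm_tagger(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
)
print(logits.requires_grad, logits.grad_fn)  # expected: True and a non-None grad_fn
```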
Thanks in advance!<|||||>Again, I am sorry but I won't have time to debug your Kaggle notebook, no.
A minimal reproducing script is supposed to be ~10 lines of code where you show the issue with the model.
Here is one, showing that the loss is properly back-propagating:
```python
>>> import torch
>>> from transformers import BertForSequenceClassification
>>> model = BertForSequenceClassification.from_pretrained("bert-base-uncased", return_dict=True)
>>> model(torch.ones(2, 34).long(), labels=torch.ones(2, 2)).loss
tensor(0.8236, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
In this case, gradients properly flow and the output is the result of the `multi_label_classification` loss computation (BCE).
I understand your frustration, but I have no idea what is happening behind the `class SarcasmTagger(pl.LightningModule):` inheritance, so I cannot really help you on this part.
If you are using a model + classifier:
```python
>>> import torch
>>> from transformers import BertModel
>>> model = BertModel.from_pretrained("bert-base-uncased", return_dict=True)
>>> pooler_out = model(torch.ones(2, 34).long()).pooler_output
tensor([[-0.0807, -0.3833, -0.7920,  ..., -0.9157, -0.5491, -0.0386],
        [-0.0807, -0.3833, -0.7920,  ..., -0.9157, -0.5491, -0.0386]],
       grad_fn=<TanhBackward0>)
>>> classifier = torch.nn.Linear(model.config.hidden_size, 2)
>>> classifier(pooler_out)
tensor([[-0.1094,  0.2125],
        [-0.1094,  0.2125]], grad_fn=<AddmmBackward0>)
```
In these simple snippets, grad function is kept.
These kinds of questions should be asked on [the forum](https://discuss.huggingface.co/), as from what I see there is no bug in transformers.
<|||||>Thanks for your help!<|||||>I have a similar issue.
With `pytorch-lightning==1.9.4` and `transformers==4.26.1` the code runs fine (and has done with previous versions of both libraries for months/years - yes there have been code changes in that time but the core has been rather stable).
(Also just tested with `transformers==4.29.2` and works fine)
However, when I change nothing in the code and change no other dependencies (so `pytorch-lightning==1.9.4` and all others the same) except to upgrade to `transformers==4.30.2` the code fails with the error message:
```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
The problem is that my codebase is very large and it will take me a while to generate a minimal reproducing script. I will try to put this together, but in the time it takes me to do this, perhaps someone else will have a simpler solution (considering the information I am sharing) and/or a simpler minimal reproducing script.
Perhaps also @lcoandrade you could try your script with `transformers==4.26.1` or `transformers==4.29.2` and see if that works for you?<|||||>Thanks a lot for the additional information! If you can isolate which version of transformers made it fail for you, we can look into this as a regression! Would be very helpful if @lcoandrade can do this!
<|||||>some more details:
These combinations work:
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.26.1`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.27.4`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.28.1`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.29.2`
These combinations don't:
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.0`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.2`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.31.0`
So the regression must have been introduced in `transformers==4.30.0`?
I'll try to see if I can get a minimal reproducing script together.<|||||>Thanks for the help, @Alex-ley-scrub !
I changed my install packages part to:
```
!pip install torch==2.0.0+cu117
!pip install pytorch-lightning==1.9.4
!pip install accelerate==0.21.0
!pip install tokenizers==0.13.3
!pip install transformers==4.26.1
```
But the error was still popping up. So, I thought the error could be related to the optimizer used. My optimizer was this one:
```python
def configure_optimizers(self):
    optimizer = AdamW(self.parameters(), lr=2e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=self.n_warmup_steps,
        num_training_steps=self.n_training_steps
    )
    return dict(
        optimizer=optimizer,
        lr_scheduler=dict(
            scheduler=scheduler,
            interval='step')
    )
```
When I changed my method to use a simple Adam optimizer:
```python
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=2e-5)
    return [optimizer]
```
It worked!
So, the problem is in `AdamW` with a scheduler. Reverting the install packages to just:
```
!pip install -q transformers
```
Makes the training work.
As `AdamW` is deprecated, I think it is a good idea to change the code to use `torch.optim.Adam`, for instance.
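For reference, a sketch of what that swap could look like while keeping the warmup scheduler (only the optimizer import changes; the attributes are the same ones used above):
```python
from torch.optim import AdamW  # PyTorch's implementation instead of transformers.AdamW
from transformers import get_linear_schedule_with_warmup

def configure_optimizers(self):
    optimizer = AdamW(self.parameters(), lr=2e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=self.n_warmup_steps,
        num_training_steps=self.n_training_steps,
    )
    return dict(
        optimizer=optimizer,
        lr_scheduler=dict(scheduler=scheduler, interval='step'),
    )
```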
Should this be considered a bug in AdamW?
What do you think, @ArthurZucker and @Alex-ley-scrub?
Thanks again!<|||||>Most probably yes! Thanks for investigating, I'm sure this will help others! <|||||>some more details after I swapped this line of code:
```
from transformers import AdamW
```
with this line:
```
from torch.optim import AdamW
```
now all the versions of transformers I tested earlier work on my existing codebase:
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.26.1`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.27.4`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.28.1`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.29.2`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.0`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.30.2`
- `torch==2.0.0+cu117`, `pytorch-lightning==1.9.4`, `accelerate==0.21.0`, `tokenizers==0.13.3`, `transformers==4.31.0`
therefore, there is pretty strong evidence that something in `transformers.AdamW` in `transformers==4.30.0` caused a regression?
thanks a lot @lcoandrade for that! 🙌 I can now upgrade our transformers dependency to the latest!<|||||>Pinging @muellerzr and @pacman100 as they are more familiar than me with the recent changes!<|||||>Not entirely sure this is worth looking into too much, given @stas00 point here: https://github.com/huggingface/transformers/pull/23417#issuecomment-1550506298
> This is a very old and deprecated implementation since it doesn't even follow the AdamW algorithm exactly. One should use torch.optim.AdamW instead, which also has a fused version since pt-2.0.0 which is almost as fast as apex's fused AdamW. So really you shouldn't be using this version anyway.
> The only reason it was kept is for BC for those who rely on exact results remaining exact after new transformers versions are released, otherwise we would have just replaced it with torch.optim.AdamW in the first place.
So yes, AdamW is slated for deprecation and you should use `torch.optim.AdamW`. @sgugger do we know when that is going to be? Or should we look into this more.
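For reference, a minimal sketch of the PyTorch replacement (the `fused=True` flag mentioned in the quote needs PyTorch >= 2.0 and CUDA parameters; drop it on CPU):
```python
import torch

model = torch.nn.Linear(10, 2).cuda()  # placeholder module; any CUDA model works
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01, fused=True)
```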
There wasn't anything explicit in the change to AdamW since v4.29.0, so it'll certainly take some digging to find the exact commit.<|||||>If our AdamW is not working properly, all the more reason to switch the default to the PyTorch one. Users will still be able to switch back if they do not like the change.<|||||>Hi all, the default has been changed on main now and will populate on the next release. Install with `pip install git+https://github.com/huggingface/transformers` to use it OOTB!