repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 25,005 | closed | Make more test models smaller | # What does this PR do?
This PR continues the work of #24824 by fixing the models used in more common tests. This is big enough to be reviewed, I'll do the last ones in a separate PR :-) | 07-21-2023 20:29:49 | 07-21-2023 20:29:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yes we could go even smaller if we wanted to, but it's hard for big encoder/decoder models with backbones. |
transformers | 25,004 | closed | Move template doc file to md | # What does this PR do?
This PR fixes the `add-new-model` command, which broke when we moved all the doc files from MDX to MD (dunno why the test missed it, since we deactivated it afterwards if I recall correctly).
Fixes #25003 | 07-21-2023 19:10:01 | 07-21-2023 19:10:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 25,003 | closed | [cookiecutter] Fails to create new model template | ### System Info
2023-07-21 18:34:34.146635: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /home/zekun/transformers/src/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.32.0.dev0
- Platform: Linux-4.14.290-217.505.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to create a new model called `GeoLM` using the `cookiecutter` utility following the tutorial here: https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model
However, the `transformers-cli add-new-model` command failed at one step, reporting a missing file error for `'cookiecutter-template-GeoLM/geolm.md'`. I can find a file named `geolm.mdx` under the same dir but not `geolm.md`.
The previous step `pip install -e ".[quality]"` was completed without any error. The full running log is provided below.
----------------------------------------------------------------------------------------------------------------------------
```
(huggingface) [zekun@ip-172-31-9-231 transformers]$ transformers-cli add-new-model
2023-07-21 18:33:20.317785: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/zekun/transformers/src/transformers/commands/add_new_model.py:58: UserWarning: The command `transformers-cli add-new-model` is deprecated and will be removed in v5 of Transformers. It is not actively maintained anymore, so might give a result that won't pass all tests and quality checks, you should use `transformers-cli add-new-model-like` instead.
warnings.warn(
modelname [BrandNewBERT]: GeoLM
uppercase_modelname [BRAND_NEW_BERT]: GEOLM
lowercase_modelname [brand_new_bert]: geolm
camelcase_modelname [BrandNewBert]: GeoLM
authors [The HuggingFace Team]: Zekun Li
checkpoint_identifier [brand-new-bert-base-cased]: zekun-li/geolm-base-cased
Select tokenizer_type:
1 - Based on BERT
2 - Based on BART
3 - Standalone
Choose from 1, 2, 3 [1]: 1
Select generate_tensorflow_pytorch_and_flax:
1 - PyTorch, TensorFlow and Flax
2 - PyTorch & TensorFlow
3 - PyTorch & Flax
4 - TensorFlow & Flax
5 - PyTorch
6 - TensorFlow
7 - Flax
Choose from 1, 2, 3, 4, 5, 6, 7 [1]: 5
Select is_encoder_decoder_model:
1 - True
2 - False
Choose from 1, 2 [1]: 2
Traceback (most recent call last):
File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 791, in move
os.rename(src, real_dst)
FileNotFoundError: [Errno 2] No such file or directory: 'cookiecutter-template-GeoLM/geolm.md' -> '/home/zekun/transformers/docs/source/en/model_doc/geolm.md'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zekun/.conda/envs/huggingface/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/home/zekun/transformers/src/transformers/commands/transformers_cli.py", line 55, in main
service.run()
File "/home/zekun/transformers/src/transformers/commands/add_new_model.py", line 185, in run
shutil.move(
File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 811, in move
copy_function(src, real_dst)
File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 435, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/home/zekun/.conda/envs/huggingface/lib/python3.8/shutil.py", line 264, in copyfile
with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: 'cookiecutter-template-GeoLM/geolm.md'
```
### Expected behavior
I should see the following files generated below:
```
docs/source/model_doc/<model_name>.md
src/transformers/models/<model_name>/configuration_<model_name>.py
src/transformers/models/<model_name>/modeling_<model_name>.py
src/transformers/models/<model_name>/modeling_tf_<model_name>.py
src/transformers/models/<model_name>/tokenization_<model_name>.py
tests/test_modeling_<model_name>.py
tests/test_modeling_tf_<model_name>.py
```
I can see some of these generated, missing the `*_tf_*` ones (expected) and these:
```
docs/source/model_doc/<model_name>.md
src/transformers/models/<model_name>/tokenization_<model_name>.py
tests/test_modeling_<model_name>.py
```
| 07-21-2023 18:47:13 | 07-21-2023 18:47:13 | Please note that this command is deprecated and it's better to use `add-new-model-like`. Will fix this though.<|||||>> Please note that this command is deprecated and it's better to use `add-new-model-like`. Will fix this though.
Good to know, thanks! |
transformers | 25,002 | closed | Fix graph break for Segformer | # What does this PR do?
Small change in the forward pass of Segformer which reduces the number of graph breaks in the compiled model from 5 to 0.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 07-21-2023 18:10:12 | 07-21-2023 18:10:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closing as this is resolved on torch nightly |
transformers | 25,001 | open | [WIP]Add ViTPose to Transformers | # What does this PR do?
Adds ViTPose to Huggingface/Transformers
Code and weights: https://github.com/ViTAE-Transformer/ViTPose
Paper: https://arxiv.org/abs/2204.12484
Fixes #24915
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts | 07-21-2023 17:26:59 | 07-21-2023 17:26:59 | @amyeroberts weights for ViTPose are in PyTorch checkpoint files, should I convert them into .bin or something else?<|||||>@shauray8 You will want to write a conversion script, which translates the names of the weights in the state dict to their equivalent in transformers. These weights will then be loaded into the transformers model. Then we save the transformers model using `model.save_pretrained(...)` which will save out the weights in the desired format (safetensors) as well as all other necessary files such as the model config.
For this model, because the encoder is ViT, the translation of these weights will follow a similar pattern to the conversion script here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/convert_vit_timm_to_pytorch.py
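To make that concrete, here is a minimal sketch of what such a conversion script tends to look like. The key-renaming rule and the `VitPoseModel` class name below are placeholders for illustration, not the final transformers API:

```python
# Illustrative sketch only: the renaming rule and the target class are assumptions.
import torch

def rename_key(name: str) -> str:
    # Translate original-repo parameter names into their transformers equivalents.
    # The real mapping depends on how the transformers model ends up being structured.
    return name.replace("backbone.", "vit.")

original = torch.load("vitpose_checkpoint.pth", map_location="cpu")
state_dict = original.get("state_dict", original)
new_state_dict = {rename_key(k): v for k, v in state_dict.items()}

model = VitPoseModel(config)            # hypothetical class, config built beforehand
model.load_state_dict(new_state_dict)   # load the translated weights
model.save_pretrained("vitpose-base")   # writes safetensors weights + config.json
```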
Similarly, the encoder structure can be implemented directly by using `#Copied from` statements to copy the ViT architecture in the modeling file.
If you haven't seen it already, I'd also suggest looking through [this doc](https://huggingface.co/docs/transformers/add_new_model) as it covers in detail all the steps for adding a model to transformers. <|||||>Thank you for the information and the link to the conversion script for ViT. |
transformers | 25,000 | closed | beam_indices = None | ### System Info
Hi dear officer
I found a bug: if I use the `force_words_ids` parameter in the generate() function and set `output_scores=True`, then when I try to get beam_indices it returns None, but if I remove `force_words_ids` it works.
```python
prompt = """Tell me some about Canada"""
input_tokenized_info = tokenizer(prompt, return_tensors="pt")
input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info['attention_mask']
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
force_words = ["Canada"]
force_words_ids = tokenizer(force_words, add_special_tokens=False).input_ids
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, num_beams=4, max_new_tokens=10,
                         return_dict_in_generate=True, output_scores=True)
```
```python
print(outputs.beam_indices)
# tensor([[ 0,  0,  0,  0,  1,  0,  0,  3,  1,  0, -1, -1, -1, -1, -1]], device='cuda:0')
```
But if I add `force_words_ids`:
```python
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, num_beams=4, max_new_tokens=10,
                         return_dict_in_generate=True, output_scores=True, force_words_ids=force_words_ids)
print(outputs.beam_indices)  # None
```
### Who can help?
@ArthurZucker , please give me some guidance, thanks ๐
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
it should return indices.
### Expected behavior
it should return indices. | 07-21-2023 16:01:29 | 07-21-2023 16:01:29 | cc @gante who is gonna be more familiar with this! <|||||>Hey @Dongximing ๐
The PR linked above (#25042) should fix it :) In a nutshell, the values were not being piped all the way to the output. |
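For anyone landing here later, once `beam_indices` is populated it can be fed to `compute_transition_scores`; a small sketch reusing the variables from the snippet above:

```python
outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    num_beams=4,
    max_new_tokens=10,
    return_dict_in_generate=True,
    output_scores=True,
    force_words_ids=force_words_ids,
)
# Per-step scores of the tokens that ended up in the returned beams.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
)
print(outputs.beam_indices)
print(transition_scores)
```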
transformers | 24,999 | open | dataloading bug after upgrading to 4.31.0 | ### System Info
transformers=4.31.0
pytorch=1.13.1
### Who can help?
Hi @sgugger and @ArthurZucker
When I used transformers 4.29.2 and 4.30.2 with the streaming dataset and local batch size=1, I didn't pad the text sequences and everything went well.
However, after I upgraded transformers to 4.31.0, my previous training pipeline fails. The error messages are:
File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 556, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
File "myenv/lib/python3.8/site-packages/accelerate/data_loader.py", line 520, in _fetch_batches
batch = concatenate(batches, dim=0)
File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in concatenate
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 441, in <dictcomp>
return type(data[0])({k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})
File "myenv/lib/python3.8/site-packages/accelerate/utils/operations.py", line 444, in concatenate
return torch.cat(data, dim=dim)
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 655 but got size 563 for tensor number 1 in the list.
I find that in the following function in data_loader.py (from accelerate), the variable `batches` contains examples with different lengths, causing the error. For example, I trained my model on 4 GPUs with local batch size=1. Then the list `batches` will have 4 elements (each is a batch of 1 example), but these 4 elements may have different lengths, causing the above error when concatenating. However, as my local batch size=1, there should be no need to make the samples the same length. I think it is a bug introduced in 4.31.0, because with the previous transformers versions (e.g., 4.29.2 and 4.30.2) the training script ran smoothly without raising the error. I look forward to your comments and suggestions. Thank you
```python
def _fetch_batches(self, iterator):
    batches, batch = None, None
    # On process 0, we gather the batch to dispatch.
    if self.state.process_index == 0:
        try:
            if self.split_batches:
                # One batch of the main iterator is dispatched and split.
                batch = next(iterator)
            else:
                # num_processes batches of the main iterator are concatenated then dispatched and split.
                # We add the batches one by one so we have the remainder available when drop_last=False.
                batches = []
                for _ in range(self.state.num_processes):
                    batches.append(next(iterator))
                batch = concatenate(batches, dim=0)
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
dataset = load_dataset("json", data_files={"train": train_file, "eval": eval_file}, streaming=True)โจ
dataset = dataset.with_format("torch")
โจtrain_dataset = dataset["train"]
โจeval_dataset = dataset["eval"]โจโจ
train_dataset = train_dataset.map(tokenize_function, batched=True)โจ
eval_dataset = eval_dataset.map(tokenize_function, batched=True)
โจtrain_model(model, train_dataset, eval_dataset)
### Expected behavior
no error message and training smoothly | 07-21-2023 15:46:32 | 07-21-2023 15:46:32 | cc @muellerzr but we will need a full reproducer to be able to help.<|||||>Thank you. I list my code as follows:
```python
def train_model(model, train_dataset, eval_dataset, epochs=5, batch_size=1):
    training_args = TrainingArguments(
        output_dir="outputs/",
        overwrite_output_dir=True,
        num_train_epochs=epochs,
        max_steps=100000,
        per_device_train_batch_size=batch_size,
        per_device_eval_batch_size=batch_size,
        eval_accumulation_steps=8,
        save_strategy="steps",
        save_steps=500,
        evaluation_strategy="steps",
        eval_steps=100,
        logging_steps=20,
        logging_dir="logs",
        learning_rate=8e-5,
        gradient_accumulation_steps=8,
        fp16=True,
        do_train=True,
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    train_result = trainer.train()
    trainer.save_model()
    trainer.log_metrics("train", train_result.metrics)
    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)
    trainer.save_metrics("eval", metrics)

def tokenize_function(examples):
    max_len = max_txt_len + 128
    output = model.tokenizer(examples["text"], truncation=True, max_length=max_len, padding=False)
    output["labels"] = [list(e) for e in output["input_ids"]]
    return output

def main():
    train_file = "train.jsonl"
    eval_file = "valid.jsonl"
    dataset = load_dataset("json", data_files={"train": train_file, "eval": eval_file}, streaming=True)
    dataset = dataset.with_format("torch")
    train_dataset = dataset["train"]
    eval_dataset = dataset["eval"]
    train_dataset = train_dataset.map(tokenize_function, batched=True)
    eval_dataset = eval_dataset.map(tokenize_function, batched=True)
    train_model(model, train_dataset, eval_dataset)

main()
```
<|||||>How did this work in 4.29 if you are not providing a data_collator to the Trainer and not padding your texts?<|||||>> How did this work in 4.29 if you are not providing a data_collator to the Trainer and not padding your texts?
As per_device_train_batch_size=1 in my code, it runs properly in 4.29.2 and 4.30.2 even if I didn't pad and didn't provide a data_collator.
It only fails in 4.31.0.
BTW, in 4.31.0, even if I provided a data_collator, it still fails. It only works if I pre-pad all the sequences into the same length in the tokenize_function().<|||||>Ah yes, understood. This doesn't work anymore because Accelerate will by default use `dispatch_batches=True` for iterable datasets, which builds the batch on process 0 (with a batch size 4 here since you have 4 processes) then split it to send it to each GPU.
@muellerzr what we need is to surface the option `dispatch_batches=False` here.
I think if you add a line `trainer.accelerator.dispatch_batches=False` it will work again @getao <|||||>> Ah yes, understood. This doesn't work anymore because Accelerate will by default use `dispatch_batches=True` for iterable datasets, which builds the batch on process 0 (with a batch size 4 here since you ahve 4 processes) then split it to send it to each GPU.
>
> @muellerzr what we need is to surface the option `dispatch_batches=False` here.
>
> I think if you add a line `trainer.accelerator.dispatch_batches=False` it will work again @getao
Oh, I see! Thank you very much for your help! <|||||>Thanks @getao! #25038 should solve this, once merged just set `args.dispatch_batches=False` and your code should run just fine |
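For readers hitting the same error before that PR is released, a minimal sketch of the workaround suggested above (the attribute is set on the `Accelerator` instance the `Trainer` creates):

```python
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # streaming / IterableDataset
    eval_dataset=eval_dataset,
)
# Stop Accelerate from building the batch on process 0 and dispatching slices of it,
# so each process fetches its own (possibly differently sized) batch.
trainer.accelerator.dispatch_batches = False
trainer.train()
```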
transformers | 24,998 | closed | [`Llama`] remove persistent `inv_freq` tensor | # What does this PR do?
The tensor should not be persistent as if the model is loaded from pretrained, whatever the head dimension size, it will not be resized. | 07-21-2023 15:42:50 | 07-21-2023 15:42:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,997 | closed | Better handling missing SYS in llama conversation tokenizer | The existing code failed to add SYS if the conversation has history without SYS, but does modify the passed conversation to include it.
Rearrange the code so modifications to the conversation object are taken into account for token id generation. | 07-21-2023 15:31:53 | 07-21-2023 15:31:53 | cc @ArthurZucker <|||||>Hey! Could you elaborate on
> failed to add SYS if the conv has history without SYS?
The SYSTEM prompting should always go at the start of the whole conversation, thus we only check whether a system prompt is in the first prompt or not, because if you go through the conversational API, you should add the system prompt at the beginning.
Is the usecase that you are requesting to be able to add a system prompt in the middle of the conversation ? <|||||>Lets look at 2 use cases:
- **history (i.e more than 1 user message), no SYS in first message**: the code adds SYS to conversation, but then proceeds to use `dialogue` where SYS is not added. So model doesn't see SYS (incorrect). If the conversation is used again, then there will be SYS present (OK, I guess)
- **single (user) message in conversation (no `past_user_inputs`), no SYS in it**: the code adds SYS to `dialogue`, and the SYS ends up in the model input (correct), but the message/conversation is not modified (i.e if used again there will be no sys)
So there is discrepancy between what the model sees at this iteration and what would happen in the next iteration. So I changed the code to 1) first modify the conversation object, before `dialogue` is computed and 2) modify both the `past_user_inputs` and `new_user_input`, so no case will be left unhandled.
So now the model would see SYS in both cases, and the conversation is modified (for future use) in both cases.
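A rough sketch of the behaviour described above (illustrative pseudocode; the real tokenizer wraps the text with the `<<SYS>>` markers and the default system prompt, which are omitted here):

```python
# Illustrative only, not the actual library code.
def ensure_system_prompt(conversation, system_text):
    """Attach the system prompt to the first user message before `dialogue` is built."""
    if conversation.past_user_inputs:                      # conversation with history
        first = conversation.past_user_inputs[0]
        if system_text not in first:
            conversation.past_user_inputs[0] = system_text + first
    elif system_text not in conversation.new_user_input:   # single-message conversation
        conversation.new_user_input = system_text + conversation.new_user_input
    # Only after this do we build `dialogue` (and the token ids), so the model input
    # and the stored conversation always agree.
```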
An alternative is to modify the `dialogue` object only (it's just an array so that's even simpler - no difference between history and no history), then the conversation object would stay without SYS, but the model will see SYS every time.<|||||>@ArthurZucker ready for approval<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,996 | open | one of the variables needed for gradient computation has been modified by an inplace operation | ### System Info
* Ubuntu 20.04
* Architecture x86_64
* 3 x Tesla P100-PCIE-12GB
* Python 3.8.10
* torch==1.12.1+cu116
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I encountered the error `one of the variables needed for gradient computation has been modified by an inplace operation...` when training my model with DistributedDataParallel (DDP). My code runs smoothly when I do not use DDP. I have spent time inspecting the problem and below is the minimal code for reproducing it.
```python
import torch
from torch import nn
import argparse
class BertEmbeddings(nn.Module):
def __init__(self, config):
super().__init__()
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
def forward(
self, input_ids, past_key_values_length=0
):
seq_length = input_ids.shape[1]
position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
return self.position_embeddings(position_ids)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)
args = parser.parse_args()
local_rank = args.local_rank
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)
torch.distributed.init_process_group(backend="nccl")
w = BertEmbeddings(config=argparse.Namespace(max_position_embeddings=10, hidden_size=24))
w.to(device)
# setup distributed
w = torch.nn.parallel.DistributedDataParallel(w, device_ids=[local_rank],
output_device=local_rank,
find_unused_parameters=False)
input_ids = torch.tensor([[1, 2, 3]]).to(device)
x = w(input_ids)
y = w(input_ids)
M = torch.sum(x)
M.backward()
if __name__ == "__main__":
main()
```
Suppose this code is put in a file named `debug_distributed.py`. I run this code with the command
```shell
python -m torch.distributed.launch --nproc_per_node=3 debug_distributed.py
```
, and I got the error
<pre>
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 3]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
</pre>
If I do not use DDP, there is no such error. Specifically, put the following in a file named `debug_normal.py` and run `python debug_normal.py`
```python
import torch
from torch import nn
import argparse
class BertEmbeddings(nn.Module):
def __init__(self, config):
super().__init__()
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
def forward(
self, input_ids, past_key_values_length=0
):
seq_length = input_ids.shape[1]
position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
return self.position_embeddings(position_ids)
def main():
w = BertEmbeddings(config=argparse.Namespace(max_position_embeddings=10, hidden_size=24))
w.to("cuda")
input_ids = torch.tensor([[1, 2, 3]]).to("cuda")
x = w(input_ids)
y = w(input_ids)
M = torch.sum(x)
M.backward()
if __name__ == "__main__":
main()
```
This problem prevents me from training my BertModel in distributed mode. I found that the problem lies in the line `position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]`. It seems like an "inplace operation", as the error suggests. If I change that line to `position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length].clone()`, the problem goes away.
I think this problem is much more related to PyTorch; it may be a PyTorch bug. However, the simplest workaround is to add a `.clone()` as I showed above. Currently, `transformers` versions `>=4` use this "inplace operation", so all `>=4` versions of `transformers` will hit this error. So, is there any way to better fix the problem, so I don't need to change library (`transformers`) code?
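For what it's worth, a possible workaround that avoids changing `transformers` code is to disable buffer broadcasting in DDP, on the assumption that the per-forward broadcast is what rewrites the registered `position_ids` buffer in place. This is untested here, so please verify it on your setup:

```python
# Untested assumption: skipping the per-forward buffer broadcast avoids the in-place update.
w = torch.nn.parallel.DistributedDataParallel(
    w,
    device_ids=[local_rank],
    output_device=local_rank,
    find_unused_parameters=False,
    broadcast_buffers=False,
)
```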
### Expected behavior
BertModel works in distributed training with DistributedDataParallel | 07-21-2023 15:29:10 | 07-21-2023 15:29:10 | cc @pacman100 <|||||>Hi @levuloihust99
do you face the same issue by setting:
```python
find_unused_parameters=True
```<|||||>> Hi @levuloihust99 do you face the same issue by setting:
>
> ```python
> find_unused_parameters=True
> ```
Setting `find_unused_parameters=True` gave me the exact same error. Additionally, in my example code, it is more performant to set `find_unused_parameters=False` since there are no unused parameters. |
transformers | 24,995 | closed | [`bnb`] Add simple check for bnb import | # What does this PR do?
as discussed internally @sgugger let's add a GPU check inside `is_bnb_available`
| 07-21-2023 15:28:44 | 07-21-2023 15:28:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,994 | open | Llama-2-hf non stopping token generation. | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-1035-azure-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
model_id = "/home/modelweights/llama2-hf-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map='auto',
load_in_8bit=True,
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=256,
temperature=0.1,
)
sequences = pipe(
'Hello there! How are you doing?',
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Expected behavior
Hi,
My Llama 2 model is not generating the stopping tokens.
For example, the reply to the question "Hello there! How are you doing?" is:
Result: Hello there! How are you doing? I hope you are doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing well. I am doing w
The reply only stops when the max token criterion is met.
What am I doing wrong? | 07-21-2023 14:14:17 | 07-21-2023 14:14:17 | cc @gante and @ArthurZucker <|||||>Hey! This is expected, the llama model kind of rarely generates the `eos_token`. It was the same with Llama 1, and if you run your script with the original llama, you will get the same output:
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the GNU General Public License version 3.
import fire
from llama import Llama
def main(
ckpt_dir: str,
tokenizer_path: str,
temperature: float = 0.6,
top_p: float = 0.9,
max_seq_len: int = 512,
max_gen_len: int = 256,
max_batch_size: int = 8,
):
generator = Llama.build(
ckpt_dir=ckpt_dir,
tokenizer_path=tokenizer_path,
max_seq_len=max_seq_len,
max_batch_size=max_batch_size,
)
results = generator.text_completion(
['Hello there! How are you doing?'],
max_gen_len=max_gen_len,
temperature=temperature,
top_p=top_p,
)
print('Hello there! How are you doing?')
print(f"> {results['generation']}")
print("\n==================================\n")
if __name__ == "__main__":
fire.Fire(main)
```
and
```bash
torchrun --nproc_per_node=1 llama/example_text_completion.py --ckpt_dir Llama-2-7b --tokenizer_path Llama-2-7b/tokenizer.model --temperature 0.1 --top_p 0.95
```
will produce:
```txt
> I hope you are doing well. I am doing well. I am happy to be back here. I have been away for a while. I have been busy with my studies. I have been busy with my work. I have been busy with my life. I have been busy with my family. I have been busy with my friends. I have been busy with my hobbies. I have been busy with my interests. I have been busy with my passions. I have been busy with my dreams. I have been busy with my goals. I have been busy with my ambitions. I have been busy with my aspirations. I have been busy with my plans. I have been busy with my projects. I have been busy with my ideas. I have been busy with my thoughts. I have been busy with my feelings. I have been busy with my emotions. I have been busy with my mind. I have been busy with my heart. I have been busy with my soul. I have been busy with my spirit. I have been busy with my body. I have been busy with my mind. I have been busy with my soul. I have been busy with my spirit. I have been
```
Note that the settings you are providing (`temperature = 0.1` etc.) can influence the generation.<|||||>So what are the best practices currently known to reduce this random ramble?<|||||>You should probably ask this question on [the forum](https://discuss.huggingface.co/), using the default parameters should already help (`temperature=0.9`), you can try to use `LogitsProcessor` for length penalty to reduce potential hallucination. You can also change the sampling strategies, use `top_k`, `contrastive search` etc etc. @gante will have a better solution!<|||||>@AnishAlapattu-GEP @ArthurZucker
This is actually a hard problem to solve, and I have yet to see a solution that generalizes well! A few things that can be tried, ranked by implementation complexity:
1. In the prompt, mention that you want a short output (be specific if you can, like "Reply in 3 sentences or less");
2. Add a custom logits processor that increases the score of the eos token according to some rule (e.g. scaling with the generated length; see the sketch after this list)
3. Generate text in excess and have a post-processing step to crop unwanted text (e.g. based on the conditional probability of each sequence -- when the model starts rambling, I suspect there is a significant drop in the probability of the sentence given the past sentences)
4. Fine-tune the model ๐ <|||||>Thanks @ArthurZucker and @gante! |
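A minimal sketch of option 2 from the list above, using the public `LogitsProcessor` API; the linear scaling rule and the `boost_factor` value are arbitrary illustrative choices:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class EosBoostProcessor(LogitsProcessor):
    """Add a bonus to the EOS logit that grows with the number of generated tokens."""

    def __init__(self, eos_token_id: int, prompt_length: int, boost_factor: float = 0.05):
        self.eos_token_id = eos_token_id
        self.prompt_length = prompt_length
        self.boost_factor = boost_factor

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        generated = max(0, input_ids.shape[-1] - self.prompt_length)
        scores[:, self.eos_token_id] += self.boost_factor * generated
        return scores

inputs = tokenizer("Hello there! How are you doing?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.95,
    max_new_tokens=256,
    logits_processor=LogitsProcessorList(
        [EosBoostProcessor(tokenizer.eos_token_id, inputs["input_ids"].shape[-1])]
    ),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

How aggressively to boost is model and prompt dependent, so the factor usually needs tuning.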
transformers | 24,993 | closed | Use main_input_name for include_inputs_for_metrics | # What does this PR do?
Instead of hard-coding `"input_ids"`, we should use the model main input name when getting the inputs for the metrics.
Fixes #24933 | 07-21-2023 14:06:43 | 07-21-2023 14:06:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,992 | closed | Add `OwlViTForObjectDetection` to `MODEL_FOR_OBJECT_DETECTION` | # What does this PR do?
~~This seems a miss, but need @NielsRogge to make sure.~~
well, he told me
> it's not a bug, it's intended. OWL-ViT can be loaded using the AutoModelForZeroShotObjectDetectionModel class (few that's a long name) | 07-21-2023 13:39:03 | 07-21-2023 13:39:03 | well<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24992). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,991 | closed | MarianMTModel model.generate function issue after v4.30.2 | Hello there,
I have encountered a problem with the model.generate function after version v4.30.2 on several different systems. Below, you can find the problem description and the corresponding code block:
```
from transformers import AutoTokenizer, MarianMTModel
model_name = f"Helsinki-NLP/opus-tatoeba-en-tr"
model = MarianMTModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = 'Once upon a time I met a boy named Hugo Cabret he lived in a train station' # Just random sample text
print('Translating text ... ')
batch = tokenizer(text, return_tensors="pt",
padding=True, truncation=True)
print('batch:', batch)
generated_ids = model.generate(**batch, max_length=512)
print('generated_ids:', generated_ids)
translated = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True)
print('Translated text:', translated)
```
Output when I downgrade the transformers library to v4.30.2, which is correct.
```
Translating text ...
batch: {'input_ids': tensor([[ 895, 1100, 13, 144, 7, 1227, 13, 660, 3018, 15658,
15952, 32, 15, 53, 2896, 21, 13, 2825, 2865, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
generated_ids: tensor([[59993, 95, 2491, 15658, 15952, 3433, 5240, 14, 12768, 45057,
2, 8530, 25435, 13855, 2, 0]])
Translated text: ['Bir zamanlar Hugo Cabret adฤฑnda bir รงocukla tanฤฑลmฤฑลtฤฑm. Tren istasyonunda yaลฤฑyordu.']
```
But unfortunately, when I upgraded the transformers library to the newest version v4.31.0, the model.generate function's return becomes corrupted.
```
Translating text ...
batch: {'input_ids': tensor([[ 895, 1100, 13, 144, 7, 1227, 13, 660, 3018, 15658,
15952, 32, 15, 53, 2896, 21, 13, 2825, 2865, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
generated_ids: tensor([[59993, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 0]])
Translated text: ..............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
```
As can be seen, the tokenizer works correctly, but unfortunately, the model.generate function returns corrupted data. By the way, I also tested the 'Helsinki-NLP/opus-mt-tc-big-en-tr' model, and it also generates corrupted data.
Despite that, the 'Helsinki-NLP/opus-mt-en-es' model generates correct results. | 07-21-2023 13:37:49 | 07-21-2023 13:37:49 | Hi @bariscankurtkaya There is a problem in the weights of that model, I made a PR [here](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-tr/discussions/3) to fix them but it hasn't been merged. You can load the model weights of this PR by adding `revision="pr/3"` in your `from_pretrained` call.<|||||>> Hi @bariscankurtkaya There is a problem in the weights of that model, I made a PR [here](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-tr/discussions/3) to fix them but it hasn't been merged. You can load the model weights of this PR by adding `revision="pr/3"` in your `from_pretrained` call.
Thanks you are the best. ๐ ๐ค |
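For completeness, a small sketch of loading the fixed weights from that Hub PR branch (this assumes the PR is still unmerged and exposed under revision `pr/3`):

```python
from transformers import AutoTokenizer, MarianMTModel

model_name = "Helsinki-NLP/opus-tatoeba-en-tr"
# Load the weights proposed in the Hub PR instead of the ones on `main`.
model = MarianMTModel.from_pretrained(model_name, revision="pr/3")
tokenizer = AutoTokenizer.from_pretrained(model_name)
```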
transformers | 24,990 | closed | Fix `llama` tokenization doctest | # What does this PR do?
See comment in the change. | 07-21-2023 13:05:58 | 07-21-2023 13:05:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,989 | closed | Generate: `ImageToTextPipeline` now supports generation kwargs | # What does this PR do?
As the title indicates -- previously the generation kwargs had to be passed as a separate dictionary, which was inconsistent with the other pipelines.
This PR borrows code from the other pipelines, to allow things like `pipe(input, min_new_tokens=10)` (as opposed to `pipe(input, generate_kwargs={"min_new_tokens":10})`). It also updates the docs, which were slightly outdated.
Fixes #24836 | 07-21-2023 12:41:44 | 07-21-2023 12:41:44 | > This PR borrows code from the other pipelines, to allow things like `pipe(input, min_new_tokens=10)` (as opposed to `pipe(input, generate_kwargs={"min_new_tokens":10})`). It also updates the docs, which were slightly outdated.
Actually, the other pipelines that accept anything only do so for legacy reasons.
Accepting anything causes headaches because `generate` can arbitrarily add new kwargs, some of which could clash with existing pipeline kwargs.
The first one that comes to mind is `max_length` which already clashed with tokenizer `max_length`.
I really think this is a bad idea, and that whitelisting is better.<|||||>Ah ah, glad I asked ๐
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil @sgugger I see, arg clashing is absolutely undesirable!
Explicit whitelisting (like `max_new_tokens` in `ImageToTextPipeline`) also seems excessive -- different modalities/tasks benefit from controlling specific parameters, so we would have to pick between additional maintenance burden + large docs and usage capabilities.
How about we move all generative pipelines to accept a `generation_config` (with the proper deprecation cycles and doc changes)? It would be an explicitly whitelisted argument from the pipeline perspective while enabling the vast majority of generation modes.
[Regardless of the specific decision here, I believe we would benefit from making all pipelines consistent with each other.]<|||||>That works for me!<|||||>`generate_config` seems even better than `generate_kwargs` !<|||||>Awesome, I'll close this PR and open a new one reflecting the decision here ๐ |
transformers | 24,988 | closed | Fix type annotation for deepspeed training arg | # What does this PR do?
#24550 wanted to put a more exact type annotation for the deepspeed arg, which then makes that training arg fail in CLI commands. This PR reverts that part and adds a comment so we do not break this again.
Fixes #24974 | 07-21-2023 12:25:25 | 07-21-2023 12:25:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,987 | open | [i18n-KO] Translated docs: ko: pr_checks.md to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `pr_checks.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
@sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final) | 07-21-2023 11:54:19 | 07-21-2023 11:54:19 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24987). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,986 | closed | All `meta-llama/Llama-2-*-hf` models have incorrect `max_position_embeddings` | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.34
- Python version: 3.8.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The hugging face version of Llama 2 have `max_position_embeddings` set to `2048` instead of `4096` in the config file. I am unsure if it's just an incorrect setting or if the models need to be converted again.
See the below links for detail:
- [`meta-llama/Llama-2-7b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/config.json#L13)
- [`meta-llama/Llama-2-7b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/config.json#L13)
- [`meta-llama/Llama-2-13b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-13b-hf/blob/main/config.json#L13)
- [`meta-llama/Llama-2-13b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/config.json#L12)
- [`meta-llama/Llama-2-70b-hf` config.json](https://huggingface.co/meta-llama/Llama-2-70b-hf/blob/main/config.json#L13)
- [`meta-llama/Llama-2-70b-chat-hf` config.json](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf/blob/main/config.json#L12)
If helpful, someone posted an issue in [Meta's official repo](https://github.com/facebookresearch/llama/issues/359).
### Expected behavior
According to Meta, Llama 2 has a context length `4096`. This should be the `max_position_embeddings` value in all the `config.json` files. If the model checkpoints on hugging face are not correctly converted, they should be converted again using the correct configuration. | 07-21-2023 11:47:19 | 07-21-2023 11:47:19 | Hey! ๐๐ป
Indeed the paper does mention that the default `max_position_embeddings` is `4096`, and no, the models ~don't have~ should not have to be reconverted:
```
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", max_position_embeddings=4096)
```
worked out of the box for me. The config should probably be updated, the previous choice is explained by the fact that in all the demonstrations [`example_chat_completion`](https://github.com/facebookresearch/llama/blob/main/example_chat_completion.py) and [`example_text_completion`](https://github.com/facebookresearch/llama/blob/main/example_text_completion.py) the `max_position_embeddings` was lowered (on purpose it seems?).
I also noticed [this](https://huggingface.co/daryl149/llama-2-7b-chat-hf/commit/d8654a4f69178a0c9260cf730241ebac2e72b923) which is why I am investigating! Github issue on the original repo: [#359 ](https://github.com/facebookresearch/llama/issues/359#issuecomment-1640876808)
The `max_position_embeddings` only affects the ROPE embedding layer, which is computed on the fly. <|||||>Okay, so currently we have[ a safeguard](https://github.com/ArthurZucker/transformers/blob/050c4a48f77e42b9d5cd87fccaac955950799acc/src/transformers/models/llama/modeling_llama.py#L118-L120):
```python
if seq_len > self.max_seq_len_cached:
self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)
```
which is not the best in terms of compute, but produces the correct values for now ๐
This also explains why changing for the actual value is wrong: the cos sin will not be re-computed!
I'll open fixes in transformers (make inv_freq non persistent) and will push updated checkpoints<|||||>Thank you so much for the update! I just took a look at the code; this safeguard is already part of the `transformers v4.31.0` release.
It would be great if you could let me know the correct way to use Llama 2 if we want to maintain the advertised `4096` context length without degrading the performance. Should we just pass `max_position_embeddings=4096` as mentioned earlier? Or should we use `max_position_embeddings=2048` until the problem is fully resolved?
```python
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", max_position_embeddings=4096)
```<|||||>I updated the configuration file of all the models to make sure you don't need to do anything!
The performance degradation was not reported again, so If it comes up, I will make sure to adresse it! <|||||>Can confirm that just updating the config file works fine.<|||||>Updated all configs online, closing as it is fixed! ๐ค
|
transformers | 24,985 | open | [i18n-KO] Translated `big_models.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `big_models.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. Once all the checks above are complete, mention the team members below whom you would like to request a review from! -->
May you please review this PR? @hyunhp @nuatmochoi @heuristicwave @mjk0618 @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with your team members is finished, reveal the comment below to request a review from the Hugging Face staff! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 10:41:13 | 07-21-2023 10:41:13 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24985). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,984 | closed | ๐ [i18n-KO] Translated `add_tensorflow_model.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `add_tensorflow_model.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค -->
# What does this PR do?
Translated the `your_file.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- ๋ฉ์ธ ์ด์์ ๊ธฐ๋ก์ด ๋จ์์! ๊ฐ์ง์ฐ๊ตฌ์ ๋ฆฌํฌ๋ฅผ ์ฌ์ฉํด ์ฐ์ตํ์ค๋๋ ์ ๊ฑฐํด์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. Only after all the checks above are complete, reveal the comment below to request a review from the OSSCA team members! -->
<!-- Team OSSCA, may you please review this PR? -->
<!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with the Pseudo Lab team members is finished, reveal the comment below to request a review from the Hugging Face staff! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 10:23:48 | 07-21-2023 10:23:48 | |
transformers | 24,983 | open | ๐ [i18n-KO] Translated perf_train_gpu_many.md to Korean | # What does this PR do?
Translated the `perf_train_gpu_many.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
May you please review this PR? @nuatmochoi, @bolizabeth, @heuristicwave, @mjk0618, @jungnerd, @hyunhp
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with your team members is finished, reveal the comment below to request a review from the Hugging Face staff! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 10:00:03 | 07-21-2023 10:00:03 | Please do not open and close several PRs for the same file. You can update a PR by pushing more commits to it.<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24983). All of your documentation changes will be reflected on that endpoint.<|||||>> Please do not open and close several PRs for the same file. You can update a PR by pushing more commits to it.
Duly noted |
transformers | 24,982 | closed | ๐ [i18n-KO] Translated `<perf_train_gpu_many>.md` to Korean | # What does this PR do?
Translated the `<your_file>.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
May you please review this PR? @nuatmochoi, @bolizabeth, @heuristicwave, @mjk0618, @jungnerd, @hyunhp
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with your team members is finished, reveal the comment below to request a review from the Hugging Face staff! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 09:34:17 | 07-21-2023 09:34:17 | ๋ฏธ๋ฆฌ๋ณด๊ธฐ ๋ฌธ์ ์ ready for request ์งํํ์ฌ ์ทจ์ ํ, reopen |
transformers | 24,981 | open | trainer throw "Torch not compiled with CUDA enabled" | ### System Info
transformers version: 4.31.0.dev0
Platform: macOS-13.2.1-arm64-arm-64bit
Python version: 3.10.10
Huggingface_hub version: 0.16.4
Safetensors version: 0.3.1
Accelerate version: 0.22.0.dev0
Accelerate config: not found
PyTorch version (GPU?): 2.0.0 (False)
Tensorflow version (GPU?): 2.10.0 (True)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
Accelerate version: 0.22.0.dev0
### Who can help?
@younesbelkada, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm running the following code on a MacBook M2 Pro.
I can run this code with a T4 GPU, but when I try to run it on my Apple MacBook it throws the error "Torch not compiled with CUDA enabled". I can run other models on my Mac and GPU utilization is good, so I suspect that I forgot to change some parameters related to MPS. Thanks in advance.
```
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
from typing import List
import polars as pl
import html
import datasets
import torch
from datasets import load_dataset
from datasets import Dataset
import pandas as pd

DEVICE = "mps"
CUTOFF_LEN = 256
safety_param = 10

import pandas as pd
df = pd.read_parquet("zkp_training_data.parquet")
data_filtered = (
    pl.from_pandas(df).with_columns(pl.when(pl.col("description").is_null()).then("").otherwise(pl.col("description") + ". ").alias("description"))
    .with_columns((pl.col("description") + pl.col("readme")).alias("final_text"))
    .select(["repo_id", pl.col("final_text"), pl.col("label")])
).to_pandas()
data_filtered.loc[data_filtered.label == "other", ["label"]] = "not zero-knowledge-proof"
dataset = datasets.Dataset.from_pandas(data_filtered)
dataset_rev = dataset.shuffle(seed=42)
dataset_final = dataset_rev.class_encode_column("label").train_test_split(stratify_by_column="label", train_size=0.8)

def map_func(example):
    mapping = dataset_final["train"].features["label"]
    val_set = mapping.int2str(example["label"])
    del example["label"]
    example["labels"] = val_set
    return example

dataset_final_v2 = dataset_final.map(map_func, batched=True)

BASE_MODEL = "decapoda-research/llama-7b-hf"
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.pad_token_id = (
    0  # unk. we want this to be different from the eos token
)
tokenizer.padding_side = "left"

def generate_dummy_prompt_v2(input, output):
    partial_string = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501
### Instruction:
Does this project include zero-knowledge-proof implementation?
### Response:
{output}"""
    full_string = html.unescape(partial_string)
    result = tokenizer(
        full_string,
        padding=False)
    constant_part_token_len = len(result["input_ids"])
    input_html = html.unescape(input)
    result_input = tokenizer(
        input,
        padding=False)
    input_token_len = len(result_input["input_ids"])
    allowed_len = (CUTOFF_LEN - constant_part_token_len) - safety_param
    input = tokenizer.decode(result_input["input_ids"][:allowed_len], skip_special_tokens=True)
    final_prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501
### Instruction:
Does this project include zero-knowledge-proof implementation?
### Input:
{input}
### Response:
{output}
"""
    final_prompt = html.unescape(final_prompt)
    if len(final_prompt.split(" ")) < 3:
        print("aaa")
    return final_prompt

dataset_final_v3 = dataset_final_v2.map(
    lambda x: {"final_prompt": [generate_dummy_prompt_v2(a, b) for a, b in zip(x["final_text"], x["labels"])]}, batched=True
)

def tokenize(prompt, add_eos_token=True):
    result = tokenizer(
        prompt["final_prompt"],
        truncation=True,
        max_length=CUTOFF_LEN,
        padding=False,
        return_tensors=None,
    )
    if (
        result["input_ids"][-1] != tokenizer.eos_token_id
        and len(result["input_ids"]) < CUTOFF_LEN
        and add_eos_token
    ):
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    result["labels"] = result["input_ids"].copy()
    return result

dataset_final_v4 = dataset_final_v3.map(tokenize, batched=False, remove_columns=["repo_id", "final_text", "final_prompt"])
dataset_final_v4.save_to_disk("dataset_final_v4")
train_data = dataset_final_v4["train"]
val_data = dataset_final_v4["test"]

LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
LORA_TARGET_MODULES = [
    "q_proj",
    "v_proj",
]
BATCH_SIZE = 128
MICRO_BATCH_SIZE = 4
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
LEARNING_RATE = 3e-4
TRAIN_STEPS = 300
OUTPUT_DIR = "experiments_rev"

training_arguments = transformers.TrainingArguments(
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    warmup_steps=100,
    max_steps=TRAIN_STEPS,
    learning_rate=LEARNING_RATE,
    fp16=False,
    logging_steps=2,
    optim="adamw_torch",
    evaluation_strategy="steps",
    save_strategy="steps",
    eval_steps=4,
    save_steps=24,
    output_dir=OUTPUT_DIR,
    use_mps_device=True,
    save_total_limit=3,
    overwrite_output_dir=True,
    report_to="tensorboard"
)
data_collator = transformers.DataCollatorForSeq2Seq(
    tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True
)
trainer = transformers.Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=val_data,
    args=training_arguments,
    data_collator=data_collator
)
model.config.use_cache = False
model = torch.compile(model)
trainer.train(resume_from_checkpoint=False)
model.save_pretrained(OUTPUT_DIR)
```
### Expected behavior
it should start to train data but it throws error when trainer.train is executed. I suspect mps is not supporting some features of model. | 07-21-2023 09:16:09 | 07-21-2023 09:16:09 | Hi @nemesis00sam
Thanks for the issue, there are two issues with your training setup
1- you are trying to load your model in 8bit, which is only supported on GPU devices (not M1 chips)
2- It seems you are trying to do pure 8bit training; although I see LoRA-specific arguments, I don't see where they are used. If you want to train 8bit models, consider converting your model into a PeftModel and train adapters on it, for example: https://github.com/huggingface/peft/blob/main/examples/int8_training/Finetune_opt_bnb_peft.ipynb<|||||>Thanks @younesbelkada. I'm a little bit confused. I need more explanation. Sorry for taking your time. load_in_8bit=True will not work for m1? Should I convert it to 8bit with different package? |
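For reference, a minimal sketch of the PeftModel / LoRA-adapter setup the comment above points to (this is only an illustration reusing the `model` and LoRA constants from the reproduction script; it is not a fix for the MPS error, since `load_in_8bit` itself still requires a CUDA GPU):

```python
# Hedged sketch: assumes the `peft` package is installed and that `model` is the
# already-loaded LlamaForCausalLM from the script above.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # LORA_R from the script
    lora_alpha=16,                        # LORA_ALPHA
    lora_dropout=0.05,                    # LORA_DROPOUT
    target_modules=["q_proj", "v_proj"],  # LORA_TARGET_MODULES
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the small adapter weights are trained
model.print_trainable_parameters()
```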
transformers | 24,980 | closed | fsdp fixes and enhancements | # What does this PR do?
1. Fixes #24724. Should be merged after https://github.com/huggingface/accelerate/pull/1753
2. Fixes #24568. Should be merged after https://github.com/huggingface/accelerate/pull/1753
| 07-21-2023 08:47:43 | 07-21-2023 08:47:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,979 | closed | [ `ForSequenceClassification`] Support `left` padding | # What does this PR do?
Update the computation of `sequence_lengths` that determines the index of the outputs to be pooled on models that use pooled inputs in such a way.
Addresses #24265
- [ ] add a common test to make sure whether you pad left or right, sequence outputs are the same | 07-21-2023 08:24:06 | 07-21-2023 08:24:06 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24979). All of your documentation changes will be reflected on that endpoint.<|||||>Test seems flaky:
```python
FAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_pt_tf_model_equivalence - AssertionError: 0.14537059 not less than or equal to 1e-05 : outputs.logits: Difference between PyTorch and TF is 0.14537058770656586 (>= 1e-05).
FAILED tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_pt_tf_model_equivalence - AssertionError: 0.21959408 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.2195940762758255 (>= 1e-05).
``` |
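As a side note, one padding-agnostic way to pick the pooled token index looks roughly like the sketch below. This is an illustration of the idea only, not the implementation in this PR:

```python
import torch

# Hypothetical helper: index of the last non-padding token per row, valid for
# both left- and right-padded batches.
def last_token_index(attention_mask: torch.Tensor) -> torch.Tensor:
    positions = torch.arange(attention_mask.shape[-1], device=attention_mask.device)
    # padding positions contribute 0, so argmax lands on the last real token
    return (attention_mask * positions).argmax(dim=-1)

mask_right = torch.tensor([[1, 1, 1, 0, 0]])  # right padding -> index 2
mask_left = torch.tensor([[0, 0, 1, 1, 1]])   # left padding  -> index 4
print(last_token_index(mask_right), last_token_index(mask_left))
```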
transformers | 24,978 | open | ๐ [i18n-KO] Translated `perf_infer_gpu_one.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `perf_infer_gpu_one.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์, ์ด ์๋์ ๋ฆฌ๋ทฐ๋ฅผ ์์ฒญํ ํ์๋ค์ ๋ฉ์
ํด์ฃผ์ธ์! -->
May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo | 07-21-2023 08:02:51 | 07-21-2023 08:02:51 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24978). All of your documentation changes will be reflected on that endpoint.<|||||>Could you review this PR? ๐
@sgugger, @ArthurZucker, @eunseojo |
transformers | 24,977 | closed | ๐ [i18n-KO] Translated `generation_strategies.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค -->
# What does this PR do?
Translated the `generation_strategies.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- ๋ฉ์ธ ์ด์์ ๊ธฐ๋ก์ด ๋จ์์! ๊ฐ์ง์ฐ๊ตฌ์ ๋ฆฌํฌ๋ฅผ ์ฌ์ฉํด ์ฐ์ตํ์ค๋๋ ์ ๊ฑฐํด์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค! :smile: -->
## Before reviewing
- [ ] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [ ] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [ ] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์๋ง OSSCA ํ์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- Team OSSCA, may you please review this PR? -->
<!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ๊ฐ์ง์ฐ๊ตฌ์ ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 07:31:05 | 07-21-2023 07:31:05 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24977). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,976 | closed | Llama weights are in `bfloat16` but loaded as `float32` | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0.dev20230719+cpu (False)
- Tensorflow version (GPU?): 2.14.0-dev20230719 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = LlamaForCausalLM.from_pretrained(path)
print(model.model.norm.weight.detach().dtype)
```
This prints `torch.float32`.
### Expected behavior
This result is `torch.float32`, which indicates that the model is loaded in `float32` format. However, the result should be `torch.bfloat16` because it is the default for Llama models. The model weights are in `bfloat16` format. | 07-21-2023 07:25:11 | 07-21-2023 07:25:11 | Hey! ๐๐ป
Without the path to the checkpoint that you are using, it's gonna be a bit hard for me to debug. The `dtype` can be affected by the `model.config.torch_dtype` which is stored online. <|||||>It is expected behavior: https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained.torch_dtype. You can try to pass `torch_dtype="auto"` to `from_pretrained` and if the config.json is well written, it should load in the dtype specified in the config.<|||||>In general you should specify the dtype in which you want the model with `torch_dtype` (using auto is not recommended as configs are not always well written). In PyTorch, models are always loaded in float32 by default (even if the state dict has another dtype), this is not something specific to Transformers. |
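A short sketch of the two options described above, reusing the `path` placeholder from the reproduction:

```python
import torch
from transformers import LlamaForCausalLM

path = "path/to/llama/checkpoint"  # placeholder, same as in the reproduction

model_bf16 = LlamaForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)  # explicit dtype
model_auto = LlamaForCausalLM.from_pretrained(path, torch_dtype="auto")          # dtype taken from config.json
print(model_bf16.model.norm.weight.dtype)  # torch.bfloat16
```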
transformers | 24,975 | closed | ๐ [i18n-KO] Translated `text-to-speech.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค -->
# What does this PR do?
Translated the `text-to-speech.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- ๋ฉ์ธ ์ด์์ ๊ธฐ๋ก์ด ๋จ์์! ๊ฐ์ง์ฐ๊ตฌ์ ๋ฆฌํฌ๋ฅผ ์ฌ์ฉํด ์ฐ์ตํ์ค๋๋ ์ ๊ฑฐํด์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค! :smile: -->
## Before reviewing
- [ ] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [ ] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [ ] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์๋ง OSSCA ํ์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- Team OSSCA, may you please review this PR? -->
<!-- @wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ๊ฐ์ง์ฐ๊ตฌ์ ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- May you please review this PR? -->
<!-- @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 07:22:19 | 07-21-2023 07:22:19 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24975). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,974 | closed | deepspeed typing Dict error | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.10.179-171.711.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Right now any command with `--deepspeed /path/to/json` will fail and throw the following error
```
--deepspeed: invalid Dict value
```
This is reported in https://github.com/huggingface/transformers/pull/24549#issuecomment-1613046347 but the merged fix https://github.com/huggingface/transformers/pull/24574 does not resolve this.
In fact, the `deepspeed: Union[str, Dict]` field in `training_args.py` still raises the error when `deepspeed` is passed as a string. This seems to be a limitation of Python dataclasses.
### Expected behavior
`deepspeed` flag should support string. | 07-21-2023 06:10:58 | 07-21-2023 06:10:58 | Yes, this has been broken by #24550 again... |
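For illustration only (this is not the HfArgumentParser code), the underlying limitation and a generic workaround look roughly like this: argparse cannot build a `Dict` from a command-line token, so a `Union[str, Dict]` field needs an explicit value parser.

```python
import argparse
import json

def str_or_json(value: str):
    # keep plain strings (e.g. a path to a DeepSpeed config file) and decode inline JSON dicts
    return json.loads(value) if value.lstrip().startswith("{") else value

parser = argparse.ArgumentParser()
parser.add_argument("--deepspeed", type=str_or_json, default=None)
args = parser.parse_args(["--deepspeed", "ds_config.json"])
print(args.deepspeed)  # 'ds_config.json'
```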
transformers | 24,973 | closed | ๐ [i18n-KO] Translated perf_infer_gpu_one.md to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `perf_infer_gpu_one.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์, ์ด ์๋์ ๋ฆฌ๋ทฐ๋ฅผ ์์ฒญํ ํ์๋ค์ ๋ฉ์
ํด์ฃผ์ธ์! -->
May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 06:08:55 | 07-21-2023 06:08:55 | |
transformers | 24,972 | closed | ๐ [i18n-KO] Translated perf_infer_gpu_one.md to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `perf_infer_gpu_one.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์, ์ด ์๋์ ๋ฆฌ๋ทฐ๋ฅผ ์์ฒญํ ํ์๋ค์ ๋ฉ์
ํด์ฃผ์ธ์! -->
May you please review this PR? @sronger, @TaeYupNoh, @kj021, @HanNayeoniee, @eenzeenee, @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-21-2023 06:00:56 | 07-21-2023 06:00:56 | |
transformers | 24,971 | closed | New mode in `model.generate` | Hello, is there any support for combining `contrastive_search` and `beam_search` in `model.generate` method?
My motivation is that when using `beam_search` I observe the outputs of model sometimes tend to produce repetitive text. I learned that `contrastive_search` was proposed to address the repetition issue. But when changing to `contrastive_mode`, the performance drops a lot, probably due to a lack of the beams. However, I checked the source code and found `contrastive_mode` triggers only when `num_beams == 1`.
Therefore, I wonder if I can combine the advantages of the two generation modes for better result? | 07-21-2023 05:30:38 | 07-21-2023 05:30:38 | cc @gante <|||||>Hey @huangjy-pku I believe the authors intended its use to be as an alternative to beam search, but I agree the two could be used together (beam search explores beam diversity, and contrastive search explores repetition avoidance).
Building it takes a significant time, and I haven't seen demand for it outside this issue -- we won't devote resources to it for now, as our bandwidth is limited. I would also oppose a PR, as it would add up to our long-term maintenance budget. If demand increases, I will revisit this decision :)
However, I have a short-term suggestion: have you tried the `repetition_penalty` argument in `generate`? ([docs](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.repetition_penalty))<|||||>Thanks. By the way, I tried increasing `repetition_penalty` in `generate` and it did help :) |
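A rough sketch of the suggestion above: beam search combined with repetition controls (the model, prompt and values here are placeholders chosen for illustration, not a recommendation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("An example prompt", return_tensors="pt")
output = model.generate(
    **inputs,
    num_beams=4,
    repetition_penalty=1.2,    # penalize already-generated tokens
    no_repeat_ngram_size=3,    # forbid repeating 3-grams
    max_new_tokens=64,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```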
transformers | 24,970 | closed | Parameter ... has been marked as ready twice. | ### System Info
* Ubuntu 20.04 Desktop
* Architecture x86_64
* Python 3.8.10
* torch 1.12.1+cu116
* GPU P100 x 3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have the following code in file named `debug.py`
```python
import os
import torch
import argparse

class CustomModel(torch.nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.w1 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)
        self.w2 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)
        self.w3 = torch.nn.Parameter(torch.tensor(1.5), requires_grad=True)

    def forward(self, x, order):
        if order == 0:
            return self.w1 * x
        elif order == 1:
            return self.w2 * x
        else:
            return self.w3 * x

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=-1)
    args = parser.parse_args()
    local_rank = args.local_rank
    torch.cuda.set_device(local_rank)
    device = torch.device("cuda", local_rank)
    torch.distributed.init_process_group(backend="nccl")
    model = CustomModel()
    model.to(device)
    # setup distributed
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank],
                                                      output_device=local_rank,
                                                      find_unused_parameters=True)
    x = torch.tensor(1.)
    y1 = model(x, 0)
    y2 = model(x, 1)
    y3 = model(x, 2)
    y1.backward()

if __name__ == "__main__":
    main()
```
I run this code with the following command
```shell
TORCH_DISTRIBUTED_DEBUG=DETAIL python -m torch.distributed.launch --nproc_per_node=3 debug.py
```
Then, I got the error
<pre>
Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 0 with name .w1 has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
</pre>
If I change the line `y1.backward()` to `y2.backward()`, the parameter that `has been marked as ready twice` change to `.w2`
As the error message suggests, there might be parameters shared across multiple concurrent forward-backward passes, or the same set of parameters used by multiple re-entrant backward passes. However, I find that neither of these two suggestions matches the code provided above.
The error disappeared if I set `find_unused_parameters=False`. However, in my actual code, this setting caused another error, which was `Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel...`.
I am able to change my code to fix the problem. But the focus of my question is why such a simple code above produced the error? Why was there a parameter that `has been marked as ready twice`?
### Expected behavior
The code run without errors | 07-21-2023 05:13:22 | 07-21-2023 05:13:22 | |
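For reference, a sketch of the usual DDP usage pattern (one forward immediately followed by its backward) that avoids marking the same parameter ready twice; it reuses `model` and `x` from the `debug.py` above and is not a claim about what the real training loop should look like. The error message's other suggestion, `_set_static_graph()` / `static_graph=True`, is the knob to reach for when several forwards per backward are genuinely needed.

```python
# Hedged sketch: assumes `model` (the DDP-wrapped CustomModel) and `x` from debug.py.
for order in range(3):
    y = model(x, order)  # DDP prepares exactly one reduction per forward
    y.backward()         # ...which is consumed by the matching backward pass
```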
transformers | 24,969 | open | Fix beam search when using model parallel | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes a crash when running beam search on multiple GPUs. A similar issue was also observed and fixed for T5 in https://github.com/huggingface/transformers/pull/11717 and for Llama in https://github.com/huggingface/transformers/pull/24224
Fixes # (issue)
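For context, the usual shape of this kind of fix (mirroring the referenced T5/Llama PRs) is sketched below; the exact diff in this PR may differ:

```python
# Sketch only: move beam_idx onto the device each cached tensor lives on before
# index_select, so model-parallel beam search does not mix devices.
def reorder_cache_across_devices(past_key_values, beam_idx):
    return tuple(
        tuple(
            past_state.index_select(0, beam_idx.to(past_state.device))
            for past_state in layer_past
        )
        for layer_past in past_key_values
    )
```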
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-21-2023 05:07:20 | 07-21-2023 05:07:20 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24969). All of your documentation changes will be reflected on that endpoint.<|||||>I agree with you!
Which do you think is better, fixing the rest of the model in this PR or creating a new PR fixing the rest?
I am willing to do further work based on your comments.<|||||>I think it will be easier to have everything in one PR, given how small and repetitive of a change it is! <|||||>@ArthurZucker
I have added fixed commits for every model that required correction, and I have also made modifications to the cookiecutter template.
And I have updated the PR title and content to align with the task.
As there were numerous models that needed correction, there is a possibility that some parts might have been overlooked. Therefore, I would appreciate it if you could review the changes again. Thank you for your attention to this matter.<|||||>There is no `device` in tensorflow, let's limit the changes to pytorch! <|||||>Oh! Thank you for correcting the mistake.
As you suggested, I have dropped the modifications for the tf base model.<|||||>@ArthurZucker
Is there anything else you'd like me to fix?
I want to use the merged main branch for my work.<|||||>Not at all sorry, let me have a final look and I'll merge this! |
transformers | 24,968 | closed | ๐ [i18n-KO] Translated `hpo_train.md` to Korean | # What does this PR do?
Translated the `hpo_train.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
@wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo | 07-21-2023 04:35:17 | 07-21-2023 04:35:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can you just solve the conflict so we can merge this PR? |
transformers | 24,967 | open | run summarization | ### System Info
`with open(output_prediction_file, "w", encoding="utf-8") as writer:`: the script opens the output file in write mode with UTF-8 encoding to handle non-ASCII characters.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
with open(output_prediction_file, "w") as writer:
UnicodeEncodeError: 'ascii' codec can't encode character '\xe2' in position 1710: ordinal not in range(128)
### Expected behavior
It should generate predictions | 07-21-2023 01:54:22 | 07-21-2023 01:54:22 | @tigernandita
Please report the issue with providing your env. info.
You can run the command `transformers-cli env` and copy-paste its output.
Also, please provide the full traceback. |
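In the meantime, the usual workarounds for this kind of `UnicodeEncodeError` look roughly like the sketch below (file name and sample text are placeholders):

```python
# Either force UTF-8 when opening the output file...
with open("generated_predictions.txt", "w", encoding="utf-8") as writer:
    writer.write("café – naïve — ✓\n")

# ...or set PYTHONIOENCODING=utf-8 (or PYTHONUTF8=1) in the environment before
# running the script, so Python does not fall back to the ASCII codec.
```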
transformers | 24,966 | closed | ๐ [i18n-KO] Translated `perf_hardware.md` to Korean | <!-- PR์ ์ ๋ชฉ์ "๐ [i18n-KO] Translated `<your_file>.md` to Korean" ์ผ๋ก ๋ถํ๋๋ฆฝ๋๋ค! -->
# What does this PR do?
Translated the `perf_hardware.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [x] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [x] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
@0525hhgus, @Sunmin0520, @54data, @seank021, @kihoon71
<!-- 1. ์ ์ฒดํฌ๊ฐ ๋ชจ๋ ์๋ฃ๋ ๋ค์, ์ด ์๋์ ๋ฆฌ๋ทฐ๋ฅผ ์์ฒญํ ํ์๋ค์ ๋ฉ์
ํด์ฃผ์ธ์! -->
<!-- May you please review this PR? @member1 @member2 ... -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. ํ์๋ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ ๋๋ ํ์๋ง ํ๊น
ํ์ด์ค ์ง์๋ค์๊ฒ ๋ฆฌ๋ทฐ ์์ฒญํ๋ ์๋ ์ฃผ์์ ๋
ธ์ถํด์ฃผ์ธ์! -->
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo | 07-21-2023 00:35:33 | 07-21-2023 00:35:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Apart from @0525hhgus's comments above, I have no additional review comments :) <|||||>> Looks good! Could you update the ## GPU [[gpu]] in the english version as well? Seems like it is not rendering properly, let's kill two birds with one stone!
>
> <img alt="image" width="959" src="https://user-images.githubusercontent.com/48595927/255879037-d22d9839-7263-47d6-988b-408ed3bec0aa.png">
@ArthurZucker
I updated that issue! I could also confirm [here](https://huggingface.co/docs/transformers/main/en/perf_hardware) that it worked successfully.
This part didn't work well in previous versions, but it works well in the current `main`. It was a problem that appeared because the newline was omitted in the markdown file.
Thank you for your hard work and dedication, it is greatly appreciated. |
transformers | 24,965 | open | device_map='sequential' does not utilize gpu devices other than the first when running in 8bit and 4bit | ### System Info
transformers==4.31.0
python==3.10.6
bitsandbytes==0.40.2
torch==2.0.1
Whenever I set the parameter `device_map='sequential'`, only the first gpu device is taken into account. For models that do not fit on the first gpu, the model returns a cuda OOM, as if only running on the first gpu, instead of spilling over to the second gpu.
This is contrary to the expected behaviour as described in https://huggingface.co/docs/accelerate/usage_guides/big_modeling. It has happened for as long as I have been using it (last 3 months), so it is not just for the most recent version of transformers or torch, or bitsandbytes.
I have both a cuda:0 and cuda:1 gpu correctly installed and recognized by torch and transformers. For example, when setting `device_map= 'auto'`, the model is split on both gpus equally, as expected.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
example code:
```
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf", use_safetensors=True)
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-70b-chat-hf", device_map="sequential", load_in_8bit=True, use_safetensors=True)
```
### Expected behavior
model correcly loaded by spilling over remaining layers to second gpu, instead of OOM on just first gpu. | 07-20-2023 23:21:07 | 07-20-2023 23:21:07 | Hi @Daryl149
Thanks for your issue, per the documentation it says:
> "sequential" will fit what it can on GPU 0, then move on GPU 1 and so forth (so wonโt use the last GPUs if it doesnโt need to).
Maybe there are some corner cases when using 8bit quantization, hence the OOM error. What is the GPU hardware you are using ? (with gpu vram)
Indeed the canonical way to load a model and split it across all GPU evenly is to use device_map=auto<|||||>Could be 8 bit related, also happens in 4bit. Will have to try unquantized.
I have unequal memory on this particular setup, so `device_map=auto` is suboptimal: the first GPU is an A6000 (non Ada) with 48GB, and the second is an RTX 3090 with 24GB. With `auto`, it only uses 24GB on both cards as expected (48GB combined), whereas the expected `sequential` behavior would be perfect for this situation by filling up to the combined 72GB.
Update:
It is definitely caused by setting the flag `load_in_8bit=True` (also occurs for `load_in_4bit=True`). When loading the model as such:
```
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-70b-chat-hf", use_safetensors=True)
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-70b-chat-hf", device_map="sequential", use_safetensors=True)
```
It shows the expected behaviour for `sequential`. Unfortunately, this means that:
- when running in 4 and 8 bit it goes OOM because it only utilizes the first gpu.
- when running the model in full, it goes OOM because it is too big anyway for 2 GPUs.
Hmmm, I'll report this with bitsandbytes as well. I would not consider the 8bit and 4bit flags as corner cases anymore for transformers though.
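A possible workaround sketch in the meantime (not a fix for the reported `sequential` behavior): cap each GPU explicitly with `max_memory` so both the 48GB and the 24GB card get used. The values below are illustrative and leave a little headroom per card.

```python
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-chat-hf",
    load_in_8bit=True,
    device_map="sequential",
    max_memory={0: "46GiB", 1: "22GiB"},  # per-GPU caps, adjust to your hardware
    use_safetensors=True,
)
```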
|
transformers | 24,964 | closed | improve from_pretrained for zero3 multi gpus mode | # Decrease RAM consumption during Deepspeed Zero 3 model initialisation with multiple GPUs.
This simple PR will save a ton of RAM in case with multi GPUs and Zero 3 Deepspeed scenario.
The idea is simple. We do not need to load checkpoints for all instances, because `deepspeed.zero.GatheredParameters` will copy weights from 0 rank.
## Issues
Related issue #12273
## Who can review?
- @stas00
- @pacman100 | 07-20-2023 20:41:10 | 07-20-2023 20:41:10 | Thank you for the PR, @1ytic
The idea is excellent, but we need to think about multiple use-cases here.
what happens if zero3 is used, but not `zero.Init` - won't this code fail if you try to access any of the meta weights before `deepspeed.initialize` gets called at which point deepspeed will partition weights from rank 0 to other ranks. I'm flagging the issue of timing of when they are sharded
Unfortunately, when I initially designed this I didn't think anybody would want to use zero-3 w/o `zero.Init` - so I lumped them together - @pacman100 improved upon it in Accelerate to have 2 separate possibilities - is zero3 and is zero.init enabled, so it gives a more refined control for such optimizations.
So let's wait for Sourab to weigh in.
Meanwhile please test
1. that this code works with pytorch-1.9 (I don't remember when `meta` was made to work)
2. use `USE_SLOW=1 pytest tests/deepspeed` to do coverage testing, since the PR CI doesn't run deepspeed tests. <|||||>Thank you for feedback, @stas00
Just to clarify a little bit.
My changes effect only state_dict from checkpoints. The function `deepspeed.zero.Init()` doesn't care about state_dict. The real magic happened [here](https://github.com/huggingface/transformers/blob/9ef5256dfb28f5419649af8ea94e82573a6b2e77/src/transformers/modeling_utils.py#L539), when we exit from `deepspeed.zero.GatheredParameters()` context.
I know `modelling_utils.py` is 4k lines monster and maybe I missed something, but seems like I effect only one scenario when we load checkpoints for already partitioned zero3 model. At least, I tested this scenario with 10GB checkpoint and 4 GPUs. I was able to decrease RAM consumption from 45GB to 17GB on single node.<|||||>Exactly, the model instantiation is super complex, that's why I wrote the above.
The deepspeed integration test suite has a very high coverage so if you try running it and it succeeds then most likely it's all good. The size of the checkpoint doesn't matter for the purpose of accepting the PR, what matters is to ensure it doesn't break things.
and btw, you actually don't need to even load the checkpoint if you're resuming from a deepspeed zero checkpoint. In another project I hacked to have the model created w/o loading the model and then just used deepspeed checkpoint loading directly, which should already be doing that efficiently, since each gpu will only read its own shard of weights.
But, alas, making it generic enough so that it'd satisfy everybody is very difficult, that's why the general case is to ensure ease of use out of the box often at the cost of slow startup and more memory consumption.
Ideally the protocol should be like this:
1. create a model on meta (~0 secs)
2. load each shard into the gpu it belongs to (a few secs)
this should be extremely fast even for a huge model like BLOOM-176B
In the case of new training, there should be a way to pre-shard the model before loading it, so resume and new training will be identical model loading-wise. This is eventually will be done when universal checkpoint will be implemented for ZeRO (currently it's only available in Megatron-Deepspeed) https://github.com/microsoft/DeepSpeed/issues/2921
So lots and lots of things to improve there.
And more things to fix on the deepspeed side, e.g. this is very wasteful https://github.com/microsoft/DeepSpeed/issues/1971<|||||>so practically please run the integration tests I described in the first reply of mine and if possible with pytorch-1.9 (minimal supported pytorch version).<|||||>Just to quickly chime int, the minimal version is actually 1.10 now ;-) The meta device is in 1.9+ so that shouldn't be an issue.<|||||>Thank you for this insight, Sylvain.
So then any recent pt version should be ok to test with, @1ytic <|||||>Hello,
The trainer's behaviour isn't changed at all because the DeepSpeed config is still created using `HfTrainerDeepSpeedConfig` which sets the weakref `_hf_deepspeed_config_weak_ref` which is used in `is_deepspeed_zero3_enabled` to check if it is Stage-3. So, from trainer's perspective, this should work fine.
From Accelerate's perspective, when user specifies `zero3_init_flag=False`, the weakref `_hf_deepspeed_config_weak_ref` isn't created and as such the `is_deepspeed_zero3_enabled` will return `False` even if it is using Stage-3 because the user doesn't want to use `deepspeed.zero.Init` context manager. So, in this case too, this PR should work fine as `map_location = "cpu"` due to absence of weakref.
So, the changes of this PR look good if all the slow tests pass.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@stas00 you were right, I caught an uninitialized error while testing. After fixing the tests passed:
`RUN_SLOW=1 pytest -rs tests/deepspeed/`
```
================================================= short test summary info ==================================================
SKIPPED [1] tests/deepspeed/test_deepspeed.py:949: test requires bfloat16 hardware support
================================= 108 passed, 1 skipped, 98 warnings in 2273.67s (0:37:53) =================================
```
I also added one more import. If it's too much, I can revert it.<|||||>@tjruwase, please kindly have a look - do you see any problems to this approach of loading weights only on rank 0 and relying on partitioning to distribute the weights to the rest of the ranks under zero3? Could this somehow cause problem in the future?
The idea is to skip loading weights on all ranks but rank 0, since they will be discarded anyway.
Thank you!<|||||>@1ytic, this is pretty neat. LGTM. Thanks! <|||||>> Thanks for your PR! Can we just leave `torch.distributed` as it was? `dist` is way less obvious as a name.
I thought `dist` quite common name for `torch.distributed`, but up to you. I will rename it back.<|||||>It is quite common and I would have no problem if this was in the Trainer file, but this file is not a distributed script and can be read by people less used to this. That's why it's better to spell it out IMO.<|||||>Thanks for bearing with me!<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24964). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,963 | closed | Remove tokenizers from the doc table | # What does this PR do?
As discussed offline with @LysandreJik and @stas00 it doesn't really make sense to have the info on tokenizers in this table in the index, which:
- makes it look like something is missing for vision models or speech models
- is not accurate when a model uses another's tokenizer model.
This PR removes it and only leaves the frameworks supported by each model. | 07-20-2023 20:23:59 | 07-20-2023 20:23:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,962 | closed | Trainer bug > using dict (with single or multiple datasets) for eval_dataset arg throws error | ### System Info
Using python 3.10, transformers 4.28.1
Dear team @sgugger ,
When I tried passing multiple datasets to the eval_dataset argument of the trainer using a dict structure as per the documentation, I experienced that only metrics on the first of the passed datasets get compute at the evaluation step in the training process. For the subsequent dataset, I receive the following error:
Code:
```
eval_datasets = {"test": datasets["test"].shuffle(seed=SEED),
"val": datasets["val"].shuffle(seed=SEED), }
trainer = Trainer("model":model,
"args":t_args,
"train_dataset":datasets["train"],
"eval_dataset":eval_datasets,
"compute_metrics": metric_fn)
```
Error:
```
Traceback (most recent call last):
...
File "src/code_main.py", line 767, in run_experiment_
trainer.train(resume_from_checkpoint=resume)
File "lib/python3.10/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "lib/python3.10/site-packages/transformers/trainer.py", line 2006, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "lib/python3.10/site-packages/transformers/trainer.py", line 2285, in _maybe_log_save_evaluate
metrics.update(dataset_metrics)
TypeError: 'NoneType' object is not iterable
```
Notably, even if I only pass one dataset in the form of a dictionary, as opposed to directly passing its dataset instance, the same error appears, without any metrics being reported back.
I triple checked that all the datasets I pass are correct dataset instances with >0 samples.
Any suggestions?
Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
eval_datasets = {"test": datasets["test"].shuffle(seed=SEED),
                 "val": datasets["val"].shuffle(seed=SEED)}
trainer = Trainer(model=model,
                  args=t_args,
                  train_dataset=datasets["train"],
                  eval_dataset=eval_datasets,
                  compute_metrics=metric_fn)
### Expected behavior
Metric function runs on both dataset instance values of the passed dict, one after the other. All metrics with the according prefixes are reported back. No error is thrown. | 07-20-2023 20:18:38 | 07-20-2023 20:18:38 | It looks like one of your datasets might not have labels, so the metrics returned are None, maybe?<|||||>Thanks for the prompt answer! I don't think this is the problem for two reasons:
1st) running
```
eval_datasets["val"].unique("labels")
eval_datasets["test"].unique("labels")
```
returns `[0,1]` for both eval datasets.
2nd) When passing the datasets directly as dataset instances, I get metrics that are fine, so it seems it has to do with the fact of them together or individually being passed as dict type.<|||||>Could you share a small reproducer of the issue I can execute?<|||||>Hi @sgugger, please find a minimum version of the code below.
```python
# by Lennart Schulze, July 2023

############## SETUP
import random
import numpy as np
import os
import torch
import torch.nn as nn
import torch.optim

CACHE_ROOT = "/.cache/hf/"
os.environ['TRANSFORMERS_CACHE'] = CACHE_ROOT
os.environ['HF_DATASETS_CACHE'] = CACHE_ROOT

import transformers as hft
import datasets as hfds
import evaluate as hfev

g = torch.Generator()
SEED = 3

def seed(seed_num=SEED):
    torch.manual_seed(seed_num)
    random.seed(seed_num)
    np.random.seed(seed_num)
    g.manual_seed(seed_num)

seed()

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
print(DEVICE)

############## MODEL & TOKENIZER
def get_model():
    model_name = "microsoft/deberta-v3-large"
    tokenizer = hft.AutoTokenizer.from_pretrained(model_name)
    print(tokenizer)
    num_labels = 1
    model = hft.DebertaV2ForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
    print(type(model))
    print(model.config.id2label)
    print(model)
    return model, tokenizer

############## DATASET
def get_datasets():
    cache_dir = os.path.join(CACHE_ROOT)
    datasets = hfds.load_dataset("rotten_tomatoes", cache_dir=cache_dir)
    datasets["test"] = hfds.load_dataset("rotten_tomatoes", split="test", cache_dir=cache_dir)
    print(datasets)
    print(type(datasets))
    print(datasets.keys())
    for key, ds in datasets.items():
        print(key, ":", len(ds))
    return datasets

# tokenize dataset & prepare for torch transformer
def tokenize_function(tokenizer, sample):
    return tokenizer(sample["text"], padding="max_length", truncation=True,
                     return_tensors="pt", max_length=128)

def preprocess_datasets(tokenizer, datasets):
    tokenize_fn = lambda sample: tokenize_function(tokenizer, sample)
    tokenized_datasets = datasets.map(tokenize_fn, batched=True)
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    new_features = tokenized_datasets["train"].features.copy()
    new_features["labels"] = hfds.ClassLabel(num_classes=2, names=["No", "Yes"])
    tokenized_datasets = tokenized_datasets.cast(new_features)
    tokenized_datasets = tokenized_datasets.with_format("torch")
    print("preprocess_datasets > train: \t", tokenized_datasets["train"].features)
    print("preprocess_datasets > val: \t", tokenized_datasets["validation"].features)
    print("preprocess_datasets > test: \t", tokenized_datasets["test"].features)
    print("preprocess_datasets > val:", tokenized_datasets["validation"][0])
    print("preprocess_datasets > test:", tokenized_datasets["test"][0])
    return tokenized_datasets

############## TRAINING
# metric for trainer
def get_metric_fn():
    metric = hfev.load("accuracy", cache_dir=CACHE_ROOT)

    def compute_metrics(eval_prediction):
        out = {}
        logits, gt_labels = eval_prediction
        predicted_labels = (torch.sigmoid(torch.tensor(logits)) >= 0.5).float().numpy()
        key = 0
        out[f"ACC_{key}"] = metric.compute(predictions=predicted_labels, references=gt_labels)["accuracy"]
        return out

    return compute_metrics

class CustomTrainer(hft.Trainer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def evaluate(self, **kwargs):
        print("\n> EVALUATING >\n")
        super().evaluate(**kwargs)
        print("\n< EVALUATING DONE <\n")

    def compute_loss(self, model, inputs, return_outputs=False):
        inputs["labels"] = inputs["labels"].float()
        labels = inputs.get("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        threshold = 0.5
        preds = (torch.sigmoid(logits) >= threshold).float()
        loss_kwargs = {}
        loss_fn = nn.BCEWithLogitsLoss(**loss_kwargs)
        loss = loss_fn(logits.view(-1, 1), labels.view(-1, 1))
        return (loss, outputs) if return_outputs == True else loss

def get_trainer(t_args, model, datasets,
                eval_ds="test", train_size=5000, eval_size=50):
    metric_fn = get_metric_fn()
    sample_size_train = train_size if train_size != "all" else len(datasets["train"])
    if eval_size != "all":
        sample_size_eval_val = eval_size
        sample_size_eval_test = sample_size_eval_val
    else:
        sample_size_eval_val = len(datasets["validation"])
        sample_size_eval_test = len(datasets["test"])
    eval_datasets = {}
    if eval_ds != None:
        eval_datasets = {"test": datasets["test"].shuffle(seed=SEED).select(range(sample_size_eval_test)),
                         "val": datasets["validation"].shuffle(seed=SEED).select(range(sample_size_eval_val)),
                         }
        print("get trainer > eval_datasets val ", eval_datasets["val"][0])
        print("get trainer > eval_datasets test", eval_datasets["test"][0])
        print("get trainer > eval_datasets val ", eval_datasets["val"].unique("labels"))
        print("get trainer > eval_datasets test", eval_datasets["test"].unique("labels"))
        if eval_ds == "val":
            ###del eval_datasets["test"] # this won't work!
            eval_datasets = eval_datasets["val"] # this will work!
        elif eval_ds == "test":
            del eval_datasets["val"] # this won't work!
            #eval_datasets = eval_datasets["test"] # this will work!
    trainer_constructor_kwargs = {
        "model": model,
        "args": t_args,
        "train_dataset": datasets["train"].shuffle(seed=SEED).select(range(sample_size_train)),
        "eval_dataset": eval_datasets,
        "compute_metrics": metric_fn
    }
    trainer = CustomTrainer(
        **trainer_constructor_kwargs,
    )
    return trainer

# training arguments for trainer
def get_trainer_args():
    output_dirs_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../")
    output_dir = "chkpts"
    t_args = hft.TrainingArguments(
        output_dir=os.path.join(output_dirs_root, output_dir),
        evaluation_strategy="steps",
        logging_strategy="steps",
        per_device_train_batch_size=4,
        per_device_eval_batch_size=4,
        gradient_accumulation_steps=1,
        eval_steps=1,
        logging_steps=1,
        num_train_epochs=1,
        learning_rate=1e-6,
        save_steps=1000,
        report_to="tensorboard",
    )
    return t_args

############## AUTOMATION
def run_experiment_(model=None, tokenizer=None, datasets=None):
    t_args = get_trainer_args()
    trainer = get_trainer(t_args, model, datasets)
    trainer.train()

def start_notebook():
    model, tokenizer = get_model()
    datasets = get_datasets()
    datasets = preprocess_datasets(tokenizer, datasets)
    return model, tokenizer, datasets

############## EXPERIMENTS
if __name__ == "__main__":
    print(torch.version.cuda)
    print(torch.cuda.is_available())
    model, tokenizer, datasets = start_notebook()
    run_experiment_(model=model, datasets=datasets)
```
The key is the `get_trainer` function, where the key argument is `eval_ds` (try "all"/"test"/"val").
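For context, `eval_dataset` can be either a single dataset or a dict of datasets; a condensed sketch of the two call shapes used above (the short dataset names are just illustrative):
```python
# Single dataset: metrics are expected to come back with the plain "eval_" prefix.
trainer = CustomTrainer(model=model, args=t_args,
                        train_dataset=train_ds, eval_dataset=val_ds)

# Dict of datasets: the Trainer is expected to evaluate each entry separately and add
# the key to the metric prefix, e.g. "eval_val_*" and "eval_test_*".
trainer = CustomTrainer(model=model, args=t_args,
                        train_dataset=train_ds,
                        eval_dataset={"val": val_ds, "test": test_ds})
```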
Please let me know in case of questions.<|||||>In your code sample, your overloaded `evaluate` function does not return anything, which is what causes the problem.
```py
def evaluate(self, **kwargs):
print("\n> EVALUATING >\n")
result = super().evaluate(**kwargs)
print("\n< EVALUATING DONE <\n")
return result
```
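For completeness, a minimal sketch of why the return value matters downstream (illustrative, using the standard `Trainer` helpers):
```py
metrics = trainer.evaluate()           # a dict such as {"eval_loss": ..., "eval_ACC_0": ...}
trainer.log_metrics("eval", metrics)   # both of these receive None if evaluate() does not return
trainer.save_metrics("eval", metrics)
```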
This fixed the issue on my side.<|||||>This works, thank you very much! It did not cross my mind that the results need to be returned at this level because they were still printed.
transformers | 24,961 | closed | RuntimeError: mat1 and mat2 shapes cannot be multiplied - Llama-2-13b-chat-hf - v4.31.0 | ### System Info
```
Name: transformers
Version: 4.31.0
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: /opt/conda/lib/python3.10/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by:
---
Name: bitsandbytes
Version: 0.40.2
Summary: k-bit optimizers and matrix multiplication routines.
Home-page: https://github.com/TimDettmers/bitsandbytes
Author: Tim Dettmers
Author-email: [email protected]
License: MIT
Location: /opt/conda/lib/python3.10/site-packages
Requires:
Required-by:
---
Name: accelerate
Version: 0.21.0
Summary: Accelerate
Home-page: https://github.com/huggingface/accelerate
Author: The HuggingFace team
Author-email: [email protected]
License: Apache
Location: /opt/conda/lib/python3.10/site-packages
Requires: numpy, packaging, psutil, pyyaml, torch
Required-by: catalyst
---
Name: torch
Version: 2.0.0
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /opt/conda/lib/python3.10/site-packages
Requires: filelock, jinja2, networkx, sympy, typing-extensions
Required-by: accelerate, catalyst, easyocr, fastai, kornia, pytorch-ignite, pytorch-lightning, timm, torchaudio, torchdata, torchmetrics, torchtext, torchvision
```
### Who can help?
@ArthurZucker @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
wget https://raw.githubusercontent.com/xhluca/llama-2-local-ui/main/requirements.txt
pip install -r requirements.txt -q
pip show transformers bitsandbytes accelerate torch
```
in python:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import textwrap
def format_prompt(history, message, system_prompt):
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
prompt = f"{B_INST} {B_SYS}{system_prompt}{E_SYS} "
for user_msg, asst_msg in history:
user_msg = str(user_msg).strip()
asst_msg = str(asst_msg).strip()
prompt += f"{user_msg} {E_INST} {asst_msg} </s><s> {B_INST} "
message = str(message).strip()
prompt += f"{message} {E_INST} "
return prompt
SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
SYSTEM_PROMPT = textwrap.dedent(SYSTEM_PROMPT).strip()
model_name = "meta-llama/Llama-2-13b-chat-hf"
model = LlamaForCausalLM.from_pretrained(
model_name, token=auth_token, load_in_4bit=True, device_map="auto"
).eval()
tokenizer = LlamaTokenizer.from_pretrained(model_name, token=auth_token)
prompt = format_prompt(history=[], message="What is a llama?", system_prompt=SYSTEM_PROMPT)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
max_gen_len = 4096
temperature = 0.6
top_p = 0.9
out = model.generate(
**inputs,
max_new_tokens=max_gen_len,
temperature=temperature,
top_p=top_p,
)
```
### Expected behavior
Should output something, instead getting this error:
```
/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1270: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
warnings.warn(
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 out = model.generate(
2 **inputs,
3 max_new_tokens=max_gen_len,
4 temperature=temperature,
5 top_p=top_p,
6 )
File /opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1538, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, **kwargs)
1532 raise ValueError(
1533 "num_return_sequences has to be 1 when doing greedy search, "
1534 f"but is {generation_config.num_return_sequences}."
1535 )
1537 # 11. run greedy search
-> 1538 return self.greedy_search(
1539 input_ids,
1540 logits_processor=logits_processor,
1541 stopping_criteria=stopping_criteria,
1542 pad_token_id=generation_config.pad_token_id,
1543 eos_token_id=generation_config.eos_token_id,
1544 output_scores=generation_config.output_scores,
1545 return_dict_in_generate=generation_config.return_dict_in_generate,
1546 synced_gpus=synced_gpus,
1547 streamer=streamer,
1548 **model_kwargs,
1549 )
1551 elif is_contrastive_search_gen_mode:
1552 if generation_config.num_return_sequences > 1:
File /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:2362, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2359 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2361 # forward pass to get next token
-> 2362 outputs = self(
2363 **model_inputs,
2364 return_dict=True,
2365 output_attentions=output_attentions,
2366 output_hidden_states=output_hidden_states,
2367 )
2369 if synced_gpus and this_peer_finished:
2370 continue # don't waste resources running the code we don't need
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:806, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
803 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
805 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 806 outputs = self.model(
807 input_ids=input_ids,
808 attention_mask=attention_mask,
809 position_ids=position_ids,
810 past_key_values=past_key_values,
811 inputs_embeds=inputs_embeds,
812 use_cache=use_cache,
813 output_attentions=output_attentions,
814 output_hidden_states=output_hidden_states,
815 return_dict=return_dict,
816 )
818 hidden_states = outputs[0]
819 if self.pretraining_tp > 1:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:693, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
685 layer_outputs = torch.utils.checkpoint.checkpoint(
686 create_custom_forward(decoder_layer),
687 hidden_states,
(...)
690 None,
691 )
692 else:
--> 693 layer_outputs = decoder_layer(
694 hidden_states,
695 attention_mask=attention_mask,
696 position_ids=position_ids,
697 past_key_value=past_key_value,
698 output_attentions=output_attentions,
699 use_cache=use_cache,
700 )
702 hidden_states = layer_outputs[0]
704 if use_cache:
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:408, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)
405 hidden_states = self.input_layernorm(hidden_states)
407 # Self Attention
--> 408 hidden_states, self_attn_weights, present_key_value = self.self_attn(
409 hidden_states=hidden_states,
410 attention_mask=attention_mask,
411 position_ids=position_ids,
412 past_key_value=past_key_value,
413 output_attentions=output_attentions,
414 use_cache=use_cache,
415 )
416 hidden_states = residual + hidden_states
418 # Fully Connected
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
163 output = old_forward(*args, **kwargs)
164 else:
--> 165 output = old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:295, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)
292 key_slices = self.k_proj.weight.split(key_value_slicing, dim=0)
293 value_slices = self.v_proj.weight.split(key_value_slicing, dim=0)
--> 295 query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)]
296 query_states = torch.cat(query_states, dim=-1)
298 key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.pretraining_tp)]
File /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:295, in <listcomp>(.0)
292 key_slices = self.k_proj.weight.split(key_value_slicing, dim=0)
293 value_slices = self.v_proj.weight.split(key_value_slicing, dim=0)
--> 295 query_states = [F.linear(hidden_states, query_slices[i]) for i in range(self.pretraining_tp)]
296 query_states = torch.cat(query_states, dim=-1)
298 key_states = [F.linear(hidden_states, key_slices[i]) for i in range(self.pretraining_tp)]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (145x5120 and 1x2560)
``` | 07-20-2023 20:00:49 | 07-20-2023 20:00:49 | Demo on kaggle: https://www.kaggle.com/xhlulu/error-transformers-v4-31-0-llama-2-13b-chat-hf<|||||>Thanks for reporting! We'll take a look cc @ArthurZucker <|||||>@ArthurZucker The corresponding model still has `pretraining_tp=2`, this is why there is this issue.
@xhluca While waiting for the model to be fixed, you can add `pretraining_tp=1` in your call to `LlamaForCausalLM.from_pretrained`, this should fix the issue.<|||||>@sgugger this solution worked!<|||||>Yes! Fixed in https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/commit/3a989db99aa8d9ef4cfd55f87521fc4c04891d3d ! This one slipped through the cracks<|||||>What is the best way to approach this for the inference space? We dont have this issue on 4.30.1 but upgrading to 4.31 it begins causing issues on for example the Puffin model. Is it safe to always override this like the workaround suggests for any model or should I consider the models this happens on broken / will HF fix this in an update?<|||||>You should just set `pretraining_tp = 1` for every model that you are using, it is totally safe!
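For reference, a minimal sketch of that override, reusing the arguments from the reproduction above (`auth_token` is your own access token, as in the original snippet):
```python
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    token=auth_token,
    load_in_4bit=True,
    device_map="auto",
    pretraining_tp=1,  # override the value stored in the checkpoint's config.json
)
```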
The fix should probably come from the `bitsandbytes` library; an issue was opened here. <|||||>Good to know since I will ship that to thousands of users in production who can run any HF model in the future. So I wanted to be sure future models won't be broken. I'll begin shipping this workaround along with transformers 4.31. |
transformers | 24,960 | closed | Avoid importing all models when instantiating a pipeline | # What does this PR do?
The check on the model being in the right mapping for the pipeline requires the import of all models associated with that pipeline. This is because it uses the `MODEL_XXX_MAPPING` instead of the `MODEL_XXX_MAPPING_NAMES`. This PR suggests to switch to that, since the check is done on the model name anyway, to avoid weird warnings unrelevant to the user (for instance #24903 gives one example, I also end up having some logs of CUDA kernel being built on my side).
The only feature of the `MODEL_XXX_MAPPING` we need to keep compared to `MODEL_XXX_MAPPING_NAMES` is the extra content (coming from model on the Hub), so this PR adds a reference from the mapping names to the mapping.
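Roughly, the idea is to perform the membership test on the class-name mapping rather than on the mapping that holds the imported classes; an illustrative sketch (not the exact diff):
```python
from transformers.models.auto.modeling_auto import (
    MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES,
)

# The names mapping only stores strings (model_type -> class name),
# so checking membership does not import any model code.
supported = "distilbert" in MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES
```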
Fixes #24903 | 07-20-2023 19:54:10 | 07-20-2023 19:54:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It works too, but I think with the design selected for the PEFT integration in `from_pretrained` (which will supercede #24827), we won't need to remove them. I think it's a useful warning to have when users try to instantiate a pipeline with a model not suitable for it. |
transformers | 24,959 | open | Whisper not returning last phrase from audio >25s | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.4.226-129.415.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, don't know why pytorch was not detected as it is using it for sure
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Whisper consistently does not return the last phrase in a transcription for clips >25s.
"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3" is a 30s clip
"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed7.mp3" is the first half of above clip, 15s.
"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed8.mp3" is the second half of above clip, 15s.
```
import whisper
import torch
import tempfile
import requests
import os
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer
from datasets import load_dataset, Dataset, Audio
VERSION = "medium"
def download_file(url):
""" Download a file frome a url and save it to a named temporary file """
response = requests.get(url)
temp = tempfile.NamedTemporaryFile(delete=True, dir=os.getcwd(), mode='w+b')
temp.write(response.content)
temp.seek(0)
return temp
urls = ["https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3",
"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed7.mp3",
"https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed8.mp3"]
processor = WhisperProcessor.from_pretrained(f"openai/whisper-{VERSION}")
generator = WhisperForConditionalGeneration.from_pretrained(f"openai/whisper-{VERSION}").to("cuda")
generator.eval()
tokenizer = WhisperTokenizer.from_pretrained(f"openai/whisper-{VERSION}")
for url in urls:
with download_file(url) as f:
audio = whisper.load_audio(f.name)
processed = processor(audio, sampling_rate=16000, return_tensors="pt", return_attention_mask=True).to("cuda")
generated = generator.generate(processed.input_features, attention_mask=processed.attention_mask, return_timestamps=True, output_scores=True, return_dict_in_generate=True, temperature=0, repetition_penalty=8.0, num_beams=2)
predicted_ids = generated[0].to("cpu")
batch_transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False, decode_with_timestamps=True, output_offsets=True)
print(batch_transcription)
```
[{'text': '<|startoftranscript|><|en|><|transcribe|><|0.00|> A Popular Personage at Home by Thomas Hardy.<|12.36|><|12.36|> I live here, Wessex is my nameโI am a dog known rather well.<|19.12|><|19.12|> I guard the house, but how that climb to be my whim I cannot tell.<|25.40|><|25.40|><|endoftext|>', 'offsets': [{'text': ' A Popular Personage at Home by Thomas Hardy.', 'timestamp': (0.0, 12.36)}, {'text': ' I live here, Wessex is my nameโI am a dog known rather well.', 'timestamp': (12.36, 19.12)}, {'text': ' I guard the house, but how that climb to be my whim I cannot tell.', 'timestamp': (19.12, 25.400000000000002)}]}]
[{'text': '<|startoftranscript|><|en|><|transcribe|><|0.00|> A Popular Personage at Home by Thomas Hardy.<|4.76|><|4.76|> Read for LibriVox.org by Anita Hibbard, January 23rd, 2023.<|11.84|><|11.84|> I live here.<|14.04|><|14.04|> Essex is my name.<|15.04|><|endoftext|>', 'offsets': [{'text': ' A Popular Personage at Home by Thomas Hardy.', 'timestamp': (0.0, 4.76)}, {'text': ' Read for LibriVox.org by Anita Hibbard, January 23rd, 2023.', 'timestamp': (4.76, 11.84)}, {'text': ' I live here.', 'timestamp': (11.84, 14.040000000000001)}, {'text': ' Essex is my name.', 'timestamp': (14.040000000000001, 15.040000000000001)}]}]
[{'text': '<|startoftranscript|><|en|><|transcribe|><|0.00|> name. I am a dog known rather well. I guard the house, but how that climb to be my whim<|7.76|>**<|7.76|> I cannot tell. With a leap and a heart elate I go at the end of an hour\'s expectancy."<|15.04|><|endoftext|>**', 'offsets': [{'text': ' name. I am a dog known rather well. I guard the house, but how that climb to be my whim', 'timestamp': (0.0, 7.76)}, {'text': ' I cannot tell. With a leap and a heart elate I go at the end of an hour\'s expectancy."', 'timestamp': (7.76, 15.040000000000001)}]}]
Observe that the last phrase is caught by the second half 15s clip but not the 30s one.
This behavior is also observed in pipeline. (The first half 15s clip is repeating oddly without the beam search but that is a different issue).
```
from transformers import pipeline
pipe = pipeline(task="automatic-speech-recognition", model="openai/whisper-medium")
for url in urls:
print(pipe(url, return_timestamps=True))
```
{'text': ' A Popular Personage at Home by Thomas Hardy. I live here. Wessex is my name. I am a dog known rather well. I guard the house, but how that climb to be my whim I cannot tell.', 'chunks': [{'timestamp': (0.0, 12.36), 'text': ' A Popular Personage at Home by Thomas Hardy.'}, {'timestamp': (12.36, 14.08), 'text': ' I live here.'}, {'timestamp': (14.08, 16.2), 'text': ' Wessex is my name.'}, {'timestamp': (16.2, 19.12), 'text': ' I am a dog known rather well.'}, {'timestamp': (19.12, 25.4), 'text': ' I guard the house, but how that climb to be my whim I cannot tell.'}]}
{'text': ' A Popular Personage at Home by Thomas Hardy. Read for LibriVox.org by Anita Hibbard. January 23, 2023. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name. I live here. Essex is my name.', 'chunks': [{'timestamp': (0.0, 4.76), 'text': ' A Popular Personage at Home by Thomas Hardy.'}, {'timestamp': (4.76, 8.68), 'text': ' Read for LibriVox.org by Anita Hibbard.'}, {'timestamp': (8.68, 11.24), 'text': ' January 23, 2023.'}, {'timestamp': (11.24, 14.04), 'text': ' I live here.'}, {'timestamp': (14.04, 15.04), 'text': ' Essex is my name.'}, {'timestamp': (15.04, 16.04), 'text': ' I live here.'}, {'timestamp': (16.04, 17.04), 'text': ' Essex is my name.'}, {'timestamp': (17.04, 18.04), 'text': ' I live here.'}, {'timestamp': (18.04, 19.04), 'text': ' Essex is my name.'}, {'timestamp': (19.04, 20.04), 'text': ' I live here.'}, {'timestamp': (20.04, 21.04), 'text': ' Essex is my name.'}, {'timestamp': (21.04, 22.04), 'text': ' I live here.'}, {'timestamp': (22.04, 23.04), 'text': ' Essex is my name.'}, {'timestamp': (23.04, 24.04), 'text': ' I live here.'}, {'timestamp': (24.04, 25.04), 'text': ' Essex is my name.'}, {'timestamp': (25.04, 26.04), 'text': ' I live here.'}, {'timestamp': (26.04, 27.04), 'text': ' Essex is my name.'}, {'timestamp': (27.04, 28.04), 'text': ' I live here.'}, {'timestamp': (28.04, 29.04), 'text': ' Essex is my name.'}]}
{'text': " name. I am a dog known rather well. I guard the house, but how that climb to be my whim I cannot tell. With a leap and a heart elate I go, at the end of an hour's expectancy.", 'chunks': [{'timestamp': (0.0, 7.76), 'text': ' name. I am a dog known rather well. I guard the house, but how that climb to be my whim'}, {'timestamp': (7.76, 15.04), 'text': " I cannot tell. With a leap and a heart elate I go, at the end of an hour's expectancy."}]}
None of the configuration settings I change seem to affect this, including setting early_stopping to "never". This also seems to be a different issue to https://github.com/huggingface/transformers/issues/23231 as the last segment does not show up in the regular decoding output or the offsets.
### Expected behavior
The last segment should be returned for 30s clips. A workaround would be splitting >25s clips in half but that is not so ideal since it impacts both performance and accuracy. | 07-20-2023 17:36:26 | 07-20-2023 17:36:26 | Thanks for the examples - they suggest that the model is predicting the EOS token too early within the 30s segment. Having listened to the audio, I think this is because the last sentence does not finish completely within the 30s window, so the Whisper model decides to ignore it.
What we can do is feed a larger context length to the model, so it can handle audios of arbitrary length. The last sentence now finishes within a context window, so Whisper now predicts the EOS token after this sentence. Let's also use the English variant of the medium checkpoint if working with the English language (better performance vs the multilingual one for English):
```python
from transformers import pipeline
import torch
url = "https://github.com/pli66/test/raw/main/personage_hardy_ah_64kbTrimmed0.mp3"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipe = pipeline(task="automatic-speech-recognition", model="openai/whisper-medium.en", device=device)
print(pipe(url, return_timestamps=True, chunk_length_s=30.0))
``` |
transformers | 24,958 | closed | [`LlamaConfig`] Nit: pad token should be None by default | # What does this PR do?
Does not set the padding token, since `0` is the `unk_token`. Though Llama has byte-fallback support, meaning that most of the time the unknown token will not come from the input, the model can still generate the `<unk>` token (probably very rarely). The pad token is used in the embedding, and the unk embedding is not None.
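For illustration, a minimal sketch of what users then have to do explicitly (the checkpoint name and the choice of reusing the EOS token as padding are assumptions, not part of this PR):
```python
from transformers import AutoTokenizer, LlamaForSequenceClassification

checkpoint = "meta-llama/Llama-2-7b-hf"  # any Llama checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = LlamaForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# pad_token_id is no longer set by default, so define padding explicitly:
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```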
This should warn users who are doing `SequenceClassification` that the padding token is not set. | 07-20-2023 17:04:29 | 07-20-2023 17:04:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,957 | open | 🌐 [i18n-KO] Translated `add_new_model.md` to Korean | <!-- Please make the title of your PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `add_new_model.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Once all of the checks above are complete, please mention the team members below whom you would like to request a review from! -->
<!-- May you please review this PR? @member1 @member2 ... -->
May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @mjk0618, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
<!-- 2. Only after the review with your team members is finished, reveal the comment below that requests a review from the Hugging Face staff! --> | 07-20-2023 16:58:16 | 07-20-2023 16:58:16 | Hello @sgugger , may you please approve the workflow for this PR?
I ran doc-builder locally, and it looks OK. We just want to see the live-preview to double check.
Thank you so much for your support.
Hope you have a great day! <|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24957). All of your documentation changes will be reflected on that endpoint.
transformers | 24,956 | closed | Change logic for logging in the examples | # What does this PR do?
Changes logic to check if we are doing distributed training by reading the `parallel_mode` instead, since `local_rank` will always (for the most part) be `>-1` with Accelerate now
Fixes # (issue)
Solves https://github.com/huggingface/transformers/issues/24924
Verified that it now shows as "distributed" when using multi-gpu, and `False` if not.
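For reference, the kind of check this moves to (illustrative, not the exact diff):
```python
from transformers.training_args import ParallelMode

# `training_args` and `logger` come from the example script itself.
is_distributed = training_args.parallel_mode == ParallelMode.DISTRIBUTED
logger.warning(f"Distributed training: {is_distributed}")
```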
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 07-20-2023 16:17:34 | 07-20-2023 16:17:34 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24956). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,955 | closed | [`RWKV`] Add Gradient Checkpointing support for RWKV | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/24831
As stated by the issue above, technically there is no reason to not support GC for RWKV model
cc @sgugger
| 07-20-2023 15:44:44 | 07-20-2023 15:44:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This snippet worked on my end:
```python
import torch
from transformers import RwkvForCausalLM
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile").to(0)
model.train()
model.gradient_checkpointing_enable()
dummy_input = torch.LongTensor([[1, 2, 3, 4]]).to(0)
outputs = model(dummy_input)
logits = outputs.logits
loss = logits.mean()
loss.backward()
```
I will assume things are working (+ the CI should be triggered once the model has `supports_gradient_checkpointing = True`) , merging! |
transformers | 24,954 | closed | Bump aiohttp from 3.8.1 to 3.8.5 in /examples/research_projects/decision_transformer | Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.1 to 3.8.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/releases">aiohttp's releases</a>.</em></p>
<blockquote>
<h2>3.8.5</h2>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>)</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>)</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p>(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/3355">#3355</a>)</p>
</li>
</ul>
<hr />
<h2>3.8.4</h2>
<h2>Bugfixes</h2>
<ul>
<li>Fixed incorrectly overwriting cookies with the same name and domain, but different path.
(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/6638">#6638</a>)</li>
<li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments.
(<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7180">#7180</a>)</li>
</ul>
<hr />
<h2>3.8.3</h2>
<p>.. attention::</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/aio-libs/aiohttp/blob/v3.8.5/CHANGES.rst">aiohttp's changelog</a>.</em></p>
<blockquote>
<h1>3.8.5 (2023-07-19)</h1>
<h2>Security bugfixes</h2>
<ul>
<li>
<p>Upgraded the vendored copy of llhttp_ to v8.1.1 -- by :user:<code>webknjaz</code>
and :user:<code>Dreamsorcerer</code>.</p>
<p>Thanks to :user:<code>sethmlarson</code> for reporting this and providing us with
comprehensive reproducer, workarounds and fixing details! For more
information, see
<a href="https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w">https://github.com/aio-libs/aiohttp/security/advisories/GHSA-45c4-8wx5-qw6w</a>.</p>
<p>.. _llhttp: <a href="https://llhttp.org">https://llhttp.org</a></p>
<p><code>[#7346](https://github.com/aio-libs/aiohttp/issues/7346) <https://github.com/aio-libs/aiohttp/issues/7346></code>_</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>
<p>Added information to C parser exceptions to show which character caused the error. -- by :user:<code>Dreamsorcerer</code></p>
<p><code>[#7366](https://github.com/aio-libs/aiohttp/issues/7366) <https://github.com/aio-libs/aiohttp/issues/7366></code>_</p>
</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>
<p>Fixed a transport is :data:<code>None</code> error -- by :user:<code>Dreamsorcerer</code>.</p>
<p><code>[#3355](https://github.com/aio-libs/aiohttp/issues/3355) <https://github.com/aio-libs/aiohttp/issues/3355></code>_</p>
</li>
</ul>
<hr />
<h1>3.8.4 (2023-02-12)</h1>
<h2>Bugfixes</h2>
<ul>
<li>Fixed incorrectly overwriting cookies with the same name and domain, but different path.
<code>[#6638](https://github.com/aio-libs/aiohttp/issues/6638) <https://github.com/aio-libs/aiohttp/issues/6638></code>_</li>
<li>Fixed <code>ConnectionResetError</code> not being raised after client disconnection in SSL environments.
<code>[#7180](https://github.com/aio-libs/aiohttp/issues/7180) <https://github.com/aio-libs/aiohttp/issues/7180></code>_</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9c13a52c21c23dfdb49ed89418d28a5b116d0681"><code>9c13a52</code></a> Bump aiohttp to v3.8.5 a security release</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/7c02129567bc4ec59be467b70fc937c82920948c"><code>7c02129</code></a> ๏ฃ Bump pypa/cibuildwheel to v2.14.1</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/135a45e9d655d56e4ebad78abe84f1cb7b5c62dc"><code>135a45e</code></a> Improve error messages from C parser (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7366">#7366</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7380">#7380</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/9337fb3f2ab2b5f38d7e98a194bde6f7e3d16c40"><code>9337fb3</code></a> Fix bump llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7367">#7367</a>) (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7377">#7377</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/f07e9b44b5cb909054a697c8dd447b30dbf8073e"><code>f07e9b4</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7373">#7373</a>/66e261a5 backport][3.8] Drop azure mention (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7374">#7374</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/01d9b70e5477cd746561b52225992d8a2ebde953"><code>01d9b70</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7370">#7370</a>/22c264ce backport][3.8] fix: Spelling error fixed (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7371">#7371</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/3577b1e3719d4648fa973dbdec927f78f9df34dd"><code>3577b1e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7359">#7359</a>/7911f1e9 backport][3.8] ๏ฃ Set up secretless publishing to PyPI (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7360">#7360</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/8d45f9c99511cd80140d6658bd9c11002c697f1c"><code>8d45f9c</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7333">#7333</a>/3a54d378 backport][3.8] Fix TLS transport is <code>None</code> error (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7357">#7357</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/dd8e24e77351df9c0f029be49d3c6d7862706e79"><code>dd8e24e</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7343">#7343</a>/18057581 backport][3.8] Mention encoding in <code>yarl.URL</code> (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7355">#7355</a>)</li>
<li><a href="https://github.com/aio-libs/aiohttp/commit/40874103ebfaa1007d47c25ecc4288af873a07cf"><code>4087410</code></a> [PR <a href="https://redirect.github.com/aio-libs/aiohttp/issues/7346">#7346</a>/346fd202 backport][3.8] ๏ฃ Bump vendored llhttp to v8.1.1 (<a href="https://redirect.github.com/aio-libs/aiohttp/issues/7352">#7352</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/aio-libs/aiohttp/compare/v3.8.1...v3.8.5">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=aiohttp&package-manager=pip&previous-version=3.8.1&new-version=3.8.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-20-2023 15:33:06 | 07-20-2023 15:33:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,953 | closed | 🌐 [i18n-KO] Translated `add_tensorflow_model.md` to Korean | <!-- Please make the title of your PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `add_tensorflow_model.md` file of the documentation to Korean.
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- This leaves a record in the main issue! If you were only practicing with the PseudoLab repo, please remove this. :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Only after all of the checks above are complete, reveal the comment below that requests a review from the PseudoLab team members! -->
<!-- Team PseudoLab, may you please review this PR? -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with the PseudoLab team members is finished, reveal the comment below that requests a review from the Hugging Face staff! -->
<!-- May you please review this PR? --> | 07-20-2023 14:40:38 | 07-20-2023 14:40:38 | Please stop opening and closing the same PR like this. |
transformers | 24,952 | open | Add Text-To-Speech pipeline | # What does this PR do?
Until recently, there was only one TTS model in Transformers. Recent ([Bark](https://huggingface.co/docs/transformers/model_doc/bark)) and future ([FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439)) additions have enriched, and will further enrich, the set of TTS models in Transformers.
This may be the best time to add a text-to-speech pipeline to Transformers.
This PR tentatively proposes:
- The addition of a text-to-speech pipeline whose design could be modified in line with future TTS additions.
- Add a class AutoModelForTextToSpeech
- Add a `processor` task to the pipeline code to facilitate use of the `processor`.
My conception of the architecture for now:
- Backward compatibility with [FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439), retaining the ability to use its hacked `generate_speech` method.
- Future compatibility with future TTS models, counting on the fact that these models will use a `generate` method to generate audio.
- Possible compatibility with other TTA (text-to-audio) models such as [MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen).
What I'm counting on:
- future models should have a `generate` method, even if they are not AR models per se (for the moment, [FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439) is not AR and has no such method) or counts on an additional head model ([FastSpeechConformer2](https://github.com/huggingface/transformers/pull/23439) needs a vocoder on top to pass from a spectrogram to an audio - see [discussion here](https://github.com/huggingface/transformers/pull/23439#discussion_r1258660411)).
- future models will use a `Processor` even if they only use a tokenizer, to allow easy use of other conditional inputs such as audio or speaker embeddings. And the processor must be added to `PROCESSOR_MAPPING` (not the case of [MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen) atm).
I'm open to further discuss the architecture and to make some changes!
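For illustration, a rough sketch of the user-facing call this would enable (the checkpoint name and the exact output format are assumptions, not a settled API):
```python
from transformers import pipeline

tts = pipeline("text-to-speech", model="suno/bark-small")
output = tts("Hello, my dog is cute")
# Expected to contain the generated waveform and its sampling rate, e.g.
# {"audio": <np.ndarray>, "sampling_rate": 24000}
```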
Fixes #22487
*Note:* I was inspired by @LysandreJik draft of a [TTS pipeline](https://huggingface.co/lysandre/text-to-speech-pipeline/blob/main/tts.py).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [LINK](https://github.com/huggingface/transformers/issues/22487#issuecomment-1496312713)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Hey @sanchit-gandhi and @Narsil, I think you're the right people to talk to before the core review!
| 07-20-2023 14:15:43 | 07-20-2023 14:15:43 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24952). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @Narsil , thanks for your fast review!
Basically, I will refactor my code to meet your expectations !
There are still 2 things I'd like to discuss before and that I talked about in the comments:
1. **`speechT5` specific case:** `speechT5` was introduced 5 months ago, and has two issues - it uses a `.generate_speech` method instead of a `.generate`, and it needs an additional vocoder on top in order to actually produce audio signals instead of a spectrogram. What's the best way to stay consistent with it and with the pipeline logic? Should I still introduce model specific code or should I work on modifying `speechT5` instead? Modying `speechT5` might be problematic since it was the first TTS model introduced so users might be used to its API and because it would leave `BarkModel` has the only TTS model supported in the pipeline for a short time
2. **`speaker_embeddings`** and other `Processor`-related utilities: how to stay consistent with the library and continue to use some of the benefits of the Processor or continue to use speaker embeddings in an easy way? I fear that it might add unnecessary difficulties for the users to forward `speaker_embeddings` arguments, WDYT?
Anyways, many thanks again for the review!
<|||||>> Hi @Narsil , thanks for your fast review! Basically, I will refactor my code to meet your expectations ! There are still 2 things I'd like to discuss before and that I talked about in the comments:
>
> 1. **`speechT5` specific case:** `speechT5` was introduced 5 months ago, and has two issues - it uses a `.generate_speech` method instead of a `.generate`, and it needs an additional vocoder on top in order to actually produce audio signals instead of a spectrogram. What's the best way to stay consistent with it and with the pipeline logic? Should I still introduce model specific code or should I work on modifying `speechT5` instead? Modying `speechT5` might be problematic since it was the first TTS model introduced so users might be used to its API and because it would leave `BarkModel` has the only TTS model supported in the pipeline for a short time
I will let maintainers focusing on audio answer to that @sanchit-gandhi I think.
But what I do know is that not relying on invariants within `transformers` makes pipelines play the never ending game of `catch-up` for every model thrown into the mix. pipelines see `AutoModelFor` which should have consistent API which we can rely on.
I remember talks about splitting `generate` and `generate_speech` to allow differentiation between the 2.
For the `vocoder`, I don't know how, but it should be invisible to users.
In ASR we've had ngram being added to the configuration for instance, which makes it loadable automatically.
>
> 2. **`speaker_embeddings`** and other `Processor`-related utilities: how to stay consistent with the library and continue to use some of the benefits of the Processor or continue to use speaker embeddings in an easy way? I fear that it might add unnecessary difficulties for the users to forward `speaker_embeddings` arguments, WDYT?
Again, there might already be solutions.
But loading from a random dataset some random data within `preprocess` is not really sustainable.
My suggestion to put this in user code alleviates that constraint.
But in general having speaker_embedding for TTS should always be purely optional imo.
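Concretely, a hedged sketch of what keeping this in user code could look like (`tts` is the pipeline instance, and both the dataset and the `forward_params` argument name are just examples, not a settled API):
```python
import torch
from datasets import load_dataset

# The caller loads the conditioning input explicitly...
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

# ...and forwards it with the call, instead of `preprocess` fetching it implicitly.
audio = tts("Hello world", forward_params={"speaker_embeddings": speaker_embedding})
```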
>
>
> Anyways, many thanks again for the review!
<|||||>Thanks @Narsil , I will wait for @sanchit-gandhi opinion on it then!
What about [this comment](https://github.com/huggingface/transformers/pull/24952#discussion_r1270368324) ?<|||||>I haven't generated the tiny models for `bark`. I will do it today ๐ (I can't guarantee it will be able to be generated smoothly - usually they should be already on the Hub, and if not, it means the creation process has some issue for this model)<|||||>Hi @ylacombe . There are a few issues that blocks the creation of tiny model (of `bark` for pipeline testing).
The first one is `tests/models/bark/test_modeling_bark.py` has no `BarkModelTest` and `BarkModelTester`. Only the component models (fine, coarse, semantic).
Are those component models also used as standalone models? Or they are really just components for `BarkModel` and we expect the users to use `BarkModel` rather than those components?
More importantly, for the pipeline implemented in this PR, which model types do it needs.
Thanks in advance.<|||||>Hi @ydshieh, thanks for your help on the matter!
There's only `BarkModelIntegrationTests` for now. The other sub-models are used as components for `BarkModel`. Users are expected to use `BarkModel`, which will ultimately be used in the pipeline.
Let me know if I can help you with anything!<|||||>> Users are expected to use BarkModel
In this case, it means we need to create a tiny model for `BarkModel` or models with head (if any) on top of it. This implies we need a `BarkModelTest` and `BarkModelTester`. For the creation, we don't really need to implement test methods in `BarkModelTest`, but there should be later (I would not say it's urgent though).
I will open a PR to quickly add something necessary so we can create tiny model for `bark`. I will ping you for a review so you know what's necessary (in the future for new models you will add ๐ค ).
<|||||>LGTM ! Thanks for the rehaul <|||||>Hey @ylacombe
If you don't mind, let's have the tiny model ready and try with it first before merge.
I am just able to create it.<|||||>Just uploaded
https://huggingface.co/hf-internal-testing/tiny-random-BarkModel/
But I haven't tried it with the tests implemented in this PR. |
transformers | 24,951 | closed | T5 Tokenizer Legacy behaviour warning | ### System Info
I am running on Google Collab (Python 3.10.6) using tokenizers-0.13.3, transformers-4.31.0, huggingface-hub-0.16.4 ,safetensors-0.3.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm getting a legacy behaviour warning come up when simply loading a T5 tokenizer - it appears even before using the tokenizer. Is there an updated way to load the tokenizer? The warning appears when running the following lines of code:
```python
from transformers import AutoTokenizer

tokeniser = AutoTokenizer.from_pretrained("google/mt5-small")
```
The warning is:
_You are using the legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
/usr/local/lib/python3.10/dist-packages/transformers/convert_slow_tokenizer.py:470: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(_
### Expected behavior
No warning | 07-20-2023 13:56:34 | 07-20-2023 13:56:34 | Hey! Will just answer for potentially confused users:
- the warning is triggered if `legacy=True` which is the default for backward compatibility
- to use the latest behaviour, use `tokeniser = AutoTokenizer.from_pretrained("google/mt5-small", legacy=False)`
<|||||>It might be nice to add the fix to the error message -- it's a bit hard to find :) |
transformers | 24,950 | closed | Update processing_vision_text_dual_encoder.py |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes: small typo: kwrags -> kwargs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-20-2023 11:51:52 | 07-20-2023 11:51:52 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24950). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,949 | closed | Bump pygments from 2.11.2 to 2.15.0 in /examples/research_projects/decision_transformer | Bumps [pygments](https://github.com/pygments/pygments) from 2.11.2 to 2.15.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pygments/pygments/releases">pygments's releases</a>.</em></p>
<blockquote>
<h2>2.15.0</h2>
<ul>
<li>
<p>Added lexers:</p>
<ul>
<li>Carbon (<a href="https://redirect.github.com/pygments/pygments/issues/2362">#2362</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2365">#2365</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2366">#2366</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2367">#2367</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2368">#2368</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2369">#2369</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2370">#2370</a>)</li>
<li>Dax (<a href="https://redirect.github.com/pygments/pygments/issues/2335">#2335</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2345">#2345</a>)</li>
<li>MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>, <a href="https://redirect.github.com/pygments/pygments/issues/827">#827</a>)</li>
<li>PostgreSQL Explain (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li>
<li>WGSL (WebGPU Shading Language) (<a href="https://redirect.github.com/pygments/pygments/issues/2386">#2386</a>)</li>
<li>X++ (<a href="https://redirect.github.com/pygments/pygments/issues/2339">#2339</a>)</li>
</ul>
</li>
<li>
<p>Updated lexers:</p>
<ul>
<li>
<p>AMDGPU: Add support for <code>scratch_</code> instructions, the <code>attr*.*</code> argument,
as well as the <code>off</code> modifier (<a href="https://redirect.github.com/pygments/pygments/issues/2327">#2327</a>).</p>
</li>
<li>
<p>APDL: Miscellaneous improvements (<a href="https://redirect.github.com/pygments/pygments/issues/2314">#2314</a>)</p>
</li>
<li>
<p>bash/tcsh:</p>
<ul>
<li>Move <code>break</code> to keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2377">#2377</a>)</li>
<li>Improve bash math expansion lexing (<a href="https://redirect.github.com/pygments/pygments/issues/2255">#2255</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2353">#2353</a>)</li>
</ul>
</li>
<li>
<p>Chapel: Support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2376">#2376</a>)</p>
</li>
<li>
<p>CMake: Implement bracket style comments (<a href="https://redirect.github.com/pygments/pygments/issues/2338">#2338</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2354">#2354</a>)</p>
</li>
<li>
<p>CSS: Improve lexing of numbers inside function calls (<a href="https://redirect.github.com/pygments/pygments/issues/2382">#2382</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2383">#2383</a>)</p>
</li>
<li>
<p>diff: Support normal diff syntax, as opposed to unified diff syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2321">#2321</a>)</p>
</li>
<li>
<p>GLSL, HLSL:</p>
<ul>
<li>Support line continuations in preprocessor code (<a href="https://redirect.github.com/pygments/pygments/issues/2350">#2350</a>)</li>
<li>Improve preprocessor directive handling (<a href="https://redirect.github.com/pygments/pygments/issues/2357">#2357</a>)</li>
</ul>
</li>
<li>
<p>LilyPond: minor update of builtins</p>
</li>
<li>
<p>PHP: support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2055">#2055</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2347">#2347</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2360">#2360</a>), fix anonymous classes without
parameters (<a href="https://redirect.github.com/pygments/pygments/issues/2359">#2359</a>), improve lexing of variable variable syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2358">#2358</a>)</p>
</li>
<li>
<p>Python:</p>
<ul>
<li>Add missing builtins (<a href="https://redirect.github.com/pygments/pygments/issues/2334">#2334</a>)</li>
<li>Fix inconsistent lexing of <code>None</code> (<a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a>)</li>
</ul>
</li>
<li>
<p>Rebol/Red: Don't require script headers (<a href="https://redirect.github.com/pygments/pygments/issues/2348">#2348</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2349">#2349</a>)</p>
</li>
<li>
<p>Spice: Update keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2336">#2336</a>)</p>
</li>
<li>
<p>SQL+Jinja (<code>analyse_text</code> method): Fix catastrophic backtracking (<a href="https://redirect.github.com/pygments/pygments/issues/2355">#2355</a>)</p>
</li>
<li>
<p>Terraform: Add <code>hcl</code> alias (<a href="https://redirect.github.com/pygments/pygments/issues/2375">#2375</a>)</p>
</li>
</ul>
</li>
<li>
<p>Declare support for Python 3.11 and drop support for Python 3.6 (<a href="https://redirect.github.com/pygments/pygments/issues/2324">#2324</a>).</p>
</li>
<li>
<p>Update <code>native</code> style to improve contrast (<a href="https://redirect.github.com/pygments/pygments/issues/2325">#2325</a>).</p>
</li>
<li>
<p>Update <code>github-dark</code> style to match latest Primer style (<a href="https://redirect.github.com/pygments/pygments/issues/2401">#2401</a>)</p>
</li>
<li>
<p>Revert a change that made guessing lexers based on file names slower
on Python 3.10 and older (<a href="https://redirect.github.com/pygments/pygments/issues/2328">#2328</a>).</p>
</li>
<li>
<p>Fix some places where a locale-dependent encoding could unintentionally
be used instead of UTF-8 (<a href="https://redirect.github.com/pygments/pygments/issues/2326">#2326</a>).</p>
</li>
<li>
<p>Fix Python traceback handling (<a href="https://redirect.github.com/pygments/pygments/issues/2226">#2226</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2329">#2329</a>).</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pygments/pygments/blob/master/CHANGES">pygments's changelog</a>.</em></p>
<blockquote>
<h2>Version 2.15.0</h2>
<p>(released April 10th, 2023)</p>
<ul>
<li>
<p>Added lexers:</p>
<ul>
<li>Carbon (<a href="https://redirect.github.com/pygments/pygments/issues/2362">#2362</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2365">#2365</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2366">#2366</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2367">#2367</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2368">#2368</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2369">#2369</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2370">#2370</a>)</li>
<li>Dax (<a href="https://redirect.github.com/pygments/pygments/issues/2335">#2335</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2345">#2345</a>)</li>
<li>MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>, <a href="https://redirect.github.com/pygments/pygments/issues/827">#827</a>)</li>
<li>PostgreSQL Explain (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li>
<li>WGSL (WebGPU Shading Language) (<a href="https://redirect.github.com/pygments/pygments/issues/2386">#2386</a>)</li>
<li>X++ (<a href="https://redirect.github.com/pygments/pygments/issues/2339">#2339</a>)</li>
</ul>
</li>
<li>
<p>Updated lexers:</p>
<ul>
<li>
<p>AMDGPU: Add support for <code>scratch_</code> instructions, the <code>attr*.*</code> argument,
as well as the <code>off</code> modifier (<a href="https://redirect.github.com/pygments/pygments/issues/2327">#2327</a>).</p>
</li>
<li>
<p>APDL: Miscellaneous improvements (<a href="https://redirect.github.com/pygments/pygments/issues/2314">#2314</a>)</p>
</li>
<li>
<p>bash/tcsh:</p>
<ul>
<li>Move <code>break</code> to keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2377">#2377</a>)</li>
<li>Improve bash math expansion lexing (<a href="https://redirect.github.com/pygments/pygments/issues/2255">#2255</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2353">#2353</a>)</li>
</ul>
</li>
<li>
<p>Chapel: Support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2376">#2376</a>)</p>
</li>
<li>
<p>CMake: Implement bracket style comments (<a href="https://redirect.github.com/pygments/pygments/issues/2338">#2338</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2354">#2354</a>)</p>
</li>
<li>
<p>CSS: Improve lexing of numbers inside function calls (<a href="https://redirect.github.com/pygments/pygments/issues/2382">#2382</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2383">#2383</a>)</p>
</li>
<li>
<p>diff: Support normal diff syntax, as opposed to unified diff syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2321">#2321</a>)</p>
</li>
<li>
<p>GLSL, HLSL:</p>
<ul>
<li>Support line continuations in preprocessor code (<a href="https://redirect.github.com/pygments/pygments/issues/2350">#2350</a>)</li>
<li>Improve preprocessor directive handling (<a href="https://redirect.github.com/pygments/pygments/issues/2357">#2357</a>)</li>
</ul>
</li>
<li>
<p>LilyPond: minor update of builtins</p>
</li>
<li>
<p>PHP: support attributes (<a href="https://redirect.github.com/pygments/pygments/issues/2055">#2055</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2347">#2347</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2360">#2360</a>), fix anonymous classes without
parameters (<a href="https://redirect.github.com/pygments/pygments/issues/2359">#2359</a>), improve lexing of variable variable syntax (<a href="https://redirect.github.com/pygments/pygments/issues/2358">#2358</a>)</p>
</li>
<li>
<p>Python:</p>
<ul>
<li>Add missing builtins (<a href="https://redirect.github.com/pygments/pygments/issues/2334">#2334</a>)</li>
<li>Fix inconsistent lexing of <code>None</code> (<a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a>)</li>
</ul>
</li>
<li>
<p>Rebol/Red: Don't require script headers (<a href="https://redirect.github.com/pygments/pygments/issues/2348">#2348</a>, <a href="https://redirect.github.com/pygments/pygments/issues/2349">#2349</a>)</p>
</li>
<li>
<p>Spice: Update keywords (<a href="https://redirect.github.com/pygments/pygments/issues/2336">#2336</a>)</p>
</li>
<li>
<p>SQL+Jinja (<code>analyse_text</code> method): Fix catastrophic backtracking (<a href="https://redirect.github.com/pygments/pygments/issues/2355">#2355</a>)</p>
</li>
<li>
<p>Terraform: Add <code>hcl</code> alias (<a href="https://redirect.github.com/pygments/pygments/issues/2375">#2375</a>)</p>
</li>
</ul>
</li>
<li>
<p>Declare support for Python 3.11 and drop support for Python 3.6 (<a href="https://redirect.github.com/pygments/pygments/issues/2324">#2324</a>).</p>
</li>
<li>
<p>Update <code>native</code> style to improve contrast (<a href="https://redirect.github.com/pygments/pygments/issues/2325">#2325</a>).</p>
</li>
<li>
<p>Update <code>github-dark</code> style to match latest Primer style (<a href="https://redirect.github.com/pygments/pygments/issues/2401">#2401</a>)</p>
</li>
<li>
<p>Revert a change that made guessing lexers based on file names slower
on Python 3.10 and older (<a href="https://redirect.github.com/pygments/pygments/issues/2328">#2328</a>).</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pygments/pygments/commit/6c187ad83267be9ce142af3fd5c9e670339dc7aa"><code>6c187ad</code></a> Prepare 2.15 release.</li>
<li><a href="https://github.com/pygments/pygments/commit/00b9cb022cc9c05784c43c11bd7f73e64008b347"><code>00b9cb0</code></a> Prepare for release.</li>
<li><a href="https://github.com/pygments/pygments/commit/a0824a45f0bd6c45528fa16132f09dd3570a8234"><code>a0824a4</code></a> Update CHANGES</li>
<li><a href="https://github.com/pygments/pygments/commit/26f9f6c852846fe579c37fe936a872b68fa686ba"><code>26f9f6c</code></a> Merge pull request <a href="https://redirect.github.com/pygments/pygments/issues/2406">#2406</a> from rdbende/fix-fromimport-none</li>
<li><a href="https://github.com/pygments/pygments/commit/62b1bbbe6e329268eaa4c68f0e3eb8867c450acc"><code>62b1bbb</code></a> Change token of None after from keyword</li>
<li><a href="https://github.com/pygments/pygments/commit/acee60e4e8dde9ea99fc494740e20b06188791ac"><code>acee60e</code></a> Update CHANGES</li>
<li><a href="https://github.com/pygments/pygments/commit/eaca69091119e0ac5c97e626ba9e3b21b688c5ed"><code>eaca690</code></a> Add lexer for MediaWiki Wikitext (<a href="https://redirect.github.com/pygments/pygments/issues/2373">#2373</a>)</li>
<li><a href="https://github.com/pygments/pygments/commit/0e9c87bcf096908956e031f15a4e589e83be1691"><code>0e9c87b</code></a> Update CHANGES</li>
<li><a href="https://github.com/pygments/pygments/commit/ef0abbaece522732031d61391567c017d48d87b7"><code>ef0abba</code></a> Add PostgreSQL Explain lexer (<a href="https://redirect.github.com/pygments/pygments/issues/2398">#2398</a>)</li>
<li><a href="https://github.com/pygments/pygments/commit/3c6e2af8fbc44bb1ef77389d09118c37faea8746"><code>3c6e2af</code></a> Update CHANGES</li>
<li>Additional commits viewable in <a href="https://github.com/pygments/pygments/compare/2.11.2...2.15.0">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pygments&package-manager=pip&previous-version=2.11.2&new-version=2.15.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 07-20-2023 11:28:00 | 07-20-2023 11:28:00 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24949). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,948 | open | Converting Llama2 to HF weight on Windows 10 PC failed | ### System Info
Setup: RTX A2000 12G; CUDA 12.2
Command:
python r:/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir Llama-2-7b --model_size 7B --output_dir hf_wght_7b
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Setup: RTX A2000 12G; CUDA 12.2
Command:
python r:/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir Llama-2-7b --model_size 7B --output_dir hf_wght_7b
Error:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin C:\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
False
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
warn(msg)
CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUDA machine!
C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
warn(msg)
C:\Python311\Lib\site-packages\bitsandbytes\cuda_setup\main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
warn(msg)
CUDA SETUP: Loading binary C:\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA library was not detected.
CUDA SETUP: Solution 1): Your paths are probably not up-to-date. You can update them via: sudo ldconfig.
CUDA SETUP: Solution 2): If you do not have sudo rights, you can do the following:
CUDA SETUP: Solution 2a): Find the cuda library via: find / -name libcuda.so 2>/dev/null
CUDA SETUP: Solution 2b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_2a
CUDA SETUP: Solution 2c): For a permanent solution add the export from 2b into your .bashrc file, located at ~/.bashrc
Traceback (most recent call last):
File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1099, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Python311\Lib\site-packages\transformers\models\llama\modeling_llama.py", line 32, in <module>
from ...modeling_utils import PreTrainedModel
File "C:\Python311\Lib\site-packages\transformers\modeling_utils.py", line 86, in <module>
from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
File "C:\Python311\Lib\site-packages\accelerate\__init__.py", line 3, in <module>
from .accelerator import Accelerator
File "C:\Python311\Lib\site-packages\accelerate\accelerator.py", line 35, in <module>
from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
File "C:\Python311\Lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
from .utils import (
File "C:\Python311\Lib\site-packages\accelerate\utils\__init__.py", line 131, in <module>
from .bnb import has_4bit_bnb_layers, load_and_quantize_model
File "C:\Python311\Lib\site-packages\accelerate\utils\bnb.py", line 42, in <module>
import bitsandbytes as bnb
File "C:\Python311\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
from . import cuda_setup, utils, research
File "C:\Python311\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
from . import nn
File "C:\Python311\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
from .modules import LinearFP8Mixed, LinearFP8Global
File "C:\Python311\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
from bitsandbytes.optim import GlobalOptimManager
File "C:\Python311\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "C:\Python311\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "r:\transformers\src\transformers\models\llama\convert_llama_weights_to_hf.py", line 23, in <module>
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer
File "<frozen importlib._bootstrap>", line 1231, in _handle_fromlist
File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1090, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1089, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\transformers\utils\import_utils.py", line 1101, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
CUDA Setup failed despite GPU being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
### Expected behavior
Convert Llama2 to HF weights | 07-20-2023 11:26:24 | 07-20-2023 11:26:24 | It looks like there is a problem with your installation of `bitsandbytes`. You should fix it or uninstall it.<|||||>Did both. Neither way worked. I left a ticket on bitsandbytes.<|||||>It works for me to convert 70b-chat. I didn't even install the bitsandbytes library.
transformers | 24,947 | closed | fix: cast input pixels to appropriate dtype for image_to_text pipelines | # What does this PR do?
### _Automatically converts input pixels to the correct type when running image_to_text pipelines_
---
Currently, when using image_to_text pipelines with half precision, I encounter an error in the forward function when passing in the pixel data. I found that casting to the target_dtype within the forward function fixes this issue in my case.
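To illustrate the idea, the cast boils down to something like this (a minimal sketch with a made-up helper name, not the exact diff):
```python
import torch

def cast_pixel_values(pixel_values: torch.Tensor, reference_weight: torch.Tensor) -> torch.Tensor:
    # match the input pixels to the dtype of the model weights before the forward pass
    return pixel_values.to(dtype=reference_weight.dtype)

half_precision_weight = torch.randn(8, 3, dtype=torch.float16)
pixels = torch.rand(1, 3, 224, 224)  # float32 by default
print(cast_pixel_values(pixels, half_precision_weight).dtype)  # torch.float16
```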
This is my first time working with the transformers library, so I'm not sure if this is the "correct" place to fix this kind of issue, or if perhaps there's another step in the pipeline code that I've missed that should be responsible for casting the input data to the correct type.
I assume there are other similar models that may benefit from the same type of fix. However, I've constrained my fix to the models that I've been working with already, as I didn't want to contribute unvalidated code. I'm happy to make similar changes to further models if required, or if someone with more experience with this library wants to rework these changes and approach the fix differently, I'd be happy with that, and can test with the subset of models I've been using if required.
- _Possibly_ Fixes #24834
_(This issue looked similar to the one I was encountering; however, I had different additional issues using 8-bit optimizations, so I've only tested my fix under float16. But given that the issue seems to share the same root cause of missing casting for input data within pipelines, I think there's a good chance it may be fixed by this PR.)_
PR Checklist Notes
- I haven't been able to run tests as I ran into dependency issues on my local environment, and the CI workflows appear to use self hosted runners, which I assume won't be easy for me to set up. Given the small scope of my changes, perhaps someone can approve running my PR against CircleCI if necessary?
- I haven't added new tests for this change, I had a read through of the existing tests and I wasn't sure if this sort of low level change would usually warrant explicit test coverage. Since I don't currently have a good way of running the tests myself, I figured it was best to omit new tests for now.
Tagging @younesbelkada and @NielsRogge as the last authors of the lines I edited. | 07-20-2023 11:21:16 | 07-20-2023 11:21:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, I've run the styling and copy checks locally, which identified some copy checks that were failing. If I understand correctly, these checks are ensuring that the implementations of derived model classes match their sources? On that basis, I've gone ahead and added the same type casting to the classes identified, and pushed a commit with those in. It looks like all of the CI checks are passing now.
> Usually we advise users to cast the input to the desired dtype manually by calling .to() to the processor's output
In my case, I'm currently using transformers via the AWS Deep Learning Containers on SageMaker. I've made a small tweak to the entrypoint to allow passing the `torch_dtype` into the pipeline kwargs, but otherwise I was trying to keep my modified container as generic as possible to aid maintainability.
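For context, my setup boils down to building the pipeline in half precision, roughly like this (the checkpoint name is only an example, not the model I actually deploy):
```python
import torch
from transformers import pipeline

captioner = pipeline(
    "image-to-text",
    model="nlpconnect/vit-gpt2-image-captioning",
    torch_dtype=torch.float16,
    device=0,
)
```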
transformers | 24,946 | open | Conversion of Llama2 70B to HF format gets killed | ### System Info
transformers 4.31.0
RAM: 20G, 8Core
A100-80GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the checkpoint conversion Python script provided by transformers and the program gets killed. I ran it successfully for the 7B and 13B models.
### Expected behavior
Convert checkpoint properly. | 07-20-2023 09:58:46 | 07-20-2023 09:58:46 | That is because you do not have enough CPU RAM to do the conversion. It needs 140GB of memory. If you have access to the weights, you can get them on the Hugging Face Hub (as long as you email address for HF matches the one with which you got the weights).<|||||>@sgugger The files on https://huggingface.co/meta-llama/Llama-2-70b are not converted. Could you point to the right one on the Hub?<|||||>The repo suffixed with hf: https://huggingface.co/meta-llama/Llama-2-70b-hf |
transformers | 24,945 | closed | BatchSampler | ### Feature request
I am trying to use Transformers Trainer and I want to generate batches and not single items.
### Motivation
I need that in order to generate batches of different sizes without doing padding.
Is that possible to do with the implemented trainer ?
### Your contribution
.. | 07-20-2023 09:37:18 | 07-20-2023 09:37:18 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. |
transformers | 24,944 | closed | replace no_cuda with use_cpu in test_pytorch_examples | ### What does this PR do?
This PR replaces `no_cuda` with `use_cpu` in `test_pytorch_examples.py`, as the `no_cuda` training argument is deprecated ([see](https://github.com/huggingface/transformers/pull/24863)). It also deletes a piece of code that will never be used.
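For context, the change boils down to swapping the deprecated flag for its replacement — a minimal sketch, not the exact test diff:
```python
from transformers import TrainingArguments

# old, deprecated flag
legacy_args = TrainingArguments(output_dir="out", no_cuda=True)

# new flag used by the updated tests
args = TrainingArguments(output_dir="out", use_cpu=True)
```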
| 07-20-2023 06:10:49 | 07-20-2023 06:10:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,943 | closed | ๐ [i18n-KO] Translated `perf_infer_gpu_many.md` to Korean | # What does this PR do?
Translated the `perf_infer_gpu_many.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations (๋ฒ์ญ ๋๋ฝ/์ค๋ณต ๊ฒ์ฌ)
- [X] Grammar Check (๋ง์ถค๋ฒ ๊ฒ์ฌ)
- [X] Review or Add new terms to glossary (์ฉ์ด ํ์ธ ๋ฐ ์ถ๊ฐ)
- [X] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas (live-preview๋ก ์ ์์๋ ํ์ธ)
## Who can review? (Initial)
May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @mjk0618, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo | 07-20-2023 05:30:34 | 07-20-2023 05:30:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,942 | closed | Fallback for missing attribute `Parameter.ds_numel` | #22193 added a parameter count for Deepspeed sharded models (i.e. using the `Parameter` attribute `ds_numel`). However, `Parameter` Tensors don't always have the `ds_numel` attribute (even when the model is sharded with Zero stage 3).
We can see how this is [alternatively handled in Deepspeed](https://github.com/microsoft/DeepSpeed/blob/ceccfa3ef68182384c6db1349fab43b9af3ed7f3/deepspeed/runtime/engine.py#L3220), by falling back to `Parameter.numel()` if `ds_numel` is not an attribute. I've added this fix to the function in question.
Fixes #24792
@stas00 @pacman100 | 07-20-2023 05:22:38 | 07-20-2023 05:22:38 | Sure -- imo my rewrite was more readable but I guess that's subjective ๐
Anyway, I've reverted the other changes and added the fix. Lmk, thanks!<|||||>> Sure -- imo my rewrite was more readable but I guess that's subjective +1
>
> Anyway, I've reverted the other changes and added the fix. Lmk, thanks!
If I may share why I wrote it this way:
From the perspective of a user who doesn't know anything about deepspeed the current version is easier to understand since they don't need to go into that branch.
From the perspective of a deepspeed user your original version is easier to read.
As there are many more non-deepspeed users the former version is preferred.
We could actually rewrite all of this code into a single line of
```
p.ds_numel if hasattr(p, "ds_numel") else p.numel()
```
and not even need to check if it's running under deepspeed zero-3, but then the reader will wonder what in the world `ds_numel` is ;)
Thank you again for your contribution, @apoorvkh
<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,941 | open | Prevent Dynamo graph fragmentation in GPTNeoX with torch.baddbmm fix | # What does this PR do?
Fixes #24940.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker and @younesbelkada
| 07-20-2023 00:42:00 | 07-20-2023 00:42:00 | I'm struggling to see what's wrong with my code style since when I run `black` locally it says it's fine.<|||||>> Hey! Thanks for submitting a PR! You should be using `make style` to properly format the code.
This doesn't seem to work for me. `make style` makes a ton of changes to _other_ files that I didn't modify, but doesn't change my code at all.<|||||>> My only concern here is that the dtype is not set. A small test like this shows that this will have some effect for some head size
I suppose I can manually cast `norm_factor` to the appropriate dtype and then convert back to a Python scalar. On the other hand, inverse square root isn't going to have _any_ numerical error as long as the head size is a power of 2, which it almost always is (in fact, it's almost always 64). So in practice this shouldn't matter at all.
Also, presumably the ground truth should be the Eleuther `gpt-neox` implementation. That implementation seems to already encode `norm_factor` as a Python scalar and uses `math.sqrt`, see [here](https://github.com/EleutherAI/gpt-neox/blob/408e29d9c746a02d842917bb7447c5c4be0b42d4/megatron/model/transformer.py#L298). It passes a Python scalar to `torch.baddbmm` [here](https://github.com/EleutherAI/gpt-neox/blob/408e29d9c746a02d842917bb7447c5c4be0b42d4/megatron/model/transformer.py#L420).
So in fact, our current implementation is already "wrong" with respect to the original, and this PR would correct the discrepancy. Although, as I said, in almost all cases this will make no difference.<|||||>Ok! Got what you mean. Part of the code was added in #22888 in order to have proper float16 casting. Yes we should match the original results (which is currently the case) but we have also other functionalities on which a lot of users rely!
Let's make sure that the value is casted to the correct type for the next computations.
(make style is probably behaving wrongly because of the black / ruff versioning) <|||||>> Ok! Got what you mean. Part of the code was added in #22888 in order to have proper float16 casting. Yes we should match the original results (which is currently the case) but we have also other functionalities on which a lot of users rely! Let's make sure that the value is casted to the correct type for the next computations.
>
> (make style is probably behaving wrongly because of the black / ruff versioning)
Hmm okay, it looks like prior to #22888 we were just using `float32` all the time which is clearly wrong. This PR shouldn't cause any regressions because the Python scalar will work with any parameter dtype.
Is there anything I can do to get the codestyle check to pass? That seems to be the only thing preventing merging right now.<|||||>The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24941). All of your documentation changes will be reflected on that endpoint.<|||||>Sorry but no, in transformers we autocast the weights to the required dtype. You can try the following:
```python
>>> from transformers import GPTNeoXModel
>>> import torch
>>> model = GPTNeoXModel.from_pretrained("EleutherAI/gpt-neox-20b", torch_dtype = torch.float16)
>>> print(model.layers[1].attention.norm_factor)
tensor(9.7969, dtype=torch.float16)
```
So the dtype was properly handled. We initialized it with float32 but then it can be cast to any dtype.
For the red CI, run `make style`!
transformers | 24,940 | open | TorchDynamo graph needlessly fragmented for GPTNeoX due to baddbmm type mistake | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.19.0-42-generic-x86_64-with-glibc2.27
- Python version: 3.10.10
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoModelForCausalLM
import torch
def debug_backend(gm: torch.fx.GraphModule, example_inputs: list[torch.Tensor]):
print("debug_backend() called with FX graph:")
gm.graph.print_tabular()
return gm.forward # return a python callable
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
jitted = torch.compile(model, backend=debug_backend)
jitted(**model.dummy_inputs)
```
The output is too long to fit in a comment, so you'll have to run the code yourself. It features `"debug_backend() called with FX graph:"` being printed several times, each time followed with a fragment of the whole computation graph. This is not expected since NeoX has no data-dependent control flow.
### Expected behavior
The `torch.compile` backend should only be called once, and therefore `"debug_backend() called with FX graph:"` should only appear once, because GPT NeoX does not actually require any data-dependent control flow.
I've already checked that this can be fixed by turning `GPTNeoXAttention.norm_factor` into a Python scalar instead of a tensor. This is actually what `torch.baddbmm` expects for its `alpha` parameter; it's [supposed to be](https://pytorch.org/docs/stable/generated/torch.baddbmm.html) a scalar. But it seems to silently convert tensors into scalars, so this doesn't cause a crash in normal use.
<img width="577" alt="Captura de pantalla 2023-07-19 a la(s) 5 27 42 p m" src="https://github.com/huggingface/transformers/assets/39116809/24274bdb-2599-4ab6-896b-dd77ff98461e">
The exact fix is, in `modeling_gpt_neox.py`, replace lines 103-107 with:
```py
self.norm_factor = self.head_size ** -0.5
```
and replace the `baddbmm` call inside `_attn` with:
```py
attn_scores = torch.baddbmm(
attn_scores,
query,
key.transpose(1, 2),
beta=1.0,
alpha=self.norm_factor,
)
``` | 07-20-2023 00:39:38 | 07-20-2023 00:39:38 | @fxmarty I think you are more familiar with this topic? If so, could you take a look, thanks!<|||||>Hi @norabelrose, would you like to submit a PR?<|||||>> Hi @norabelrose, would you like to submit a PR?
I already did! ๐ See #24941. |
transformers | 24,939 | open | #24028 seems to break the last epoch for a logging integration | ### System Info
Hey @muellerzr,
thanks for your lightning-fast (accelerated) reply ;) Regarding #24028, I'm currently debugging what's causing the issue.
Setup:
- A custom callback to log embeddings; the data collator in the Trainer is wrapped to extract the ids of each sample in a batch (a rough sketch of this wrapping is included below)
Error:
- The wrapped data collation works fine except in the last step
How to reproduce?
See reproduction tab
Currently this is the example I can show for reproduction.
My first guess is that it's related to multiprocessing. It seems like the custom collator is not called in the last step.
But I will give more details or a possible solution soon.
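For context, the wrapping mentioned in the setup above is along these lines (purely illustrative — the class and field names are made up, not the actual dataquality code):
```python
class IdExtractingCollator:
    """Wraps an existing collator and records the `id` of every sample it sees."""

    def __init__(self, base_collator):
        self.base_collator = base_collator
        self.seen_ids = []

    def __call__(self, features):
        # pull the ids out before handing the batch to the real collator
        self.seen_ids.append([f.pop("id", None) for f in features])
        return self.base_collator(features)
```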
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
git clone https://github.com/rungalileo/dataquality.git
cd dataquality
python -m venv .venv
source .venv/bin/activate
pip install invoke
inv all
pip install --upgrade transformers
pytest tests/integrations/hf/test_text_classification_hf.py -s -k test_remove_unused_columns
```
### Expected behavior
Test should finish collating each step | 07-19-2023 22:30:41 | 07-19-2023 22:30:41 | Reproducible in [this colab example](https://colab.research.google.com/drive/1NObtjxY37VEx2u3SBFDDa_zwTILbX3B2?usp=sharing) currently<|||||>Found the issue, it seems like the on step end is called only after two data points have been collated. |
transformers | 24,938 | open | Serious issue with `device_map='balanced'` on GPT-2 | ### System Info
- `transformers` version: 4.29.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Loading GPT-2 with `device_map='balanced'` silently fails to load the pre-trained parameters for the LM head. On transformers `4.29.1`, there is no warning; if I upgrade to `4.31.0`, there is a warning that the LM head is not using pre-trained weights:
```
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2-xl and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The problem seems to be hardware-specific (a dual-A100 machine does **not** show this problem, but a dual-A6000 machine does).
Repro:
```
import transformers
m = transformers.GPT2LMHeadModel.from_pretrained("gpt2-xl", device_map='balanced', cache_dir='/scr/em7')
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2-xl", cache_dir='/scr/em7')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
inputs['labels'] = inputs.input_ids.clone()
outputs = m(**inputs)
loss = outputs.loss
print(loss)
```
Running this twice on a dual-A6000 machine with transformers 4.29.1, I get
```
tensor(10.8342, grad_fn=<ToCopyBackward0>)
```
both times. On a dual-A100 machine, I get the expected value of
```
tensor(4.0143, grad_fn=<ToCopyBackward0>)
```
Simply removing the `device_map='balanced'` argument produces the correct output in all cases. If I had to guess, I'd say there is a bug in `PretrainedModel.from_pretrained()` and/or `PretrainedModel._load_pretrained_model`.
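For anyone reproducing this, a quick way to see where the head ended up and whether the weight tie survived dispatch (a diagnostic sketch, not a fix):
```python
import torch
import transformers

m = transformers.GPT2LMHeadModel.from_pretrained("gpt2-xl", device_map="balanced")
print(m.hf_device_map)  # which device each module was placed on
# lm_head is tied to the input embedding matrix; check whether the tie survived loading
print(torch.equal(m.lm_head.weight.cpu(), m.transformer.wte.weight.cpu()))
```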
### Expected behavior
The loss should be the same (`tensor(4.0143, grad_fn=<ToCopyBackward0>)`) on all machines and model configurations. | 07-19-2023 22:10:45 | 07-19-2023 22:10:45 | cc @sgugger (especially the user says `Simply removing the device_map='balanced' argument produces the correct output in all cases`) |
transformers | 24,937 | open | Support symbolic tracing for NeoX models | ### Feature request
Currently `transformers.utils.fx.symbolic_trace` fails when passed any NeoX model, and I'd like to fix that:
```
NotImplementedError: Model GPTNeoXForCausalLM is not supported yet, supported models: AlbertForMaskedLM,
AlbertForMultipleChoice, AlbertForPreTraining, AlbertForQuestionAnswering, AlbertForSequenceClassification,
AlbertForTokenClassification, AlbertModel, AltCLIPModel, AltCLIPTextModel, AltCLIPVisionModel, BartForCausalLM,
BartForConditionalGeneration, BartForQuestionAnswering, BartForSequenceClassification, BartModel, BertForMaskedLM,
BertForMultipleChoice, BertForNextSentencePrediction, BertForPreTraining, BertForQuestionAnswering,
BertForSequenceClassification, BertForTokenClassification, BertLMHeadModel, BertModel, BlenderbotForCausalLM,
BlenderbotForConditionalGeneration, BlenderbotModel, BlenderbotSmallForCausalLM,
BlenderbotSmallForConditionalGeneration, BlenderbotSmallModel, BloomForCausalLM, BloomForQuestionAnswering,
BloomForSequenceClassification, BloomForTokenClassification, BloomModel, CLIPModel, CLIPTextModel,
CLIPTextModelWithProjection, CLIPVisionModel, CLIPVisionModelWithProjection, ConvNextBackbone,
ConvNextForImageClassification, ConvNextModel, DebertaForMaskedLM, DebertaForQuestionAnswering,
DebertaForSequenceClassification, DebertaForTokenClassification, DebertaModel, DebertaV2ForMaskedLM,
DebertaV2ForMultipleChoice, DebertaV2ForQuestionAnswering, DebertaV2ForSequenceClassification,
DebertaV2ForTokenClassification, DebertaV2Model, DistilBertForMaskedLM, DistilBertForMultipleChoice,
DistilBertForQuestionAnswering, DistilBertForSequenceClassification, DistilBertForTokenClassification,
DistilBertModel, DonutSwinModel, ElectraForCausalLM, ElectraForMaskedLM, ElectraForMultipleChoice,
ElectraForPreTraining, ElectraForQuestionAnswering, ElectraForSequenceClassification,
ElectraForTokenClassification, ElectraModel, GPT2DoubleHeadsModel, GPT2ForQuestionAnswering,
GPT2ForSequenceClassification, GPT2ForTokenClassification, GPT2LMHeadModel, GPT2Model, GPTJForCausalLM,
GPTJForQuestionAnswering, GPTJForSequenceClassification, GPTJModel, GPTNeoForCausalLM, GPTNeoForQuestionAnswering,
GPTNeoForSequenceClassification, GPTNeoForTokenClassification, GPTNeoModel, GitVisionModel, HubertForCTC,
HubertForSequenceClassification, HubertModel, LayoutLMForMaskedLM, LayoutLMForQuestionAnswering,
LayoutLMForSequenceClassification, LayoutLMForTokenClassification, LayoutLMModel, LxmertForPreTraining,
LxmertForQuestionAnswering, LxmertModel, M2M100ForConditionalGeneration, M2M100Model, MBartForCausalLM,
MBartForConditionalGeneration, MBartForQuestionAnswering, MBartForSequenceClassification, MBartModel,
MT5ForConditionalGeneration, MT5Model, MarianForCausalLM, MarianMTModel, MarianModel, MegatronBertForCausalLM,
MegatronBertForMaskedLM, MegatronBertForMultipleChoice, MegatronBertForNextSentencePrediction,
MegatronBertForPreTraining, MegatronBertForQuestionAnswering, MegatronBertForSequenceClassification,
MegatronBertForTokenClassification, MegatronBertModel, MobileBertForMaskedLM, MobileBertForMultipleChoice,
MobileBertForNextSentencePrediction, MobileBertForPreTraining, MobileBertForQuestionAnswering,
MobileBertForSequenceClassification, MobileBertForTokenClassification, MobileBertModel, NezhaForMaskedLM,
NezhaForMultipleChoice, NezhaForNextSentencePrediction, NezhaForPreTraining, NezhaForQuestionAnswering,
NezhaForSequenceClassification, NezhaForTokenClassification, NezhaModel, OPTForCausalLM, OPTForQuestionAnswering,
OPTForSequenceClassification, OPTModel, PLBartForCausalLM, PLBartForConditionalGeneration,
PLBartForSequenceClassification, PLBartModel, PeftModelForCausalLM, PeftModelForSeq2SeqLM, PegasusForCausalLM,
PegasusForConditionalGeneration, PegasusModel, ResNetBackbone, ResNetForImageClassification, ResNetModel,
RobertaForCausalLM, RobertaForMaskedLM, RobertaForMultipleChoice, RobertaForQuestionAnswering,
RobertaForSequenceClassification, RobertaForTokenClassification, RobertaModel, SegformerForImageClassification,
SegformerForSemanticSegmentation, SegformerModel, Speech2Text2Decoder, Speech2Text2ForCausalLM,
Speech2TextForConditionalGeneration, Speech2TextModel, SwinBackbone, SwinForImageClassification,
SwinForMaskedImageModeling, SwinModel, T5ForConditionalGeneration, T5Model, TrOCRDecoder, TrOCRForCausalLM,
ViTForImageClassification, ViTForMaskedImageModeling, ViTModel, Wav2Vec2ForCTC, Wav2Vec2ForMaskedLM,
Wav2Vec2ForPreTraining, Wav2Vec2ForSequenceClassification, Wav2Vec2Model, XGLMForCausalLM, XGLMModel
```
### Motivation
The main motivation for this is to enable graph rewriting with the EleutherAI Pythia model suite. Graph rewriting has various interpretability use-cases and the Pythia suite was designed for interpretability research.
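Once support lands, the intended usage is just the standard tracing entry point (a sketch — the checkpoint is simply the smallest Pythia model):
```python
from transformers import AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
# should return a torch.fx.GraphModule that can then be rewritten for interpretability work
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)
```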
### Your contribution
I plan to implement a PR for this soon unless there's some major blocker for it. | 07-19-2023 22:08:14 | 07-19-2023 22:08:14 | Hi @norabelrose
That would be very nice if you are able to enable this 🤗. Thanks in advance.
transformers | 24,936 | closed | Add support for Llama-2-70b-chat-hf in transformers | ### Model description
Not sure if it is a bug or whether it is intentionally not supported yet. In either case: there have been 0 confirmations of people being able to successfully run the official **Llama-2-70b-chat-hf** model in transformers.
### Open source status
- [ ] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Official model weights:
https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
Related open bug:
https://github.com/facebookresearch/llama/issues/423 | 07-19-2023 21:51:01 | 07-19-2023 21:51:01 | Turns out the model is supported, but has a bug in the `config.json`, see https://github.com/facebookresearch/llama/issues/423
Also, I am adding a version that works without any manual changes here: https://huggingface.co/daryl149/llama-2-70b-chat-hf |
transformers | 24,935 | closed | fix llama2 chat system prompt | # What does this PR do?
Fixes the Llama 2 system prompt so it's consistent with Meta's version.
When testing my finetuning script, I found that the official LLaMA code and the Hugging Face code returned different tokens for the same text, apparently due to different linebreaks.
Code to test:
```python
#from https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46C11-L46C11
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
tokenizer = LlamaTokenizer.from_pretrained("/Users/jph/dev2/models/Llama-2-7b-chat-hf")
token_llama=tokenizer.encode(DEFAULT_SYSTEM_PROMPT)
from transformers.models.llama.tokenization_llama import DEFAULT_SYSTEM_PROMPT as DEFAULT_SYSTEM_PROMPT_TRANSFORMERS
token_hf=tokenizer.encode(DEFAULT_SYSTEM_PROMPT_TRANSFORMERS_FAST)
for i in range(100):
print (f"llama: {tokenizer.decode(token_llama[i])} , transformers: {tokenizer.decode(token_hf[i])}")
if token_llama[i]!=token_hf[i]:
print('!!!')
```
which returns:
```
...
llama: being , transformers: being
llama: safe , transformers: safe
llama: . , transformers: .
llama: Your , transformers: Your
llama: answers , transformers: ans
!!!
llama: should , transformers: wers
...
```
I don't know how big the difference is, but it surely doesn't make sense to deviate here (even if there is no performance issue and it's just for debugging reasons).
## Who can review?
@ArthurZucker @sgugger
| 07-19-2023 21:48:10 | 07-19-2023 21:48:10 | > Hey! Thanks for catching this. There seems to be a missing space in our prompt, here: `Youranswers should not include...`.
>
> I can't accept the PR in the current state as the formating is there to comply with our linters. If you can just add the missing space would be great! Thanks for catching this. We have a test in the conversational pipeline, which did not detect this!
sure; tried to fix the formatting and make it more readable....<|||||>Hey! Sorry, #24930 is ready, we'll merge it in favor of this one! Thanks a lot for pointing out and contributing! ๐ค <|||||>> Hey! Sorry, #24930 is ready, we'll merge it in favor of this one! Thanks a lot for pointing out and contributing! ๐ค
Sure, didn't see that there was another PR fixing this.
But just for next time - why did the checks still fail? I have absolutely no idea from the CircleCI messages how to check myself or what the reason is, lines should be short enough?<|||||>You have to run the linter! The lines are probably too short this time and can be optimised haha! Use `make style` again after each changed. |
transformers | 24,934 | open | Change package name from "transformers" to something less generic | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- **transformers**
- [datasets](https://github.com/huggingface/datasets/issues/6053)
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors. | 07-19-2023 19:53:24 | 07-19-2023 19:53:24 | You do realize this would break the existing code of many many people?<|||||>Yes
My theory/suggestion is that HF is still a relatively young library used by a relatively niche community used to having to move in a rapidly developing field (we're not talking about the C standard lib or something), that a lot of people likely feel this way, and that if this change were implemented it would be looked back on as a good decision ten years later (not as if we're new to breaking changes in the Python community - hell even HF has pushed breaking changes before)<|||||>That kind of stuff would be hell for projects like ours, we have many low level patches in place to extend HF.<|||||>Hello there.
Would like to share some mental models about this
_General TLTR ; No, because for now, the libraries are consistent and helpful in becoming the standard ._
The following comments have sections
* Impact
**TLTR;** _17M monthly downloads, 1700 monthly MAUs, +100K repositories impact_
Q : can I help you brainstorm other names -syntactically and semantically aligned- that could help solve your problem?
* Considerations in the matter
**TLTR;** _Other standard names are also taken._
_Balancing Makers and Takers to scale and sustain Open Source is a line of thought to take into account_
Q : would it be worthy to think deeply about the trade-off that the libraries are giving with respect to what they are taking ? Can I help you brainstorm the utilities you put on **evaluate.py** and **datasets.py** on your code and submit a contribution so we can encapsulate your needs to all coders and avoid frustration?
* Responsibility when becoming the standard
**TLTR;** _Motivation of owners might be becoming the standard. They seem worried about that responsibility in many dimensions._
Q : do you think we shall consider this dimension into account for this matter?
* Bibliography and Openness
### Impact
**TLTR;** _17M monthly downloads, 1700 monthly MAUs, +100K repositories impact_
**Hypothesis limitations**: _this data could change with other insights about MAUs funnel conversion and maintained active repositories + private repositories. Total MAUs have not been calculated due to incomplete information that would make data-driven conclusions too intuitive._
In order to gain some data-driven perspective about the impact of this change, what I did is check the downloads coming from [PyPI](https://pypistats.org/) for the 3 libraries and sum the last month's downloads, giving an overall total of 17M-ish. I'm assuming that there is a _clear funnel_ here that separates users that are newcomers, explorers, and MAUs (Monthly Active Users). My analysis took me to focus on these last ones, as they are using the code regularly or might be the ones using the libraries in a production scenario or in a work-dependent project. Taking out 4 orders of magnitude - in a pessimistic overview - the hypothesis takes us to around 1700 monthly MAUs.
<img width="1153" alt="Captura de pantalla 2023-07-27 a las 12 47 21" src="https://github.com/huggingface/transformers/assets/24204714/74b1fce6-3eec-471b-a204-77e5a804dd79">
<img width="1117" alt="Captura de pantalla 2023-07-27 a las 12 48 20" src="https://github.com/huggingface/transformers/assets/24204714/424d490d-7b22-4965-a32f-575e712f37d9">
<img width="1105" alt="Captura de pantalla 2023-07-27 a las 12 47 50" src="https://github.com/huggingface/transformers/assets/24204714/1d699b9b-c26e-4ade-a7c1-3582f8c61cbb">
Therefore, the data-driven impact exploration took me to the **used-by** reporting on the head page of each repository, as the impact on the number of repositories that depend on the libraries. The Transformers library has been reported to be used by 84.4 K people, datasets by 20.4 K people, and evaluate by 2.9 K people. This gave a total of +100K repositories this change could have an impact on.
Hypothesis limitations: this data could change with other insights about MAUs funnel conversion and maintained active repositories + private repositories.
_Before going further, and I guess this is a question directly for @geajack , can I help you brainstorm other names - syntactically and semantically aligned - that could help solve your problem?_
### Considerations in the matter
**TLTR;** _Other standard names are also taken._
_Balancing Makers and Takers to scale and sustain Open Source is a line of thought to take into account_
What I understood from the issue is that the generalization of the package name supposed an interference and a cognitive dissonance WRT the naming standard with respect to other libraries. Then I went to `check-availability` package to see if **other standard names** could solve your problem - tried dataset and evaluation - and none were available.
```
check-availability pypi dataset --verbose 3
GET https://pypi.org/project/dataset
Got status code 200
The name dataset is not available on pypi
```
```
GET https://pypi.org/project/evaluation
Got status code 200
The name evaluation is not available on pypi
```
I really -really- tried to benchmark your motivations with Open Source Research insights [1](https://arxiv.org/pdf/2101.10291.pdf) [2](https://arxiv.org/pdf/2306.05548.pdf) [3](https://openaccess.city.ac.uk/id/eprint/5955/1/ContentServer_%281%29.pdf) to try to have an _empathetic, generalistic view about this concern._ Still maturing it, but what I'm taking away is that you might find it beneficial, and aligned with some Open Source ideas (yet to be proven representative), that generalistic **names** are not proprietary, beyond your individual code problem.
However, I invite you to go deeper into the motivations behind Open Source, as there seem to be equally important motivations that contributors and users are driven by. I encourage you to please share with me mature ideas that might not be aligned with my mental model. If we can go beyond one individual, and try to capture a community or a more general mental model, that would be amazing.
On the other hand, putting myself in Hugging Face's shoes, I couldn't stop thinking broadly about their Open Source sustainability contribution with respect to other companies and proprietary software. Really recommend this [reading](https://dri.es/balancing-makers-and-takers-to-scale-and-sustain-open-source)!
_Before going further, and I guess this is a question for @geajack, would it be worthwhile to think deeply about the trade-off between what the libraries are giving and what they are taking? Can I help you brainstorm the utilities you put in evaluate.py and datasets.py in your code and submit a contribution, so we can encapsulate your needs for all coders and avoid frustration?_
### Responsibility when becoming the standard
**TLTR;** _Motivation of owners might be becoming the standard. They seem worried about that responsibility in many dimensions._
It might be fair to think that the **naming** in this case might entail the search for becoming the standard, and I leave it to the reader to analyze whether the owners of the libraries are being responsible or not with respect to their Open Source duties for being recognized as such beyond the naming, in order to analyze coherence. On my side, the trust level system and contributor management, together with the pro-active response with respect to other Open Source responsibilities, speak for themselves. This doesn't entail that they should have a present and future concern on this matter.
I guess this is a question for @geajack: do you think we should take this dimension into account for this matter?
### Bibliography and Openness
Beyond the cited readings, I really recommend this [book](https://www.amazon.com/Perspectives-Free-Source-Software-Press/dp/0262562278) .
I'm acknowledging that this response might be dense, so I would like to thank the reader, the owner of this issue, the contributors, and the maintainers for going through this material. As an emotional openness exercise, and following the bravery of @geajack, I must confess it has taken me a significant amount of **courage** to press Comment on this one.
I just hope that this can glimpse another logical perspective, new possible paths coming from questions, and other thoughts that might be mutable due to new shreds of evidence.
<|||||>@SoyGema thanks for the detailed breakdown. First of all I just want to say that I don't intend to present myself as some kind of sponsor for these issues - I just want there to be a place in the issue tracker for people to voice this concern if it is indeed a common concern.
I do think you may have misunderstood the issue at a couple of points, though. In your second section, it sounds like you think the complaint is that because HF is taking up `evaluate` on PyPi, that therefore I or somebody else can't have our own package on PyPi. That isn't the issue - the issue is that if I want to use HF's `evaluate` locally, I can't have my own *local* `evaluate.py`.
My most recent use-case for this was wanting a script called `evaluate.py` that I would actually run from the command line to run evaluation of my results - I had to change it to something more awkward like `evaluation.py`, which is annoying because it is after all a command and should ideally have the form of an imperative verb. I also routinely have a package in my codebases to provide utility functions for managing my own datasets. As it happens, I've always called that package `data`, but I could imagine another programmer wanting to call it `datasets` and being annoyed that they can't.
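To make the clash concrete, here is a minimal, hypothetical layout (not my actual project, just an illustration of the shadowing). Because the script's own directory sits first on `sys.path`, a local `evaluate.py` hides the installed Hugging Face package:
```python
# my_project/evaluate.py   <- my own evaluation script (hypothetical)
# my_project/train.py      <- the training entry point
#
# contents of train.py:
import evaluate  # resolves to ./evaluate.py, NOT the installed Hugging Face package

metric = evaluate.load("accuracy")  # AttributeError: module 'evaluate' has no attribute 'load'
```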
I'm not under the impression that this is a change that can be made tomorrow or even this year. When I opened these issues I pictured them (assuming they didn't just get buried) being the kinds of issues that sit open for years and years accumulating hundreds of comments, acting as an informal community forum before anything is done about them. The only place on the internet I could find someone expressing a similar sentiment was [this highly upvoted /r/Python comment](https://old.reddit.com/r/Python/comments/xeoyqc/how_did_hugging_face_get_such_good_pypi_package/ioi5ud5/), but I suspect a fair few people feel this way.<|||||>Hey @geajack thanks for your response and for the clarification. Thanks also for the reddit link, that wasn't on my radar until now. As feedback , if you could share a line with the motivations and links behind this issue when opened that would be great!๐
I'm happy that you already have a workaround for this. Yes, you are correct. I thought that this was beyond a local use of a script and more library-oriented, due to the impact of the change and my normal sparks under an 'annoying' naming scenario.
I agree with your impression, and let's see what time brings.
|
transformers | 24,933 | closed | KeyError: 'input_ids' on Whisper training with include_inputs_for_metrics | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This link https://colab.research.google.com/drive/1mMolQVClnnC_hi1J6DeCUOElJtatSXoz?usp=sharing points to a Colab notebook that reproduces the issue. It is a slightly modified version from the official post in https://huggingface.co/blog/fine-tune-whisper
Basically, whenever setting `include_inputs_for_metrics=True` for a training with whisper, the error message and stack trace below appear.
As a clarification, I was just experimenting with `include_inputs_for_metrics`. I realize it doesn't make much sense in this context to receive the bare inputs in `compute_metrics`. It would be better to receive any other type of metadata from other dataset fields.
Nevertheless, it seems like this is a bug, given that these models receive their inputs as `input_features` and not `input_ids`.
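For what it's worth, the model itself advertises which key its inputs live under, which is why the hard-coded `inputs["input_ids"]` lookup in the evaluation loop (visible in the traceback below) can't work here. A quick check (whisper-tiny used only as a small stand-in for the fine-tuned checkpoint):
```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
print(model.main_input_name)  # prints "input_features" rather than "input_ids"
```
The full traceback: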
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-20-3435b262f1ae>](https://localhost:8080/#) in <cell line: 1>()
----> 1 trainer.train()
6 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1524 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
1525 )
-> 1526 return inner_training_loop(
1527 args=args,
1528 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1886 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
1887
-> 1888 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
1889 else:
1890 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
2211 metrics.update(dataset_metrics)
2212 else:
-> 2213 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
2214 self._report_to_hp_search(trial, self.state.global_step, metrics)
2215
[/usr/local/lib/python3.10/dist-packages/transformers/trainer_seq2seq.py](https://localhost:8080/#) in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, **gen_kwargs)
157 self._gen_kwargs = gen_kwargs
158
--> 159 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
160
161 def predict(
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
2919
2920 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 2921 output = eval_loop(
2922 eval_dataloader,
2923 description="Evaluation",
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
3109 # Prediction step
3110 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
-> 3111 inputs_decode = self._prepare_input(inputs["input_ids"]) if args.include_inputs_for_metrics else None
3112
3113 if is_torch_tpu_available():
[/usr/local/lib/python3.10/dist-packages/transformers/feature_extraction_utils.py](https://localhost:8080/#) in __getitem__(self, item)
84 """
85 if isinstance(item, str):
---> 86 return self.data[item]
87 else:
88 raise KeyError("Indexing with integers is not available when using Python based feature extractors")
KeyError: 'input_ids'
```
### Expected behavior
The dummy training should be able to complete its validation loop. | 07-19-2023 18:29:06 | 07-19-2023 18:29:06 | The `include_inputs_for_metrics=True` feature only supports text models, not other modalities for now.<|||||>Noted. Thanks for the quick response! I couldn't find anything that suggested so in the docs, so I hope this github issue comes in handy if anyone else tries this.
I'll go ahead and close the issue.<|||||>You don't need to close this, we should probably support other modalities better in the Trainer ;-) I was mainly stating that `input_ids` is hard-coded in this code and we would need to update it.<|||||>Oh ok. I opened it again then. Thanks!<|||||>Should be fixed by the PR linked above if you want to try.<|||||>Hi. That's awesome! I just tried it in the colab notebook I shared and it worked nicely. Thanks! |
transformers | 24,932 | open | Don't wait for mlflow.log_artifact in Trainer api | ### Feature request
It would be ideal if there was an option to make `transformers.Trainer.train` with `HF_MLFLOW_LOG_ARTIFACTS=1` continue training while a separate process/thread logs the artifacts, so the whole run doesn't hang because of a network bottleneck.
I would be happy to discuss the possibilities and/or limitations of something like this (for example if it conflicts with `save_total_limit`).
### Motivation
I've had multiple situations where model artifact logging was almost as slow as training itself. This is especially a problem when training on expensive cloud GPU nodes for reasons I don't even need to explain.
### Your contribution
I would be willing to discuss and potentially contribute to a feature like this once we've discussed it. | 07-19-2023 17:21:33 | 07-19-2023 17:21:33 | I think this makes a lot of sense.
Note that:
> Integrations with reporting platforms are entirely maintained by the developers of those integrations or the community.
Would you like to open a PR ๐ค ?<|||||>Hi @ydshieh, thank's for the response.
Before opening a PR, I'd like to discuss the feasability of such a feature in the first place.
I can't tell if this would require an aggressive refactor of platform integration callbacks, if so, maybe it isn't worth it at the moment.
As I mentioned, I have no idea how introducing this concurrency will affect the rest of the Trainer api, it could cause a race condition when there's stuff like `save_total_limit`. Anyway, here's an outline I thought of so far, haven't tested it yet:
```py
import os
import queue
import threading

from transformers import TrainerCallback


class AsyncTrainerCallback(TrainerCallback):
    # ...
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._log_queue = queue.Queue()
        self._worker_thread = threading.Thread(target=self._worker_loop)
        self._shutdown = False
        # start the background worker thread
        self._worker_thread.start()

    def _worker_loop(self):
        # process queued logging tasks until shutdown is requested and the queue is drained
        while not self._shutdown or not self._log_queue.empty():
            task = self._log_queue.get()
            task()
            self._log_queue.task_done()

    def _stop_worker(self):
        self._shutdown = True
        if not self._log_queue.empty():
            print("Waiting for logging to finish...")
        # put a no-op task so a worker blocked on an empty queue wakes up and can exit
        self._log_queue.put(lambda: None)
        self._log_queue.join()
        self._worker_thread.join()


class MLflowCallback(AsyncTrainerCallback):
    def on_save(self, args, state, control, **kwargs):
        if self._initialized and state.is_world_process_zero and self._log_artifacts:
            ckpt_dir = f"checkpoint-{state.global_step}"
            artifact_path = os.path.join(args.output_dir, ckpt_dir)
            ### instead of:
            # logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.")
            # self._ml_flow.pyfunc.log_model(
            #     ckpt_dir,
            #     artifacts={"model_path": artifact_path},
            #     python_model=self._ml_flow.pyfunc.PythonModel(),
            # )
            ### do:
            task = lambda: self._ml_flow.pyfunc.log_model(
                ckpt_dir,
                artifacts={"model_path": artifact_path},
                python_model=self._ml_flow.pyfunc.PythonModel(),
            )
            self._log_queue.put(task)

    def on_train_end(self, args, state, control, **kwargs):
        # ...
        self._stop_worker()
```<|||||>I think it's probably better not to create the worker from the start. Just create the queue and worker inside `on_save`, then put the task to the queue.
We shouldn't have to worry about `save_total_limit`, as I don't see it used in the current `MLflowCallback`, which means it is controlled by the `Trainer` class. |
transformers | 24,931 | closed | [doc] `image_processing_vilt.py` wrong default documented | Sync the doc with reality.
https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/src/transformers/models/vilt/image_processing_vilt.py#L299
https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/src/transformers/models/vilt/image_processing_vilt.py#L310 | 07-19-2023 17:02:58 | 07-19-2023 17:02:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,930 | closed | Fix missing spaces in system prompt of Llama2 tokenizer | # What does this PR do?
This PR fixes a typo in system prompt of Llama2 tokenizer. There are missing spaces compared to https://github.com/facebookresearch/llama/blob/main/llama/generation.py
Thank you. | 07-19-2023 16:50:04 | 07-19-2023 16:50:04 | cc @ArthurZucker <|||||>Hey thanks, a similar PR is also #24935. Same comment would apply here, make sure `make style` is green and should be good! <|||||>Hi, thank you so much for your help. Seems it still cannot go green. I am not very familiar with it ... Could you give me some guidance? ;)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,929 | closed | The xformers result can not match with norm attention result | ### System Info
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.32
Python version: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.112-005.ali5000.alios7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.154
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.4.0
/usr/lib64/libcudnn_adv_infer.so.8.4.0
/usr/lib64/libcudnn_adv_train.so.8.4.0
/usr/lib64/libcudnn_cnn_infer.so.8.4.0
/usr/lib64/libcudnn_cnn_train.so.8.4.0
/usr/lib64/libcudnn_ops_infer.so.8.4.0
/usr/lib64/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.4
[pip3] torch==1.13.0+cu111
[pip3] torchaudio==0.11.0
[pip3] torchvision==0.14.0
[conda] No relevant packages
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: module 'triton.language' has no attribute 'constexpr'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: module 'triton.language' has no attribute 'constexpr'
xFormers 0.0.15.dev+103e863.d20221125
memory_efficient_attention.flshatt: available - requires GPU with compute capability 7.5+
memory_efficient_attention.cutlass: available
memory_efficient_attention.small_k: available
swiglu.fused.p.cpp: available
is_triton_available: False
is_functorch_available: False
pytorch.version: 1.13.0
pytorch.cuda: available
gpu.compute_capability: 8.0
gpu.name: NVIDIA A100-SXM4-80GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use the GPT-NeoX model for inference, and tried to modify `_attn` with xformers to speed it up, but the generated output is wrong with `use_cache=True` while correct with `use_cache=False`.
I adapted #24653 by replacing the `_attn` function of `GPTNeoXAttention` with the code below:
```
def _xformers_attn(self, query, key, value, **kwargs):
# q, k, v: [bs, num_attention_heads, seq_len, attn_head_size]
# xformers Input tensors must be in format [B, M, H, K], where B is the batch size, M the sequence length, H the number of heads, and K the embeding size per head
# [bs, num_attention_heads, seq_len, attn_head_size] -> [bs, seq_len, num_attention_heads, attn_head_size]
query = query.transpose(1, 2).to(value.dtype)
key = key.transpose(1, 2).to(value.dtype)
value = value.transpose(1, 2)
# org [bs, num_attention_heads, seq_len, attn_head_size]
# xformers return multi-head attention Tensor with shape [B, Mq, H, Kv]
output = xops.memory_efficient_attention(
query, key, value, op=xops.MemoryEfficientAttentionFlashAttentionOp,
attn_bias=xops.LowerTriangularMask(),
p=self.config.attention_dropout if self.training else 0.0
)
# [b, sq, np, hn] -> [b, np, sq, hn]
matmul_result = output.transpose(1, 2)
return matmul_result.to(query.dtype), None
```
The generated output is correct with `use_cache=False`, but wrong with `use_cache=True` (the first token is right but the later ones are wrong).
Here is the generated output with `use_cache=True`:
![image](https://github.com/huggingface/transformers/assets/21999339/0fe8015e-097c-4854-a8c5-7fe3cfa14250)
And I have tested the output of `_attn` and `_xformers_attn` in https://github.com/facebookresearch/xformers/issues/798, which matches.
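One thing I suspect but have not verified: with `use_cache=True`, after the first step `generate()` runs attention with a single query token against the full cached key/value sequence, and `xops.LowerTriangularMask()` assumes query and key have the same length, so almost all cached positions get masked out. A possible adjustment (sketch only; shapes are as in my code above, i.e. `[bs, seq_len, heads, head_dim]` after the transpose):
```python
# sketch, not verified: only apply the causal bias when query and key cover the same positions
# (the prefill pass); for cached single-token decoding steps the query may attend to all cached keys
attn_bias = xops.LowerTriangularMask() if query.shape[1] == key.shape[1] else None
output = xops.memory_efficient_attention(
    query, key, value,
    attn_bias=attn_bias,
    p=self.config.attention_dropout if self.training else 0.0,
)
```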
### Expected behavior
I want to speed up the attention with xformers. | 07-19-2023 16:18:42 | 07-19-2023 16:18:42 | You should use the [forums](https://discuss.huggingface.co/) to help debug your code. It's not really Transformers' fault if you can't match the results of a model after changing its implementation.
|
transformers | 24,928 | closed | Remove `llama2` | A mistake.
| 07-19-2023 15:23:01 | 07-19-2023 15:23:01 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24928). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,927 | closed | Allow generic composite models to pass more kwargs | # What does this PR do?
Generic composite models (`(Text|Vision|Speech)EncoderDecoder`): their `forward` doesn't have the full info available in their 2 components.
Fix #24919 | 07-19-2023 15:10:34 | 07-19-2023 15:10:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looking at the errors, we probably need to check whether the model has `self.encoder` or `self.model.encoder` <|||||>> Looking at the errors, we probably need to check whether the model has `self.encoder` or `self.model.encoder`
I can do this, but the `model` in `self.model` is not something from convention I guess: it's more an implementation detail of each (concrete) encoder decoder model class (like `bart`)<|||||>@gante
Let me know if the latest change LGTY ๐ค . Thanks! |
transformers | 24,926 | closed | fix fsdp checkpointing issues | # What does this PR do?
1. FSDP loading now returns the `load_result` to be given to `_issue_warnings_after_load`. Should be merged after https://github.com/huggingface/accelerate/pull/1745
2. Earlier it was wrongly saving `optimizer.pt` with FSDP, fixed it. | 07-19-2023 14:33:31 | 07-19-2023 14:33:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,925 | open | Fully compatible with the open clip Tokeniser | ### Feature request
The open clip tokenizer has a ```pad_token_id``` of 0, but this cannot be achieved here because ```CLIPTokenizer.__init__()``` cannot set a ```pad_token_id``` directly (it only takes a ```pad_token``` string).
### Motivation
Related to https://github.com/huggingface/diffusers/issues/4153
Stable Diffusion v2 and XL use the open clip tokenizer. To avoid adding dependencies, transformers must also provide the same functionality.
### Your contribution
It seems possible to set pad_token_id directly, but it is not realistic.
```
tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2-1", pad_token="<|endoftext|>")
tokenizer.pad_token_id = 0
``` | 07-19-2023 14:25:37 | 07-19-2023 14:25:37 | @patrickvonplaten<|||||>cc @patil-suraj and @ArthurZucker <|||||>Hey, I am not really sure I understand the issue here. The `pad_token_id` is dependent on the `pad_token`. By default the `pad_token` is set to `'!'` which, maps to `0` : `tokenizer.decode(torch.tensor([0]))`, `tokenizer.convert_tokens_to_ids('!')`. If you set the padding token to a different value by hand, the following will happen:
```python
>>> tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2-1", pad_token="<|endoftext|>")
>>> print(tokenizer.pad_token_id)
49407
>>> tokenizer.pad_token_id = 0
>>> print(tokenizer.pad_token)
'!'
```<|||||>> Hey, I am not really sure I understand the issue here. The `pad_token_id` is dependent on the `pad_token`. By default the `pad_token` is set to `'!'` which, maps to `0` : `tokenizer.decode(torch.tensor([0]))`, `tokenizer.convert_tokens_to_ids('!')`. If you set the padding token to a different value by hand, the following will happen:
You are right, my implementation does not seem to be perfect. The purpose of this implementation is to match when converting from text to token_id. What I want to know is how to set pad_token_id to 0 without affecting the words that are normally used like ```!```
<|||||>You can't really do that unless you train a new tokenizer.
You can add a new token, with a new index, which will prevent splitting `!`. The problem is that the embedding at position `0` might have been trained as padding token and is thus a random tensor (not updated by gradient computation). <|||||>cc @patil-suraj to explain the context around Stable Diffusion and OAI vs. openCLIP here maybe<|||||>Hey @laksjdjf ,
Indeed, there's a discrepancy between the `pad_token_id` in the open clip tokenizer and the `CLIPTokenizer` in `transformers`. But we can't change it for the existing models for backward compatibility reasons.
But note that for the tokenizer used in SD2 and SDXL it's already set correctly cf https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/tokenizer/special_tokens_map.json#L16
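A quick way to double-check this (assuming the current layout of that hub repo; the printed values are the expected ones, not guaranteed):
```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2-1", subfolder="tokenizer")
print(tok.pad_token, tok.pad_token_id)  # expected: "!" and 0
```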
And a bit more context about padding token in CLIP. CLIP doesn't care about padding token and the wrong padding token will only affect inference when using all token embeddings (like Stable Diffusion). For training, even if the padding token is wrong (i.e if we use `eos` instead of `!`, it shouldn't affect) because
- `CLIP` did not use `attention_mask` during training.
- `CLIPTextEncoder` uses a casual mask, so the tokens to the right don't influence the hidden states of tokens to the left.
- `CLIP` is trained with contrastive loss, which is computed using the `projections`, and the `text_projection` is computed by pooling the `eos _token` embeddings, which will always be similar no matter what the padding token is, because `CLIPTextEncoder` is causal, so the eos embeddings won't be affected by tokens on the right.
- For downstream training (like SD), as long as a consistent token is used for padding, it shouldn't severely affect the training. But for inference, we will need to use the same token.
So the way CLIP is trained, it doesn't care about padding token. It'll only affect the inference if a different token (compared to the padding token used for training) is used for padding. And this is already taken care of in SD 2 and SDXL repos.<|||||>> But note that for the tokenizer used in SD2 and SDXL it's already set correctly cf https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/tokenizer/special_tokens_map.json#L16
My concern is that the above process will result in ```!``` no longer being available in its normal sense.
<|||||>Hey @laksjdjf,
As @patil-suraj mentioned CLIP never used a padding token for training. It was trained with a causal mask and only tokens **until the eos token** are taken into account when computing the CLIP contrastive loss. All tokens following the eos token have **no** influence on the model, so one could have added any token here.
Now, it only matters if a pretrained CLIP is further fine-tuned as is done for SD. In this case the padding token was used to influence the loss and in that sense SD does not make a difference between `!` and a `padding` token. **But** this is purely due to the way SD uses CLIP for fine-tuning - this is not an inherit characteristic of CLIP.<|||||>Hmmm, maybe I just don't understand, but my question is about the behaviour of the ```! token```, not the behaviour of the ```pad token```. If ```! token``` is converted to ```pad token```, it seems to make a difference when processing text containing ```! token``` .
```Python console
>>>tokenizer = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-xl-base-0.9", subfolder="tokenizer_2")
>>>prompt = "! !! !!!"
>>>input_ids = tokenizer(prompt,padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors='pt').input_ids
>>>print(input_ids)
tensor([[49406, 0, 0, 0, 0, 0, 0, 49407, 0, 0, ...
>>>print(open_clip.tokenize(prompt))
tensor([[49406, 256, 748, 995, 49407, 0, 0, 0, 0, 0, ...
``` |
transformers | 24,924 | closed | `VisionTextDualEncoder`: Distributed training is always enabled | ### System Info
- `transformers` version: 4.32.0.dev0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): 2.13.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: **It seems yes, but I don't want to ;)**
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi,
I'm running the **unchanged** ["VisionTextDualEncoder and CLIP model training example"](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py) on my local laptop (which has 1 GPU) and wonder why it claims to do `distributed training: True` (and not `False`). From the output:
```
07/19/2023 15:21:22 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False
```
The above output originates from [`run_clip.py`](https://github.com/huggingface/transformers/blob/ee4250a35f3bd5e9a4379b4907b3d8f9d5d9523f/examples/pytorch/contrastive-image-text/run_clip.py#L260C1-L263C6)
```
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
```
* The default should be `training_args.local_rank=-1` according to [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) but is somehow set to `0` in this example and I don't know why.
* Adding `local_rank=-1` to the [run_clip.py example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model) does not show any effect.
My questions:
* Is it intended that `local_rank` is set to `0`?
* Does `local_rank=0` really mean that distributed training in `Trainer` is enabled? (I'm new to `Trainer` and usually work with `DistributedDataParallel`)
* How to switch off distributed training?
---
Bigger picture: Sometimes my training (on a cluster) hangs up in n-1 iteration and never finishes. I wonder if this has to do with distributed training. I don't know how to debug this.
```
100%|██████████| 2875/2876 [11:34<00:00, 4.10it/s]
```
Thanks in advance!
### Expected behavior
I don't want to use distributed training, i.e. `training_args.local_rank = -1` | 07-19-2023 14:15:41 | 07-19-2023 14:15:41 | How are you launching the training script? Could you share that part?<|||||>I use the unchanged code from the [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model):
```
python examples/pytorch/contrastive-image-text/run_clip.py \
--output_dir ./clip-roberta-finetuned \
--model_name_or_path ./clip-roberta \
--data_dir $PWD/data \
--dataset_name ydshieh/coco_dataset_script \
--dataset_config_name=2017 \
--image_column image_path \
--caption_column caption \
--remove_unused_columns=False \
--do_train --do_eval \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir
```
I neither use `python -m torch.distributed.launch ...` nor things like `accelerate launch ...`.
Just pure `python ...` :)
Thank you in advance!<|||||>That is really weird. @muellerzr could you have a look here to check we didn't mess something with the Accelerate integration in the Trainer?<|||||>This is fine, the scripts need to be updated however as checking `local_rank != -1` is the wrong check to use after the accelerate integration. Will open a PR. You can confirm it's training on non-multi-GPU by adding the following to that warning:
```python
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
+ f'State: {training_args.distributed_state}'
)
```
Which will print the accelerator state which has:
```
Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
```
Like we expect.<|||||>All the examples are updated in #24956 <|||||>Perfect! Thanks a lot for the clarification :+1: |
transformers | 24,923 | open | 🌐 [i18n-KO] Translated `perf_train_cpu_many.md` to Korean | # What does this PR do?
Translated the `perf_train_cpu_many.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
May you please review this PR? @nuatmochoi, @bolizabeth, @hyunhp, @heuristicwave, @mjk0618, @jungnerd
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo | 07-19-2023 14:13:30 | 07-19-2023 14:13:30 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24923). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,922 | closed | Deprecate unused OpenLlama architecture | Hello!
# What does this PR do?
1. Deprecate `OpenLlama` following #24787.
2. Add a disclaimer pointing users to `LLaMA` for the [OpenLLaMA models](https://huggingface.co/models?search=openllama).
3. Resolve a typo in a warning in `check_repo.py`
4. Read modeling files with `encoding="utf8"` in `check_config_attributes_being_used`.
If preferred, I can revert 3 and 4 and cherry-pick them into a separate PR.
Follow-up of #24913. This is a considerably better solution - I feel a bit embarrassed I even considered the other one.
## Details
I've followed the steps that @sgugger seems to have taken from #24787 to move OpenLlama into deprecation. This involves moving the main code, adapting the `__init__` files, removing the tests, and updating the documentation with a disclaimer.
Feel free to let me know if you'd rather keep the model non-deprecated for the time being, and then I'll revert to only the addition of the disclaimer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Discussed in #24913.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
- Tom Aarsen
| 07-19-2023 13:59:41 | 07-19-2023 13:59:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> FAILED tests/test_modeling_utils.py::ModelPushToHubTester::test_push_to_hub - huggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create
Test failure seems unrelated, I can push an empty commit or let you rerun them.<|||||>> Re-launched the tests to make sure it's just a fluke. Pinging the original author @s-JoL for information and to see if there is any plan on open-sourcing another checkpoint for this architecture?
Due to various reasons, the previous open-source project has been shut down. Considering that there are more and more open-source models available now (including commercially available ones like Llama2), I believe that open-sourcing another similar model won't add much value to the community. Therefore, I think it's reasonable to mark this model as deprecated. However, I understand that some users are training with the Llama model that includes XFormers. Is it possible to add an optional XFormers acceleration in the Llama model to facilitate these users?<|||||>We can see how to add an optional XFormers acceleration in Llama in other PRs, yes. |
transformers | 24,921 | open | error when load model | ### System Info
2023-07-19 13:47:33.834732: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
num_labels = 2)
model.to(device)
model.load_state_dict(torch.load("my_model.pt", map_location = torch.device("cpu")))
### Expected behavior
When running the code, it shows a runtime error: _IncompatibleKeys(missing_keys=[], unexpected_keys=['bert.embeddings.position_ids']). The error did not occur yesterday, but appears now. | 07-19-2023 13:49:23 | 07-19-2023 13:49:23 | You will need to drop that key from your state dict, or use the `save_pretrained` and `from_pretrained` methods of the library.<|||||>I'm having the same issue, except my model is a custom model that includes a transformer model as a part of it, so I need to use `torch.load_state_dict` to load the model.
Here's some code to reproduce the issue:
```python
import torch.nn as nn
from transformers import XLMRobertaConfig, XLMRobertaModel
class MyCustomModel(nn.Module):
def __init__(self):
super().__init__()
self.roberta = XLMRobertaModel(
XLMRobertaConfig.from_pretrained("Unbabel/xlm-roberta-comet-small")
)
self.my_thing = nn.Linear(384, 1) # let's pretend this is way more complicated
def main():
model = MyCustomModel()
print(sorted(list(model.state_dict().keys()))[:10])
if __name__ == "__main__":
main()
```
When running the script with transformers==4.30.0 (which is what I used for training the model) I get the following output:
```
['my_thing.bias', 'my_thing.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.position_ids', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight']
```
Note the presence of `roberta.embeddings.position_ids`.
Now when I try to load the model using transformers==4.31.0, it gives me a key error, because the position IDs key seems to have been removed in the new version. Running the same code gives:
```
['my_thing.bias', 'my_thing.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.dense.bias']
```
The position IDs key is not there anymore.
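For now I'm working around it by dropping the stale key before loading (workaround sketch; the checkpoint path is hypothetical):
```python
import torch

state_dict = torch.load("my_checkpoint.pt", map_location="cpu")  # saved with transformers==4.30.0
state_dict.pop("roberta.embeddings.position_ids", None)  # key no longer present in 4.31.0 models
model = MyCustomModel()
model.load_state_dict(state_dict)
# (load_state_dict(..., strict=False) is another option, at the cost of silencing other mismatches)
```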
Shouldn't this at least be documented as a breaking change in the release notes?<|||||>Same issue here<|||||>You should use `save_pretrained` and `from_pretrained` to save/load your models. There are no breaking changes with those methods; we do not guarantee the same if you choose to save/load your model on your own across different versions of Transformers. |
transformers | 24,920 | closed | 🌐 [i18n-KO] Translated `perf_infer_cpu.md` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `perf_infer_cpu.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please only reveal the comment below, asking OSSCA team members for a review, once all of the checks above are complete! -->
<!-- Team OSSCA, may you please review this PR? -->
@wonhyeongseo, @keonju2, @harheem, @HongB1, @junejae
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the comment below, asking Hugging Face staff for a review, once the review with your team members is finished! -->
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo | 07-19-2023 13:44:07 | 07-19-2023 13:44:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,919 | closed | `VisionEncoderDecoderModel.generate()` rejects argument `interpolate_pos_encoding=True` | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
- Python version: 3.9.7
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When trying to use a different size than the `VisionEncoderDecoderModel` was trained with, you are supposed to pass `interpolate_pos_encoding=True` to the forward method. That in itself works fine and hence the training does too, but when using the `generate()` method, it rejects `interpolate_pos_encoding` during the validation of the model kwargs.
For example using TrOCR, which was trained on an image size of 384x384, and changing the input size of the images to 128x768:
```py
import torch
from transformers import VisionEncoderDecoderModel
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
images = torch.rand((1, 3, 128, 768))
# interpolate_pos_encoding=True should be passed to the forward method in order to use
# different image sizes, but the validation of .generate() rejects it.
model.generate(pixel_values=images, interpolate_pos_encoding=True)
```
Produces the error:
```
Traceback (most recent call last):
  File "/home/mjungo/ocr/summer-school/repro_interpolate_pos_encoding.py", line 10, in <module>
    model.generate(pixel_values=images, interpolate_pos_encoding=True)
  File "/home/mjungo/miniconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/mjungo/miniconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1271, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/home/mjungo/miniconda3/lib/python3.9/site-packages/transformers/generation/utils.py", line 1144, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['interpolate_pos_encoding'] (note: typos in the
generate arguments will also show up in this list)
```
### Expected behavior
It was expected to accept `interpolate_pos_encoding=True` and pass it to the forward method.
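As a temporary workaround (a hack that simply bypasses the failing check, not a real fix), the validation can be disabled on the model instance:
```python
# monkeypatch sketch, written against the transformers 4.30.2 internals shown in the traceback above
model._validate_model_kwargs = lambda *args, **kwargs: None
model.generate(pixel_values=images, interpolate_pos_encoding=True)
```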
Note: When removing the verification check, it works as expected, so this is purely an issue of the validation being incorrect, i.e. missing the additional arguments that are allowed for this kind of model. | 07-19-2023 12:40:57 | 07-19-2023 12:40:57 | Hi @jungomi
Thank you for raising this issue! Looks valid, I will take a look quickly.<|||||>Hi @gante
It looks like we need to update `_validate_model_kwargs` in some way, so some extra but necessary (for correctness) arguments could be allowed even if not in `prepare_inputs_for_generation` (here of the generic class `VisionEncoderDecoder`).
Let me try a fix.<|||||>@jungomi
The PR is merged. You can try the latest commit on the `main` branch if you would like to. |
transformers | 24,918 | open | Inconsistent results between LlamaTokenizer and LlamaTokenizerFast | ### System Info
- `transformers` version: 4.30.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import LlamaTokenizerFast, LlamaTokenizer
fast_tokenizer = LlamaTokenizerFast.from_pretrained('openlm-research/open_llama_7b')
slow_tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_7b')
input_text = 'tasks'
slow_tokenizer.encode(input_text) # [1, 8931]
fast_tokenizer.encode(input_text) # [1, 31822, 31824, 5577]
```
### Expected behavior
. | 07-19-2023 12:21:47 | 07-19-2023 12:21:47 | Hey! Sorry but I cannot reproduce your issue:
```python
In [2]: slow_tokenizer.encode(input_text)
Out[2]: [1, 8931]
In [3]: fast_tokenizer.encode(input_text)
Out[3]: [1, 8931]
```
Make sure you have the latest version of tokenizers maybe? |
transformers | 24,917 | closed | 🌐 [i18n-KO] Translated `perf_infer_gpu_one.md` to Korean | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean"! -->
# What does this PR do?
Translated the `perf_infer_gpu_one.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [ ] Check for missing / redundant translations
- [ ] Grammar Check
- [ ] Review or Add new terms to glossary
- [ ] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Once all of the checks above are complete, please mention the team members you would like to request a review from below! -->
May you please review this PR? @sronger @TaeYupNoh @HanNayeoniee @sim-so
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please only reveal the comment below, asking Hugging Face staff for a review, once the review with your team members is finished! -->
<!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo --> | 07-19-2023 12:02:13 | 07-19-2023 12:02:13 | |
transformers | 24,916 | closed | Fix `main_input_name` in `src/transformers/keras_callbacks.py` | # What does this PR do?
Fix #24872 | 07-19-2023 11:19:41 | 07-19-2023 11:19:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Just for you to have a final comment when you are back.<|||||>Also LGTM, thanks for catching the bug! |
transformers | 24,915 | open | ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation | ### Model description
ViTPose is used in 2D human pose estimation, a subset of the keypoint detection task #24044
It provides a simple baseline for vision transformer-based human pose estimation. It utilises a pretrained vision transformer backbone to extract features and a simple decoder head to process the extracted features. Despite no elaborate designs in the model, ViTPose obtained state-of-the-art (SOTA) performance of 80.9 AP on the MS COCO Keypoint test-dev set.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Code and weights: https://github.com/ViTAE-Transformer/ViTPose
Paper: https://arxiv.org/abs/2204.12484
@Annbless | 07-19-2023 11:14:22 | 07-19-2023 11:14:22 | Glad you got something different to work on!<|||||>Hi, @amyeroberts, I don't know if you are working on this, but if not I would be more than happy to take it up.<|||||>Oh, this is the issue page, not the PR page!<|||||>@shauray8 You're very welcome to take this up! :)
This model presents a new task for the library, so there might be some iterations and discussions on what the inputs and outputs should look like. The model translation should be fairly straightforward though, so I'd suggest starting with a PR that implements that and then on the PR we can figure out what works best. |
transformers | 24,914 | closed | Fix `test_model_parallelism` for `FalconModel` | # What does this PR do?
Just a device issue (when running on multi-gpu env.) | 07-19-2023 10:32:04 | 07-19-2023 10:32:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,913 | closed | Remove unsupported and confusing "OpenLlama" architecture | This reverts commit c2c99dc7ef5edab8f7674a1eb00cf6ac6996fd0f from #22795.
Hello!
# What does this PR do?
Removes [Open-Llama](https://huggingface.co/docs/transformers/model_doc/open-llama), a confusingly named and unused model architecture whose documentation links all return 404 errors.
## Motivation
* The [Open-Llama-V1 model](https://huggingface.co/s-JoL/Open-Llama-V1) and [Open-Llama GitHub repository](https://github.com/s-JoL/Open-Llama) have both been removed.
* There is only [one issue](https://github.com/huggingface/transformers/issues?q=OpenLlamaForCausalLM) on the transformers repo that refers to `OpenLlamaForCausalLM`.
* There is only [one model](https://huggingface.co/search/full-text?q=OpenLlamaForCausalLM) on the Hub that mentions `OpenLlamaForCausalLM`.
* All of the ~1900 other Llama models use the standard `LlamaForCausalLM` instead.
* Users on the HF Discord have been confused by the [Open-Llama](https://huggingface.co/docs/transformers/model_doc/open-llama) documentation page.
If you are opposed on principle, i.e. you believe that no architecture should ever be removed, then we may want to at least update the documentation. However, I urge you to remove this confusing architecture.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Reviewers of the original PR: @ArthurZucker @sgugger
cc: @s-JoL
- Tom Aarsen | 07-19-2023 10:16:40 | 07-19-2023 10:16:40 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24913). All of your documentation changes will be reflected on that endpoint.<|||||>No, we cannot remove the architecture entirely; that would be a breaking change in the library, which would be unprecedented and against our philosophy/commitment to backward compatibility.
Even removing it entirely from the docs seems a bit extreme. If users are confused about which class to use to load a model, they should use the Auto model API to let it select the right class for them. We can add a disclaimer on the Open-Llama doc page or move it to the deprecated models, but that's pretty much the extent of what we can do.
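For illustration, a minimal sketch of the Auto API route (the checkpoint name below is just an example of an OpenLLaMA checkpoint on the Hub that is configured with the plain Llama architecture):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AutoModelForCausalLM reads the checkpoint's config.json and resolves the
# concrete class (here LlamaForCausalLM) without the user having to know it.
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")
model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
print(type(model).__name__)  # expected: LlamaForCausalLM
```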
cc @LysandreJik <|||||>I had such a fear - I understand your reasoning completely. Whichever way we go, we end up with a suboptimal situation. I'll let the user on Discord know that it won't be changed.
I could add a disclaimer that the OpenLlama models do not use the OpenLlama architecture, but simply the Llama one.<|||||>> I could add a disclaimer that the OpenLlama models do not use the OpenLlama architecture, but simply the Llama one.
Yes, that would be great! |
transformers | 24,912 | open | blip2 always decode eos_token first | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I just ran the code provided by the example given in the docs (reproduced below for reference):
https://huggingface.co/docs/transformers/main/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example
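For reference, the snippet from that example looks roughly like this (dtype/device handling omitted; the exact code is in the linked docs):

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
```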
### Expected behavior
I got exactly the same caption for the image:
generated_text: 'two cats laying on a couch'
But it's weird that the generated ids look like:
generated_ids: tensor([[ 2, 7109, 10017, 11963, 15, 10, 16433, 50118]])
where the leading `2` means a `</s>` is generated. Is this expected or something wrong with the decoding?
@amyeroberts @ArthurZucker do you guys have any ideas on it? | 07-19-2023 09:18:42 | 07-19-2023 09:18:42 | cc @younesbelkada <|||||>Hi @TobiasLee
Thanks for the issue. I think this is expected, the way Blip2 has been trained always prepends the BOS (Beginning of Sentence) token to the input text due to the underlying tokenizer they use. If conditional text is added to the input, the tokenizer should normally take care of adding that token to the beginning of the sentence.
See this line for reference: https://github.com/huggingface/transformers/blob/e75cb0cb3c5fef887abea6f099252e59a659af9d/src/transformers/models/blip_2/modeling_blip_2.py#L1837 / that manually adds the BOS token if no text is passed
And this line from the original code: https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_opt.py#L219, which uses the OPT tokenizer that adds the BOS token to any input by default, according to the tokenizer_config file: https://huggingface.co/facebook/opt-350m/raw/main/tokenizer_config.json
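To make that concrete, a quick tokenizer-side check (assuming `facebook/opt-2.7b`, the language model behind `Salesforce/blip2-opt-2.7b`; the ids are the ones reported in this issue):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
print(tok.bos_token_id, repr(tok.bos_token))  # OPT reuses '</s>' (id 2) as its BOS token

ids = [2, 7109, 10017, 11963, 15, 10, 16433, 50118]
print(tok.decode(ids))                            # the leading '</s>' shows up in the raw decode
print(tok.decode(ids, skip_special_tokens=True))  # roughly 'two cats laying on a couch', as reported above
```

So the leading `2` is the prepended BOS token rather than a stray EOS, and `skip_special_tokens=True` drops it when decoding.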
transformers | 24,911 | closed | 🌏 [i18n-KO] Translated `perf_train_cpu.md` to Korean | <!-- Please title the PR "🌏 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `perf_train_cpu.md` file of the documentation to Korean ๐
Thank you in advance for your review!
Part of https://github.com/huggingface/transformers/issues/20179
<!-- ๋ฉ์ธ ์ด์์ ๊ธฐ๋ก์ด ๋จ์์! ๊ฐ์ง์ฐ๊ตฌ์ ๋ฆฌํฌ๋ฅผ ์ฌ์ฉํด ์ฐ์ตํ์ค๋๋ ์ ๊ฑฐํด์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค! :smile: -->
## Before reviewing
- [x] Check for missing / redundant translations (no missing or duplicated translations)
- [x] Grammar Check (spelling check)
- [x] Review or Add new terms to glossary (check and add terminology)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (confirm it renders correctly in the live preview)
## Who can review? (Initial)
<!-- 1. Only after all of the checks above are complete, reveal the comment below to request a review from the PseudoLab team members! -->
<!-- Team PseudoLab, may you please review this PR? -->
@kihoon71, @0525hhgus, @54data, @Sunmin0520, @seank021, @augustinLib
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only after the review with the PseudoLab team members is finished, reveal the comment below that requests a review from the Hugging Face staff! -->
May you please review this PR?
@sgugger, @ArthurZucker, @eunseojo | 07-19-2023 08:54:40 | 07-19-2023 08:54:40 | ๋ฆฌ๋ทฐ ์ฌํญ ์์ต๋๋ค!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>์๊ณ ๋ง์ผ์
จ์ต๋๋ค!!<|||||>์ ๋ @0525hhgus ๋์ ๋๊ธ ๋ถ๋ถ ์ธ์๋ ๋ณ๋์ ๋ฆฌ๋ทฐ ์ฌํญ ์์ต๋๋ค :) |
transformers | 24,910 | closed | labels should not be same as input_ids for casual language model | ### System Info
In the example for causal language model pretraining (examples/pytorch/language-modeling/run_clm.py, line 490),
the labels in `result` should not be the same as `input_ids`; perhaps they should be built as follows:
```python
input_ids = result["input_ids"]
result["input_ids"] = input_ids[:, :-1].contiguous()
result["labels"] = input_ids[:, :-1].contiguous()
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
examples/pytorch/language-modeling/run_clm.py: line 490
### Expected behavior
update | 07-19-2023 08:10:37 | 07-19-2023 08:10:37 | @thomas010 did you mean to provide the same code for `result['input_ids']` and `result['labels']`?
I don't know if this helps you, but in standard causal language modeling, the input sequence and labels _should_ be the same. Suppose the sequence is <token_A>, <token_B>, <token_C>. In step 0, we want to predict <token_A>. In Step 1, we want to predict <token_B> from <token_A>. In step 2, we want to predict <token_C> from <token_A>, <token_B>. `transformers` knows this is what we want when we provide the same sequence as `input_ids` and `labels`.<|||||>i got it, thank you |
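For completeness, the shift described above happens inside the model's forward pass; condensed, it looks roughly like this (variable names are illustrative, not the exact `run_clm.py` code):

```python
import torch
from torch.nn import CrossEntropyLoss

def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size); labels: (batch, seq_len), identical to input_ids
    shift_logits = logits[..., :-1, :].contiguous()  # drop the last position (nothing to predict after it)
    shift_labels = labels[..., 1:].contiguous()      # drop the first label (it has no preceding context)
    loss_fct = CrossEntropyLoss()
    return loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```

So passing `labels` equal to `input_ids` is exactly what lets the model perform the one-position shift itself.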
transformers | 24,909 | closed | Fix minor llama2.md model doc typos | Fixes typos in the llama2 model doc | 07-19-2023 08:08:51 | 07-19-2023 08:08:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 24,908 | closed | [`Llama2`] Add support for Llama 2: Fix convert_llama_weights_to_hf.โฆ | โฆpy support 7Bf
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-19-2023 07:56:40 | 07-19-2023 07:56:40 | What does `f` mean in model size? Does it mean `fine-tuned`?<|||||>I understand that "f" should be modified to "-CHAT" to support the CHAT
model. I have made further modifications to the relevant code, replacing
all instances of "f" with "-CHAT". Could you please reload the same commit
for me?
<|||||>cc @ArthurZucker <|||||>I also think the same way. At first, I couldn't understand 7Bf, but now it seems that these things are useless. I just deleted 7Bf, 13Bf, and 70Bf.<|||||>@sgugger I have finished the modifications, ready for merging.<|||||>No no, why would you remove the `70Bf`? All the `f` are for the finetuned/chat checkpoints |
transformers | 24,907 | closed | Fixed issue where ACCELERATE_USE_CPU="False" results in bool(True) | - This results in cpu mode on Apple Silicon mps
# What does this PR do?
Fixes a bug which prevents `device("mps")` from being used if `ACCELERATE_USE_CPU` is set to any non-empty string, including `"False"`.
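A minimal sketch of the pitfall and one possible safer parse (illustrative only, not the exact patch in this PR):

```python
import os

os.environ["ACCELERATE_USE_CPU"] = "False"

# Buggy pattern: bool() of any non-empty string is True
print(bool(os.environ["ACCELERATE_USE_CPU"]))  # True, even though the value reads "False"

# Safer pattern: compare against an explicit set of truthy strings
use_cpu = os.environ.get("ACCELERATE_USE_CPU", "false").lower() in ("1", "true", "yes")
print(use_cpu)  # False
```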
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 07-19-2023 06:55:28 | 07-19-2023 06:55:28 | The docs for this PR live [here](https://huggingface.co/docs/transformers/pr_24907). All of your documentation changes will be reflected on that endpoint. |
transformers | 24,906 | closed | [`Llama2`] replace `self.pretraining_tp` with `self.config.pretraining_tp` | # What does this PR do?
Llama-2 (and, in the past, Bloom) introduced a new attribute in the config file, `pretraining_tp`, to mimic the behaviour of the original model at inference. When it is set above 1, some layers manually simulate the tensor-parallel (TP) computation by slicing their weights and concatenating the partial outputs, see for example: https://github.com/huggingface/transformers/blob/476be08c4aa96f8c1cae4200d2677bbe8f12cf80/src/transformers/models/llama/modeling_llama.py#L291
![Screenshot from 2022-04-22 15-55-48](https://user-images.githubusercontent.com/49240599/164728838-3f78585b-1018-4366-a499-95133fdeaa89.png)
In fact this can lead to unexpected behaviour for users, especially with the PEFT library (related: https://github.com/huggingface/peft/issues/726): currently it is not possible to finetune Llama-2 models with `pretraining_tp > 1`:
```bash
File "/home/younes_huggingface_co/code/transformers/src/transformers/models/llama/modeling_llama.py", line 209, in forward
gate_proj = torch.cat([F.linear(x, gate_proj_slices[i]) for i in range(self.pretraining_tp)], dim=-1)
File "/home/younes_huggingface_co/code/transformers/src/transformers/models/llama/modeling_llama.py", line 209, in <listcomp>
gate_proj = torch.cat([F.linear(x, gate_proj_slices[i]) for i in range(self.pretraining_tp)], dim=-1)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2048x5120 and 1x6912)
```
I would argue that these slight numerical differences are OK to have in the context of training, thus we should give users the possibility to disable this behaviour, at their own risk. This PR fixes this by proposing a new method, `disable_pretraining_tp`, to disable that behaviour. I also added a method to restore the TP behaviour in case users want to revert to it after training.
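A minimal usage sketch of the two options discussed here (the checkpoint name is only an example, and `disable_pretraining_tp` is the method proposed in this PR description, so it may not exist under that name in the merged version):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Workaround discussed in the comments below: force the config value before loading
config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
config.pretraining_tp = 1  # forward() then uses plain nn.Linear calls instead of the sliced matmuls
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", config=config)

# Proposed one-liner from this PR (hypothetical API, see the description above):
# model.disable_pretraining_tp()
# ... fine-tune (e.g. with PEFT/LoRA) ...
# model.enable_pretraining_tp()  # hypothetical: optionally restore the TP-style slicing afterwards
```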
On par with: https://github.com/huggingface/peft/pull/728
cc @ArthurZucker @sgugger @pacman100 @BenjaminBossan | 07-19-2023 06:51:36 | 07-19-2023 06:51:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm curious, if we just set `pretraining_tp` to 1 in `config.json`, does it mean we disable it? Because when I see the model, there is some `if-else condition` checking the value of `config.pretraining_tp > 1`.<|||||>@fahadh4ilyas the variable is stored as a private attribute so even if you modify the config it won't work sadly, hence the PR
EDIT: yes, if you manually modify the config file on the Hub or locally it will work, but this might break existing inference setups (if we modify the official repo)<|||||>> @fahadh4ilyas the variable is stored as a private attribute so even if you modify the config it won't work sadly, hence the PR
>
> EDIT: yes, if you manually modify the config file on the Hub or locally it will work, but this might break existing inference setups (if we modify the official repo)
Well, at least if you want to fine-tune a model based on Llama 2, you could just change the config and use that config value, right?<|||||>Yes, this is definitely possible and correct, but in terms of UX I personally would prefer to call a single line, `model.disable_tp()`, rather than manually going over the config, changing it and using that version instead. End-users and people that are not familiar with TP who want to run training of Llama2 out of the box (for example using the PEFT library, as linked in the PEFT PR) might get very confused by looking at the error, and this PR combined with the PEFT PR solves the issue. Let's see what Arthur and Sylvain will say, happy to close the PR if they think it is not relevant!<|||||>> Yes, this is definitely possible and correct, but in terms of UX I personally would prefer to call a single line, `model.disable_tp()`, rather than manually going over the config, changing it and using that version instead. End-users and people that are not familiar with TP who want to run training of Llama2 out of the box (for example using the PEFT library, as linked in the PEFT PR) might get very confused by looking at the error, and this PR combined with the PEFT PR solves the issue. Let's see what Arthur and Sylvain will say, happy to close the PR if they think it is not relevant!
That's fair enough. I actually also didn't understand at first what `pretraining_tp` is until I saw this pull request. Changing the config manually might be possible for people who understand the architecture, but people who just want to use the model won't bother changing it.
I'm wondering, does having `pretraining_tp` make any difference? If the use case is parallelism, what I saw in the script is that it only for-loops over each part of the weight. I don't see any parallel method there. |
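To illustrate the question asked just above: with `pretraining_tp > 1` the layer splits the weight along the output dimension and concatenates the partial results, which is numerically (almost) identical to a single matmul, so it mainly changes numerics and kernel launches rather than adding real parallelism. A toy check, with made-up shapes for illustration only:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, 512)      # stand-in for (batch, seq, hidden)
w = torch.randn(1024, 512)      # stand-in for e.g. gate_proj.weight
pretraining_tp = 4

full = F.linear(x, w)
slices = w.split(w.shape[0] // pretraining_tp, dim=0)
simulated = torch.cat([F.linear(x, s) for s in slices], dim=-1)

print(torch.allclose(full, simulated, atol=1e-5))  # True, up to floating-point error
```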