jinaai/jina-bert-flash-implementation
Tags: Transformers · bert · custom_code · Inference Endpoints · 🇪🇺 Region: EU
Branch: main · 6 contributors · History: 109 commits
Latest commit: koukandre · feat-add-configs (#18) · b78d159 · verified · 6 months ago
File                   Size      Last commit                                                                    Updated
README.md              1.89 kB   feat: added README                                                             8 months ago
bert_padding.py        9.78 kB   reference the flash attention GitHub                                           8 months ago
block.py               17.4 kB   reference the flash attention GitHub                                           8 months ago
config.json            1.24 kB   feat-add-configs (#18)                                                         6 months ago
configuration_bert.py  5.77 kB   Porting v2 models to flash attention (#15)                                     8 months ago
convert_v2_weights.py  6.1 kB    feat: for converting v2, added lines to save model weights and print config    8 months ago
embedding.py           2.26 kB   clean up embeddings.py (#7)                                                    8 months ago
mha.py                 35.4 kB   fix: handle window_size passed as list                                         6 months ago
mlp.py                 8.05 kB   fix-glu-mlp (#17)                                                              7 months ago
modeling_bert.py       33.4 kB   fix-glu-mlp (#17)                                                              7 months ago
modeling_for_glue.py   10.7 kB   feat: assert return_dict                                                       8 months ago
modeling_lora.py       12.3 kB   fix: use staticmethod istead of classmethod                                    8 months ago