norabelrose committed
Commit 3d77bfc
Parent(s): 714e47b

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -7,7 +7,7 @@ language:
 library_name: transformers
 ---
 
-This is a set of sparse autoencoders (SAEs) trained on the residual stream of [Llama 3.1 8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) using the 10B sample of the [RedPajama v2 corpus](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2), which comes out to roughly 8.5B tokens using the Llama 3 tokenizer. The SAEs are organized by hookpoint, and can be loaded using the EleutherAI [`sae` library](https://github.com/EleutherAI/sae).
+This is a set of sparse autoencoders (SAEs) trained on [Llama 3.1 8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) using the 10B sample of the [RedPajama v2 corpus](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2), which comes out to roughly 8.5B tokens using the Llama 3 tokenizer. The SAEs are organized by hookpoint, and can be loaded using the EleutherAI [`sae` library](https://github.com/EleutherAI/sae).
 
 With the `sae` library installed, you can access an SAE like this:
 ```python
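
The hunk cuts off before the example body, so for reference, loading typically looks like the sketch below. It assumes the `sae` library's `Sae.load_from_hub` and `Sae.load_many` entry points, and the repo id and hookpoint name are placeholders, since the diff does not show the actual values.

```python
# Minimal sketch, assuming the EleutherAI `sae` library's hub-loading API.
# The repo id and hookpoint below are placeholders, not values from the diff.
from sae import Sae

# Load a single SAE for one hookpoint of the model.
sae = Sae.load_from_hub("EleutherAI/sae-llama-3.1-8b-64x", hookpoint="layers.23")

# Or load every SAE in the repo at once, as a dict keyed by hookpoint.
saes = Sae.load_many("EleutherAI/sae-llama-3.1-8b-64x")
print(list(saes.keys()))
```

Because the SAEs are organized by hookpoint, the dict returned by `load_many` lets you pick out the autoencoder for whichever layer you want to inspect.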