Maximax67 committed on
Commit
56f7efa
1 Parent(s): 83bc921

Upload all files

README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: unlicense
- ---
+ # English Valid Words
+
+ This repository contains CSV files with valid English words along with their frequency count, stem, and stem valid probability.
+
+ Dataset GitHub link: https://github.com/Maximax67/English-Valid-Words
+
+ ## Files included
+
+ 1. **valid_words_sorted_alphabetically.csv**:
+ * N: Counter for each word entry.
+ * Word: The English word itself.
+ * Frequency count: The number of occurrences of the word in the 1-grams dataset.
+ * Stem: The stem of the word.
+ * Stem valid probability: Probability indicating the validity of the stem within the English language.
+
+ 2. **valid_words_sorted_by_frequency.csv**:
+ * Rank: The ranking of the word based on its frequency count.
+ * Word: The English word.
+ * Frequency count: The number of occurrences of the word in the 1-grams dataset.
+ * Stem: The stem of the word.
+ * Stem valid probability: Probability indicating the validity of the stem within the English language.
+
+ 3. **valid_words.txt**: A plain-text file containing valid words, one per line, for convenient reading and usage.
+
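The CSV files can be read with Python's standard `csv` module. The sketch below parses a hypothetical two-row sample with the same columns as valid_words_sorted_by_frequency.csv (the sample values are illustrative, not taken from the real file):

```python
import csv
import io

# Hypothetical two-row sample mirroring the columns of
# valid_words_sorted_by_frequency.csv; in practice, open the real file instead.
sample = io.StringIO(
    "Rank,Word,Frequency count,Stem,Stem valid probability\n"
    "1,example,1000000,exampl,0.9\n"
    "2,words,500000,word,1.0\n"
)
rows = list(csv.DictReader(sample))

# Build a word -> frequency lookup from the parsed rows.
freq = {row["Word"]: int(row["Frequency count"]) for row in rows}
```

For the real file, replace the `io.StringIO` sample with `open("valid_words_sorted_by_frequency.csv", newline="")`.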
+ ## Data Collection Process
+
+ To curate a comprehensive dataset of valid English words, the following steps were taken:
+
+ 1. **Initial Dataset**: While searching for a list of valid English words for a personal project, I found [this GitHub repo](https://github.com/dwyl/english-words). However, a filtering process was necessary to refine the dataset to meet my project's specifications.
+
+ 2. **Words Filtering**: I wrote the Words-filter.ipynb notebook to remove words containing non-alphabetical characters and words longer than 25 characters.
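The filtering rule can be sketched as follows (the actual logic lives in Words-filter.ipynb; this is a minimal stand-alone approximation):

```python
import re

# Keep only purely alphabetical words of at most 25 characters.
ALPHA = re.compile(r"[A-Za-z]+")

def keep(word):
    return len(word) <= 25 and ALPHA.fullmatch(word) is not None

# Hypothetical sample input: apostrophes and over-long strings are rejected.
words = ["apple", "don't", "x" * 26, "Zebra"]
filtered = [w for w in words if keep(w)]
```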
+
+ 3. **Frequency Data Collection**: To enrich the dataset with frequency information, I used the 1-grams dataset provided by Google. Words with a frequency count below 10,000 were removed.
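Aggregating per-word totals from the 1-grams files might look like the sketch below. It assumes the tab-separated layout `word  year  match_count  volume_count` used by the Google Books Ngram exports; the in-memory sample lines are hypothetical stand-ins for the real files:

```python
from collections import Counter

# Hypothetical sample lines in the 1-grams format:
# word<TAB>year<TAB>match_count<TAB>volume_count
sample_lines = [
    "apple\t1999\t6000\t120",
    "apple\t2000\t5000\t130",
    "zyx\t2000\t42\t3",
]

# Sum match counts across years for each word.
totals = Counter()
for line in sample_lines:
    word, _year, match_count, _volumes = line.split("\t")
    totals[word.lower()] += int(match_count)

# Drop words whose total frequency is below 10,000.
frequent = {w: c for w, c in totals.items() if c >= 10_000}
```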
+
+ 4. **Stemming and Probability Calculation**: I used NLTK's Porter, Lancaster, and Snowball stemmers, along with a custom prefix stemmer, and kept for each word the candidate stem with the highest frequency among all stemmers that also existed in the dataset. Additionally, the probability of stem validity was calculated from the frequencies of the original word and its stem. For further insights into the data curation process, please refer to the Valid-Word-List-Maker.ipynb file.
+
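The stem-selection rule described in step 4 can be sketched as below. The candidate stems stand in for the outputs of the Porter, Lancaster, Snowball, and custom prefix stemmers, and the frequency table is hypothetical; see Valid-Word-List-Maker.ipynb for the actual implementation:

```python
# Hypothetical word frequencies and dataset membership.
frequencies = {"running": 80_000, "run": 500_000, "runn": 0}
valid_words = {"running", "run"}

def best_stem(candidates):
    # Among candidate stems that exist in the dataset,
    # pick the one with the highest frequency.
    in_dataset = [s for s in candidates if s in valid_words]
    return max(in_dataset, key=frequencies.get, default=None)

# Candidates as different stemmers might produce them for "running".
stem = best_stem(["run", "runn", "running"])
```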
+ ## License
+ This repository is released under the Unlicense. You are free to use, modify, and distribute its contents for any purpose without restriction.
+
+ ## Acknowledgments
+ I would like to acknowledge the contributions of the following resources:
+
+ - [Word list by infochimps (archived)](https://web.archive.org/web/20131118073324/https://www.infochimps.com/datasets/word-list-350000-simple-english-words-excel-readable)
+ - [English words GitHub repo by dwyl](https://github.com/dwyl/english-words)
+ - [The Google Books Ngram Viewer (used 1-grams dataset, version 20200217)](https://books.google.com/ngrams/)
+ - [NLTK (Natural Language Toolkit)](https://www.nltk.org/)
+ - [WordNet](https://wordnet.princeton.edu/)
Valid-Word-List-Maker.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Words-filter.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
datasets/1-grams/readme.txt ADDED
@@ -0,0 +1,7 @@
+ To utilize the code and generate your customized word list, follow these steps to download the required dataset files:
+
+ 1. Visit the 1-grams dataset files webpage: https://storage.googleapis.com/books/ngrams/books/datasetsv3.html
+
+ 2. Download the dataset files. Please note that the first six files contain invalid words, as they include digits and other non-alphabetical characters.
+
+ 3. Save the downloaded files in the current folder as .gz archives. There's no need to unpack them.
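Reading a downloaded archive without unpacking it can be done with Python's `gzip` module in text mode; here a hypothetical in-memory archive stands in for a real .gz file:

```python
import gzip
import io

# Hypothetical 1-gram line compressed into an in-memory .gz archive;
# for a real download, pass the file path to gzip.open instead.
payload = "apple\t2000\t12000\t300\n".encode("utf-8")
archive = io.BytesIO(gzip.compress(payload))

# "rt" opens the compressed stream in text mode, decompressing on the fly.
with gzip.open(archive, "rt", encoding="utf-8") as fh:
    first = fh.readline().rstrip("\n").split("\t")
```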
datasets/filtered_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
datasets/parsed_words_data.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61359f7563af6026c78d543fd906887cec4076d6dc22fc216bc4250d20f51b5b
+ size 3706483
datasets/words.txt ADDED
The diff for this file is too large to render. See raw diff
 
valid_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
valid_words_sorted_alphabetically.csv ADDED
The diff for this file is too large to render. See raw diff
 
valid_words_sorted_by_frequency.csv ADDED
The diff for this file is too large to render. See raw diff