hexsha (stringlengths 40) | size (int64, 6–14.9M) | ext (stringclasses 1) | lang (stringclasses 1) | max_stars_repo_path (stringlengths 6–260) | max_stars_repo_name (stringlengths 6–119) | max_stars_repo_head_hexsha (stringlengths 40–41) | max_stars_repo_licenses (sequence) | max_stars_count (int64, 1–191k, nullable) | max_stars_repo_stars_event_min_datetime (stringlengths 24, nullable) | max_stars_repo_stars_event_max_datetime (stringlengths 24, nullable) | max_issues_repo_path (stringlengths 6–260) | max_issues_repo_name (stringlengths 6–119) | max_issues_repo_head_hexsha (stringlengths 40–41) | max_issues_repo_licenses (sequence) | max_issues_count (int64, 1–67k, nullable) | max_issues_repo_issues_event_min_datetime (stringlengths 24, nullable) | max_issues_repo_issues_event_max_datetime (stringlengths 24, nullable) | max_forks_repo_path (stringlengths 6–260) | max_forks_repo_name (stringlengths 6–119) | max_forks_repo_head_hexsha (stringlengths 40–41) | max_forks_repo_licenses (sequence) | max_forks_count (int64, 1–105k, nullable) | max_forks_repo_forks_event_min_datetime (stringlengths 24, nullable) | max_forks_repo_forks_event_max_datetime (stringlengths 24, nullable) | avg_line_length (float64, 2–1.04M) | max_line_length (int64, 2–11.2M) | alphanum_fraction (float64, 0–1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d00a1a76f01c6b8640851e7de9465d132e8a079f | 2,369 | ipynb | Jupyter Notebook | face_detection.ipynb | vivek7415/face_detection | 213f10989eaefba1b9f529cfcd232acf2c83d460 | [
"MIT"
] | 1 | 2019-06-26T07:15:44.000Z | 2019-06-26T07:15:44.000Z | face_detection.ipynb | vivek7415/face_detection | 213f10989eaefba1b9f529cfcd232acf2c83d460 | [
"MIT"
] | null | null | null | face_detection.ipynb | vivek7415/face_detection | 213f10989eaefba1b9f529cfcd232acf2c83d460 | [
"MIT"
] | null | null | null | 25.473118 | 84 | 0.528071 | [
[
[
"import cv2",
"_____no_output_____"
],
[
"face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')\neye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')\nsmile_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')",
"_____no_output_____"
],
[
"def detect(gray, frame):\n face = face_cascade.detectMultiScale(gray, 1.3, 5)\n for (x,y,w,h) in face:\n cv2.rectangle(frame, (x,y), (x+w,y+h), (0,0,255), 2)\n roi_gray = gray[y:y+h, x:x+w]\n roi_frame = frame[y:y+h, x:x+w]\n eyes = eye_cascade.detectMultiScale(roi_gray, 1.1,22)\n for (ex,ey,ew,eh) in eyes:\n cv2.rectangle(roi_frame, (ex,ey), (ex+ew,ey+eh), (0,255,0),2)\n smile = smile_cascade.detectMultiScale(roi_gray, 1.7, 22)\n for (sx,sy,sw,sh) in smile:\n cv2.rectangle(roi_frame, (sx,sy), (sx+sw,sy+sh), (255,0,0),2)\n return frame\n\nvideo_capture = cv2.VideoCapture(0)\nwhile True:\n _,frame = video_capture.read()\n gray = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)\n canvas = detect(gray, frame)\n cv2.imshow('Video', canvas)\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\nvideo_capture.release()\ncv2.destroyAllWindows()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
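The row above is a webcam-based Haar-cascade notebook. As a quick reference, here is a hedged single-image sketch of the same detection step; it assumes the cascade XML bundled with OpenCV (exposed via `cv2.data.haarcascades`) and a hypothetical input file `photo.jpg`, rather than the notebook's local XML copies and live video loop.

```python
# Single-image Haar-cascade sketch (assumes OpenCV's bundled cascade files;
# 'photo.jpg' is a hypothetical input path, not from the notebook above).
import cv2

cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread('photo.jpg')                      # BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor=1.3 and minNeighbors=5 mirror the notebook's settings.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite('photo_faces.jpg', img)                # write the result to disk
```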
d00a1eba177661a99c679881017f491fbaecc56d | 68,690 | ipynb | Jupyter Notebook | module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb | ahvblackwelltech/DS-Unit-2-Kaggle-Challenge | 27c78c361261649eb51ba335931e01d2fe7d91bb | [
"MIT"
] | null | null | null | module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb | ahvblackwelltech/DS-Unit-2-Kaggle-Challenge | 27c78c361261649eb51ba335931e01d2fe7d91bb | [
"MIT"
] | null | null | null | module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb | ahvblackwelltech/DS-Unit-2-Kaggle-Challenge | 27c78c361261649eb51ba335931e01d2fe7d91bb | [
"MIT"
] | null | null | null | 106.496124 | 43,258 | 0.800699 | [
[
[
"<a href=\"https://colab.research.google.com/github/ahvblackwelltech/DS-Unit-2-Kaggle-Challenge/blob/master/module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 2*\n\n---",
"_____no_output_____"
],
[
"# Random Forests\n\n## Assignment\n- [ ] Read [“Adopting a Hypothesis-Driven Workflow”](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.\n- [ ] Continue to participate in our Kaggle challenge.\n- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.\n- [ ] Try Ordinal Encoding.\n- [ ] Try a Random Forest Classifier.\n- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)\n- [ ] Commit your notebook to your fork of the GitHub repo.\n\n## Stretch Goals\n\n### Doing\n- [ ] Add your own stretch goal(s) !\n- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.\n- [ ] Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/).\n- [ ] Get and plot your feature importances.\n- [ ] Make visualizations and share on Slack.\n\n### Reading\n\nTop recommendations in _**bold italic:**_\n\n#### Decision Trees\n- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and _**[Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)**_\n- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)\n- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)\n- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)\n- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU)\n\n#### Random Forests\n- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 8: Tree-Based Methods\n- [Coloring with Random Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html)\n- _**[Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)**_\n\n#### Categorical encoding for trees\n- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)\n- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)\n- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_\n- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_\n- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)\n- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html)\n\n#### Imposter Syndrome\n- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)\n- [How to manage impostor 
syndrome in data science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)\n- [\"I am not a real data scientist\"](https://brohrer.github.io/imposter_syndrome.html)\n- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_\n\n\n### More Categorical Encodings\n\n**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings:\n\n- **\"Categorical Encoding\":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.\n- **Numeric Encoding:** Synonymous with Label Encoding, or \"Ordinal\" Encoding with random order. We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).\n- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).\n- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).\n\n\n**2.** The short video \n**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals.\n\nCategory Encoders has multiple implementations of this general concept:\n\n- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)\n- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)\n- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)\n- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)\n- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)\n- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html)\n\nCategory Encoder's mean encoding implementations work for regression problems or binary classification problems. \n\nFor multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:\n\n```python\nencoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) # Both parameters > 1 to avoid overfitting\nX_train_encoded = encoder.fit_transform(X_train, y_train=='functional')\nX_val_encoded = encoder.transform(X_train, y_val=='functional')\n```\n\nFor this reason, mean encoding won't work well within pipelines for multi-class classification problems.\n\n**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification.\n\n```python\n dirty_cat.TargetEncoder(clf_type='multiclass-clf')\n```\nIt also implements an interesting idea called [\"Similarity Encoder\" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090).\n\nHowever, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.\n\n**4. 
[Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals.\n\n_**I hope it’s not too frustrating or confusing that there’s not one “canonical” way to encode categoricals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_",
"_____no_output_____"
],
[
"### Setup\n\nYou can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab (run the code cell below).",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"import pandas as pd\nfrom sklearn.model_selection import train_test_split\n\ntrain = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')\n\ntrain.shape, test.shape",
"_____no_output_____"
]
],
[
[
"# 1. Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
],
[
"# Splitting the train into a train & val\ntrain, val = train_test_split(train, train_size=0.80, test_size=0.02,\n stratify=train['status_group'], random_state=42)",
"_____no_output_____"
],
[
"def wrangle(X):\n X = X.copy()\n X['latitude'] = X['latitude'].replace(-2e-08, 0)\n cols_with_zeros = ['longitude', 'latitude', 'construction_year',\n 'gps_height', 'population']\n\n for col in cols_with_zeros:\n X[col] = X[col].replace(0, np.nan)\n X[col+'_MISSING'] = X[col].isnull()\n\n duplicates = ['quantity_group', 'payment_type']\n X = X.drop(columns=duplicates)\n\n unusable_variance = ['recorded_by', 'id']\n X = X.drop(columns=unusable_variance)\n\n X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)\n\n\n X['year_recorded'] = X['date_recorded'].dt.year\n X['month_recorded'] = X['date_recorded'].dt.month\n X['day_recorded'] = X['date_recorded'].dt.day\n X = X.drop(columns='date_recorded')\n\n X['years'] = X['year_recorded'] - X['construction_year']\n X['years_MISSING'] = X['years'].isnull()\n\n return X\n\ntrain = wrangle(train)\nval = wrangle(val)\ntest = wrangle(test)",
"_____no_output_____"
],
[
"# The target is status_group\ntarget = 'status_group'\ntrain_features = train.drop(columns=[target])\n# Getting list of the numeric features\nnumeric_features = train_features.select_dtypes(include='number').columns.tolist()\ncardinality = train_features.select_dtypes(exclude='number').nunique()\ncategorical_features = cardinality[cardinality <= 50].index.tolist()\n# Combined the lists\nfeatures = numeric_features + categorical_features",
"_____no_output_____"
],
[
"# Arranging the data into X features matrix & y target vector\nX_train = train[features]\ny_train = train[target]\nX_val = val[features]\ny_val = val[target]\nX_test = test[features]",
"_____no_output_____"
],
[
"# 38 features\nprint(X_train.shape, X_val.shape)",
"(47520, 38) (1188, 38)\n"
]
],
[
[
"# 2. Try Ordinal Encoding",
"_____no_output_____"
]
],
[
[
"pip install category_encoders",
"Requirement already satisfied: category_encoders in /usr/local/lib/python3.6/dist-packages (2.1.0)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.17.5)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.22.1)\nRequirement already satisfied: pandas>=0.21.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.25.3)\nRequirement already satisfied: statsmodels>=0.6.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.10.2)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (0.5.1)\nRequirement already satisfied: scipy>=0.19.0 in /usr/local/lib/python3.6/dist-packages (from category_encoders) (1.4.1)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->category_encoders) (0.14.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2018.9)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.1->category_encoders) (2.6.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from patsy>=0.4.1->category_encoders) (1.12.0)\n"
],
[
"%%time\nimport category_encoders as ce\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.pipeline import make_pipeline\n\npipeline = make_pipeline(\n ce.OneHotEncoder(use_cat_names=True),\n SimpleImputer(strategy='mean'),\n RandomForestClassifier(n_jobs=-1, random_state=42)\n)\n\npipeline.fit(X_train, y_train)\n\nprint('Validation Accuracy:', round(pipeline.score(X_val, y_val)))",
"Validation Accuracy: 1.0\nCPU times: user 25.1 s, sys: 258 ms, total: 25.4 s\nWall time: 14.9 s\n"
],
[
"encoder = pipeline.named_steps['onehotencoder']\nencoded_df = encoder.transform(X_train)\nprint('X_train shape after encoding', encoded_df.shape)\n# Now there are 182 features",
"X_train shape after encoding (47520, 182)\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Get feature importances\nrf = pipeline.named_steps['randomforestclassifier']\nimportances = pd.Series(rf.feature_importances_, encoded_df.columns)\n\nn = 25 \nplt.figure(figsize=(10, n/2))\nplt.title(f'Top {n} Features')\nimportances.sort_values()[-n:].plot.barh(color='grey');",
"_____no_output_____"
],
[
"# My Submission CSV\ny_pred = pipeline.predict(X_test)\n\nsubmission = sample_submission.copy()\nsubmission['status_group'] = y_pred\nsubmission.to_csv('ALB_submission_2.csv', index=False)\n\nsubmission.head()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
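The notebook above titles a section "Try Ordinal Encoding" but actually fits a one-hot pipeline. A hedged sketch of the ordinal variant, reusing the `X_train`/`y_train`/`X_val`/`y_val` names built earlier in that notebook, could look like this; tree ensembles usually tolerate arbitrary integer codes well, which is why this swap tends to be cheap.

```python
# Ordinal-encoding variant of the notebook's pipeline (a sketch; assumes the
# X_train, y_train, X_val, y_val frames prepared in the notebook above).
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    ce.OrdinalEncoder(),          # one integer column per categorical feature
    SimpleImputer(strategy='mean'),
    RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42),
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', round(pipeline.score(X_val, y_val), 4))
```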
d00a220deabeb831d871d971c1c52bdde4c198e9 | 107,526 | ipynb | Jupyter Notebook | object-detection/ex12_05_keras_VGG16_transfer.ipynb | farofang/thai-traffic-signs | 9a5624ff89143c33817b94ebd46eff05e03760c3 | [
"MIT"
] | null | null | null | object-detection/ex12_05_keras_VGG16_transfer.ipynb | farofang/thai-traffic-signs | 9a5624ff89143c33817b94ebd46eff05e03760c3 | [
"MIT"
] | null | null | null | object-detection/ex12_05_keras_VGG16_transfer.ipynb | farofang/thai-traffic-signs | 9a5624ff89143c33817b94ebd46eff05e03760c3 | [
"MIT"
] | 1 | 2021-08-17T16:00:04.000Z | 2021-08-17T16:00:04.000Z | 107,526 | 107,526 | 0.839908 | [
[
[
"# List all NVIDIA GPUs as avaialble in this computer (or Colab's session)\n!nvidia-smi -L",
"zsh:1: command not found: nvidia-smi\n"
],
[
"import sys\nprint( f\"Python {sys.version}\\n\" )\n\nimport numpy as np\nprint( f\"NumPy {np.__version__}\" )\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport tensorflow as tf\nprint( f\"TensorFlow {tf.__version__}\" )\nprint( f\"tf.keras.backend.image_data_format() = {tf.keras.backend.image_data_format()}\" )\n\n# Count the number of GPUs as detected by tensorflow\ngpus = tf.config.experimental.list_physical_devices('GPU')\nprint( f\"TensorFlow detected { len(gpus) } GPU(s):\" )\nfor i, gpu in enumerate(gpus):\n print( f\".... GPU No. {i}: Name = {gpu.name} , Type = {gpu.device_type}\" )",
"Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 06:27:18) \n[Clang 11.1.0 ]\n\nNumPy 1.19.5\nInit Plugin\nInit Graph Optimizer\nInit Kernel\nTensorFlow 2.5.0\ntf.keras.backend.image_data_format() = channels_last\nTensorFlow detected 1 GPU(s):\n.... GPU No. 0: Name = /physical_device:GPU:0 , Type = GPU\n"
]
],
[
[
"## 1. Load the pre-trained VGG-16 (only the feature extractor)",
"_____no_output_____"
]
],
[
[
"# Load the ImageNet VGG-16 model, ***excluding*** the latter part regarding the classifier\n# Default of input_shape is 224x224x3 for VGG-16\nimg_w,img_h = 32,32\nvgg_extractor = tf.keras.applications.vgg16.VGG16(weights = \"imagenet\", include_top=False, input_shape = (img_w, img_h, 3))\n\nvgg_extractor.summary()",
"Metal device set to: Apple M1\n\nsystemMemory: 16.00 GB\nmaxCacheSize: 5.33 GB\n\n"
]
],
[
[
"## 2. Extend VGG-16 to match our requirement",
"_____no_output_____"
]
],
[
[
"# Freeze all layers in VGG-16\nfor i,layer in enumerate(vgg_extractor.layers): \n print( f\"Layer {i}: name = {layer.name} , trainable = {layer.trainable} => {False}\" )\n layer.trainable = False # freeze this layer",
"Layer 0: name = input_1 , trainable = True => False\nLayer 1: name = block1_conv1 , trainable = True => False\nLayer 2: name = block1_conv2 , trainable = True => False\nLayer 3: name = block1_pool , trainable = True => False\nLayer 4: name = block2_conv1 , trainable = True => False\nLayer 5: name = block2_conv2 , trainable = True => False\nLayer 6: name = block2_pool , trainable = True => False\nLayer 7: name = block3_conv1 , trainable = True => False\nLayer 8: name = block3_conv2 , trainable = True => False\nLayer 9: name = block3_conv3 , trainable = True => False\nLayer 10: name = block3_pool , trainable = True => False\nLayer 11: name = block4_conv1 , trainable = True => False\nLayer 12: name = block4_conv2 , trainable = True => False\nLayer 13: name = block4_conv3 , trainable = True => False\nLayer 14: name = block4_pool , trainable = True => False\nLayer 15: name = block5_conv1 , trainable = True => False\nLayer 16: name = block5_conv2 , trainable = True => False\nLayer 17: name = block5_conv3 , trainable = True => False\nLayer 18: name = block5_pool , trainable = True => False\n"
],
[
"x = vgg_extractor.output\n\n# Add our custom layer(s) to the end of the existing model \nx = tf.keras.layers.Flatten()(x)\nx = tf.keras.layers.Dense(1024, activation=\"relu\")(x)\nx = tf.keras.layers.Dropout(0.5)(x)\nnew_outputs = tf.keras.layers.Dense(10, activation=\"softmax\")(x)\n\n# Create the final model \nmodel = tf.keras.models.Model(inputs=vgg_extractor.input, outputs=new_outputs)\nmodel.summary()",
"Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 32, 32, 3)] 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 32, 32, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 32, 32, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 16, 16, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 16, 16, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 16, 16, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 8, 8, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 8, 8, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 8, 8, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 8, 8, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 4, 4, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 4, 4, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 4, 4, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 4, 4, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 2, 2, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 1, 1, 512) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 512) 0 \n_________________________________________________________________\ndense (Dense) (None, 1024) 525312 \n_________________________________________________________________\ndropout (Dropout) (None, 1024) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 15,250,250\nTrainable params: 535,562\nNon-trainable params: 14,714,688\n_________________________________________________________________\n"
]
],
[
[
"## 3. Prepare our own dataset",
"_____no_output_____"
]
],
[
[
"# Load CIFAR-10 color image dataset\n(x_train , y_train), (x_test , y_test) = tf.keras.datasets.cifar10.load_data()\n\n# Inspect the dataset\nprint( f\"x_train: type={type(x_train)} dtype={x_train.dtype} shape={x_train.shape} max={x_train.max(axis=None)} min={x_train.min(axis=None)}\" )\nprint( f\"y_train: type={type(y_train)} dtype={y_train.dtype} shape={y_train.shape} max={max(y_train)} min={min(y_train)}\" )\nprint( f\"x_test: type={type(x_test)} dtype={x_test.dtype} shape={x_test.shape} max={x_test.max(axis=None)} min={x_test.min(axis=None)}\" )\nprint( f\"y_test: type={type(y_test)} dtype={y_test.dtype} shape={y_test.shape} max={max(y_test)} min={min(y_test)}\" )",
"x_train: type=<class 'numpy.ndarray'> dtype=uint8 shape=(50000, 32, 32, 3) max=255 min=0\ny_train: type=<class 'numpy.ndarray'> dtype=uint8 shape=(50000, 1) max=[9] min=[0]\nx_test: type=<class 'numpy.ndarray'> dtype=uint8 shape=(10000, 32, 32, 3) max=255 min=0\ny_test: type=<class 'numpy.ndarray'> dtype=uint8 shape=(10000, 1) max=[9] min=[0]\n"
],
[
"y_train[0:5]",
"_____no_output_____"
],
[
"cifar10_labels = [ 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck' ]\n\n# Visualize the first five images in x_train\nplt.figure(figsize=(15,5))\nfor i in range(5):\n plt.subplot(150 + 1 + i).set_title( f\"class no. {y_train[i]}: {cifar10_labels[ int(y_train[i]) ]}\" ) \n plt.imshow( x_train[i] )\n plt.setp( plt.gcf().get_axes(), xticks=[], yticks=[]) # remove all tick marks \nplt.show()",
"_____no_output_____"
],
[
"# Preprocess CIFAR-10 dataset to match VGG-16's requirements\nx_train_vgg = tf.keras.applications.vgg16.preprocess_input(x_train)\nx_test_vgg = tf.keras.applications.vgg16.preprocess_input(x_test)\n\nprint( x_train_vgg.dtype, x_train_vgg.shape, np.min(x_train_vgg), np.max(x_train_vgg) )\nprint( x_test_vgg.dtype, x_test_vgg.shape, np.min(x_test_vgg), np.max(x_test_vgg) )",
"_____no_output_____"
]
],
[
[
"## 4. Transfer learning",
"_____no_output_____"
]
],
[
[
"# Set loss function, optimizer and evaluation metric\nmodel.compile( loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\", metrics=[\"acc\"] ) ",
"_____no_output_____"
],
[
"history = model.fit( x_train_vgg, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(x_test_vgg,y_test) )",
"_____no_output_____"
],
[
"# Summarize history for accuracy\nplt.figure(figsize=(15,5))\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('Train accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.grid()\nplt.show()\n\n# Summarize history for loss\nplt.figure(figsize=(15,5))\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Train loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.grid()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 5. Evaluate and test the model",
"_____no_output_____"
]
],
[
[
"# Evaluate the trained model on the test set\nresults = model.evaluate(x_test_vgg, y_test, batch_size=128)\nprint(\"test loss, test acc:\", results)",
"_____no_output_____"
],
[
"# Test using the model on x_test_vgg[0]\ni = 0\ny_pred = model.predict( x_test_vgg[i].reshape(1,32,32,3) )\n\nplt.imshow( x_test[i] )\nplt.title( f\"x_test[{i}]: predict=[{np.argmax(y_pred)}]{cifar10_labels[np.argmax(y_pred)]}, true={y_test[i]}{cifar10_labels[int(y_test[i])]}\" )\nplt.show()",
"_____no_output_____"
],
[
"# Test using the model on the first 20 images in x_test\nfor i in range(20):\n y_pred = model.predict( x_test_vgg[i].reshape(1,32,32,3) )\n\n plt.imshow( x_test[i] )\n plt.title( f\"x_test[{i}]: predict=[{np.argmax(y_pred)}]{cifar10_labels[np.argmax(y_pred)]}, true={y_test[i]}{cifar10_labels[int(y_test[i])]}\" )\n plt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
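After the feature-extraction training in the row above, a common next step is a short fine-tuning phase: unfreeze the last VGG block and continue at a much lower learning rate. A hedged sketch, reusing `model` and the preprocessed arrays from that notebook (the head layer names `flatten`, `dense`, `dropout`, `dense_1` come from its printed summary):

```python
# Fine-tuning sketch (assumes `model`, `x_train_vgg`, `y_train`, `x_test_vgg`
# and `y_test` from the notebook above; not part of the original run).
import tensorflow as tf

head_layers = {'flatten', 'dense', 'dropout', 'dense_1'}  # names from summary()
for layer in model.layers:
    # keep only the block5 convolutions and the custom head trainable
    layer.trainable = layer.name.startswith('block5_') or layer.name in head_layers

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              metrics=['acc'])
model.fit(x_train_vgg, y_train, batch_size=128, epochs=5,
          validation_data=(x_test_vgg, y_test))
```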
d00a261b63ad16c5dac45894d350f0e895f6bca6 | 8,866 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Python_101-checkpoint.ipynb | abdelrahman-ayad/MiCM-StatsPython-F21 | 5a8b81a06a801536980c09becebc0ff35315ddc6 | [
"MIT"
] | 1 | 2021-09-24T17:34:00.000Z | 2021-09-24T17:34:00.000Z | .ipynb_checkpoints/Python_101-checkpoint.ipynb | abdelrahman-ayad/MiCM-StatsPython-F21 | 5a8b81a06a801536980c09becebc0ff35315ddc6 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Python_101-checkpoint.ipynb | abdelrahman-ayad/MiCM-StatsPython-F21 | 5a8b81a06a801536980c09becebc0ff35315ddc6 | [
"MIT"
] | 1 | 2021-11-08T21:06:34.000Z | 2021-11-08T21:06:34.000Z | 18.130879 | 213 | 0.465599 | [
[
[
"## This notebook serves as a refresher with some basic Python code and functions",
"_____no_output_____"
],
[
"### 1) Define a variable called x, with initial value of 5. multiply by 2 four times and print the value each time",
"_____no_output_____"
]
],
[
[
"x = 5\nfor i in range(4):\n x = x*2\n print(i, x)\n ",
"0 10\n1 20\n2 40\n3 80\n"
]
],
[
[
"### 2) Define a list ",
"_____no_output_____"
]
],
[
[
"p = [9, 4, -5, 0, 10.9]",
"_____no_output_____"
],
[
"# Get length of list\nlen(p)",
"_____no_output_____"
],
[
"# index of a specific element\np.index(0)",
"_____no_output_____"
],
[
"# first element in list\np[0]",
"_____no_output_____"
],
[
"print(sum(p))",
"18.9\n"
]
],
[
[
"### 3) Create a numpy array",
"_____no_output_____"
]
],
[
[
"import numpy as np\na = np.array([5, -19, 30, 10])",
"_____no_output_____"
],
[
"# Get first element\na[0]",
"_____no_output_____"
],
[
"# Get last element\na[-1]",
"_____no_output_____"
],
[
"# Get first 3 elements\nprint(a[0:3])\nprint(a[:3])",
"[ 5 -19 30]\n[ 5 -19 30]\n"
],
[
"# Get size of the array\na.shape",
"_____no_output_____"
]
],
[
[
"### 4) Define a dictionary that stores the age of three students. \n### Mark: 26, Camilla: 23, Jason: 30",
"_____no_output_____"
]
],
[
[
"students = {'Mark':26, \n 'Camilla': 23,\n 'Jason':30}",
"_____no_output_____"
],
[
"students['Mark']",
"_____no_output_____"
],
[
"students.keys()",
"_____no_output_____"
]
],
[
[
"### 5) Create a square function",
"_____no_output_____"
]
],
[
[
"def square_number(x):\n x2 = x**2\n return x2\nx_squared = square_number(5)\nprint(x_squared)",
"25\n"
]
],
[
[
"### 6) List comprehension",
"_____no_output_____"
]
],
[
[
"# add 2 to every element in the numpy array\nnumber_array = np.arange(10, 21)\nprint(\"original array:\", number_array)\nnumber_array_plus_two = [x+2 for x in number_array]\nprint(\"array plus 2:\", number_array_plus_two)",
"original array: [10 11 12 13 14 15 16 17 18 19 20]\narray plus 2: [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]\n"
],
[
"# select only even numbers\neven_numbers =[x for x in numbers_array if x%2==0]\nprint(even_numbers)",
"[10, 12, 14, 16, 18, 20]\n"
]
],
[
[
"### 7) Random numbers",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\nrand_number = np.random.random(size =5)\nprint(rand_number)",
"[0.37454012 0.95071431 0.73199394 0.59865848 0.15601864]\n"
],
[
"np.random.seed(42)\nrand_number2 = np.random.random(size =5)\nprint(rand_number2)",
"[0.37454012 0.95071431 0.73199394 0.59865848 0.15601864]\n"
]
],
[
[
"## Exercises",
"_____no_output_____"
],
[
"1- Create a list with numbers from 1 to 10 (inclusive). Determine the mean of the list using two methods \n",
"_____no_output_____"
],
[
"2- Define a function called \"expectation\" that returns the mathematical expectation of variable $x$ (given as an array) and its probabilities (given as an array). Example: x = [0, 1], x_prob = [0.4, 0.6]\n",
"_____no_output_____"
],
[
" 3- Use the function \"expectation\" inside another function that calculates $\\mathbb E(X)$, where $X\\sim$ Bernoulli $(p)$. The only input of the function is $p$.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
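For the exercises at the end of the refresher notebook above, one possible solution sketch (ours, not the notebook author's answer key) for the `expectation` function and its Bernoulli wrapper:

```python
# Possible solutions for exercises 2 and 3 of the refresher notebook above
# (a sketch, not the official answer key).
import numpy as np

def expectation(x, x_prob):
    """E[X] = sum_i x_i * p_i for a discrete random variable."""
    x, x_prob = np.asarray(x, dtype=float), np.asarray(x_prob, dtype=float)
    assert np.isclose(x_prob.sum(), 1.0), "probabilities must sum to 1"
    return float((x * x_prob).sum())

def bernoulli_expectation(p):
    # Exercise 3: X ~ Bernoulli(p) takes value 1 with probability p.
    return expectation([0, 1], [1 - p, p])

print(expectation([0, 1], [0.4, 0.6]))   # 0.6
print(bernoulli_expectation(0.3))        # 0.3
```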
d00a2c33e4a0b20a4941c0476fd31c2281a5f7d6 | 47,835 | ipynb | Jupyter Notebook | notebooks/pytorch/pytorch_benchmarking.ipynb | MichoelSnow/data_science | 7f6c054624268308ec4126a601c9fa8bc5de157c | [
"MIT"
] | null | null | null | notebooks/pytorch/pytorch_benchmarking.ipynb | MichoelSnow/data_science | 7f6c054624268308ec4126a601c9fa8bc5de157c | [
"MIT"
] | 8 | 2020-03-24T15:29:05.000Z | 2022-02-10T00:14:06.000Z | notebooks/pytorch/pytorch_benchmarking.ipynb | MichoelSnow/data_science | 7f6c054624268308ec4126a601c9fa8bc5de157c | [
"MIT"
] | null | null | null | 53.446927 | 1,893 | 0.579659 | [
[
[
"# Imports and Paths",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torch import nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport numpy as np\nimport pandas as pd\nimport os\nimport shutil\nfrom skimage import io, transform\n\nimport torchvision\nimport torchvision.transforms as transforms\nfrom torch.utils.data import Dataset, DataLoader\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"PATH = '/data/msnow/nih_cxr/'",
"_____no_output_____"
]
],
[
[
"# Load the Data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(f'{PATH}Data_Entry_2017.csv')\ndf.shape",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"img_list = os.listdir(f'{PATH}images')\nlen(img_list)",
"_____no_output_____"
]
],
[
[
"## Collate the data",
"_____no_output_____"
]
],
[
[
"df_pa = df.loc[df.view=='PA',:]\ndf_pa.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"trn_sz = int(df_pa.shape[0]/2)\ndf_pa_trn = df_pa.loc[:trn_sz,:]\ndf_pa_tst = df_pa.loc[trn_sz:,:]",
"_____no_output_____"
],
[
"df_pa_tst.shape",
"_____no_output_____"
],
[
"pneumo = []\nfor i,v in df_pa_trn.iterrows():\n if \"pneumo\" in v['labels'].lower():\n pneumo.append('pneumo')\n else:\n pneumo.append('no pneumo')\ndf_pa_trn['pneumo'] = pneumo",
"/data/msnow/miniconda3/envs/data_sci/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n import sys\n"
],
[
"pneumo = []\nfor i,v in df_pa_tst.iterrows():\n if \"pneumo\" in v['labels'].lower():\n pneumo.append(pneumo)\n else:\n pneumo.append('no pneumo')\ndf_pa_tst['pneumo'] = pneumo",
"/data/msnow/miniconda3/envs/data_sci/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n import sys\n"
],
[
"df_pa_trn.shape",
"_____no_output_____"
]
],
[
[
"Copy images to train and test folders",
"_____no_output_____"
]
],
[
[
"# dst = os.path.join(PATH,'trn')\n# src = os.path.join(PATH,'images')\n# for i,v in df_pa_trn.iterrows():\n# src2 = os.path.join(src,v.image) \n# shutil.copy2(src2,dst)",
"_____no_output_____"
],
[
"# dst = os.path.join(PATH,'tst')\n# src = os.path.join(PATH,'images')\n# for i,v in df_pa_tst.iterrows():\n# src2 = os.path.join(src,v.image) \n# shutil.copy2(src2,dst)",
"_____no_output_____"
]
],
[
[
"# Create the Dataset and Dataloader",
"_____no_output_____"
]
],
[
[
"class TDataset(Dataset):\n\n def __init__(self, df, root_dir, transform=None):\n \"\"\"\n Args:\n df (dataframe): df with all the annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n# self.landmarks_frame = pd.read_csv(csv_file)\n self.df = df\n self.root_dir = root_dir\n self.transform = transform\n\n def __len__(self):\n return len(self.df)\n\n def __getitem__(self, idx):\n img_name = os.path.join(self.root_dir,self.df.image[idx])\n image = io.imread(img_name)\n categ = self.df.pneumo[idx]\n\n return image, categ",
"_____no_output_____"
],
[
"aa = trainset[0]",
"_____no_output_____"
],
[
"trainset = TDataset(df_pa_trn,f'{PATH}trn')\ntestset = TDataset(df_pa_tst,f'{PATH}tst')",
"_____no_output_____"
],
[
"trainloader = DataLoader(trainset, batch_size=4,shuffle=True, num_workers=4)\ntestloader = DataLoader(testset, batch_size=4,shuffle=False, num_workers=4)",
"_____no_output_____"
],
[
"aa[0].shape",
"_____no_output_____"
]
],
[
[
" # Define and train a CNN",
"_____no_output_____"
]
],
[
[
"class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # 1 input image channel, 6 output channels, 5x5 square convolution kernel\n self.conv1 = nn.Conv2d(1, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n # 6 input image channel, 16 output channels, 5x5 square convolution kernel\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n print(f'input shape {x.shape}')\n x = self.pool(F.relu(self.conv1(x)))\n print(f'Lin (1,6,5) + relu + pool shape {x.shape}')\n x = self.pool(F.relu(self.conv2(x)))\n print(f'Lin (6,16,5) + relu + pool shape {x.shape}')\n x = x.view(x.shape[0],-1)\n print(f'reshape shape {x.shape}')\n# x = x.view(-1, 16 * 5 * 5)\n x = F.relu(self.fc1(x))\n# x = F.relu(self.fc2(x))\n# x = self.fc3(x)\n return x\n\n\nnet = Net()",
"_____no_output_____"
],
[
"input = torch.randn(1, 1, 1024,1024)\nout = net(input)\n# print(out)",
"torch.Size([1, 1, 1024, 1024])\n"
],
[
"criterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)",
"_____no_output_____"
],
[
"for i, data in enumerate(trainloader, 0):\n break",
"_____no_output_____"
],
[
"inputs, labels = data",
"_____no_output_____"
],
[
"tst = inputs.view(-1,1,1024,1024)\ntst = tst.type('torch.FloatTensor')\nout = net(tst)",
"input shape torch.Size([4, 1, 1024, 1024])\nLin (1,6,5) + relu + pool shape torch.Size([4, 6, 510, 510])\nLin (6,16,5) + relu + pool shape torch.Size([4, 16, 253, 253])\nreshape shape torch.Size([4, 1024144])\n"
],
[
"16*253*253",
"_____no_output_____"
],
[
"tst.shape",
"_____no_output_____"
],
[
"# tst = inputs[:,None,:,:]\ntst.type(torch.FloatTensor)",
"_____no_output_____"
],
[
"type(tst)",
"_____no_output_____"
],
[
"list(net.parameters())[0].size()",
"_____no_output_____"
],
[
"net(tst)",
"torch.Size([4, 1, 1024, 1024])\n"
],
[
"conv1_tst(tst)",
"_____no_output_____"
],
[
"for epoch in range(2): # loop over the dataset multiple times\n\n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data\n\n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n\n # print statistics\n running_loss += loss.item()\n if i % 2000 == 1999: # print every 2000 mini-batches\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 2000))\n running_loss = 0.0\n\nprint('Finished Training')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
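The shape prints in the row above expose why training fails there: `fc1` was sized for 32x32 inputs (16 * 5 * 5 = 400 features), while the 1024x1024 chest X-rays reach the flatten step with 16 * 253 * 253 = 1,024,144 features. A hedged sketch of sizing the first linear layer from a dummy forward pass, so the network matches any input resolution:

```python
# Sizing the first linear layer from a dummy forward pass, mirroring the
# notebook's conv stack (a sketch; the 2-class output is an assumption based
# on the pneumo / no-pneumo labels built earlier in that notebook).
import torch
from torch import nn
import torch.nn.functional as F

class SizedNet(nn.Module):
    def __init__(self, img_size=1024):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        with torch.no_grad():          # probe the flatten size once
            n_flat = self._features(torch.zeros(1, 1, img_size, img_size)).numel()
        self.fc1 = nn.Linear(n_flat, 120)   # 1,024,144 for 1024x1024 inputs
        self.fc2 = nn.Linear(120, 2)

    def _features(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 1024 -> 1020 -> 510
        x = self.pool(F.relu(self.conv2(x)))   # 510 -> 506 -> 253
        return x

    def forward(self, x):
        x = self._features(x)
        x = x.view(x.shape[0], -1)             # flatten per sample
        return self.fc2(F.relu(self.fc1(x)))

net = SizedNet()
print(net.fc1)   # Linear(in_features=1024144, out_features=120, bias=True)
```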
d00a34d7d7187a8c2e06395e7973a183e958e0b6 | 4,425 | ipynb | Jupyter Notebook | 0.12/_downloads/plot_lcmv_beamformer_volume.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.12/_downloads/plot_lcmv_beamformer_volume.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.12/_downloads/plot_lcmv_beamformer_volume.ipynb | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 61.458333 | 2,297 | 0.621921 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Compute LCMV inverse solution on evoked data in volume source space\n\n\nCompute LCMV inverse solution on an auditory evoked dataset in a volume source\nspace. It stores the solution in a nifti file for visualisation e.g. with\nFreeview.\n\n\n",
"_____no_output_____"
]
],
[
[
"# Author: Alexandre Gramfort <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.beamformer import lcmv\n\nfrom nilearn.plotting import plot_stat_map\nfrom nilearn.image import index_img\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'",
"_____no_output_____"
]
],
[
[
"Get epochs\n\n",
"_____no_output_____"
]
],
[
[
"event_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True)\nraw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads', selection=left_temporal_channels)\n\n# Pick the channels of interest\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\n# Re-normalize our empty-room projectors, so they are fine after subselection\nraw.info.normalize_proj()\n\n# Read epochs\nproj = False # already applied\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=(None, 0), preload=True, proj=proj,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\nevoked = epochs.average()\n\nforward = mne.read_forward_solution(fname_fwd)\n\n# Read regularized noise covariance and compute regularized data covariance\nnoise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')\ndata_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,\n method='shrunk')\n\n# Run free orientation (vector) beamformer. Source orientation can be\n# restricted by setting pick_ori to 'max-power' (or 'normal' but only when\n# using a surface-based source space)\nstc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01, pick_ori=None)\n\n# Save result in stc files\nstc.save('lcmv-vol')\n\nstc.crop(0.0, 0.2)\n\n# Save result in a 4D nifti file\nimg = mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc,\n forward['src'], mri_resolution=False)\n\nt1_fname = data_path + '/subjects/sample/mri/T1.mgz'\n\n# Plotting with nilearn ######################################################\nplot_stat_map(index_img(img, 61), t1_fname, threshold=0.8,\n title='LCMV (t=%.1f s.)' % stc.times[61])\n\n# plot source time courses with the maximum peak amplitudes\nplt.figure()\nplt.plot(stc.times, stc.data[np.argsort(np.max(stc.data, axis=1))[-40:]].T)\nplt.xlabel('Time (ms)')\nplt.ylabel('LCMV value')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
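The LCMV row above saves the beamformer output as a 4D nifti. A hedged sketch of re-loading that file for a sanity check (assumes the `nibabel` package, which nilearn already depends on, and that `lcmv_inverse.nii.gz` was written by the run above):

```python
# Re-load the exported 4D nifti written by the notebook above (a sketch;
# assumes 'lcmv_inverse.nii.gz' exists in the working directory).
import nibabel as nib

img = nib.load('lcmv_inverse.nii.gz')
vol = img.get_fdata()                    # shape: (x, y, z, n_times)
print('shape:', vol.shape)
print('voxel sizes:', img.header.get_zooms())
print('peak LCMV value:', vol.max())
```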
d00a4116a1ffb3d5c0e14b769a3b044300ff4e74 | 8,893 | ipynb | Jupyter Notebook | nbs/dataset.dataset.ipynb | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | 1 | 2020-11-24T07:48:55.000Z | 2020-11-24T07:48:55.000Z | nbs/dataset.dataset.ipynb | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | null | null | null | nbs/dataset.dataset.ipynb | tezike/Hasoc | b29c5ec877a1751b04f86227a6ad264be8c06d81 | [
"Apache-2.0"
] | null | null | null | 27.965409 | 110 | 0.424604 | [
[
[
"#default_exp dataset.dataset",
"_____no_output_____"
],
[
"#export\nimport os\nimport torch\nimport transformers\n\nimport pandas as pd\nimport numpy as np\nimport Hasoc.config as config",
"_____no_output_____"
],
[
"#hide\ndf = pd.read_csv(config.DATA_PATH/'fold_df.csv')",
"_____no_output_____"
],
[
"#hide\ndf.head(2)",
"_____no_output_____"
],
[
"#hide\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\nle.fit_transform(df.task1)\nle.classes_",
"_____no_output_____"
],
[
"#hide\ndf['task1_encoded'] = le.transform(df.task1.values)",
"_____no_output_____"
],
[
"#hide\n# TOKENIZER = transformers.BertTokenizer.from_pretrained(\n# pretrained_model_name_or_path='bert-base-uncased',\n# do_lower_case=True,\n# # force_download = True,\n# )\n\n# MAX_LEN = 72",
"_____no_output_____"
],
[
"#export\nclass BertDataset(torch.utils.data.Dataset):\n def __init__(self,text, target=None, is_test=False):\n self.text, self.target = text, target\n self.tokenizer = config.TOKENIZER\n self.max_len = config.MAX_LEN\n self.is_test = is_test\n\n def __len__(self):\n return len(self.target)\n\n def __getitem__(self, i):\n # sanity check\n text = ' '.join(self.text[i].split())\n\n # tokenize using Huggingface tokenizers\n out = self.tokenizer.encode_plus(text, None,\n add_special_tokens=True,\n max_length = self.max_len,\n truncation=True)\n\n ids = out['input_ids']\n mask = out['attention_mask']\n token_type_ids = out['token_type_ids']\n\n padding_length = self.max_len - len(ids)\n ids = ids + ([0] * padding_length)\n mask = mask + ([0] * padding_length)\n token_type_ids = token_type_ids + ([0] * padding_length)\n\n if not self.is_test:\n return {\n 'input_ids': torch.tensor(ids, dtype=torch.long),\n 'attention_mask': torch.tensor(mask, dtype=torch.long),\n 'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),\n 'targets': self.onehot(len(np.unique(self.target)), self.target[i])\n }\n else:\n return{\n 'input_ids': torch.tensor(ids, dtype=torch.long),\n 'attention_mask': torch.tensor(mask, dtype=torch.long),\n 'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),\n }\n\n @staticmethod\n def onehot(size, target):\n vec = torch.zeros(size, dtype=torch.long)\n vec[target] = 1.\n return vec\n\n def get_labels(self):\n return list(self.target)",
"_____no_output_____"
],
[
"#hide\nd = BertDataset(df.text.values, df.task1_encoded.values)",
"_____no_output_____"
],
[
"#hide\nd[10]",
"_____no_output_____"
],
[
"c = d[0]['targets']",
"_____no_output_____"
],
[
"c.argmax(dim=-1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
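In the `BertDataset` row above, padding and the attention mask are built by hand. Recent `transformers` releases can do this inside the tokenizer; a hedged sketch (assuming a version that accepts `padding='max_length'`, roughly 3.x onward):

```python
# Letting the tokenizer pad/truncate instead of the manual ([0] * padding_length)
# arithmetic in BertDataset above (a sketch; assumes transformers >= 3.x).
import torch
import transformers

tokenizer = transformers.BertTokenizer.from_pretrained(
    'bert-base-uncased', do_lower_case=True)

out = tokenizer.encode_plus(
    'sample hate-speech text',
    add_special_tokens=True,
    max_length=72,                # config.MAX_LEN in the notebook above
    padding='max_length',
    truncation=True,
)
ids = torch.tensor(out['input_ids'], dtype=torch.long)
mask = torch.tensor(out['attention_mask'], dtype=torch.long)
print(ids.shape, int(mask.sum()))   # torch.Size([72]) and the real-token count
```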
d00a7a721975437d94e9c1f72d9d3d082fb06497 | 9,094 | ipynb | Jupyter Notebook | notebooks/ensemble_hist_gradient_boosting.ipynb | ThomasBourgeois/scikit-learn-mooc | 1c4bd0fb9a8466d396dd5daa64ee500546c9d834 | [
"CC-BY-4.0"
] | null | null | null | notebooks/ensemble_hist_gradient_boosting.ipynb | ThomasBourgeois/scikit-learn-mooc | 1c4bd0fb9a8466d396dd5daa64ee500546c9d834 | [
"CC-BY-4.0"
] | null | null | null | notebooks/ensemble_hist_gradient_boosting.ipynb | ThomasBourgeois/scikit-learn-mooc | 1c4bd0fb9a8466d396dd5daa64ee500546c9d834 | [
"CC-BY-4.0"
] | null | null | null | 36.522088 | 102 | 0.636134 | [
[
[
"# Speeding-up gradient-boosting\nIn this notebook, we present a modified version of gradient boosting which\nuses a reduced number of splits when building the different trees. This\nalgorithm is called \"histogram gradient boosting\" in scikit-learn.\n\nWe previously mentioned that random-forest is an efficient algorithm since\neach tree of the ensemble can be fitted at the same time independently.\nTherefore, the algorithm scales efficiently with both the number of cores and\nthe number of samples.\n\nIn gradient-boosting, the algorithm is a sequential algorithm. It requires\nthe `N-1` trees to have been fit to be able to fit the tree at stage `N`.\nTherefore, the algorithm is quite computationally expensive. The most\nexpensive part in this algorithm is the search for the best split in the\ntree which is a brute-force approach: all possible split are evaluated and\nthe best one is picked. We explained this process in the notebook \"tree in\ndepth\", which you can refer to.\n\nTo accelerate the gradient-boosting algorithm, one could reduce the number of\nsplits to be evaluated. As a consequence, the statistical performance of such\na tree would be reduced. However, since we are combining several trees in a\ngradient-boosting, we can add more estimators to overcome this issue.\n\nWe will make a naive implementation of such algorithm using building blocks\nfrom scikit-learn. First, we will load the California housing dataset.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import fetch_california_housing\n\ndata, target = fetch_california_housing(return_X_y=True, as_frame=True)\ntarget *= 100 # rescale the target in k$",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">If you want a deeper overview regarding this dataset, you can refer to the\nAppendix - Datasets description section at the end of this MOOC.</p>\n</div>",
"_____no_output_____"
],
[
"We will make a quick benchmark of the original gradient boosting.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_validate\nfrom sklearn.ensemble import GradientBoostingRegressor\n\ngradient_boosting = GradientBoostingRegressor(n_estimators=200)\ncv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1)",
"_____no_output_____"
],
[
"print(\"Gradient Boosting Decision Tree\")\nprint(f\"R2 score via cross-validation: \"\n f\"{cv_results_gbdt['test_score'].mean():.3f} +/- \"\n f\"{cv_results_gbdt['test_score'].std():.3f}\")\nprint(f\"Average fit time: \"\n f\"{cv_results_gbdt['fit_time'].mean():.3f} seconds\")\nprint(f\"Average score time: \"\n f\"{cv_results_gbdt['score_time'].mean():.3f} seconds\")",
"_____no_output_____"
]
],
[
[
"We recall that a way of accelerating the gradient boosting is to reduce the\nnumber of split considered within the tree building. One way is to bin the\ndata before to give them into the gradient boosting. A transformer called\n`KBinsDiscretizer` is doing such transformation. Thus, we can pipeline\nthis preprocessing with the gradient boosting.\n\nWe can first demonstrate the transformation done by the `KBinsDiscretizer`.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn.preprocessing import KBinsDiscretizer\n\ndiscretizer = KBinsDiscretizer(\n n_bins=256, encode=\"ordinal\", strategy=\"quantile\")\ndata_trans = discretizer.fit_transform(data)\ndata_trans",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">The code cell above will generate a couple of warnings. Indeed, for some of\nthe features, we requested too much bins in regard of the data dispersion\nfor those features. The smallest bins will be removed.</p>\n</div>\nWe see that the discretizer transforms the original data into an integer.\nThis integer represents the bin index when the distribution by quantile is\nperformed. We can check the number of bins per feature.",
"_____no_output_____"
]
],
[
[
"[len(np.unique(col)) for col in data_trans.T]",
"_____no_output_____"
]
],
[
[
"After this transformation, we see that we have at most 256 unique values per\nfeatures. Now, we will use this transformer to discretize data before\ntraining the gradient boosting regressor.",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import make_pipeline\n\ngradient_boosting = make_pipeline(\n discretizer, GradientBoostingRegressor(n_estimators=200))\ncv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1)",
"_____no_output_____"
],
[
"print(\"Gradient Boosting Decision Tree with KBinsDiscretizer\")\nprint(f\"R2 score via cross-validation: \"\n f\"{cv_results_gbdt['test_score'].mean():.3f} +/- \"\n f\"{cv_results_gbdt['test_score'].std():.3f}\")\nprint(f\"Average fit time: \"\n f\"{cv_results_gbdt['fit_time'].mean():.3f} seconds\")\nprint(f\"Average score time: \"\n f\"{cv_results_gbdt['score_time'].mean():.3f} seconds\")",
"_____no_output_____"
]
],
[
[
"Here, we see that the fit time has been drastically reduced but that the\nstatistical performance of the model is identical. Scikit-learn provides a\nspecific classes which are even more optimized for large dataset, called\n`HistGradientBoostingClassifier` and `HistGradientBoostingRegressor`. Each\nfeature in the dataset `data` is first binned by computing histograms, which\nare later used to evaluate the potential splits. The number of splits to\nevaluate is then much smaller. This algorithm becomes much more efficient\nthan gradient boosting when the dataset has over 10,000 samples.\n\nBelow we will give an example for a large dataset and we will compare\ncomputation times with the experiment of the previous section.",
"_____no_output_____"
]
],
[
[
"from sklearn.experimental import enable_hist_gradient_boosting\nfrom sklearn.ensemble import HistGradientBoostingRegressor\n\nhistogram_gradient_boosting = HistGradientBoostingRegressor(\n max_iter=200, random_state=0)\ncv_results_hgbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1)",
"_____no_output_____"
],
[
"print(\"Histogram Gradient Boosting Decision Tree\")\nprint(f\"R2 score via cross-validation: \"\n f\"{cv_results_hgbdt['test_score'].mean():.3f} +/- \"\n f\"{cv_results_hgbdt['test_score'].std():.3f}\")\nprint(f\"Average fit time: \"\n f\"{cv_results_hgbdt['fit_time'].mean():.3f} seconds\")\nprint(f\"Average score time: \"\n f\"{cv_results_hgbdt['score_time'].mean():.3f} seconds\")",
"_____no_output_____"
]
],
[
[
"The histogram gradient-boosting is the best algorithm in terms of score.\nIt will also scale when the number of samples increases, while the normal\ngradient-boosting will not.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
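`HistGradientBoostingRegressor`, used in the row above, also supports built-in early stopping, which can cut fit time further on large datasets. A hedged sketch on the same California housing data:

```python
# Built-in early stopping for the histogram GBDT (a sketch; assumes the
# `data` and `target` frames loaded in the notebook above and
# scikit-learn >= 0.23, where early_stopping is a constructor flag).
from sklearn.experimental import enable_hist_gradient_boosting  # noqa
from sklearn.ensemble import HistGradientBoostingRegressor

hgbdt = HistGradientBoostingRegressor(
    max_iter=1000,               # generous cap; early stopping halts sooner
    early_stopping=True,
    validation_fraction=0.1,
    n_iter_no_change=10,
    random_state=0,
)
hgbdt.fit(data, target)
print('boosting iterations actually run:', hgbdt.n_iter_)
```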
d00a7c716edde84a6b9ec1d33d1564b0fb328310 | 1,381 | ipynb | Jupyter Notebook | examples/powershell/powershell.ipynb | dfinke/qsharp-server | ff09855561fa38fb3aa3312a1245e99dea420f4c | [
"MIT"
] | null | null | null | examples/powershell/powershell.ipynb | dfinke/qsharp-server | ff09855561fa38fb3aa3312a1245e99dea420f4c | [
"MIT"
] | null | null | null | examples/powershell/powershell.ipynb | dfinke/qsharp-server | ff09855561fa38fb3aa3312a1245e99dea420f4c | [
"MIT"
] | null | null | null | 19.450704 | 66 | 0.511224 | [
[
[
"Import-Module ./qsharp.psd1",
"_____no_output_____"
],
[
"Build-QuantumProgram @\"\n open Microsoft.Quantum.Diagnostics;\n operation SampleQrng() : Bool {\n use q = Qubit();\n H(q);\n DumpMachine();\n return M(q) == One;\n }\n\"@",
"_____no_output_____"
],
[
"Invoke-QuantumProgram SampleQrng -Simulator QuantumSimulator",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d00a9747ce159b26e440b93a7f057d61a7fe8e3f | 33,141 | ipynb | Jupyter Notebook | notebooks/titanic_explore4_recursive_feature_elimination.ipynb | EmilMachine/kaggle_titanic | 48dea13e964dcaf65f540d64cb4ab46c14ac6a41 | [
"MIT"
] | null | null | null | notebooks/titanic_explore4_recursive_feature_elimination.ipynb | EmilMachine/kaggle_titanic | 48dea13e964dcaf65f540d64cb4ab46c14ac6a41 | [
"MIT"
] | null | null | null | notebooks/titanic_explore4_recursive_feature_elimination.ipynb | EmilMachine/kaggle_titanic | 48dea13e964dcaf65f540d64cb4ab46c14ac6a41 | [
"MIT"
] | null | null | null | 59.074866 | 17,216 | 0.767207 | [
[
[
"import pandas as pd\nimport numpy as np\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import GridSearchCV\n\nfrom sklearn import metrics\nimport sklearn.preprocessing\n",
"_____no_output_____"
],
[
"train_file = \"../data/train.csv\"\ntest_file = \"../data/test.csv\"\n\n\ntrain_data_raw = pd.read_csv(train_file)\ntest_data_raw = pd.read_csv(test_file)\n\ntarget = \"Survived\"\n",
"_____no_output_____"
],
[
"\n### CLEAN DATA FUNC\n\ndef clean_func(train_data):\n \n ## DO IMPUTATION \n # FARE\n imp_fare = Imputer(missing_values=\"NaN\", strategy=\"mean\")\n imp_fare.fit(train_data[[\"Fare\"]])\n train_data[[\"Fare\"]]=imp_fare.transform(train_data[[\"Fare\"]]).ravel() \n\n # Age\n imp=Imputer(missing_values=\"NaN\", strategy=\"mean\")\n imp.fit(train_data[[\"Age\"]])\n train_data[[\"Age\"]]=imp.transform(train_data[[\"Age\"]]).ravel() \n \n # Filna\n train_data[\"Cabin\"] = train_data[\"Cabin\"].fillna(\"\")\n\n \n # one hot encoding\n sex_features = pd.get_dummies(train_data[\"Sex\"])\n embarked_features = pd.get_dummies(train_data[\"Embarked\"])\n \n # rename embarked features\n embarked_features = embarked_features.rename(columns={'C': 'embarked_cobh'\n , 'Q': 'embark_queenstown'\n , 'S': 'embark_southampton'})\n\n # Concat new features\n train_data_extras = pd.concat([train_data,sex_features,embarked_features],axis=1)\n\n \n \n # HACK - REMOVE T WHICH IS NOT IN TEST LIKELY ERRROR \n cabin_letters = pd.get_dummies(train_data['Cabin'].map(lambda x: \"empty\" if len(x)==0 or x[0]==\"T\" else x[0]))\n\n# cabin_letters = pd.get_dummies(train_data['Cabin'].map(lambda x: \"empty\" if len(x)==0 else x[0]))\n cabin_letters.columns = [\"Cabin_letter_\"+i for i in cabin_letters.columns]\n train_data_extras = pd.concat([train_data_extras,cabin_letters],axis=1)\n \n\n train_data_extras[\"Cabin_number\"] = train_data['Cabin'].map(lambda x: -99 if len(x)==0 else x.split(\" \")[0][1:]) \n\n # ONLY RETURN NUMERIC COLUMNS \n num_types = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64','uint8']\n train_data_numerics = train_data_extras.select_dtypes(include=num_types)\n\n return train_data_numerics\n",
"_____no_output_____"
]
],
[
[
"## Select only numeric columns",
"_____no_output_____"
]
],
[
[
"train_data_raw2 = clean_func(train_data_raw)\ntrain_data = train_data_raw2.iloc[:, train_data_raw2.columns != target]\ntrain_data_target = train_data_raw2[target].values\n",
"/Users/emil/.pyenv/versions/3.7.1/lib/python3.7/site-packages/sklearn/utils/deprecation.py:58: DeprecationWarning: Class Imputer is deprecated; Imputer was deprecated in version 0.20 and will be removed in 0.22. Import impute.SimpleImputer from sklearn instead.\n warnings.warn(msg, category=DeprecationWarning)\n/Users/emil/.pyenv/versions/3.7.1/lib/python3.7/site-packages/sklearn/utils/deprecation.py:58: DeprecationWarning: Class Imputer is deprecated; Imputer was deprecated in version 0.20 and will be removed in 0.22. Import impute.SimpleImputer from sklearn instead.\n warnings.warn(msg, category=DeprecationWarning)\n"
],
[
"X_train,X_test,Y_train,Y_test = train_test_split(train_data\n ,train_data_target\n ,test_size=0.3\n ,random_state=42)",
"_____no_output_____"
]
],
[
[
"# Models\n- logreg\n- random forest",
"_____no_output_____"
],
[
"### random forest naive",
"_____no_output_____"
]
],
[
[
"model_rf = RandomForestClassifier(\nn_estimators=100\n)\n\nmodel_rf.fit(X_train, Y_train)",
"_____no_output_____"
],
[
"# Cross Validation RF\n\nscores = cross_val_score(model_rf, X_train, Y_train, cv=10)\nprint(scores)",
"[0.84375 0.80952381 0.88709677 0.82258065 0.79032258 0.75806452\n 0.82258065 0.77419355 0.82258065 0.91935484]\n"
],
[
"pred_rf = model_rf.predict(X_test)\nmetrics.accuracy_score(Y_test,pred_rf)",
"_____no_output_____"
]
],
[
[
"### Random Forest Grid Search",
"_____no_output_____"
]
],
[
[
"model_rf_gs = RandomForestClassifier()\n\n",
"_____no_output_____"
],
[
"# parmeter dict\nparam_grid = dict(\n n_estimators=np.arange(60,101,20)\n , min_samples_leaf=np.arange(2,4,1)\n #, criterion = [\"gini\",\"entropy\"]\n #, max_features = np.arange(0.1,0.5,0.1)\n)\nprint(param_grid)",
"{'n_estimators': array([ 60, 80, 100]), 'min_samples_leaf': array([2, 3])}\n"
],
[
"grid = GridSearchCV(model_rf_gs,param_grid=param_grid,scoring = \"accuracy\", cv = 5)\ngrid.fit(train_data, train_data_target)\n\"\"\n\n# model_rf.fit(train_data, train_data[target])",
"_____no_output_____"
],
[
"# print(grid)\n# for i in ['params',\"mean_train_score\",\"mean_test_score\"]:\n# print(i)\n# print(grid.cv_results_[i])\n# grid.cv_results_",
"_____no_output_____"
],
[
"print(grid.best_params_)\nprint(grid.best_score_)\n",
"{'min_samples_leaf': 3, 'n_estimators': 60}\n0.8237934904601572\n"
],
[
"model_rf_gs_best = RandomForestClassifier(**grid.best_params_)\nmodel_rf_gs_best.fit(X_train,Y_train)",
"_____no_output_____"
],
[
"## print feture importance\nmodel = model_rf_gs_best\nfeature_names = X_train.columns.values\nfeature_importance2 = sorted(zip(map(lambda x: round(x, 4), model.feature_importances_), feature_names), reverse=True)\n\nprint(len(feature_importance2))\n\nfor feature in feature_importance2:\n print('%f:%s' % feature )",
"19\n0.175900:male\n0.171100:female\n0.153000:Fare\n0.127200:Age\n0.110800:PassengerId\n0.073200:Pclass\n0.043100:Cabin_letter_empty\n0.039600:SibSp\n0.028100:Parch\n0.019300:embarked_cobh\n0.015800:embark_southampton\n0.012300:Cabin_letter_E\n0.008100:Cabin_letter_B\n0.006000:embark_queenstown\n0.005100:Cabin_letter_D\n0.004900:Cabin_letter_C\n0.003800:Cabin_letter_F\n0.001600:Cabin_letter_A\n0.001100:Cabin_letter_G\n"
],
[
"### \n# Recursive feature elimination\nfrom sklearn.feature_selection import RFECV\n\nmodel = model_rf_gs_best\n\nrfecv = RFECV(estimator=model, step=1, cv=3, scoring='accuracy')\n\nrfecv.fit(X_train,Y_train)\n",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom sklearn import base\n\nmodel = model_rf_gs_best\n\nprint(\"Optimal number of features : %d\" % rfecv.n_features_)\nplt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)\nplt.title('Recursive feature elemination')\nplt.xlabel('Nr of features')\nplt.ylabel('Acc')\n\nfeature_short = feature_names[rfecv.support_]\nprint('== Feature short list ==')\nprint(feature_short)\n\nmodel_simple = base.clone(model)\nmodel_simple.fit(X_train[feature_short],Y_train)\n\n",
"Optimal number of features : 16\n== Feature short list ==\n['PassengerId' 'Pclass' 'Age' 'SibSp' 'Parch' 'Fare' 'female' 'male'\n 'embarked_cobh' 'embark_queenstown' 'embark_southampton' 'Cabin_letter_B'\n 'Cabin_letter_C' 'Cabin_letter_D' 'Cabin_letter_E' 'Cabin_letter_empty']\n"
]
],
[
[
"- Converge about 16\n- let us comare 16 vs full features on test set",
"_____no_output_____"
]
],
[
[
"\nY_pred = model.predict(X_test)\nmodel_score = metrics.accuracy_score(Y_test,Y_pred)\n\nY_pred_simple = model_simple.predict(X_test[feature_short])\nmodel_simple_score = metrics.accuracy_score(Y_test,Y_pred_simple)\n\nprint(\"model acc: %.3f\" % model_score)\nprint(\"simple model acc: %.3f\" % model_simple_score)",
"model acc: 0.806\nsimple model acc: 0.825\n"
]
],
[
[
"ie. we sligtly increase test scores by removing extra variables with recursive feature elimination. (ie we remove extra variable that only seem to overfit on noise and don't contribute to acc)\n\nOften an even more conservative cutoff can be used and go for 90% of max accurracy for f",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d00aa17f3395f1d1af46eeb7c34135c6b0668dde | 17,121 | ipynb | Jupyter Notebook | 2nd place - Ensemble/Brainiac_Numpy_Extration_for_25_Periods.ipynb | RadiantMLHub/spot-the-crop-xl-challenge | 5382b37d58ad70c09d1e19fe9f9698352efb70b8 | [
"Apache-2.0"
] | 6 | 2021-12-24T09:25:08.000Z | 2022-03-23T12:24:39.000Z | 2nd place - Ensemble/Brainiac_Numpy_Extration_for_25_Periods.ipynb | RadiantMLHub/spot-the-crop-xl-challenge | 5382b37d58ad70c09d1e19fe9f9698352efb70b8 | [
"Apache-2.0"
] | null | null | null | 2nd place - Ensemble/Brainiac_Numpy_Extration_for_25_Periods.ipynb | RadiantMLHub/spot-the-crop-xl-challenge | 5382b37d58ad70c09d1e19fe9f9698352efb70b8 | [
"Apache-2.0"
] | null | null | null | 35.084016 | 255 | 0.438059 | [
[
[
"# Install libraries\n!pip -qq install rasterio tifffile",
"_____no_output_____"
],
[
"# Import libraries\nimport os\nimport glob\nimport shutil\nimport gc\nfrom joblib import Parallel, delayed\nfrom tqdm import tqdm_notebook\nimport h5py\n\nimport pandas as pd\nimport numpy as np\nimport datetime as dt\nfrom datetime import datetime, timedelta\nimport matplotlib.pyplot as plt\n\n\nimport rasterio\nimport tifffile as tiff\n\n%matplotlib inline\npd.set_option('display.max_colwidth', None)",
"_____no_output_____"
],
[
"# Download data with a frequency of 10 days\ndef date_finder(start_date):\n season_dates = []\n m = str(start_date)[:10]\n s = str(start_date)[:10]\n for i in range(24):\n date = datetime.strptime(s, \"%Y-%m-%d\")\n s = str(date + timedelta(days = 10))[:10]\n season_dates.append(datetime.strptime(s, \"%Y-%m-%d\"))\n seasons_dates = [datetime.strptime(m, \"%Y-%m-%d\")] + season_dates\n seasons_dates = [np.datetime64(x) for x in seasons_dates]\n return list(seasons_dates)\n\n# If day not in a frequency of 10 days, find the nearest date\ndef nearest(items, pivot):\n return min(items, key=lambda x: abs(x - pivot))",
"_____no_output_____"
],
[
"%%time\n# Unpack data saved in gdrive to colab\nshutil.unpack_archive('/content/drive/MyDrive/CompeData/Radiant/Radiant_Data.zip', '/content/radiant')\ngc.collect()",
"CPU times: user 13min 43s, sys: 4min 22s, total: 18min 5s\nWall time: 27min 23s\n"
],
[
"# Load files\ntrain = pd.concat([pd.read_csv(f'/content/radiant/train{i}.csv', parse_dates=['datetime']) for i in range(1, 5)]).reset_index(drop = True)\ntest = pd.concat([pd.read_csv(f'/content/radiant/test{i}.csv', parse_dates=['datetime']) for i in range(1, 5)]).reset_index(drop = True)\ntrain.file_path = train.file_path.apply(lambda x: '/'.join(['/content', 'radiant'] + x.split('/')[2:]))\ntest.file_path = test.file_path.apply(lambda x: '/'.join(['/content', 'radiant'] + x.split('/')[2:]))\ntrain.datetime, test.datetime = pd.to_datetime(train.datetime.dt.date), pd.to_datetime(test.datetime.dt.date)\ntrain['month'], test['month'] = train.datetime.dt.month, test.datetime.dt.month\ntrain.head()",
"_____no_output_____"
],
[
"# Unique months\ntrain.month.unique()",
"_____no_output_____"
],
[
"# Bands\nbands = ['B01','B02','B03','B04','B05','B06','B07','B08','B8A','B09','B11','B12','CLM']",
"_____no_output_____"
],
[
"# Function to load tile and extract fields data into a numpy array and convert the same to a dataframe\n# Train\ndef process_tile_train(tile):\n tile_df = train[(train.tile_id == tile)].reset_index(drop = True)\n\n y = np.expand_dims(rasterio.open(tile_df[tile_df.asset == 'labels'].file_path.values[0]).read(1).flatten(), axis = 1)\n fields = np.expand_dims(rasterio.open(tile_df[tile_df.asset == 'field_ids'].file_path.values[0]).read(1).flatten(), axis = 1)\n\n tile_df = train[(train.tile_id == tile) & (train.satellite_platform == 's2')].reset_index(drop = True)\n\n unique_dates = list(tile_df.datetime.unique())\n start_date = tile_df.datetime.unique()[0]\n # Assert\n diff = set([str(x)[:10] for x in date_finder(start_date)]) - set([str(x)[:10] for x in unique_dates])\n if len(diff) > 0:\n missing = list(set([str(x)[:10] for x in date_finder(start_date)]) - set(diff))\n for d in diff:\n missing.append(str(nearest(unique_dates, np.datetime64(d)))[:10])\n dates = sorted([np.datetime64(x) for x in missing]) \n else:\n dates = date_finder(start_date)\n\n X_tile = np.empty((256 * 256, 0))\n\n colls = []\n for date, datec in zip(dates, range(25)):\n for band in bands:\n tif_file = tile_df[(tile_df.asset == band) & (tile_df.datetime == date)].file_path.values[0]\n X_tile = np.append(X_tile, (np.expand_dims(rasterio.open(tif_file).read(1).flatten(), axis = 1)), axis = 1)\n colls.append(str(datec) + '_' + band)\n df = pd.DataFrame(X_tile, columns = colls)\n df['y'], df['fields'] = y, fields\n return df",
"_____no_output_____"
],
[
"# Preprocessing the data in chunks to avoid outofmemmory error\n# Train\ntiles = train.tile_id.unique()\nchunks = [tiles[x:x+50] for x in range(0, len(tiles), 50)]\n[len(x) for x in chunks], len(chunks)",
"_____no_output_____"
],
[
"# Preprocessing the tiles without storing them in memory but saving them as csvs in gdrive\n# Train\nfor i in range(len(chunks)):\n pd.DataFrame(np.vstack(Parallel(n_jobs=-1, verbose=1, backend=\"multiprocessing\")(map(delayed(process_tile_train), [x for x in chunks[i]])))).to_csv(f'/content/drive/MyDrive/CompeData/Radiant/Seasonality/train/train{i}.csv', index = False)\n gc.collect()",
"_____no_output_____"
],
[
"# Function to load tile and extract fields data into a numpy array and convert the same to a dataframe\n# Test\ndef process_tile_test(tile):\n tile_df = test[(test.tile_id == tile)].reset_index(drop = True)\n\n fields = np.expand_dims(rasterio.open(tile_df[tile_df.asset == 'field_ids'].file_path.values[0]).read(1).flatten(), axis = 1)\n\n tile_df = test[(test.tile_id == tile) & (test.satellite_platform == 's2')].reset_index(drop = True)\n\n unique_dates = list(tile_df.datetime.unique())\n start_date = tile_df.datetime.unique()[0]\n # Assert\n diff = set([str(x)[:10] for x in date_finder(start_date)]) - set([str(x)[:10] for x in unique_dates])\n if len(diff) > 0:\n missing = list(set([str(x)[:10] for x in date_finder(start_date)]) - set(diff))\n for d in diff:\n missing.append(str(nearest(unique_dates, np.datetime64(d)))[:10])\n dates = sorted([np.datetime64(x) for x in missing]) \n else:\n dates = date_finder(start_date)\n\n X_tile = np.empty((256 * 256, 0))\n\n colls = []\n for date, datec in zip(dates, range(25)):\n for band in bands:\n tif_file = tile_df[(tile_df.asset == band) & (tile_df.datetime == date)].file_path.values[0]\n X_tile = np.append(X_tile, (np.expand_dims(rasterio.open(tif_file).read(1).flatten(), axis = 1)), axis = 1)\n colls.append(str(datec) + '_' + band)\n df = pd.DataFrame(X_tile, columns = colls)\n df['fields'] = fields\n return df",
"_____no_output_____"
],
[
"# Preprocessing the data in chunks to avoid outofmemmory error\n# Train\ntiles = test.tile_id.unique()\nchunks = [tiles[x:x+50] for x in range(0, len(tiles), 50)]\n[len(x) for x in chunks], len(chunks)",
"_____no_output_____"
],
[
"# Preprocessing the tiles without storing them in memory but saving them as csvs in gdrive\n# Train\nfor i in range(len(chunks)):\n pd.DataFrame(np.vstack(Parallel(n_jobs=-1, verbose=1, backend=\"multiprocessing\")(map(delayed(process_tile_test), [x for x in chunks[i]])))).to_csv(f'/content/drive/MyDrive/CompeData/Radiant/Seasonality/test/test{i}.csv', index = False)\n gc.collect()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00aaa4ec284e386e7c3b01368111bb888d2ee6e | 3,213 | ipynb | Jupyter Notebook | labs/notebooks/non_linear_classifiers/exercise_4.ipynb | mpc97/lxmls | 3debbf262e35cbe3b126a2da5ac0ae2c68474cc5 | [
"MIT"
] | null | null | null | labs/notebooks/non_linear_classifiers/exercise_4.ipynb | mpc97/lxmls | 3debbf262e35cbe3b126a2da5ac0ae2c68474cc5 | [
"MIT"
] | null | null | null | labs/notebooks/non_linear_classifiers/exercise_4.ipynb | mpc97/lxmls | 3debbf262e35cbe3b126a2da5ac0ae2c68474cc5 | [
"MIT"
] | null | null | null | 22.787234 | 116 | 0.548708 | [
[
[
"### Amazon Sentiment Data",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import lxmls.readers.sentiment_reader as srs\nfrom lxmls.deep_learning.utils import AmazonData\ncorpus = srs.SentimentCorpus(\"books\")\ndata = AmazonData(corpus=corpus)",
"_____no_output_____"
]
],
[
[
"### Implement Pytorch Forward pass",
"_____no_output_____"
],
[
"As the final exercise today implement the log `forward()` method in \n\n lxmls/deep_learning/pytorch_models/mlp.py\n\nUse the previous exercise as reference. After you have completed this you can run both systems for comparison.",
"_____no_output_____"
]
],
[
[
"# Model\ngeometry = [corpus.nr_features, 20, 2]\nactivation_functions = ['sigmoid', 'softmax']\n\n# Optimization\nlearning_rate = 0.05\nnum_epochs = 10\nbatch_size = 30",
"_____no_output_____"
],
[
"import numpy as np\nfrom lxmls.deep_learning.pytorch_models.mlp import PytorchMLP\nmodel = PytorchMLP(\n geometry=geometry,\n activation_functions=activation_functions,\n learning_rate=learning_rate\n)",
"_____no_output_____"
],
[
"# Get batch iterators for train and test\ntrain_batches = data.batches('train', batch_size=batch_size)\ntest_set = data.batches('test', batch_size=None)[0]\n\n# Epoch loop\nfor epoch in range(num_epochs):\n\n # Batch loop\n for batch in train_batches:\n model.update(input=batch['input'], output=batch['output'])\n\n # Prediction for this epoch\n hat_y = model.predict(input=test_set['input'])\n\n # Evaluation\n accuracy = 100*np.mean(hat_y == test_set['output'])\n\n # Inform user\n print(\"Epoch %d: accuracy %2.2f %%\" % (epoch+1, accuracy))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00aacf7724155caa9af204723cedf22638e860c | 333,226 | ipynb | Jupyter Notebook | notebooks/Eszti/unesco_endangered_lang_europe.ipynb | e8725144/lang-changes | 60dbde8a604f5957b9e67364ec146bf398f536b4 | [
"MIT"
] | 1 | 2021-12-10T10:03:52.000Z | 2021-12-10T10:03:52.000Z | notebooks/Eszti/unesco_endangered_lang_europe.ipynb | e8725144/lang-changes | 60dbde8a604f5957b9e67364ec146bf398f536b4 | [
"MIT"
] | 8 | 2021-12-07T06:50:03.000Z | 2022-01-22T21:32:54.000Z | notebooks/Eszti/unesco_endangered_lang_europe.ipynb | e8725144/lang-changes | 60dbde8a604f5957b9e67364ec146bf398f536b4 | [
"MIT"
] | null | null | null | 163.90851 | 121,312 | 0.830869 | [
[
[
"# Explore endangered languages from UNESCO Atlas of the World's Languages in Danger\n\n### Input\n\nEndangered languages\n\n- https://www.kaggle.com/the-guardian/extinct-languages/version/1 (updated in 2016)\n- original data: http://www.unesco.org/languages-atlas/index.php?hl=en&page=atlasmap (published in 2010)\n\nCountries of the world\n\n- https://www.ethnologue.com/sites/default/files/CountryCodes.tab\n\n\n### Output\n- `endangered_languages_europe.csv`",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport geopandas as gpd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"../../data/endangerment/extinct_languages.csv\")\nprint(df.shape)\nprint(df.dtypes)\ndf.head()",
"(2722, 15)\nID int64\nName in English object\nName in French object\nName in Spanish object\nCountries object\nCountry codes alpha 3 object\nISO639-3 codes object\nDegree of endangerment object\nAlternate names object\nName in the language object\nNumber of speakers float64\nSources object\nLatitude float64\nLongitude float64\nDescription of the location object\ndtype: object\n"
],
[
"df.columns",
"_____no_output_____"
],
[
"ENDANGERMENT_MAP = {\n \"Vulnerable\": 1,\n \"Definitely endangered\": 2,\n \"Severely endangered\": 3,\n \"Critically endangered\": 4,\n \"Extinct\": 5,\n}",
"_____no_output_____"
],
[
"df[\"Endangerment code\"] = df[\"Degree of endangerment\"].apply(lambda x: ENDANGERMENT_MAP[x])\ndf[[\"Degree of endangerment\", \"Endangerment code\"]]",
"_____no_output_____"
]
],
[
[
"## Distribution of the degree of endangerment",
"_____no_output_____"
]
],
[
[
"plt.xticks(fontsize=16)\nplt.yticks(fontsize=16)\ndf[\"Degree of endangerment\"].hist(figsize=(15,5)).get_figure().savefig('endangered_hist.png', format=\"png\")",
"_____no_output_____"
]
],
[
[
"## Show distribution on map",
"_____no_output_____"
]
],
[
[
"countries_map = gpd.read_file(gpd.datasets.get_path(\"naturalearth_lowres\"))\ncountries_map.head()",
"_____no_output_____"
],
[
"# Plot Europe\nfig, ax = plt.subplots(figsize=(20, 10))\n\ncountries_map.plot(color='lightgrey', ax=ax)\nplt.xlim([-30, 50])\nplt.ylim([30, 75])\n\ndf.plot(\n x=\"Longitude\", \n y=\"Latitude\", \n kind=\"scatter\", \n title=\"Endangered languages in Europe (1=Vulnerable, 5=Extinct)\", \n c=\"Endangerment code\", \n colormap=\"YlOrRd\",\n ax=ax,\n)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Get endangered languages only for Europe",
"_____no_output_____"
]
],
[
[
"countries = pd.read_csv(\"../../data/general/country_codes.tsv\", sep=\"\\t\")\neurope = countries[countries[\"Area\"] == \"Europe\"]\neurope",
"_____no_output_____"
],
[
"europe_countries = set(europe[\"Name\"].to_list())\neurope_countries",
"_____no_output_____"
],
[
"df[df[\"Countries\"].isna()]",
"_____no_output_____"
],
[
"df = df[df[\"Countries\"].notna()]\ndf[df[\"Countries\"].isna()]",
"_____no_output_____"
],
[
"df[\"In Europe\"] = df[\"Countries\"].apply(lambda x: len(europe_countries.intersection(set(x.split(\",\")))) > 0)\ndf_europe = df.loc[df[\"In Europe\"] == True]\n\nprint(df_europe.shape)\ndf_europe.head(20)",
"(247, 17)\n"
],
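[
"# Quick look (a sketch): distribution of endangerment degrees within Europe\ndf_europe[\"Degree of endangerment\"].value_counts()",
"_____no_output_____"
],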
[
"# Plot only European endangered languages\nfig, ax = plt.subplots(figsize=(20, 10))\n\ncountries_map.plot(color='lightgrey', ax=ax)\nplt.xlim([-30, 50])\nplt.ylim([30, 75])\n\ndf_europe.plot(\n x=\"Longitude\", \n y=\"Latitude\", \n kind=\"scatter\", \n title=\"Endangered languages in Europe\", \n c=\"Endangerment code\", \n colormap=\"YlOrRd\",\n ax=ax,\n)\n\nplt.xticks(fontsize=16)\nplt.yticks(fontsize=16)\nplt.xlabel('Longitude', fontsize=18)\nplt.ylabel('Latitude', fontsize=18)\nplt.title(\"Endangered languages in Europe (1=Vulnerable, 5=Extinct)\", fontsize=18)\n\nplt.show()\nfig.savefig(\"endangered_languages_in_europe.png\", format=\"png\", bbox_inches=\"tight\")",
"_____no_output_____"
]
],
[
[
"## Save output",
"_____no_output_____"
]
],
[
[
"df_europe.to_csv(\"../../data/endangerment/endangered_languages_europe.csv\", index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00acb65cc95f0af05cf30bc458df4ea17a23496 | 14,522 | ipynb | Jupyter Notebook | Polinomi.ipynb | RiccardoTancredi/Polynomials | 6ddeb927284092cbb52308065d1119a2f7f7e277 | [
"MIT"
] | null | null | null | Polinomi.ipynb | RiccardoTancredi/Polynomials | 6ddeb927284092cbb52308065d1119a2f7f7e277 | [
"MIT"
] | null | null | null | Polinomi.ipynb | RiccardoTancredi/Polynomials | 6ddeb927284092cbb52308065d1119a2f7f7e277 | [
"MIT"
] | null | null | null | 36.951654 | 222 | 0.413924 | [
[
[
"# Polynomials Class",
"_____no_output_____"
]
],
[
[
"from sympy import *\nimport numpy as np\nx = Symbol('x')\nclass polinomio:\n def __init__(self, coefficienti: list):\n self.coefficienti = coefficienti\n self.grado = 0 if len(self.coefficienti) == 0 else len(\n self.coefficienti) - 1\n i = 0\n while i < len(self.coefficienti):\n if self.coefficienti[0] == 0:\n self.coefficienti.pop(0)\n i += 1\n\n # scrittura del polinomio:\n def __str__(self):\n output = \"\"\n for i in range(0, len(self.coefficienti)):\n # and x[grado_polinomio]!=0):\n if (((self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i == 1)):\n output += \"x \"\n if self.grado-i == 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0):\n output += \"{}x \".format(self.coefficienti[i])\n if self.coefficienti[i] == 0:\n pass\n # continue\n if self.grado-i != 0 and self.grado-i != 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0):\n output += \"{}x^{} \".format(\n self.coefficienti[i], self.grado-i)\n # continue\n #print(x[i], \"$x^\", grado_polinomio-i, \"$ + \")\n if (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i != 1 and self.grado-i != 0:\n output += \"x^{} \".format(self.grado-i)\n # continue\n elif (self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i != 1 and self.grado-i != 0:\n output += \"- x^{} \".format(self.grado-i)\n # continue\n elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] != 1 or self.coefficienti[i] != 1.0):\n output += \"{} \".format(self.coefficienti[i])\n elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0):\n output += \"1 \"\n if ((self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i == 1):\n output += \"- x \"\n if (i != self.grado and self.grado-i != 0) and self.coefficienti[i+1] > 0:\n output += \"+ \"\n continue\n return output\n \n def latex(self):\n latex_polinomio = 0\n for i in range(0, len(self.coefficienti)):\n # and x[grado_polinomio]!=0):\n if (((self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i == 1)):\n latex_polinomio += x\n if self.grado-i == 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0):\n latex_polinomio += self.coefficienti[i]*x\n if self.coefficienti[i] == 0:\n pass\n # continue\n if self.grado-i != 0 and self.grado-i != 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0):\n latex_polinomio += self.coefficienti[i]*x**(self.grado-i)\n # continue\n #print(x[i], \"$x^\", grado_polinomio-i, \"$ + \")\n if (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i != 1 and self.grado-i != 0:\n latex_polinomio += x**(self.grado-i)\n # continue\n elif (self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i != 1 and self.grado-i != 0:\n latex_polinomio += -x**(self.grado-i)\n # continue\n elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] != 1 or self.coefficienti[i] != 1.0):\n latex_polinomio += self.coefficienti[i]\n elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] == 1 or self.coefficienti[i] == 
1.0):\n latex_polinomio += 1\n if ((self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i == 1):\n latex_polinomio += -x\n# if (i != self.grado and self.grado-i != 0) and self.coefficienti[i+1] > 0:\n# latex_polinomio += +\n# continue\n return latex_polinomio\n\n def __add__(self, y):\n if type(y).__name__ != \"polinomio\":\n raise Exception(\n f\"You are trying to sum a polinomio with a {type(y).__name__}\")\n\n c = []\n n = min(len(self.coefficienti), len(y.coefficienti))\n m = max(len(self.coefficienti), len(y.coefficienti))\n d = []\n if m == len(self.coefficienti):\n d = self.coefficienti\n else:\n d = y.coefficienti\n for i in range(0, m-n):\n c.append(d[i])\n if m == len(self.coefficienti):\n for j in range(m-n, m):\n z = self.coefficienti[j] + y.coefficienti[j-m+n]\n c.append(z)\n else:\n for j in range(m-n, m):\n z = self.coefficienti[j-m+n] + y.coefficienti[j]\n c.append(z)\n i = 0\n while i < len(c):\n if c[0] == 0:\n c.pop(0)\n i += 1\n d = polinomio(c)\n return d\n\n def __sub__(self, y):\n c = []\n for i in y.coefficienti:\n c.append(-i)\n f = self + polinomio(c)\n return f\n\n def __mul__(self, y):\n grado_prodotto = self.grado + y.grado\n d = [[], []]\n for i in range(len(self.coefficienti)):\n for j in range(len(y.coefficienti)):\n d[0].append(self.coefficienti[i]*y.coefficienti[j])\n d[1].append(i+j) # grado del monomio\n d[1] = d[1][::-1]\n # print(d)\n for i in range(grado_prodotto+1):\n if d[1].count(grado_prodotto-i) > 1:\n j = d[1].index(grado_prodotto - i)\n #print(\"j vale: \", j)\n z = j+1\n while z < len(d[1]):\n if d[1][z] == d[1][j]:\n #print(\"z vale:\", z)\n d[0][j] = d[0][j]+d[0][z]\n d[1].pop(z)\n d[0].pop(z)\n # print(d)\n z += 1\n i = 0\n while i < len(d[0]):\n if d[0][0] == 0:\n d[0].pop(0)\n i += 1\n return polinomio(d[0])\n\n def __pow__(self, var: int):\n p = self\n i = 0\n while i < var-1:\n p *= self\n i += 1\n return p\n\n def __truediv__(self, y, c=[]):\n d = []\n s = self.grado\n v = y.grado\n grado_polinomio_risultante = s-v\n output = 0\n if grado_polinomio_risultante > 0:\n d.append(self.coefficienti[0]/y.coefficienti[0])\n i = 0\n while i < grado_polinomio_risultante:\n d.append(0)\n i += 1\n c.append(d[0])\n a = polinomio(d)\n g = a*y\n f = self - g\n if (f.grado - y.grado) == 0 and (len(f.coefficienti)-len(c)) > 1:\n c.append(0)\n if (f.grado-y.grado) < 0 and f.grado != 0:\n j = 0\n while j < y.grado-f.grado:\n c.append(0)\n self = f\n return f.__truediv__(y, c)\n elif grado_polinomio_risultante == 0:\n d.append(self.coefficienti[0]/y.coefficienti[0])\n c.append(d[0])\n a = polinomio(d)\n g = a*y\n f = self - g\n if f.grado == 0 and (f.coefficienti == [] or f.coefficienti[0] == 0):\n return polinomio(c).latex()\n elif f.grado >= 0:\n self = f\n return f.__truediv__(y, c)\n elif grado_polinomio_risultante < 0:\n output += polinomio(c).latex() + self.latex()/y.latex()\n# output += self.latex()/y.latex()\n# output += y.latex()\n# if polinomio(c).grado != 0:\n# output += \"+\"\n# output += \"(\" + str(self) + \")/(\"\n# output += str(y) + \")\"\n return output\n\n elif s == 0:\n return polinomio(c).latex()\n\n def __eq__(self, y):\n equality = 0\n if len(self.coefficienti) != len(y.coefficienti):\n return False\n for i in range(len(self.coefficienti)):\n if self.coefficienti[i] == y.coefficienti[i]:\n equality += 1\n if equality == len(self.coefficienti):\n return True\n else:\n return False\n\n def __ne__(self, y):\n inequality = 0\n if len(self.coefficienti) != len(y.coefficienti):\n return True\n else:\n for i in 
range(len(self.coefficienti)):\n if self.coefficienti[i] != y.coefficienti[i]:\n inequality += 1\n if inequality == len(self.coefficienti):\n return True\n else:\n return False\n",
"_____no_output_____"
],
[
"a = [1, 1, 2, 1, 1]\nb = [1, 1, 2, 1, 1]\nc = polinomio(a)\nd = polinomio(b)\n(c+d).latex()",
"_____no_output_____"
],
[
"# a = [1, 0, 2, 0, 1]\n# b = [1, 0, 1]\n# c = polinomio(a)\n# d = polinomio(b)\n# c/d",
"_____no_output_____"
],
[
"a = [1,1,1]\nb = [1,0]\nc = polinomio(a)\nd = polinomio(b)\n(c*d).latex()",
"_____no_output_____"
],
[
"a = [1]\nb = [1,1]\nc = polinomio(a)\nd = polinomio(b)\nc/d",
"_____no_output_____"
],
[
"# a = [3,3,3]\n# b = [3]\n# c = polinomio(a)\n# d = polinomio(b)\n# c/d",
"_____no_output_____"
],
[
"a = [1, 1, 2, 1, 1]\nb = [1, 1, 2, 1, 1]\nc = polinomio(a)\nd = polinomio(b)\nprint(c+d)",
"2x^4 + 2x^3 + 4x^2 + 2x + 2 \n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00acfd8ba30f9073f25cf3271fe2363e4791449 | 2,950 | ipynb | Jupyter Notebook | Math Programs/Factorial.ipynb | iamstarstuff/PhysicStuff | 99b057ff028ef10b0b4228fee5db7f7c7f2630ee | [
"MIT"
] | 3 | 2021-06-12T16:14:06.000Z | 2021-08-04T05:22:07.000Z | Math Programs/Factorial.ipynb | iamstarstuff/PhysicStuff | 99b057ff028ef10b0b4228fee5db7f7c7f2630ee | [
"MIT"
] | null | null | null | Math Programs/Factorial.ipynb | iamstarstuff/PhysicStuff | 99b057ff028ef10b0b4228fee5db7f7c7f2630ee | [
"MIT"
] | null | null | null | 18.09816 | 77 | 0.429492 | [
[
[
"# Using for loop\r\n\r\ndef factorial_1(n):\r\n f = 1\r\n for i in range(1,n+1):\r\n f = f*i\r\n return f\r\n ",
"_____no_output_____"
],
[
"factorial_1(5)",
"_____no_output_____"
],
[
"for i in range(1,11):\r\n print(f\"{i} ! = {factorial_1(i)}\")",
"1 ! = 1\n2 ! = 2\n3 ! = 6\n4 ! = 24\n5 ! = 120\n6 ! = 720\n7 ! = 5040\n8 ! = 40320\n9 ! = 362880\n10 ! = 3628800\n"
],
[
"# Using while loop\r\n\r\ndef factorial_2(m):\r\n f = 1\r\n i = 1\r\n while i <= m:\r\n f = f*i\r\n i += 1\r\n return f",
"_____no_output_____"
],
[
"factorial_2(6)",
"_____no_output_____"
],
[
"# Without loop, calling function inside itself\r\n\r\ndef factorial_3(x):\r\n if x > 0:\r\n return x * factorial_3(x - 1)\r\n else:\r\n return 1\r\n ",
"_____no_output_____"
],
[
"factorial_3(11)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00ad7d74369e542bf65d434f86e5cc3c046a3cb | 1,544 | ipynb | Jupyter Notebook | notebooks/TestDataFrame.ipynb | ArtDou/trees_of_grenoble | 96382039d35feb8d34b7712271dafd0b540aa230 | [
"MIT"
] | null | null | null | notebooks/TestDataFrame.ipynb | ArtDou/trees_of_grenoble | 96382039d35feb8d34b7712271dafd0b540aa230 | [
"MIT"
] | null | null | null | notebooks/TestDataFrame.ipynb | ArtDou/trees_of_grenoble | 96382039d35feb8d34b7712271dafd0b540aa230 | [
"MIT"
] | null | null | null | 23.393939 | 64 | 0.400259 | [
[
[
"# import pandas as pd\n# data = {'Name':['Tom', 'Jack', 'nick', 'juli'],\n# 'marks':[99, 98, 95, 90], \n# 'ddn':[1958, 2012, 1235, 1023]}\n \n# # Creates pandas DataFrame.\n# df = pd.DataFrame(data, index =['rank1',\n# 'rank2',\n# 'rank3',\n# 'rank4'])\n# df\n\n# df.loc[ df[\"ddn\"].isin(range(1100,2000)) , : ]\n\n# df.loc[\"rank1\": \"rank3\", :]\n\n# df.loc[:, \"Name\":\"marks\"]\n\n# df.loc[\"rank1\":\"rank2\", \"Name\":\"marks\"]\n\n# df[\"Name\"][\"rank1\"]\n# df_col = df[\"Name\"]\n# val = df_col[\"rank1\"]\n# val\n\n# df.iloc[:2, :]\n\n# df.iloc[:2, :2]\n\n# df[[\"Name\"]]\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d00ad7f1a523443f06ebb826fe6c73243a432c0c | 16,867 | ipynb | Jupyter Notebook | notebooks/testing/previously in ignored file/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb | sarahmjaffe/sagebrush-ecosystem-modeling-with-landsat8 | 4f71a1fabf6c8b3ccc7f6c65c6e7962683c3f113 | [
"BSD-3-Clause"
] | 1 | 2020-06-30T19:30:07.000Z | 2020-06-30T19:30:07.000Z | scripts/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb | kessb/sagebrush-ecosystem-modeling | ef789a8a8476e4ba8145350df87878b4a1e32d97 | [
"BSD-3-Clause"
] | 13 | 2020-06-04T05:46:05.000Z | 2020-07-03T03:24:24.000Z | scripts/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb | kessb/sagebrush-ecosystem-modeling | ef789a8a8476e4ba8145350df87878b4a1e32d97 | [
"BSD-3-Clause"
] | 1 | 2020-05-20T01:47:18.000Z | 2020-05-20T01:47:18.000Z | 34.634497 | 199 | 0.452125 | [
[
[
"# Function to list overlapping Landsat 8 scenes\nThis function is based on the following tutorial: http://geologyandpython.com/get-landsat-8.html\n\nThis function uses the area of interest (AOI) to retrieve overlapping Landsat 8 scenes. It will also output on the scenes with the largest portion of overlap and with less than 5% cloud cover.",
"_____no_output_____"
]
],
[
[
"def landsat_scene_list(aoi, start_date, end_date):\n '''Creates a list of Landsat 8, level 1, tier 1\n scenes that overlap with an aoi and are captured\n within a specified date range.\n\n Parameters\n ----------\n aoi : str\n The path to a shape file of an aoi with geometry.\n\n start-date : str\n The first date from which to start looking for\n Landsat image capture in the format yyyy-mm,dd, \n e.g. '2017-10-01'.\n \n end-date : str\n The last date from which to looking for\n Landsat image capture in the format yyyy-mm,dd, \n e.g. '2017-10-31'.\n\n Returns\n -------\n wrs : shapefile\n A catalog of Landsat 8 scenes.\n \n scenes : geopandas.geodataframe.GeoDataFrame\n A dataframe containing the information\n of Landsat 8, Level 1, Tier 1 scenes that \n overlap with the aoi.\n '''\n # Download Landsat 8 catalog from USGS (get_data auto unzips)\n USGS_url = 'https://landsat.usgs.gov/sites/default/files/documents/WRS2_descending.zip'\n et.data.get_data(url=USGS_url, replace=True)\n\n # Open Landsat catalog\n wrs = gpd.GeoDataFrame.from_file(os.path.join('data', 'earthpy-downloads',\n 'WRS2_descending',\n 'WRS2_descending.shp'))\n \n # Find polygons that intersect Landsat catalog and aoi \n wrs_intersection = wrs[wrs.intersects(aoi.geometry[0])]\n \n # Calculated paths and rows \n paths, rows = wrs_intersection['PATH'].values, wrs_intersection['ROW'].values\n \n # Iterate through each Polygon of paths and rows intersecting the area\n for i, row in wrs_intersection.iterrows():\n # Create a string for the name containing the path and row of this Polygon\n name = 'path: %03d, row: %03d' % (row.PATH, row.ROW)\n \n # Removing scenes with small amounts of overlap using threshold of intersection area\n b = (paths > 23) & (paths < 26)\n paths = paths[b]\n rows = rows[b]\n \n# # Path(s) and row(s) covering the intersection\n# ############################ WHY NOT PRINTING? ###################################\n# for i, (path, row) in enumerate(zip(paths, rows)):\n# print('Image', i+1, ' - path:', path, 'row:', row)\n \n # Check scene availability in Amazon S3 bucket list of Landsat scenes\n s3_scenes = pd.read_csv('http://landsat-pds.s3.amazonaws.com/c1/L8/scene_list.gz', \n compression='gzip', parse_dates=['acquisitionDate'], \n index_col=['acquisitionDate'])\n \n # Capture only Landsat T1 scenes within dates of interest\n scene_mask = (s3_scenes.index > start_date) & (s3_scenes.index <= end_date) \n scene_dates = s3_scenes.loc[scene_mask]\n \n scene_product = scene_dates[scene_dates['productId'].str.contains(\"_T1\")]\n \n # Geodataframe of scenes with <5% cloud cover, the url to retrieve them\n #############################row.ROW and row.PATH will need to be fixed##################\n scenes = scene_product[(scene_product.path == row.PATH) & \n (scene_product.row == row.ROW) & \n (scene_product.cloudCover <= 5)]\n\n return wrs, scenes",
"_____no_output_____"
]
],
[
[
"# TEST\n**Can DELETE everything below once tested and approved!**",
"_____no_output_____"
]
],
[
[
"# WILL DELETE WHEN FUNCTIONS ARE SEPARATED OUT\ndef NEON_site_extent(path_to_NEON_boundaries, site):\n '''Extracts a NEON site extent from an individual site as\n long as the original NEON site extent shape file contains \n a column named 'siteID'.\n\n Parameters\n ----------\n path_to_NEON_boundaries : str\n The path to a shape file that contains the list\n of all NEON site extents, also known as field\n sampling boundaries (can be found at NEON and\n ESRI sites)\n\n site : str\n One siteID contains 4 capital letters, \n e.g. CPER, HARV, ONAQ or SJER.\n\n Returns\n -------\n site_boundary : geopandas.geodataframe.GeoDataFrame\n A vector containing a single polygon \n per the site specified. \n '''\n NEON_boundaries = gpd.read_file(path_to_NEON_boundaries)\n boundaries_indexed = NEON_boundaries.set_index(['siteID'])\n\n site_boundary = boundaries_indexed.loc[[site]]\n site_boundary.reset_index(inplace=True)\n\n return site_boundary",
"_____no_output_____"
],
[
"# Import packages\nimport os\nfrom glob import glob\nimport requests\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport folium\nimport geopandas as gpd\nimport rasterio as rio\n#from bs4 import BeautifulSoup\nimport shutil\nimport earthpy as et\n\n# Set working directory\nos.chdir(os.path.join(et.io.HOME, 'earth-analytics'))",
"_____no_output_____"
],
[
"# Download shapefile of all NEON site boundaries\nurl = 'https://www.neonscience.org/sites/default/files/Field_Sampling_Boundaries_2020.zip'\net.data.get_data(url=url, replace=True)\n\n# Create path to shapefile\nterrestrial_sites = os.path.join(\n 'data', 'earthpy-downloads',\n 'Field_Sampling_Boundaries_2020',\n 'terrestrialSamplingBoundaries.shp')\n\n# Retrieving the boundaries of CPER\naoi = NEON_site_extent(terrestrial_sites, 'ONAQ')",
"Downloading from https://www.neonscience.org/sites/default/files/Field_Sampling_Boundaries_2020.zip\nExtracted output to C:\\Users\\Smells\\earth-analytics\\data\\earthpy-downloads\\Field_Sampling_Boundaries_2020\n"
],
[
"# Test out new landsat retrieval process\nscene_catalog, scene_df = landsat_scene_list(aoi, '2017-10-01', '2017-10-31')",
"Downloading from https://landsat.usgs.gov/sites/default/files/documents/WRS2_descending.zip\nExtracted output to C:\\Users\\Smells\\earth-analytics\\data\\earthpy-downloads\\WRS2_descending\n"
],
[
"# Visualize the catalog\nscene_catalog.head(3)",
"_____no_output_____"
],
[
"# Visualize the scenes of interest based on the input parameters\nscene_df",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00ae3fb3869b8ae08843690be18d0484b10970b | 28,483 | ipynb | Jupyter Notebook | site/id/tutorials/keras/save_and_load.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | 2 | 2020-09-29T07:31:21.000Z | 2020-10-13T08:16:18.000Z | site/id/tutorials/keras/save_and_load.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | null | null | null | site/id/tutorials/keras/save_and_load.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | null | null | null | 36.516667 | 594 | 0.571113 | [
[
[
"##### Copyright 2019 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Menyimpan dan memuat model",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/save_and_load\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Liht di TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/id/tutorials/keras/save_and_load.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Jalankan di Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/id/tutorials/keras/save_and_load.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Lihat kode di GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/id/tutorials/keras/save_and_load.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Unduh notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Progres dari model dapat disimpan ketika proses training dan setelah training. Ini berarti sebuah model dapat melanjutkan proses training dengan kondisi yang sama dengan ketika proses training sebelumnya dihentikan dan dapat menghindari waktu training yang panjng. Menyimpan juga berarti Anda dapat membagikan model Anda dan orang lain dapat membuat ulang proyek Anda. Ketika mempublikasikan hasil riset dan teknik dari suatu model, kebanyakan praktisi *machine learning* membagikan:\n\n* kode untuk membuat model, dan\n* berat, atau parameter, dari sebuah model\n\nMembagikan data ini akan membantu orang lain untuk memahami bagaimana model bekerja dan mencoba model tersebut dengan data yang baru.\n\nPerhatian: Hati-hati dengan kode yang tidak dapat dipercaya—model-model TensorFlow adalah kode. Lihat [Menggunakan TensorFlow dengan aman](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) untuk lebih detail.\n\n### Opsi\n\nTerdapat beberapa cara untuk menyimpan model TensorFlow—bergantung kepada API yang Anda gunakan. Panduan ini menggunakan [tf.keras](https://www.tensorflow.org/guide/keras), sebuah API tingkat tinggi yang digunakan untuk membangun dan melatih model di TensorFlow. Untuk pendekatan lainnya, lihat panduan Tensorflow [Simpan dan Restorasi](https://www.tensorflow.org/guide/saved_model) atau [Simpan sesuai keinginan](https://www.tensorflow.org/guide/eager#object-based_saving).\n",
"_____no_output_____"
],
[
"## Pengaturan\n\n### Instal dan import",
"_____no_output_____"
],
[
"Install dan import TensorFlow dan beberapa *dependency*:",
"_____no_output_____"
]
],
[
[
"try:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\n\n!pip install pyyaml h5py # Required to save models in HDF5 format",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\n\nimport tensorflow as tf\nfrom tensorflow import keras\n\nprint(tf.version.VERSION)",
"_____no_output_____"
]
],
[
[
"### Memperoleh dataset\n\nUntuk menunjukan bagaimana cara untuk menyimpan dan memuat berat dari model, Anda akan menggunakan [Dataset MNIST](http://yann.lecun.com/exdb/mnist/). Untuk mempercepat operasi ini, gunakan hanya 1000 data pertama:",
"_____no_output_____"
]
],
[
[
"(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()\n\ntrain_labels = train_labels[:1000]\ntest_labels = test_labels[:1000]\n\ntrain_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0\ntest_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0",
"_____no_output_____"
]
],
[
[
"### Mendefinisikan sebuah model",
"_____no_output_____"
],
[
"Mulai dengan membangun sebuah model sekuensial sederhana:",
"_____no_output_____"
]
],
[
[
"# Define a simple sequential model\ndef create_model():\n model = tf.keras.models.Sequential([\n keras.layers.Dense(512, activation='relu', input_shape=(784,)),\n keras.layers.Dropout(0.2),\n keras.layers.Dense(10, activation='softmax')\n ])\n\n model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n return model\n\n# Create a basic model instance\nmodel = create_model()\n\n# Display the model's architecture\nmodel.summary()",
"_____no_output_____"
]
],
[
[
"## Menyimpan cek poin ketika proses training",
"_____no_output_____"
],
[
"You can use a trained model without having to retrain it, or pick-up training where you left off—in case the training process was interrupted. The `tf.keras.callbacks.ModelCheckpoint` callback allows to continually save the model both *during* and at *the end* of training.\n\nAnda dapat menggunakan model terlatih tanpa harus melatihnya kembali, atau melanjutkan proses training di titik di mana proses training sebelumnya berhenti. *Callback* `tf.keras.callbacks.ModelCheckpoint` memungkinkan sebuah model untuk disimpan ketika dan setelah proses training dilakukan.\n\n### Penggunaan *callback* cek poin\n\nBuat sebuah callback `tf.keras.callbacks.ModelCheckpoint` yang menyimpan berat hanya ketika proses training berlangsung:",
"_____no_output_____"
]
],
[
[
"checkpoint_path = \"training_1/cp.ckpt\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\n# Create a callback that saves the model's weights\ncp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,\n save_weights_only=True,\n verbose=1)\n\n# Train the model with the new callback\nmodel.fit(train_images, \n train_labels, \n epochs=10,\n validation_data=(test_images,test_labels),\n callbacks=[cp_callback]) # Pass callback to training\n\n# This may generate warnings related to saving the state of the optimizer.\n# These warnings (and similar warnings throughout this notebook)\n# are in place to discourage outdated usage, and can be ignored.",
"_____no_output_____"
]
],
[
[
"This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:",
"_____no_output_____"
]
],
[
[
"!ls {checkpoint_dir}",
"_____no_output_____"
]
],
[
[
"Create a new, untrained model. When restoring a model from weights-only, you must have a model with the same architecture as the original model. Since it's the same model architecture, you can share weights despite that it's a different *instance* of the model.\n\nNow rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):",
"_____no_output_____"
]
],
[
[
"# Create a basic model instance\nmodel = create_model()\n\n# Evaluate the model\nloss, acc = model.evaluate(test_images, test_labels, verbose=2)\nprint(\"Untrained model, accuracy: {:5.2f}%\".format(100*acc))",
"_____no_output_____"
]
],
[
[
"Then load the weights from the checkpoint and re-evaluate:",
"_____no_output_____"
]
],
[
[
"# Loads the weights\nmodel.load_weights(checkpoint_path)\n\n# Re-evaluate the model\nloss,acc = model.evaluate(test_images, test_labels, verbose=2)\nprint(\"Restored model, accuracy: {:5.2f}%\".format(100*acc))",
"_____no_output_____"
]
],
[
[
"### Checkpoint callback options\n\nThe callback provides several options to provide unique names for checkpoints and adjust the checkpointing frequency.\n\nTrain a new model, and save uniquely named checkpoints once every five epochs:",
"_____no_output_____"
]
],
[
[
"# Include the epoch in the file name (uses `str.format`)\ncheckpoint_path = \"training_2/cp-{epoch:04d}.ckpt\"\ncheckpoint_dir = os.path.dirname(checkpoint_path)\n\n# Create a callback that saves the model's weights every 5 epochs\ncp_callback = tf.keras.callbacks.ModelCheckpoint(\n filepath=checkpoint_path, \n verbose=1, \n save_weights_only=True,\n period=5)\n\n# Create a new model instance\nmodel = create_model()\n\n# Save the weights using the `checkpoint_path` format\nmodel.save_weights(checkpoint_path.format(epoch=0))\n\n# Train the model with the new callback\nmodel.fit(train_images, \n train_labels,\n epochs=50, \n callbacks=[cp_callback],\n validation_data=(test_images,test_labels),\n verbose=0)",
"_____no_output_____"
]
],
[
[
"Sekarang, lihat hasil cek poin dan pilih yang terbaru:",
"_____no_output_____"
]
],
[
[
"!ls {checkpoint_dir}",
"_____no_output_____"
],
[
"latest = tf.train.latest_checkpoint(checkpoint_dir)\nlatest",
"_____no_output_____"
]
],
[
[
"Catatan: secara default format tensorflow hanya menyimpan 5 cek poin terbaru.\n\nUntuk tes, reset model dan muat cek poin terakhir:",
"_____no_output_____"
]
],
[
[
"# Create a new model instance\nmodel = create_model()\n\n# Load the previously saved weights\nmodel.load_weights(latest)\n\n# Re-evaluate the model\nloss, acc = model.evaluate(test_images, test_labels, verbose=2)\nprint(\"Restored model, accuracy: {:5.2f}%\".format(100*acc))",
"_____no_output_____"
]
],
[
[
"## Apa sajakah file-file ini?",
"_____no_output_____"
],
[
"Kode di atas menyimpan berat dari model ke sebuah kumpulan [cek poin](https://www.tensorflow.org/guide/saved_model#save_and_restore_variables)-file yang hanya berisikan berat dari model yan sudah dilatih dalam format biner. Cek poin terdiri atas:\n* Satu atau lebih bagian (*shard*) yang berisi berat dari model Anda.\n* Sebuah file index yang mengindikasikan suatu berat disimpan pada *shard* yang mana.\n\nJika Anda hanya melakukan proses training dari sebuah model pada sebuah komputer, Anda akan hanya memiliki satu *shard* dengan sufiks `.data-00000-of-00001`",
"_____no_output_____"
],
[
"## Menyimpan berat secara manual\n\nAnda telah melihat bagaimana caranya memuat berat yang telah disimpan sebelumnya menjadi model. Menyimpannya secara manual dapat dilakukan dengan mudah dengan *method* `Model.save_weights`. Secara default, `tf.keras`—dan `save_weights` menggunakan format TensorFlow [cek poin](../../guide/keras/checkpoints) dengan ekstensi `.ckpt` (menyimpan dalam format [HDF5](https://js.tensorflow.org/tutorials/import-keras.html) dengan ekstensi `.h5` dijelaskan dalam panduan ini [Menyimpan dan serialisasi model](../../guide/keras/save_and_serialize#weights-only_saving_in_savedmodel_format)):",
"_____no_output_____"
]
],
[
[
"# Save the weights\nmodel.save_weights('./checkpoints/my_checkpoint')\n\n# Create a new model instance\nmodel = create_model()\n\n# Restore the weights\nmodel.load_weights('./checkpoints/my_checkpoint')\n\n# Evaluate the model\nloss,acc = model.evaluate(test_images, test_labels, verbose=2)\nprint(\"Restored model, accuracy: {:5.2f}%\".format(100*acc))",
"_____no_output_____"
]
],
[
[
"## Menyimpan keseluruhan model\n\nGunakan [`model.save`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#save) untuk menyimpan arsitektur dari model, berat, dan konfigurasi training dalam satu file/folder. Hal ini menyebabkan Anda dapat melakukan ekspor dari suatu model sehingga model tersebut dapat digunakan tanpa harus mengakses kode Python secara langsung*. Karena kondisi optimizer dipulihkan, Anda dapat melanjutkan proses training tepat ketika proses training sebelumnya ditinggalkan.\n\nMeneyimpan sebuah model fungsional sangat berguna—Anda dapat memuatnya di TensorFlow.js [HDF5](https://js.tensorflow.org/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/tutorials/import-saved-model.html)) dan kemudian melatih dan menggunakan model tersebut di web browser, atau mengubahnya sehingga dapat beroperasi di perangkat *mobile* menggunakan TensorFlw Lite [HDF5](https://www.tensorflow.org/lite/convert/python_api#exporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_api#exporting_a_savedmodel_))\n\n\\*Objek-objek custom (model subkelas atau layer) membutuhkan perhatian lebih ketika proses disimpan atau dimuat. Lihat bagian **Penyimpanan objek custom** di bawah.",
"_____no_output_____"
],
[
"### Format HDF5\n\nKeras menyediakan format penyimpanan menggunakan menggunakan [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) ",
"_____no_output_____"
]
],
[
[
"# Create and train a new model instance.\nmodel = create_model()\nmodel.fit(train_images, train_labels, epochs=5)\n\n# Save the entire model to a HDF5 file.\n# The '.h5' extension indicates that the model shuold be saved to HDF5.\nmodel.save('my_model.h5') ",
"_____no_output_____"
]
],
[
[
"Sekarang, buat ulang model dari file tersebut:",
"_____no_output_____"
]
],
[
[
"# Recreate the exact same model, including its weights and the optimizer\nnew_model = tf.keras.models.load_model('my_model.h5')\n\n# Show the model architecture\nnew_model.summary()",
"_____no_output_____"
]
],
[
[
"Cek akurasi dari model:",
"_____no_output_____"
]
],
[
[
"loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)\nprint('Restored model, accuracy: {:5.2f}%'.format(100*acc))",
"_____no_output_____"
]
],
[
[
"Teknik ini menyimpan semuanya:\n\n* Nilai berat\n* Konfigurasi model (arsitektur)\n* konfigurasi dari optimizer\n\nKeras menyimpan model dengan cara menginspeksi arsitekturnya. Saat ini, belum bisa menyimpan optimizer TensorFlow (dari `tf.train`). Ketika menggunakannya, Anda harus mengkompilasi kembali model setelah dimuat, dan Anda akan kehilangan kondisi dari optimizer.",
"_____no_output_____"
],
[
"### Format SavedModel ",
"_____no_output_____"
],
[
"Format SavedModel adalah suatu cara lainnya untuk melakukan serialisasi model. Model yang disimpan dalam format ini dapat direstorasi menggunakan `tf.keras.models.load_model` dan kompatibel dengan TensorFlow Serving. [Panduan SavedModel](https://www.tensorflow.org/guide/saved_model) menjelaskan detail bagaimana untuk menyediakan/memeriksa SavedModel. Kode di bawah ini mengilustrasikan langkah-langkah yang dilakukan untuk menyimpan dan memuat kembali model.",
"_____no_output_____"
]
],
[
[
"# Create and train a new model instance.\nmodel = create_model()\nmodel.fit(train_images, train_labels, epochs=5)\n\n# Save the entire model as a SavedModel.\n!mkdir -p saved_model\nmodel.save('saved_model/my_model') ",
"_____no_output_____"
]
],
[
[
"Format SavedModel merupakan direktori yang berisi sebuah *protobuf binary* dan sebuah cek poin TensorFlow. Mememiksa direktori dari model tersimpan:",
"_____no_output_____"
]
],
[
[
"# my_model directory\n!ls saved_model\n\n# Contains an assets folder, saved_model.pb, and variables folder.\n!ls saved_model/my_model",
"_____no_output_____"
]
],
[
[
"Muat ulang Keras model yang baru dari model tersimpan:",
"_____no_output_____"
]
],
[
[
"new_model = tf.keras.models.load_model('saved_model/my_model')\n\n# Check its architecture\nnew_model.summary()",
"_____no_output_____"
]
],
[
[
"Model yang sudah terestorasi dikompilasi dengan argument yang sama dengan model asli. Coba lakukan evaluasi dan prediksi menggunakan model tersebut:",
"_____no_output_____"
]
],
[
[
"# Evaluate the restored model\nloss, acc = new_model.evaluate(test_images, test_labels, verbose=2)\nprint('Restored model, accuracy: {:5.2f}%'.format(100*acc))\n\nprint(new_model.predict(test_images).shape)",
"_____no_output_____"
]
],
[
[
"### Menyimpan objek custom\n\nApabila Anda menggunakan format SavedModel, Anda dapat melewati bagian ini. Perbedaan utama antara HDF5 dan SavedModel adalah HDF5 menggunakan konfigurasi objek untuk menyimpan arsitektur dari model, sementara SavedModel menyimpan *execution graph*. Sehingga, SavedModel dapat menyimpan objek custom seperti model subkelas dan layer custom tanpa membutuhkan kode yang asli.\n\nUntuk menyimpan objek custom dalam bentuk HDF5, Anda harus melakukan hal sebagai berikut:\n\n1. Mendefinisikan sebuah *method* `get_config` pada objek Anda, dan mendefinisikan *classmethod* (opsional) `from_config`.\n * `get_config(self)` mengembalikan *JSON-serializable dictionary* dari parameter-parameter yang dibutuhkan untuk membuat kembali objek.\n * `from_config(cls, config)` menggunakan dan mengembalikan konfigurasi dari `get_config` untuk membuat objek baru. Secara default, fungsi ini menggunakan konfigurasi teresbut untuk menginisialisasi kwargs (`return cls(**config)`).\n2. Gunakan objek tersebut sebagai argumen dari `custom_objects` ketika memuat model. Argumen tersebut harus merupakan sebuah *dictionary* yang memetakan string dari nama kelas ke class Python. Misalkan `tf.keras.models.load_model(path, custom_objects={'CustomLayer': CustomLayer})`\n\nLihat tutorial [Menulis layers and models from awal](https://www.tensorflow.org/guide/keras/custom_layers_and_models) untuk contoh dari objek custom dan `get_config`.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d00ae433be8263682eee11dfb59d4afd7cc59902 | 62,939 | ipynb | Jupyter Notebook | 03_CNN/04_2_CatAndDog_Augmentation.ipynb | seungbinahn/START_AI | 8fb28ea25b234ecab441e97a0d3bf2e29285d771 | [
"MIT"
] | null | null | null | 03_CNN/04_2_CatAndDog_Augmentation.ipynb | seungbinahn/START_AI | 8fb28ea25b234ecab441e97a0d3bf2e29285d771 | [
"MIT"
] | null | null | null | 03_CNN/04_2_CatAndDog_Augmentation.ipynb | seungbinahn/START_AI | 8fb28ea25b234ecab441e97a0d3bf2e29285d771 | [
"MIT"
] | null | null | null | 62,939 | 62,939 | 0.863693 | [
[
[
"# 증식을 통한 데이터셋 크기 확장",
"_____no_output_____"
],
[
"## 1. Google Drive와 연동",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount(\"/content/gdrive\")",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n"
],
[
"path = \"gdrive/'My Drive'/'Colab Notebooks'/CNN\"\n\n!ls gdrive/'My Drive'/'Colab Notebooks'/CNN/datasets",
"cats_and_dogs_small\n"
]
],
[
[
"## 2. 모델 생성",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras import layers, models, optimizers",
"_____no_output_____"
]
],
[
[
"0. Sequential 객체 생성\n1. conv layer(filter32, kernel size(3,3), activation 'relu', input_shape()\n2. pooling layer(pool_size(2,2))\n3. conv layer(filter 64, kernel size(3,3), activation 'relu'\n4. pooling layer(pool_size(2,2))\n5. conv layer(filter 128, kernel size(3,3), activation 'relu'\n6. pooling layer(pool_size(2,2))\n7. conv layer(filter 128, kernel size(3,3), activation 'relu'\n8. pooling layer(pool_size(2,2))\n-------\n9. flatten layer\n10. Dense layer 512, relu\n11. Dense layer 1, sigmoid",
"_____no_output_____"
]
],
[
[
"model = models.Sequential()\nmodel.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(64, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(128, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\nmodel.add(layers.Conv2D(128, (3, 3), activation='relu'))\nmodel.add(layers.MaxPooling2D((2, 2)))\n\nmodel.add(layers.Flatten())\nmodel.add(layers.Dropout(0.5))\nmodel.add(layers.Dense(512, activation='relu'))\nmodel.add(layers.Dense(1, activation='sigmoid'))",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 148, 148, 32) 896 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 74, 74, 32) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 72, 72, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 36, 36, 64) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 34, 34, 128) 73856 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 17, 17, 128) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 15, 15, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 6272) 0 \n_________________________________________________________________\ndense (Dense) (None, 512) 3211776 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 513 \n=================================================================\nTotal params: 3,453,121\nTrainable params: 3,453,121\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"from tensorflow.keras import optimizers",
"_____no_output_____"
],
[
"model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy'])",
"_____no_output_____"
],
[
"from tensorflow.keras.preprocessing.image import ImageDataGenerator",
"_____no_output_____"
]
],
[
[
"## 3. 데이터 전처리",
"_____no_output_____"
]
],
[
[
"import os\nbase_dir = '/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small'\n\ntrain_dir = os.path.join(base_dir,'train')\nvalidation_dir = os.path.join(base_dir,'validation')\ntest_dir=os.path.join(base_dir,'test')",
"_____no_output_____"
],
[
"# [코드작성]\n# train_datagen이라는 ImageDataGenerator 객체 생성 \n# train_datagen의 증식 옵션 \n# 1. scale : 0~1\n# 2. 회전 각도 범위 : -40~+40\n# 3. 수평이동 범위 : 전체 너비의 20% 비율\n# 4. 수직이동 범위 : 전체 높이의 20% 비율\n# 5. 전단 변환(shearing) 각도 범위 : 10%\n# 6. 사진 확대 범위 : 20%\n# 7. 이미지를 수평으로 뒤집기 : True\n# 8. 회전이나 가로/세로 이동으로 인해 새롭게 생성해야 할 픽셀을 채울 전략 : 'nearest'\n\ntrain_datagen = ImageDataGenerator(rescale=1./255, \n rotation_range=40, \n width_shift_range=0.2, \n height_shift_range=0.2, \n shear_range=0.1, \n zoom_range=0.2,\n horizontal_flip=True, \n fill_mode='nearest')",
"_____no_output_____"
],
[
"validation_datagen = ImageDataGenerator(rescale=1./255)\ntest_datagen = ImageDataGenerator(rescale=1./255)",
"_____no_output_____"
],
[
"train_generator = train_datagen.flow_from_directory(\n train_dir, target_size=(150,150), batch_size=20,class_mode='binary')\n\nvalidation_generator = validation_datagen.flow_from_directory(\n validation_dir, target_size=(150,150), batch_size=20,class_mode='binary')\n\ntest_generator = test_datagen.flow_from_directory(\n test_dir, target_size=(150,150), batch_size=20,class_mode='binary')\n",
"Found 2000 images belonging to 2 classes.\nFound 1000 images belonging to 2 classes.\nFound 1000 images belonging to 2 classes.\n"
]
],
[
[
"## 4. 모델 훈련",
"_____no_output_____"
]
],
[
[
"history = model.fit_generator(train_generator, \n steps_per_epoch=100, \n epochs=30, \n validation_data=validation_generator, \n validation_steps=50)",
"WARNING:tensorflow:From <ipython-input-16-c480ae1e8dcf>:5: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use Model.fit, which supports generators.\nEpoch 1/30\n100/100 [==============================] - 526s 5s/step - loss: 0.6970 - accuracy: 0.5140 - val_loss: 0.6874 - val_accuracy: 0.5410\nEpoch 2/30\n100/100 [==============================] - 23s 229ms/step - loss: 0.6865 - accuracy: 0.5545 - val_loss: 0.6869 - val_accuracy: 0.5320\nEpoch 3/30\n100/100 [==============================] - 22s 225ms/step - loss: 0.6742 - accuracy: 0.5895 - val_loss: 0.6685 - val_accuracy: 0.5890\nEpoch 4/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.6668 - accuracy: 0.6040 - val_loss: 0.6377 - val_accuracy: 0.6310\nEpoch 5/30\n100/100 [==============================] - 23s 227ms/step - loss: 0.6580 - accuracy: 0.6155 - val_loss: 0.6327 - val_accuracy: 0.6370\nEpoch 6/30\n100/100 [==============================] - 23s 229ms/step - loss: 0.6377 - accuracy: 0.6310 - val_loss: 0.6227 - val_accuracy: 0.6420\nEpoch 7/30\n100/100 [==============================] - 23s 233ms/step - loss: 0.6299 - accuracy: 0.6495 - val_loss: 0.6394 - val_accuracy: 0.6020\nEpoch 8/30\n100/100 [==============================] - 23s 225ms/step - loss: 0.6145 - accuracy: 0.6640 - val_loss: 0.5787 - val_accuracy: 0.6870\nEpoch 9/30\n100/100 [==============================] - 22s 225ms/step - loss: 0.6018 - accuracy: 0.6680 - val_loss: 0.6183 - val_accuracy: 0.6330\nEpoch 10/30\n100/100 [==============================] - 23s 227ms/step - loss: 0.5918 - accuracy: 0.6940 - val_loss: 0.6036 - val_accuracy: 0.6570\nEpoch 11/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5928 - accuracy: 0.6690 - val_loss: 0.6776 - val_accuracy: 0.6090\nEpoch 12/30\n100/100 [==============================] - 22s 223ms/step - loss: 0.5764 - accuracy: 0.6960 - val_loss: 0.5405 - val_accuracy: 0.7140\nEpoch 13/30\n100/100 [==============================] - 22s 222ms/step - loss: 0.5730 - accuracy: 0.7060 - val_loss: 0.5361 - val_accuracy: 0.7180\nEpoch 14/30\n100/100 [==============================] - 22s 225ms/step - loss: 0.5678 - accuracy: 0.6925 - val_loss: 0.5781 - val_accuracy: 0.6880\nEpoch 15/30\n100/100 [==============================] - 23s 229ms/step - loss: 0.5682 - accuracy: 0.6990 - val_loss: 0.5299 - val_accuracy: 0.7190\nEpoch 16/30\n100/100 [==============================] - 23s 225ms/step - loss: 0.5564 - accuracy: 0.7070 - val_loss: 0.5587 - val_accuracy: 0.6990\nEpoch 17/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5492 - accuracy: 0.7155 - val_loss: 0.5078 - val_accuracy: 0.7350\nEpoch 18/30\n100/100 [==============================] - 23s 227ms/step - loss: 0.5573 - accuracy: 0.7065 - val_loss: 0.5177 - val_accuracy: 0.7370\nEpoch 19/30\n100/100 [==============================] - 23s 229ms/step - loss: 0.5490 - accuracy: 0.7140 - val_loss: 0.6353 - val_accuracy: 0.6600\nEpoch 20/30\n100/100 [==============================] - 24s 236ms/step - loss: 0.5443 - accuracy: 0.7075 - val_loss: 0.4860 - val_accuracy: 0.7640\nEpoch 21/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5408 - accuracy: 0.7230 - val_loss: 0.5099 - val_accuracy: 0.7400\nEpoch 22/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5256 - accuracy: 0.7405 - val_loss: 0.5177 - val_accuracy: 0.7410\nEpoch 23/30\n100/100 
[==============================] - 23s 226ms/step - loss: 0.5321 - accuracy: 0.7390 - val_loss: 0.5267 - val_accuracy: 0.7250\nEpoch 24/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5251 - accuracy: 0.7385 - val_loss: 0.6157 - val_accuracy: 0.6870\nEpoch 25/30\n100/100 [==============================] - 22s 224ms/step - loss: 0.5353 - accuracy: 0.7305 - val_loss: 0.5214 - val_accuracy: 0.7300\nEpoch 26/30\n100/100 [==============================] - 23s 225ms/step - loss: 0.5129 - accuracy: 0.7460 - val_loss: 0.5402 - val_accuracy: 0.7170\nEpoch 27/30\n100/100 [==============================] - 22s 223ms/step - loss: 0.5099 - accuracy: 0.7550 - val_loss: 0.4921 - val_accuracy: 0.7560\nEpoch 28/30\n100/100 [==============================] - 23s 226ms/step - loss: 0.5256 - accuracy: 0.7350 - val_loss: 0.4778 - val_accuracy: 0.7600\nEpoch 29/30\n100/100 [==============================] - 23s 228ms/step - loss: 0.5287 - accuracy: 0.7350 - val_loss: 0.5037 - val_accuracy: 0.7450\nEpoch 30/30\n100/100 [==============================] - 23s 228ms/step - loss: 0.5062 - accuracy: 0.7485 - val_loss: 0.5342 - val_accuracy: 0.7140\n"
]
],
[
[
"## 5. 성능 시각화",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nacc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(1, len(acc) +1)",
"_____no_output_____"
],
[
"plt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"* acc와 val_acc 모두 증가하는 경향을 보아 과적합이 발생하지 않았음",
"_____no_output_____"
]
],
[
[
"plt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 6. 모델 평가하기",
"_____no_output_____"
]
],
[
[
"test_loss, test_accuracy = model.evaluate_generator(test_generator, steps=50)",
"WARNING:tensorflow:From <ipython-input-21-9dc8391d24c5>:1: Model.evaluate_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use Model.evaluate, which supports generators.\n"
],
[
"print(test_loss)\nprint(test_accuracy)",
"0.5713947415351868\n0.7160000205039978\n"
]
],
[
[
"## 7. 모델 저장",
"_____no_output_____"
]
],
[
[
"model.save('/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small_augmentation.h5')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00b007b5128e7c2c6b1dea6ee5e492bdc2ca5f5 | 31,158 | ipynb | Jupyter Notebook | MNIST.ipynb | coenarrow/MNistTests | ac363c558d0c2f8a755db710a4333814094c8126 | [
"MIT"
] | null | null | null | MNIST.ipynb | coenarrow/MNistTests | ac363c558d0c2f8a755db710a4333814094c8126 | [
"MIT"
] | null | null | null | MNIST.ipynb | coenarrow/MNistTests | ac363c558d0c2f8a755db710a4333814094c8126 | [
"MIT"
] | null | null | null | 59.235741 | 9,984 | 0.684993 | [
[
[
"<a href=\"https://colab.research.google.com/github/coenarrow/MNistTests/blob/main/MNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"### Start Julia evironment",
"_____no_output_____"
]
],
[
[
"# Install any required python packages here\n# !pip install <packages>",
"_____no_output_____"
],
[
"# Here we install Julia\n%%capture\n%%shell\nif ! command -v julia 3>&1 > /dev/null\nthen\n wget -q 'https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.2-linux-x86_64.tar.gz' \\\n -O /tmp/julia.tar.gz\n tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1\n rm /tmp/julia.tar.gz\nfi\njulia -e 'using Pkg; pkg\"add IJulia; precompile;\"'\necho 'Done'",
"_____no_output_____"
]
],
[
[
"After you run the first cell (the the cell directly above this text), go to Colab's menu bar and select **Edit** and select **Notebook settings** from the drop down. Select *Julia 1.6* in Runtime type. You can also select your prefered harwdware acceleration (defaults to GPU). \n\n<br/>You should see something like this:\n\n> ![Colab Img](https://raw.githubusercontent.com/Dsantra92/Julia-on-Colab/master/misc/julia_menu.png)\n\n<br/>Click on SAVE\n<br/>**We are ready to get going**\n\n\n\n",
"_____no_output_____"
]
],
[
[
"VERSION",
"_____no_output_____"
]
],
[
[
"**The next three cells are for GPU benchmarking. If you are using this notebook for the first time and have GPU enabled, you can give it a try.** ",
"_____no_output_____"
],
[
"### Import all the Julia Packages",
"_____no_output_____"
],
[
"Here, we first import all the required packages. CUDA is used to offload some of the processing to the gpu, Flux is the package for putting together the NN. MLDatasets contains the MNIST dataset which we will use in this example. Images contains some functionality to actually view images. Makie, and CairoMakie are used for plotting.",
"_____no_output_____"
]
],
[
[
"import Pkg\nPkg.add([\"CUDA\",\"Flux\",\"MLDatasets\",\"Images\",\"Makie\",\"CairoMakie\",\"ImageMagick\"])\nusing CUDA, Flux, MLDatasets, Images, Makie, Statistics, CairoMakie,ImageMagick\nusing Base.Iterators: partition",
"\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/.julia/environments/v1.6/Project.toml`\n\u001b[32m\u001b[1m No Changes\u001b[22m\u001b[39m to `~/.julia/environments/v1.6/Manifest.toml`\n"
]
],
[
[
"Let's look at the functions we can call from the MNIST set itself",
"_____no_output_____"
]
],
[
[
"names(MNIST)",
"_____no_output_____"
]
],
[
[
"Let's assume we want to get the training data from the MNIST package. Now, let's see what we get returned if we call that function",
"_____no_output_____"
]
],
[
[
"Base.return_types(MNIST.traindata)",
"_____no_output_____"
]
],
[
[
"This does not mean a heck of a lot to me initially, but we can basically see we get 2 tuples returned. So let's go ahead and assign some x and y to each of the tuples so we can probe further.",
"_____no_output_____"
]
],
[
[
"x, y = MNIST.traindata();",
"_____no_output_____"
]
],
[
[
"Let's now further investigate the x",
"_____no_output_____"
]
],
[
[
"size(x)",
"_____no_output_____"
]
],
[
[
"We know from the MNIST dataset, that the set contains 60000 images, each of size 28x28. So clearly we are looking at the images themselves. So this is our input. Let's plot an example to make sure.",
"_____no_output_____"
]
],
[
[
"i = rand(1:60000)\nheatmap(x[:,:,i],colormap = :grays)",
"_____no_output_____"
]
],
[
[
"Similarly, let's have a quick look at the size of y. I expect this is the label associated with the images",
"_____no_output_____"
]
],
[
[
"y[i]",
"_____no_output_____"
]
],
[
[
"And then let's check that the image above is labelled as what we expect.",
"_____no_output_____"
]
],
[
[
"y[7]",
"_____no_output_____"
],
[
"show(names(Images))",
"[Symbol(\"@colorant_str\"), Symbol(\"@test_approx_eq_sigma_eps\"), :ABGR, :ADIN99, :ADIN99d, :ADIN99o, :AGray, :AGray32, :AHSI, :AHSL, :AHSV, :ALCHab, :ALCHuv, :ALMS, :ALab, :ALuv, :ARGB, :ARGB32, :AXYZ, :AYCbCr, :AYIQ, :AbstractAGray, :AbstractARGB, :AbstractGray, :AbstractGrayA, :AbstractRGB, :AbstractRGBA, :AdaptiveEqualization, :Algorithm, :AlphaColor, :Axis, :AxisArray, :AxisWeightedHausdorff, :AxyY, :BGR, :BGRA, :BlobLoG, :BorderArray, :CIE1931JV_CMF, :CIE1931J_CMF, :CIE1931_CMF, :CIE1964_CMF, :CIE2006_10_CMF, :CIE2006_2_CMF, :CIEDE2000, :ChannelView, :Cityblock, :Color, :Color3, :ColorAlpha, :ColorTypes, :ColorView, :Colorant, :ColorantNormed, :ColorizedArray, :Colors, :ContrastStretching, :DE_2000, :DE_94, :DE_AB, :DE_BFD, :DE_CMC, :DE_DIN99, :DE_DIN99d, :DE_DIN99o, :DE_JPC79, :DIN99, :DIN99A, :DIN99d, :DIN99dA, :DIN99o, :DIN99oA, :Equalization, :Euclidean, :Fill, :Fixed, :FixedPoint, :FixedPointNumbers, :Fractional, :GammaCorrection, :GenericHausdorff, :Gray, :Gray24, :GrayA, :GuoAlgo, :HASLER_AND_SUSSTRUNK_M3, :HSB, :HSI, :HSIA, :HSL, :HSLA, :HSV, :HSVA, :Hamming, :HasDimNames, :HasProperties, :HasTimeAxis, :Hausdorff, :HomogeneousPoint, :ImageAxes, :ImageContrastAdjustment, :ImageCore, :ImageDistances, :ImageFiltering, :ImageMeta, :ImageMetadata, :ImageMorphology, :ImageQualityIndexes, :ImageTransformations, :Images, :IndexAny, :IndexIncremental, :Inner, :InvWarpedView, :Kernel, :KernelFactors, :LCHab, :LCHabA, :LCHuv, :LCHuvA, :LMS, :LMSA, :Lab, :LabA, :LinearStretching, :Luv, :LuvA, :MSC, :MSSSIM, :Matching, :MaxTree, :MeanAbsoluteError, :MeanSquaredError, :Metric, :MidwayEqualization, :Minkowski, :ModifiedHausdorff, :MosaicView, :MosaicViews, :N0f16, :N0f32, :N0f64, :N0f8, :N10f22, :N10f54, :N10f6, :N11f21, :N11f5, :N11f53, :N12f20, :N12f4, :N12f52, :N13f19, :N13f3, :N13f51, :N14f18, :N14f2, :N14f50, :N15f1, :N15f17, :N15f49, :N16f16, :N16f48, :N17f15, :N17f47, :N18f14, :N18f46, :N19f13, :N19f45, :N1f15, :N1f31, :N1f63, :N1f7, :N20f12, :N20f44, :N21f11, :N21f43, :N22f10, :N22f42, :N23f41, :N23f9, :N24f40, :N24f8, :N25f39, :N25f7, :N26f38, :N26f6, :N27f37, :N27f5, :N28f36, :N28f4, :N29f3, :N29f35, :N2f14, :N2f30, :N2f6, :N2f62, :N30f2, :N30f34, :N31f1, :N31f33, :N32f32, :N33f31, :N34f30, :N35f29, :N36f28, :N37f27, :N38f26, :N39f25, :N3f13, :N3f29, :N3f5, :N3f61, :N40f24, :N41f23, :N42f22, :N43f21, :N44f20, :N45f19, :N46f18, :N47f17, :N48f16, :N49f15, :N4f12, :N4f28, :N4f4, :N4f60, :N50f14, :N51f13, :N52f12, :N53f11, :N54f10, :N55f9, :N56f8, :N57f7, :N58f6, :N59f5, :N5f11, :N5f27, :N5f3, :N5f59, :N60f4, :N61f3, :N62f2, :N63f1, :N6f10, :N6f2, :N6f26, :N6f58, :N7f1, :N7f25, :N7f57, :N7f9, :N8f24, :N8f56, :N8f8, :N9f23, :N9f55, :N9f7, :NA, :NCC, :NoPad, :Normed, :PSNR, :Pad, :PaddedView, :PaddedViews, :Percentile, :PreMetric, :Q0f15, :Q0f31, :Q0f63, :Q0f7, :Q10f21, :Q10f5, :Q10f53, :Q11f20, :Q11f4, :Q11f52, :Q12f19, :Q12f3, :Q12f51, :Q13f18, :Q13f2, :Q13f50, :Q14f1, :Q14f17, :Q14f49, :Q15f0, :Q15f16, :Q15f48, :Q16f15, :Q16f47, :Q17f14, :Q17f46, :Q18f13, :Q18f45, :Q19f12, :Q19f44, :Q1f14, :Q1f30, :Q1f6, :Q1f62, :Q20f11, :Q20f43, :Q21f10, :Q21f42, :Q22f41, :Q22f9, :Q23f40, :Q23f8, :Q24f39, :Q24f7, :Q25f38, :Q25f6, :Q26f37, :Q26f5, :Q27f36, :Q27f4, :Q28f3, :Q28f35, :Q29f2, :Q29f34, :Q2f13, :Q2f29, :Q2f5, :Q2f61, :Q30f1, :Q30f33, :Q31f0, :Q31f32, :Q32f31, :Q33f30, :Q34f29, :Q35f28, :Q36f27, :Q37f26, :Q38f25, :Q39f24, :Q3f12, :Q3f28, :Q3f4, :Q3f60, :Q40f23, :Q41f22, :Q42f21, :Q43f20, :Q44f19, :Q45f18, :Q46f17, :Q47f16, :Q48f15, :Q49f14, :Q4f11, :Q4f27, :Q4f3, :Q4f59, :Q50f13, :Q51f12, 
:Q52f11, :Q53f10, :Q54f9, :Q55f8, :Q56f7, :Q57f6, :Q58f5, :Q59f4, :Q5f10, :Q5f2, :Q5f26, :Q5f58, :Q60f3, :Q61f2, :Q62f1, :Q63f0, :Q6f1, :Q6f25, :Q6f57, :Q6f9, :Q7f0, :Q7f24, :Q7f56, :Q7f8, :Q8f23, :Q8f55, :Q8f7, :Q9f22, :Q9f54, :Q9f6, :RGB, :RGB24, :RGBA, :RGBX, :RootMeanSquaredError, :SSIM, :SemiMetric, :SqEuclidean, :StackedView, :StreamIndexStyle, :StreamingContainer, :SumAbsoluteDifference, :SumSquaredDifference, :TimeAxis, :TotalVariation, :Transparent3, :TransparentColor, :TransparentGray, :TransparentRGB, :WarpedView, :XRGB, :XYZ, :XYZA, :YCbCr, :YCbCrA, :YIQ, :YIQA, :adjust_gamma, :adjust_histogram, :adjust_histogram!, :alpha, :alphacolor, :area_closing, :area_closing!, :area_opening, :area_opening!, :areas, :arraydata, :assert_timedim_last, :assess, :assess_msssim, :assess_psnr, :assess_ssim, :atindex, :atvalue, :axisdim, :axisnames, :axisvalues, :backdiffx, :backdiffy, :base_color_type, :base_colorant_type, :bilinear_interpolation, :blob_LoG, :blue, :bothat, :boundingboxes, :boxdiff, :build_histogram, :canny, :ccolor, :center, :centered, :channelview, :chroma, :cie_color_match, :ciede2000, :cityblock, :clahe, :clamp01, :clamp01!, :clamp01nan, :clamp01nan!, :clearborder, :cliphist, :closing, :collapse, :color, :color_type, :coloralpha, :colordiff, :colordim, :colorfulness, :colormap, :colormatch, :colorsigned, :colorview, :colwise, :colwise!, :comp1, :comp2, :comp3, :comp4, :comp5, :complement, :component_boxes, :component_centroids, :component_indices, :component_lengths, :component_subscripts, :convexhull, :coords_spatial, :copy!, :copyproperties, :corner2subpixel, :delete!, :deuteranopic, :diameter_closing, :diameter_closing!, :diameter_opening, :diameter_opening!, :diameters, :dilate, :dilate!, :distance_transform, :distinguishable_colors, :diverging_palette, :entropy, :erode, :erode!, :euclidean, :evaluate, :fastcorners, :feature_transform, :findlocalmaxima, :findlocalminima, :float32, :float64, :floattype, :forwarddiffx, :forwarddiffy, :freqkernel, :gammacovs, :gamutmax, :gamutmin, :gaussian_pyramid, :get, :getindex, :getindex!, :getproperty, :gray, :green, :hamming, :harris, :haskey, :hasler_and_susstrunk_m3, :hasproperty, :hausdorff, :height, :hex, :histeq, :histmatch, :hue, :imROF, :imadjustintensity, :imcorner, :imcorner_subpixel, :imedge, :imfill, :imfilter, :imfilter!, :imgaussiannoise, :imgradients, :imhist, :imlineardiffusion, :imresize, :imrotate, :imstretch, :indices_spatial, :integral_image, :invwarpedview, :istimeaxis, :kernelfactors, :kitchen_rosenfeld, :label_components, :label_components!, :load, :local_maxima, :local_maxima!, :local_minima, :local_minima!, :mae, :magnitude, :magnitude_phase, :mapc, :mapreducec, :mapwindow, :mapwindow!, :maxabsfinite, :maxfinite, :meancovs, :meanfinite, :minfinite, :minkowski, :modified_hausdorff, :morphogradient, :morpholaplace, :mosaic, :mosaicview, :mse, :n0f16, :n0f8, :n2f14, :n4f12, :n6f10, :namedaxes, :ncc, :nimages, :normalize_hue, :normedview, :opening, :orientation, :otsu_threshold, :padarray, :paddedviews, :pairwise, :parametric_colorant, :permuteddimsview, :phase, :pixelspacing, :properties, :protanopic, :rawview, :red, :reducec, :reflect, :reinterpretc, :restrict, :result_type, :rmse, :sad, :sadn, :save, :scaledual, :scaleminmax, :scalesigned, :sdims, :sequential_palette, :setindex!, :setproperty!, :shareproperties, :shepp_logan, :shi_tomasi, :size_spatial, :spacedirections, :spacekernel, :spatialorder, :spatialproperties, :sqeuclidean, :squeeze1, :ssd, :ssdn, :sym_paddedviews, :takemap, :thin_edges, 
:thin_edges_nonmaxsup, :thin_edges_nonmaxsup_subpix, :thin_edges_subpix, :thinning, :timeaxis, :timedim, :tophat, :totalvariation, :tritanopic, :warp, :warpedview, :weighted_color_mean, :whitebalance, :width, :widthheight, :xyY, :xyYA, :yen_threshold, :zeroarray]"
],
[
"?imshow",
"search:\n\nCouldn't find \u001b[36mimshow\u001b[39m\nPerhaps you meant @show, show, ispow2, @cushow, rmskew or Dims\n"
],
[
"names(ImageShow)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00b0497866d56e261de7d96476ac984d68e1410 | 10,806 | ipynb | Jupyter Notebook | 0917 Compilers with variables and conditionals.ipynb | hnu-pl/compiler2019fall | 2d6cdb49755d1bbdec176a330bcf1bf9fef81609 | [
"MIT"
] | null | null | null | 0917 Compilers with variables and conditionals.ipynb | hnu-pl/compiler2019fall | 2d6cdb49755d1bbdec176a330bcf1bf9fef81609 | [
"MIT"
] | null | null | null | 0917 Compilers with variables and conditionals.ipynb | hnu-pl/compiler2019fall | 2d6cdb49755d1bbdec176a330bcf1bf9fef81609 | [
"MIT"
] | 1 | 2019-09-03T04:10:50.000Z | 2019-09-03T04:10:50.000Z | 24.174497 | 120 | 0.445216 | [
[
[
"# 컴파일러에서 변수, 조건문 다루기\n\n변수를 다루기 위해서는 기계상태에 메모리를 추가하고 메모리 연산을 위한 저급언어 명령을 추가한다.\n\n조건문을 다루기 위해서는 실행코드를 순차적으로만 실행하는 것이 아니라 특정 코드 위치로 이동하여 실행하는 저급언어 명령을 추가한다.",
"_____no_output_____"
]
],
[
[
"data Expr = Var Name -- x\n | Val Value -- n\n | Add Expr Expr -- e1 + e2\n -- | Sub Expr Expr\n -- | Mul Expr Expr\n -- | Div Expr Expr\n | If Expr Expr Expr -- if e then e1 else e0\n deriving Show\n\ntype Name = String -- 변수의 이름은 문자열로 표현\ntype Value = Int -- 상수값은 정수\n\ntype Stack = [Value]\n\ndata Inst = ADD | PUSH Value -- 스택 명령\n | GOTO Code | JMPZ Code -- 실행코드 명령\n | READ Addr -- 메모리 명령\n deriving Show\ntype Code = [Inst]\n\n-- type Env = [ (Name, Value) ] 라는 인터프리터 실행 환경을\n-- 두 단계로 아래와 같이 나눈다\ntype SymTbl = [ (Name, Addr) ] -- 컴파일 단계에서 사용하는 심볼 테이블\ntype Memory = [ (Addr, Value) ] -- 기계(가상머신) 실행 단계에서 사용하는 메모리\n\ntype Addr = Int -- 주소는 정수로 표현\n\n-- 이제 Kont는 스택만이 아니라 세 요소로 이루어진 기계상태를 변화시키는 함수 타입이다\ntype Kont = (Stack,Memory,Code) -> (Stack,Memory,Code)",
"_____no_output_____"
],
[
"-- 더 이상 실행할 코드가 없는 기계상태로 변화시키는 함수\nhaltK :: Kont\nhaltK (s, mem, _) = (s, mem, [])\n\n-- 스택 명령을 실행시키기 위한 기계상태변화 함수들\npushK :: Int -> Kont\npushK n (s, mem, code) = (n:s, mem, code)\n\naddK :: Kont\naddK (n2:n1:s, mem, code) = ((n1+n2):s, mem, code)\n\n-- 실행코드 명령을 실행시키기 위한 기계상태변화 함수들\njmpzK :: Code -> Kont\njmpzK code (0:s, mem, _) = (s, mem, code) -- 스택 맨 위 값이 0이면 새로운 code 위치로 점프\njmpzK _ (_:s, mem, c) = (s, mem, c) -- 스택 맨 위가 0이 아니면 원래 실행하던 코드 c실행\n\ngotoK :: Code -> Kont\ngotoK code (s, mem, _) = (s, mem, code) -- 무조건 새로운 code 위치로 이동\n\n-- 메모리 명령을 실행시키기 위한 기계상태변화 함수\n-- (메모리에서 값을 읽어 스택 맨 위에 쌓는다)\nreadK a (s, mem, code) = case lookup a mem of\n Nothing -> error (show a ++ \" uninitialized memory address\")\n Just v -> (v:s, mem, code)",
"_____no_output_____"
],
[
"\ncompile :: SymTbl -> Expr -> Code\ncompile tbl (Var x) = case lookup x tbl of\n Nothing -> error (x ++ \" not found\")\n Just a -> [READ a]\ncompile tbl (Val n) = [PUSH n]\ncompile tbl (Add e1 e2) = compile tbl e1 ++ compile tbl e2 ++ [ADD]\ncompile tbl (If e e1 e0) =\n compile tbl e ++ [JMPZ c0] ++ c1 ++ [GOTO []] ++ c0\n where\n c1 = compile tbl e1\n c0 = compile tbl e0\n\nstep :: Inst -> Kont\nstep (PUSH n) = pushK n\nstep ADD = addK\nstep (GOTO c) = gotoK c\nstep (JMPZ c) = jmpzK c\nstep (READ a) = readK a\n\nrun :: Kont\nrun (s, mem, []) = (s, mem, [])\nrun (s, mem, c:cs) = run (step c (s, mem, cs))",
"_____no_output_____"
],
[
"import Data.List (union)\n\nvars (Var x) = [x]\nvars (Val _) = []\nvars (Add e1 e2) = vars e1 `union` vars e2\nvars (If e e1 e0) = vars e `union` vars e1 `union` vars e0",
"_____no_output_____"
],
[
"-- 인터프리터에서는 아래 식을 실행하려면 [(\"x\",2),(\"y\",3)]와 같은\n-- 실행환경을 만들어 한방에 실행하면 되지만 컴파일러에는 두 단계\ne0 = Add (Add (Var \"x\") (Var \"y\")) (Val 100)\ne0\n-- 컴파일할 때는 변수를 메모리 주소에 대응시키는 심볼테이블이 필요\ncode0 = compile [(\"x\",102),(\"y\",103)] e0\ncode0\n-- 실행할 때는 해당 주소에 적절한 값을 할당한 메모리가 필요\nvm0 = ([], [(102,7), (103,3)], code0)\nrun vm0",
"_____no_output_____"
],
[
"{- b = 2, x = 12, y = 123 -}\n\n-- if b then (x + 3) else y\ne1 = If (Var \"b\") (Add (Var \"x\") (Val 3)) (Var \"y\")\n-- (if b then (x + 3) else y) + 1000\ne2 = e1 `Add` Val 1000",
"_____no_output_____"
],
[
"tbl0 = [(\"b\",101),(\"x\",102),(\"y\",103)]\ntbl0\n\nmem0 = [(101,2), (102,12), (103,123)]\nmem0",
"_____no_output_____"
],
[
"code1 = compile tbl0 e1\ncode1\n\ncode2 = compile tbl0 e2\ncode2\n\n{-\nimport GHC.HeapView\n\nputStr =<< ppHeapGraph <$> buildHeapGraph 15 code2 (asBox code2)\n-}",
"_____no_output_____"
],
[
"-- 예상대로 e1의 계산 결과 스택 맨 위에 15가 나온다\nrun ([], mem0, code1)\n\n-- e2의 계산 결과는 1015이어야 하는데 e1과 마찬가지로 15가 되어버린다\nrun ([], mem0, code2)",
"_____no_output_____"
]
],
[
[
"아래는 e2를 컴파일한 code2를 실행한 결과가 왜 원하는 대로 나오지 않는지 좀더 자세히 살펴보기 위해\nstep 함수를 한단계씩 호출해 가며 각각의 명령 실행 전후의 기계상태 vm0,...,vm6를 알아본 내용이다.",
"_____no_output_____"
]
],
[
[
"vm0@(s0, _,c0:cs0) = ([], mem0, code2)\nvm0\nvm1@(s1,mem1,c1:cs1) = step c0 (s0,mem0,cs0)\nvm1\nvm2@(s2,mem2,c2:cs2) = step c1 (s1,mem1,cs1)\nvm2\nvm3@(s3,mem3,c3:cs3) = step c2 (s2,mem2,cs2)\nvm3\nvm4@(s4,mem4,c4:cs4) = step c3 (s3,mem3,cs3)\nvm4\nvm5@(s5,mem5,c5:cs5) = step c4 (s4,mem4,cs4)\nvm5\nvm6 = step c5 (s5,mem5,cs5)\nvm6",
"_____no_output_____"
]
],
[
[
"----\n\n# HW02-compiler2019fall (10점)\n\n지금까지 살펴본 `compile` 함수의 문제점을 해결하여\n제대로 된 컴파일러를 정의하려면 다음과 같은 개념으로 접근하면 된다.\n\n> 지금 내가 주목해서 컴파일하는 부분의 목적 코드를 생성해서 \n> 그 다음에 뒤이어 할 일의 코드 앞에다 이어붙이는\n> 코드변환함수를 만드는 것이 컴파일러다.\n\n그러니까 다음과 같은 타입으로 `compile` 함수를 재작성해야 한다. \n```haskell\ntype Control = Code -> Code -- 코드변환함수 타입\ncompile :: SymTbl -> Expr -> Control\n```\n\n(테스트 코드 작성조건 안내 추가예정)",
"_____no_output_____"
],
[
"과제에 도움이 될만한 내용 (아래에 추가예정)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d00b060c320a9a72bf3595cecd39120de78cc9ea | 3,294 | ipynb | Jupyter Notebook | introductory-tutorials/intro-to-julia/calculate_pi.ipynb | rajsardhara/Julia_Lang_Tutoria | 69c2fedb72751493f0f101f348c888133ff08830 | [
"MIT"
] | 535 | 2020-07-15T14:56:11.000Z | 2022-03-25T12:50:32.000Z | introductory-tutorials/intro-to-julia/calculate_pi.ipynb | rajsardhara/Julia_Lang_Tutoria | 69c2fedb72751493f0f101f348c888133ff08830 | [
"MIT"
] | 42 | 2018-02-25T22:53:47.000Z | 2020-05-14T02:15:50.000Z | introductory-tutorials/intro-to-julia/calculate_pi.ipynb | rajsardhara/Julia_Lang_Tutoria | 69c2fedb72751493f0f101f348c888133ff08830 | [
"MIT"
] | 394 | 2020-07-14T23:22:24.000Z | 2022-03-28T20:12:57.000Z | 41.175 | 397 | 0.6017 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d00b0dc8f27206415526706663288c12ef94091f | 373,556 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb | megatharun/basic-python-for-researcher | 9f1fb1b545f52ed1eab43d21616eadf85791e625 | [
"Artistic-2.0"
] | 1 | 2017-04-04T22:59:25.000Z | 2017-04-04T22:59:25.000Z | .ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb | megatharun/basic-python-for-researcher | 9f1fb1b545f52ed1eab43d21616eadf85791e625 | [
"Artistic-2.0"
] | null | null | null | .ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb | megatharun/basic-python-for-researcher | 9f1fb1b545f52ed1eab43d21616eadf85791e625 | [
"Artistic-2.0"
] | 2 | 2018-03-11T19:50:37.000Z | 2020-04-18T19:51:10.000Z | 79.905027 | 99,018 | 0.752144 | [
[
[
"# <span style=\"color: #B40486\">BASIC PYTHON FOR RESEARCHERS</span>",
"_____no_output_____"
],
[
"_by_ [**_Megat Harun Al Rashid bin Megat Ahmad_**](https://www.researchgate.net/profile/Megat_Harun_Megat_Ahmad) \nlast updated: April 14, 2016",
"_____no_output_____"
],
[
"-------\n## _<span style=\"color: #29088A\">8. Database and Data Analysis</span>_\n",
"_____no_output_____"
],
[
"---\n<span style=\"color: #0000FF\">$Pandas$</span> is an open source library for data analysis in _Python_. It gives _Python_ similar capabilities to _R_ programming language and even though it is possible to run _R_ in _Jupyter Notebook_, it would be more practical to do data analysis with a _Python_ friendly syntax. Similar to other libraries, the first step to use <span style=\"color: #0000FF\">$Pandas$</span> is to import the library and usually together with the <span style=\"color: #0000FF\">$Numpy$</span> library. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"***\n### **_8.1 Data Structures_**",
"_____no_output_____"
],
[
"Data structures (similar to _Sequence_ in _Python_) of <span style=\"color: #0000FF\">$Pandas$</span> revolves around the **_Series_** and **_DataFrame_** structures. Both are fast as they are built on top of <span style=\"color: #0000FF\">$Numpy$</span>.",
"_____no_output_____"
],
[
"A **_Series_** is a one-dimensional object with a lot of similar properties similar to a list or dictionary in _Python_'s _Sequence_. Each element or item in a **_Series_** will be assigned by default an index label from _0_ to _N-1_ (where _N_ is the length of the **_Series_**) and it can contains the various type of _Python_'s data.",
"_____no_output_____"
]
],
[
[
"# Creating a series (with different type of data)\n\ns1 = pd.Series([34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW'])\ns1",
"_____no_output_____"
]
],
[
[
"The index of a **_Series_** can be specified during its creation and giving it a similar function to a dictionary.",
"_____no_output_____"
]
],
[
[
"# Creating a series with specified index\n\nlt = [34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW']\n\ns2 = pd.Series(lt, index = ['b1', 'r1', 'solid angle', 18, 'reactor power', 'unit'])\ns2",
"_____no_output_____"
]
],
[
[
"Data can be extracted by specifying the element position or index (similar to list/dictionary).",
"_____no_output_____"
]
],
[
[
"s1[3], s2['solid angle']",
"_____no_output_____"
]
],
[
[
"**_Series_** can also be constructed from a dictionary.",
"_____no_output_____"
]
],
[
[
"pop_cities = {'Kuala Lumpur':1588750, 'Seberang Perai':818197, 'Kajang':795522,\n 'Klang':744062, 'Subang Jaya':708296}\ncities = pd.Series(pop_cities)\ncities",
"_____no_output_____"
]
],
[
[
"The elements can be sort using the <span style=\"color: #0000FF\">$Series.order()$</span> function. This will not change the structure of the original variable.",
"_____no_output_____"
]
],
[
[
"cities.order(ascending=False)",
"/home/megatharun/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:1: FutureWarning: order is deprecated, use sort_values(...)\n if __name__ == '__main__':\n"
],
[
"cities",
"_____no_output_____"
]
],
[
[
"Another sorting function is the <span style=\"color: #0000FF\">$sort()$</span> function but this will change the structure of the **_Series_** variable.",
"_____no_output_____"
]
],
[
[
"# Sorting with descending values\n\ncities.sort(ascending=False)\ncities",
"/home/megatharun/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:3: FutureWarning: sort is deprecated, use sort_values(inplace=True) for for INPLACE sorting\n app.launch_new_instance()\n"
],
[
"cities",
"_____no_output_____"
]
],
[
[
"Conditions can be applied to the elements.",
"_____no_output_____"
]
],
[
[
"# cities with population less than 800,000\ncities[cities<800000] ",
"_____no_output_____"
],
[
"# cities with population between 750,000 and 800,000\ncities[cities<800000][cities>750000]",
"_____no_output_____"
]
],
[
[
"---\nA **_DataFrame_** is a 2-dimensional data structure with named rows and columns. It is similar to _R_'s _data.frame_ object and function like a spreadsheet. **_DataFrame_** can be considered to be made of series of **_Series_** data according to the column names. **_DataFrame_** can be created by passing a 2-dimensional array of data and specifying the rows and columns names.",
"_____no_output_____"
]
],
[
[
"# Creating a DataFrame by passing a 2-D numpy array of random number \n\n# Creating first the date-time index using date_range function\n# and checking it.\ndates = pd.date_range('20140801', periods = 8, freq = 'D')\ndates",
"_____no_output_____"
],
[
"# Creating the column names as list\nKedai = ['Kedai A', 'Kedai B', 'Kedai C', 'Kedai D', 'Kedai E']\n\n# Creating the DataFrame with specified rows and columns\ndf = pd.DataFrame(np.random.randn(8,5),index=dates,columns=Kedai)\ndf",
"_____no_output_____"
]
],
[
[
"---\nSome of the useful functions that can be applied to a **_DataFrame_** include:",
"_____no_output_____"
]
],
[
[
"df.head() # Displaying the first five (default) rows\n",
"_____no_output_____"
],
[
"df.head(3) # Displaying the first three (specified) rows\n",
"_____no_output_____"
],
[
"df.tail(2) # Displaying the last two (specified) rows\n",
"_____no_output_____"
],
[
"df.index # Showing the index of rows\n",
"_____no_output_____"
],
[
"df.columns # Showing the fields of columns\n",
"_____no_output_____"
],
[
"df.values # Showing the data only in its original 2-D array\n",
"_____no_output_____"
],
[
"df.describe() # Simple statistical data for each column\n",
"_____no_output_____"
],
[
"df.T # Transposing the DataFrame (index becomes column and vice versa)\n",
"_____no_output_____"
],
[
"df.sort_index(axis=1,ascending=False) # Sorting with descending column ",
"_____no_output_____"
],
[
"df.sort(columns='Kedai D') # Sorting according to ascending specific column",
"/home/megatharun/anaconda/lib/python2.7/site-packages/ipykernel/__main__.py:1: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)\n if __name__ == '__main__':\n"
],
[
"df['Kedai A'] # Extract specific column (using python list syntax)",
"_____no_output_____"
],
[
"df['Kedai A'][2:4] # Slicing specific column (using python list syntax)\n",
"_____no_output_____"
],
[
"df[2:4] # Slicing specific row data (using python list syntax)",
"_____no_output_____"
],
[
"# Slicing specific index range\ndf['2014-08-03':'2014-08-05'] ",
"_____no_output_____"
],
[
"# Slicing specific index range for a particular column\ndf['2014-08-03':'2014-08-05']['Kedai B'] ",
"_____no_output_____"
],
[
"# Using the loc() function\n\n# Slicing specific index and column ranges\ndf.loc['2014-08-03':'2014-08-05','Kedai B':'Kedai D']",
"_____no_output_____"
],
[
"# Slicing specific index range with specific column names\ndf.loc['2014-08-03':'2014-08-05',['Kedai B','Kedai D']]",
"_____no_output_____"
],
[
"# Possibly not yet to have something like this\n\ndf.loc[['2014-08-01','2014-08-03':'2014-08-05'],['Kedai B','Kedai D']]\n",
"_____no_output_____"
],
[
"# Using the iloc() function\n\ndf.iloc[3] # Specific row location",
"_____no_output_____"
],
[
"df.iloc[:,3] # Specific column location (all rows)",
"_____no_output_____"
],
[
"df.iloc[2:4,1:3] # Python like slicing for range",
"_____no_output_____"
],
[
"df.iloc[[2,4],[1,3]] # Slicing with python like list",
"_____no_output_____"
],
[
"# Conditionals on the data\n\ndf>0 # Array values > 0 OR",
"_____no_output_____"
],
[
"df[df>0] # Directly getting the value",
"_____no_output_____"
]
],
[
[
"**_NaN_** means empty, missing data or unavailable.",
"_____no_output_____"
]
],
[
[
"df[df['Kedai B']<0] # With reference to specific value in a column (e.g. Kedai B)",
"_____no_output_____"
],
[
"df2 = df.copy() # Made a copy of a database",
"_____no_output_____"
],
[
"df2",
"_____no_output_____"
],
[
"# Adding column\n\ndf2['Tambah'] = ['satu','satu','dua','tiga','empat','tiga','lima','enam'] ",
"_____no_output_____"
],
[
"df2",
"_____no_output_____"
],
[
"# Adding row using append() function. The previous loc() is possibly deprecated.\n\n# Assign a new name to the new row (with the same format)\nnew_row_name = pd.date_range('20140809', periods = 1, freq = 'D')\n\n# Appending new row with new data\ndf2.append(list(np.random.randn(5))+['sembilan'])\n\n# Renaming the new row (here actually is a reassignment)\ndf2 = df2.rename(index={10: new_row_name[0]})\ndf2",
"_____no_output_____"
],
[
"# Assigning new data to a row\ndf2.loc['2014-08-05'] = list(np.random.randn(5))+['tujuh']\ndf2",
"_____no_output_____"
],
[
"# Assigning new data to a specific element\ndf2.loc['2014-08-05','Tambah'] = 'lapan'\ndf2",
"_____no_output_____"
],
[
"# Using the isin() function (returns boolean data frame)\ndf2.isin(['satu','tiga'])",
"_____no_output_____"
],
[
"# Select specific row based on additonal column\ndf2[df2['Tambah'].isin(['satu','tiga'])]",
"_____no_output_____"
],
[
"# Use previous command - select certain column based on selected additional column\ndf2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D']",
"_____no_output_____"
],
[
"# Select > 0 from previous cell...\n(df2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D']>0)",
"_____no_output_____"
]
],
[
[
"***\n### **_8.2 Data Operations_**",
"_____no_output_____"
],
[
"We have seen few operations previously on **_Series_** and **_DataFrame_** and here this will be explored further.",
"_____no_output_____"
]
],
[
[
"df.mean() # Statistical mean (column) - same as df.mean(0), 0 means column",
"_____no_output_____"
],
[
"df.mean(1) # Statistical mean (row) - 1 means row",
"_____no_output_____"
],
[
"df.mean()['Kedai C':'Kedai E'] # Statistical mean (range of columns)",
"_____no_output_____"
],
[
"df.max() # Statistical max (column)",
"_____no_output_____"
],
[
"df.max()['Kedai C'] # Statistical max (specific column)",
"_____no_output_____"
],
[
"df.max(1)['2014-08-04':'2014-08-07'] # Statistical max (specific row)",
"_____no_output_____"
],
[
"df.max(1)[dates[3]] # Statistical max (specific row by variable)",
"_____no_output_____"
]
],
[
[
"---\nOther statistical functions can be checked by typing df._< TAB >_.",
"_____no_output_____"
],
[
"The data in a **_DataFrame_** can be represented by a variable declared using the <span style=\"color: #0000FF\">$lambda$</span> operator.",
"_____no_output_____"
]
],
[
[
"df.apply(lambda x: x.max() - x.min()) # Operating array values with function",
"_____no_output_____"
],
[
"df.apply(lambda z: np.log(z)) # Operating array values with function",
"_____no_output_____"
]
],
[
[
"Replacing, rearranging and operations of data between columns can be done much like spreadsheet.",
"_____no_output_____"
]
],
[
[
"df3 = df.copy()",
"_____no_output_____"
],
[
"df3[r'Kedai A^2/Kedai E'] = df3['Kedai A']**2/df3['Kedai E']\ndf3",
"_____no_output_____"
]
],
[
[
"Tables can be split, rearranged and combined.",
"_____no_output_____"
]
],
[
[
"df4 = df.copy()\ndf4",
"_____no_output_____"
],
[
"pieces = [df4[6:], df4[3:6], df4[:3]] # split row 2+3+3\npieces",
"_____no_output_____"
],
[
"df5 = pd.concat(pieces) # concantenate (rearrange/combine)\ndf5",
"_____no_output_____"
],
[
"df4+df5 # Operation between tables with original index sequence",
"_____no_output_____"
],
[
"df0 = df.loc[:,'Kedai A':'Kedai C'] # Slicing and extracting columns\npd.concat([df4, df0], axis = 1) # Concatenating columns (axis = 1 -> refers to column)",
"_____no_output_____"
]
],
[
[
"***\n### **_8.3 Plotting Functions_**",
"_____no_output_____"
],
[
"---\nLet us look on some of the simple plotting function on <span style=\"color: #0000FF\">$Pandas$</span> (requires <span style=\"color: #0000FF\">$Matplotlib$</span> library).",
"_____no_output_____"
]
],
[
[
"df_add = df.copy()",
"_____no_output_____"
],
[
"# Simple auto plotting\n%matplotlib inline\n\ndf_add.cumsum().plot()",
"_____no_output_____"
],
[
"# Reposition the legend\n\nimport matplotlib.pyplot as plt\n\ndf_add.cumsum().plot()\nplt.legend(bbox_to_anchor=[1.3, 1])",
"_____no_output_____"
]
],
[
[
"In the above example, repositioning the legend requires the legend function in <span style=\"color: #0000FF\">$Matplotlib$</span> library. Therefore, the <span style=\"color: #0000FF\">$Matplotlib$</span> library must be explicitly imported.",
"_____no_output_____"
]
],
[
[
"df_add.cumsum().plot(kind='bar')\nplt.legend(bbox_to_anchor=[1.3, 1])",
"_____no_output_____"
],
[
"df_add.cumsum().plot(kind='barh', stacked=True)",
"_____no_output_____"
],
[
"df_add.cumsum().plot(kind='hist', alpha=0.5)",
"_____no_output_____"
],
[
"df_add.cumsum().plot(kind='area', alpha=0.4, stacked=False)\nplt.legend(bbox_to_anchor=[1.3, 1])",
"_____no_output_____"
]
],
[
[
"A 3-dimensional plot can be projected on a canvas but requires the <span style=\"color: #0000FF\">$Axes3D$</span> library with slightly complicated settings.",
"_____no_output_____"
]
],
[
[
"# Plotting a 3D bar plot\n\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\n\n# Convert the time format into ordinary strings\ntime_series = pd.Series(df.index.format())\n\nfig = plt.figure(figsize=(8,6))\nax = fig.add_subplot(111, projection='3d')\n\n# Plotting the bar graph column by column\nfor c, z in zip(['r', 'g', 'b', 'y','m'], np.arange(len(df.columns))):\n xs = df.index\n ys = df.values[:,z]\n ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.5)\n \nax.set_zlabel('Z')\nax.set_xticklabels(time_series, va = 'baseline', ha = 'right', rotation = 15)\nax.set_yticks(np.arange(len(df.columns)))\nax.set_yticklabels(df.columns, va = 'center', ha = 'left', rotation = -42)\n\nax.view_init(30, -30)\n\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"***\n### **_8.4 Reading And Writing Data To File_**",
"_____no_output_____"
],
[
"Data in **_DataFrame_** can be exported into **_csv_** (comma separated value) and **_Excel_** file. The users can also create a **_DataFrame_** from data in **_csv_** and **_Excel_** file, the data can then be processed.",
"_____no_output_____"
]
],
[
[
"# Export data to a csv file but separated with < TAB > rather than comma\n# the default separation is with comma\n\ndf.to_csv('Tutorial8/Kedai.txt', sep='\\t')",
"_____no_output_____"
],
[
"# Export to Excel file\n\ndf.to_excel('Tutorial8/Kedai.xlsx', sheet_name = 'Tarikh', index = True)",
"_____no_output_____"
],
[
"# Importing data from csv file (without header)\n\nfrom_file = pd.read_csv('Tutorial8/Malaysian_Town.txt',sep='\\t',header=None)\nfrom_file.head()",
"_____no_output_____"
],
[
"# Importing data from Excel file (with header (the first row) that became the column names)\n\nfrom_excel = pd.read_excel('Tutorial8/Malaysian_Town.xlsx','Sheet1')\nfrom_excel.head()",
"_____no_output_____"
]
],
[
[
"---\nFurther <span style=\"color: #0000FF\">$Pandas$</span> features can be found in http://pandas.pydata.org/. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d00b0ec145647c738797e92107fa6f558b663dc9 | 13,405 | ipynb | Jupyter Notebook | Presentations/PASS2019-Docker/content/common/Additional-resources.ipynb | SQLvariant/Demo | b1184fc4817ec21c3ed87df6d2af9a6afe78dd12 | [
"MIT"
] | 33 | 2019-04-27T19:29:15.000Z | 2022-01-14T21:07:17.000Z | Presentations/PASS2019-Docker/content/common/Additional-resources.ipynb | SQLvariant/Demo | b1184fc4817ec21c3ed87df6d2af9a6afe78dd12 | [
"MIT"
] | null | null | null | Presentations/PASS2019-Docker/content/common/Additional-resources.ipynb | SQLvariant/Demo | b1184fc4817ec21c3ed87df6d2af9a6afe78dd12 | [
"MIT"
] | 10 | 2019-04-29T17:29:36.000Z | 2022-02-09T09:32:43.000Z | 90.574324 | 4,004 | 0.446475 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d00b1cd951009e3ad602d0fa154f70bb0f61c826 | 5,069 | ipynb | Jupyter Notebook | ipynb/Germany-Niedersachsen-LK-Aurich.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 2 | 2020-06-19T09:16:14.000Z | 2021-01-24T17:47:56.000Z | ipynb/Germany-Niedersachsen-LK-Aurich.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 8 | 2020-04-20T16:49:49.000Z | 2021-12-25T16:54:19.000Z | ipynb/Germany-Niedersachsen-LK-Aurich.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 4 | 2020-04-20T13:24:45.000Z | 2021-01-29T11:12:12.000Z | 30.353293 | 185 | 0.522588 | [
[
[
"# Germany: LK Aurich (Niedersachsen)\n\n* Homepage of project: https://oscovida.github.io\n* Plots are explained at http://oscovida.github.io/plots.html\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)",
"_____no_output_____"
]
],
[
[
"import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")",
"_____no_output_____"
],
[
"%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *",
"_____no_output_____"
],
[
"overview(country=\"Germany\", subregion=\"LK Aurich\", weeks=5);",
"_____no_output_____"
],
[
"overview(country=\"Germany\", subregion=\"LK Aurich\");",
"_____no_output_____"
],
[
"compare_plot(country=\"Germany\", subregion=\"LK Aurich\", dates=\"2020-03-15:\");\n",
"_____no_output_____"
],
[
"# load the data\ncases, deaths = germany_get_region(landkreis=\"LK Aurich\")\n\n# get population of the region for future normalisation:\ninhabitants = population(country=\"Germany\", subregion=\"LK Aurich\")\nprint(f'Population of country=\"Germany\", subregion=\"LK Aurich\": {inhabitants} people')\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 1000 rows\npd.set_option(\"max_rows\", 1000)\n\n# display the table\ntable",
"_____no_output_____"
]
],
[
[
"# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Niedersachsen-LK-Aurich.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook",
"_____no_output_____"
],
[
"# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------",
"_____no_output_____"
]
],
[
[
"print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")",
"_____no_output_____"
],
[
"# to force a fresh download of data, run \"clear_cache()\"",
"_____no_output_____"
],
[
"print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00b267440108ee7f7bc6fff0a17a7c5c4c84253 | 255,171 | ipynb | Jupyter Notebook | No-show-dataset-investigation.ipynb | ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows | 85e0beeedcdd32dfa40751a0910c4a0e5734d7c8 | [
"MIT"
] | null | null | null | No-show-dataset-investigation.ipynb | ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows | 85e0beeedcdd32dfa40751a0910c4a0e5734d7c8 | [
"MIT"
] | null | null | null | No-show-dataset-investigation.ipynb | ilkycakir/Investigate-A-Dataset-Medical-Appt-No-Shows | 85e0beeedcdd32dfa40751a0910c4a0e5734d7c8 | [
"MIT"
] | null | null | null | 202.516667 | 73,580 | 0.894584 | [
[
[
"\n# Investigation of No-show Appointments Data\n\n## Table of Contents\n<ul>\n<li><a href=\"#intro\">Introduction</a></li>\n<li><a href=\"#wrangling\">Data Wrangling</a></li>\n<li><a href=\"#eda\">Exploratory Data Analysis</a></li>\n<li><a href=\"#conclusions\">Conclusions</a></li>\n</ul>",
"_____no_output_____"
],
[
"<a id='intro'></a>\n## Introduction\n\n\nThe data includes some information about more than 100,000 Braxzilian medical appointments. It gives if the patient shows up or not for the appointment as well as some characteristics of patients and appointments. When we calculate overall no-show rate for all records, we see that it is pretty high; above 20%. It means more than one out of 5 patients does not show up at all. In this project, we specifically look at the associatons between no show-up rate and other variables and try to understand why the rate is at the level it is. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport seaborn as sb\nimport numpy as np\nimport matplotlib.pyplot as plt\n% matplotlib inline",
"_____no_output_____"
]
],
[
[
"<a id='wrangling'></a>\n## Data Wrangling\n\n",
"_____no_output_____"
]
],
[
[
"# Load your data and print out a few lines. Perform operations to inspect data\n# types and look for instances of missing or possibly errant data.\nfilename = 'noshowappointments-kagglev2-may-2016.csv'\n\ndf= pd.read_csv(filename)\ndf.head()",
"_____no_output_____"
],
[
"df.info() # no missing values",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 110527 entries, 0 to 110526\nData columns (total 14 columns):\nPatientId 110527 non-null float64\nAppointmentID 110527 non-null int64\nGender 110527 non-null object\nScheduledDay 110527 non-null object\nAppointmentDay 110527 non-null object\nAge 110527 non-null int64\nNeighbourhood 110527 non-null object\nScholarship 110527 non-null int64\nHipertension 110527 non-null int64\nDiabetes 110527 non-null int64\nAlcoholism 110527 non-null int64\nHandcap 110527 non-null int64\nSMS_received 110527 non-null int64\nNo-show 110527 non-null object\ndtypes: float64(1), int64(8), object(5)\nmemory usage: 11.8+ MB\n"
]
],
[
[
"The data gives information about gender and age of the patient, neighbourhood of the hospital, if the patient has hypertension, diabetes, alcoholism or not, date and time of appointment and schedule, if the patient is registered in scholarship or not, and if SMS received or not as a reminder. \n\nWhen I look at the data types of columns, it is realized that AppointmentDay and ScheduledDay are recorded as object, or string to be more specific. And also, PatientId is recorded as float instead of integer. But I most probably will not make use of this information since it is very specific to the patient.\n\nThe data seems pretty clear. There is no missing value or duplicated rows.\n\nFirst, I start with creating a dummy variable for No-show variable. So it makes easier for us to look at no show rate across different groups.",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
],
[
"df.isnull().any().sum() # no missing value",
"_____no_output_____"
],
[
"df.duplicated().any()",
"_____no_output_____"
]
],
[
[
"\n\n### Data Cleaning ",
"_____no_output_____"
],
[
"A dummy variable named no_showup is created. It takes the value 1 if the patient did not show up, and 0 otherwise. I omitted PatientId, AppointmentID and No-show columns. \n\nThere are some rows with Age value of -1 which does not make much sense. So I dropped these rows.\n\nOther than that, the data seems pretty clean; no missing values, no duplicated rows. ",
"_____no_output_____"
]
],
[
[
"df['No-show'].unique()",
"_____no_output_____"
],
[
"df['no_showup'] = np.where(df['No-show'] == 'Yes', 1, 0)",
"_____no_output_____"
],
[
"df.drop(['PatientId', 'AppointmentID', 'No-show'], axis = 1, inplace = True)",
"_____no_output_____"
],
[
"noshow = df.no_showup == 1\nshow = df.no_showup == 0",
"_____no_output_____"
],
[
"index = df[df.Age == -1].index\ndf.drop(index, inplace = True)",
"_____no_output_____"
]
],
[
[
"<a id='eda'></a>\n## Exploratory Data Analysis\n\n\n### What factors are important in predicting no-show rate?",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize = (10,6))\ndf.Age[noshow].plot(kind = 'hist', alpha= 0.5, color= 'green', bins =20, label = 'no-show');\ndf.Age[show].plot(kind = 'hist', alpha= 0.4, color= 'orange', bins =20, label = 'show');\nplt.legend();\nplt.xlabel('Age');\nplt.ylabel('Number of Patients');",
"_____no_output_____"
]
],
[
[
"I started exploratory data analysis by first looking at the relationship between age and no_showup. By looking age distributions for patients who showed up and not showed up, we can not say much. There is a spike at around age 0, and no show up number is not that high compared to other ages. We can infer that adults are careful about babies' appointments. As age increases, the number of patients in both groups decreases, which is plausible taking general demographics into account. To be able to say more about showup rate across different age groups we need to look at ratio of one group to another.\n\nFirst, I created age bins which are equally spaced from age 0 to the maximum age which is 115. It is called age_bins. Basically, it shows which bin the age of patient falls in. So I can look at the no_showup rate across different age bins.",
"_____no_output_____"
]
],
[
[
"bin_edges = np.arange(0, df.Age.max()+3, 3)\ndf['age_bins'] = pd.cut(df.Age, bin_edges)",
"_____no_output_____"
],
[
"base_color = sb.color_palette()[0]\nage_order = df.age_bins.unique().sort_values()\ng= sb.FacetGrid(data= df, row= 'Gender', row_order = ['M', 'F'], height=4, aspect = 2);\ng = g.map(sb.barplot, 'age_bins', 'no_showup', color = base_color, ci = None, order = age_order);\ng.axes[0,0].set_ylabel('No-show Rate');\ng.axes[1,0].set_ylabel('No-show Rate');\nplt.xlabel('Age Intervals')\nplt.xticks(rotation = 90);",
"_____no_output_____"
]
],
[
[
"No-show rate is smaller than average for babies ((0-3] interval). Then it increases as age get larger and it reaches peak at around 15-18 depending on gender. After that point, as age gets larger the no-show rate gets smaller. So middle-aged and old people are much more careful about their doctor appointments which is understandable as you get older, your health might not be in a good condition, you become more concerned about your health and do not miss your appointments. Or another explanation might be that as person ages, it is more probable to have a health condition which requires close doctor watch which incentivizes you attend to your scheduled appointments.\n\nThere are spikes at the end of graphs, I suspect this happens due to small number of patients in corresponding bins. \nThere are only 5 people in (114,117] bin which proves my suspicion right.",
"_____no_output_____"
]
],
[
[
"df.groupby('age_bins').size().sort_values().head(8)",
"_____no_output_____"
],
[
"df.groupby('Gender').no_showup.mean()",
"_____no_output_____"
]
],
[
[
"There is no much difference across genders. No-show rates are close.",
"_____no_output_____"
]
],
[
[
"order_scholar = [0, 1]\ng= sb.FacetGrid(data= df, col= 'Gender', col_order = ['M', 'F'], height=4);\ng = g.map(sb.barplot, 'Scholarship', 'no_showup', order = order_scholar, color = base_color, ci = None,);\ng.axes[0,0].set_ylabel('No-show Rate');\ng.axes[0,1].set_ylabel('No-show Rate');",
"_____no_output_____"
]
],
[
[
"If the patient is in Brazilian welfare program, then the probability of her not showing up for the appointment is larger than the probablity of a patient which is not registered in welfare program. There is no significant difference between males and females. ",
"_____no_output_____"
]
],
[
[
"order_hyper = [0, 1]\ng= sb.FacetGrid(data= df, col= 'Gender', col_order = ['M', 'F'], height=4);\ng = g.map(sb.barplot, 'Hipertension', 'no_showup', order = order_hyper, color = base_color, ci = None,);\ng.axes[0,0].set_ylabel('No-show Rate');\ng.axes[0,1].set_ylabel('No-show Rate');",
"_____no_output_____"
]
],
[
[
"When the patient has hypertension or diabetes, she would not want to miss doctor appointments. So having a disease to be watched closely incentivizes you to show up for your appointments. Again, being male or female does not make a significant difference in no-show rate.\n\n",
"_____no_output_____"
]
],
[
[
"order_diabetes = [0, 1]\nsb.barplot(data =df, x = 'Diabetes', y = 'no_showup', hue = 'Gender', ci = None, order = order_diabetes);\nsb.despine();\nplt.ylabel('No-show Rate');\nplt.legend(loc = 'lower right');",
"_____no_output_____"
],
[
"order_alcol = [0, 1]\nsb.barplot(data =df, x = 'Alcoholism', y = 'no_showup', hue = 'Gender', ci = None, order = order_alcol);\nsb.despine();\nplt.ylabel('No-show Rate');\nplt.legend(loc = 'lower right');",
"_____no_output_____"
]
],
[
[
"The story for alcoholism is a bit different. If the patient is a male with alcoholism, the probability of his no showing up is smaller than the one of male with no alcoholism. On the other hand, having alcoholism makes a female patient's probability of not showing up larger. Here I suspect if the number of females having alcoholism is very small or not, but I see below that the numbers in both groups are comparable.",
"_____no_output_____"
]
],
[
[
"df.groupby(['Gender', 'Alcoholism']).size()",
"_____no_output_____"
],
[
"order_handcap = [0, 1, 2, 3, 4]\nsb.barplot(data =df, x = 'Handcap', y = 'no_showup', hue = 'Gender', ci = None, order = order_handcap);\nsb.despine();\nplt.ylabel('No-show Rate');\nplt.legend(loc = 'lower right');",
"_____no_output_____"
],
[
"df.groupby(['Handcap', 'Gender']).size()",
"_____no_output_____"
]
],
[
[
"We cannot see a significant difference across levels of Handcap variable. Label 4 for females is 1 but I do not pay attention to this since there are only 2 data points in this group. So being in different Handcap levels does not say much when predicting if a patient will show up.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize = (16,6))\nsb.barplot(data = df, x='Neighbourhood', y='no_showup', color =base_color, ci = None);\nplt.xticks(rotation = 90);\nplt.ylabel('No-show Rate');",
"_____no_output_____"
],
[
"df.groupby('Neighbourhood').size().sort_values(ascending = True).head(10)",
"_____no_output_____"
]
],
[
[
"I want to see no-show rate in different neighborhoods. There is no significant difference across neighborhoods except ILHAS OCEÂNICAS DE TRINDADE. There are only 2 data points from this place in the dataset. The exceptions can occur with only 2 data points. \nLastly, I want to look at how sending SMS to patients to remind their appointments effects no-show rate. ",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize = (5,5))\nsb.barplot(data = df, x='SMS_received', y='no_showup', color =base_color, ci = None);\nplt.title('No-show Rate vs SMS received');\nplt.ylabel('No-show Rate');",
"_____no_output_____"
]
],
[
[
"The association between SMS_received variable and no-show rate is very counterintuitive. I expect that when the patient receives SMS as a reminder, she is more likely to go to the appointment. Here the graph says exact opposite thing; when no SMS, the rate is around 16% whereas when SMS received it is more than 27%. It needs further and deeper examination.",
"_____no_output_____"
],
[
"### Understanding Negative Association between No-show Rate and SMS_received Variable",
"_____no_output_____"
]
],
[
[
"sb.barplot(data = df, x = 'SMS_received', y = 'no_showup', hue = 'Gender', ci = None);\nplt.title('No-show Rate vs SMS received');\nplt.ylabel('No-show Rate');\nplt.legend(loc ='lower right');",
"_____no_output_____"
]
],
[
[
"Gender does not make a significant impact on the rate with SMS and no SMS.\n\nBelow I try to look at how no-show rate changes with time to appointment day. I convert ScheduledDay and AppointmentDay to datetime. There is no information about hour in AppointmentDay variable. It includes 00:00:00 for all rows whereas ScheduledDay column includes hour information.\n\nNew variable named time_to_app represent time difference between AppointmentDay and ScheduledDay. It is supposed to be positive but because AppointmentDay includes 00:00:00 as hour for all appointments, time_to_app value is negative if both variables are on the same day. For example, if the patient schedules at 10 am for the appointment at 3pm the same day, time_to_app value for this appointment is (-1 days + 10 hours) since instead of 3 pm, midnight is recorded in AppointmentDay variable.",
"_____no_output_____"
]
],
[
[
"df['ScheduledDay'] = pd.to_datetime(df['ScheduledDay'])\ndf['AppointmentDay'] = pd.to_datetime(df['AppointmentDay'])",
"_____no_output_____"
],
[
"df['time_to_app']= df['AppointmentDay'] - df['ScheduledDay']",
"_____no_output_____"
],
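[
"# (Added illustration) why same-day appointments come out negative: the\n# appointment day is recorded at midnight, so a 10 am schedule gives -10 hours,\n# which pandas displays as '-1 days +14:00:00'\npd.Timestamp('2016-05-02 00:00:00') - pd.Timestamp('2016-05-02 10:00:00')",
"_____no_output_____"
],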
[
"import datetime as dt\nrows_to_drop = df[df.time_to_app < dt.timedelta(days = -1)].index\ndf.drop(rows_to_drop, inplace = True)",
"_____no_output_____"
]
],
[
[
"All time_to_app values smaller than 1 day are omitted since it points another error.",
"_____no_output_____"
]
],
[
[
"time_bins = [dt.timedelta(days=-1, hours= 0), dt.timedelta(days=-1, hours= 6), dt.timedelta(days=-1, hours= 12), dt.timedelta(days=-1, hours= 15),\n dt.timedelta(days=-1, hours = 18),\n dt.timedelta(days=1), dt.timedelta(days=2), dt.timedelta(days=3), dt.timedelta(days=7), dt.timedelta(days=15), \n dt.timedelta(days=30), dt.timedelta(days=90), dt.timedelta(days=180)]\ndf['time_bins'] = pd.cut(df['time_to_app'], time_bins)",
"_____no_output_____"
],
[
"df.groupby('time_bins').size()",
"_____no_output_____"
]
],
[
[
"I created bins for time_to_app variable. They are not equally spaced. I notice that there are significant number of patients in (-1 days, 0 days] bin. I partitioned it into smaller time bins to see the picture. The number of points in each bin is given above.\n\nI group the data by time_bins and look at no-show rate. ",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize =(9,6))\nsb.barplot(data= df, y ='time_bins', x = 'no_showup', hue = 'SMS_received', ci = None);\nplt.xlabel('No-show Rate');\nplt.ylabel('Time to Appointment');",
"_____no_output_____"
]
],
[
[
"When patient schedules an appointment for the same day which represented by the first 4 upper rows in the graaph above, no-show rate is pretty smaller than the average rate higher than 20%. If patients schedule an appointment for the same day (meaning patients make a schedule several hours before the appointment hour), with more than 95% probability they show up in the appointment. And unless there is more than 2 days to the appointment at the time the patient schedules the appointment, he does not receive SMS as a reminder. This explains why we see counterintuitive negative association between no-show rate and SMS_received variable. All patients schedule an appointment for the same day fall in no SMS received group with very low no-show rate and high number of patients and they pull down averall no-show rate of the group substantially. At the end, the rate for no SMS ends up much smaller than the rate for SMS getting patients. \n\nWe can see the effect of SMS on grouped data in the graph. SMS lowers no-show rate in every group including both 0 and 1 values for SMS_received variable. For instance, no-show rate when no SMS is a bit higher than 27% whereas it is 24% when SMS is sent for (3 days, 7 days) group. As time to appointment gets larger, SMS is being more effective. For example, SMS improves no-show rate by 3%, 5.5% and 7.7% when there are 3-7, 7-15, 30-90 days to appointment when it is scheduled, respectively.\n\nWe can see the overall effect of SMS on no-show rate by taking only those groups which have both SMS sent and no SMS sent patients. Excluding time bins smaller than 2 days, it is found that the rate is 0.33% with no SMS, and 28% with SMS sent.\n\nBut it is pretty interesting that patient attends appointment with high probability if it is the same day, and no-show rate jumps abruptly from below 5% to above 20% even if schedule day and appointment day are only 1 day apart. ",
"_____no_output_____"
]
],
[
[
"sms_sent = df[( df.AppointmentDay - df.ScheduledDay) >= dt.timedelta(days = 2) ]\nsms_sent.groupby('SMS_received').no_showup.mean()",
"_____no_output_____"
]
],
[
[
"<a id='conclusions'></a>\n## Conclusions\n\n\n\n- If schedule day and appointment day are on the same day, the patient will show up with very high probability (higher than 95%). \n\n- This probability drops abruptly below 80% even when scheduled appointment is as early as tomorrow.\n\n- The probability of no showing up increases as appointment is scheduled for distant future than near future.\n\n- No-show rate does not show significant difference across neighbourhoods.\n\n- Having hypertension and diabetes, being old, non-being registered in Brazilian welfare program and receving SMS as a reminder of appointment increases the probability of patient showing up for her scheduled appointment.\n\n- The effect of receiving SMS increases as the appointment is scheduled more in advance. \n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d00b2eb54234affd169122b640ce690a36af9fac | 8,907 | ipynb | Jupyter Notebook | additional/notebooks/embedding.ipynb | Mees-Molenaar/protein_location | ce4f3e83e7dcaa2628a354823dc6b96c4175e7a0 | [
"MIT"
] | null | null | null | additional/notebooks/embedding.ipynb | Mees-Molenaar/protein_location | ce4f3e83e7dcaa2628a354823dc6b96c4175e7a0 | [
"MIT"
] | null | null | null | additional/notebooks/embedding.ipynb | Mees-Molenaar/protein_location | ce4f3e83e7dcaa2628a354823dc6b96c4175e7a0 | [
"MIT"
] | null | null | null | 28.366242 | 310 | 0.513416 | [
[
[
"# Test for Embedding, to later move it into a layer",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"# Set-up numpy generator for random numbers\nrandom_number_generator = np.random.default_rng()",
"_____no_output_____"
],
[
"# First tokenize the protein sequence (or any sequence) in kmers.\ndef tokenize(protein_seqs, kmer_sz):\n kmers = set()\n # Loop over protein sequences\n for protein_seq in protein_seqs:\n # Loop over the whole sequence\n for i in range(len(protein_seq) - (kmer_sz - 1)):\n # Add kmers to the set, thus only unique kmers will remain\n kmers.add(protein_seq[i: i + kmer_sz])\n \n # Map kmers for one hot-encoding\n kmer_to_id = dict()\n id_to_kmer = dict()\n \n for ind, kmer in enumerate(kmers):\n kmer_to_id[kmer] = ind\n id_to_kmer[ind] = kmer\n \n vocab_sz = len(kmers)\n \n assert vocab_sz == len(kmer_to_id.keys())\n \n # Tokenize the protein sequence to integers\n tokenized = []\n for protein_seq in protein_seqs:\n sequence = []\n for i in range(len(protein_seq) - (kmer_sz -1)):\n # Convert kmer to integer\n kmer = protein_seq[i: i + kmer_sz]\n sequence.append(kmer_to_id[kmer])\n \n tokenized.append(sequence)\n \n \n return tokenized, vocab_sz, kmer_to_id, id_to_kmer",
"_____no_output_____"
],
[
"# Embedding dictionary to embed the tokenized sequence\ndef embed(EMBEDDING_DIM, vocab_sz, rng):\n embedding = {}\n for i in range(vocab_sz):\n # Use random number generator to fill the embedding with embedding_dimension random numbers \n embedding[i] = rng.random(size=(embedding_dim, 1))\n \n return embedding",
"_____no_output_____"
],
[
"if __name__ == '__main__':\n # Globals\n KMER_SIZE = 3 # Choose a Kmer_size (this is a hyperparameter which can be optimized)\n EMBEDDING_DIM = 10 # Also a hyperparameter\n \n # Store myoglobin protein sequence in a list of protein sequences\n protein_seqs = ['MGLSDGEWQLVLNVWGKVEADIPGHGQEVLIRLFKGHPETLEKFDKFKHLKSEDEMKASEDLKKHGATVLTALGGILKKKGHHEAEIKPLAQSHATKHKIPVKYLEFISECIIQVLQSKHPGDFGADAQGAMNKALELFRKDMASNYKELGFQG']\n\n # Tokenize the protein sequence\n tokenized_seqs, vocab_sz, kmer_to_id, id_to_kmer = tokenize(protein_seqs, KMER_SIZE)\n \n embedding = embed(embedding_dim, vocab_sz, random_number_generator)\n \n assert vocab_sz == len(embedding)\n \n # Embed the tokenized protein sequence\n for protein_seq in tokenized_seqs:\n for token in protein_seq:\n print(embedding[token])\n break",
"[[0.43408572]\n [0.22779265]\n [0.16100185]\n [0.25035082]\n [0.2350088 ]\n [0.89969624]\n [0.08257031]\n [0.58393399]\n [0.69324331]\n [0.43377967]]\n"
],
[
"# Embedding matrix to embed the tokenized sequence\ndef embed(embedding_dim, vocab_sz, rng):\n embedding = rng.random(size=(embedding_dim, vocab_sz))\n return embedding",
"_____no_output_____"
],
[
"emb = embed(EMBEDDING_DIM, vocab_sz, random_number_generator)",
"_____no_output_____"
],
[
"emb.shape",
"_____no_output_____"
],
[
"# First tokenize the protein sequence (or any sequence) in kmers.\ndef tokenize(protein_seqs, kmer_sz):\n kmers = set()\n # Loop over protein sequences\n for protein_seq in protein_seqs:\n # Loop over the whole sequence\n for i in range(len(protein_seq) - (kmer_sz - 1)):\n # Add kmers to the set, thus only unique kmers will remain\n kmers.add(protein_seq[i: i + kmer_sz])\n \n # Map kmers for one hot-encoding\n kmer_to_id = dict()\n id_to_kmer = dict()\n \n for ind, kmer in enumerate(kmers):\n kmer_to_id[kmer] = ind\n id_to_kmer[ind] = kmer\n \n vocab_sz = len(kmers)\n \n assert vocab_sz == len(kmer_to_id.keys())\n \n # Tokenize the protein sequence to a one-hot-encoded matrix\n tokenized = []\n for protein_seq in protein_seqs:\n sequence = []\n for i in range(len(protein_seq) - (kmer_sz -1)):\n # Convert kmer to integer\n kmer = protein_seq[i: i + kmer_sz]\n \n # One hot encode the kmer\n x = kmer_to_id[kmer]\n x_vec = np.zeros((vocab_sz, 1)) \n x_vec[x] = 1\n \n sequence.append(x_vec)\n \n tokenized.append(sequence)\n \n \n return tokenized, vocab_sz, kmer_to_id, id_to_kmer",
"_____no_output_____"
],
[
"# Tokenize the protein sequence\ntokenized_seqs, vocab_sz, kmer_to_id, id_to_kmer = tokenize(protein_seqs, KMER_SIZE)",
"_____no_output_____"
],
[
"for tokenized_seq in tokenized_seqs:\n y = np.dot(emb, tokenized_seq)",
"_____no_output_____"
],
[
"y.shape",
"_____no_output_____"
],
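[
"# (Added check) the one-hot matrix product is just a column lookup:\n# y[:, 0, 0] should equal the emb column indexed by the first kmer's id\nnp.allclose(y[:, 0, 0], emb[:, np.argmax(tokenized_seqs[0][0])])",
"_____no_output_____"
],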
[
"np.array()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00b3592623c5b584831438fdd409ce47bb3a8b7 | 111,015 | ipynb | Jupyter Notebook | spectral-analysis/spectral-encoding-of-categorical-features.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 24 | 2018-12-07T13:16:43.000Z | 2022-03-24T11:22:29.000Z | spectral-analysis/spectral-encoding-of-categorical-features.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 1 | 2021-02-25T07:07:17.000Z | 2021-02-25T07:07:17.000Z | spectral-analysis/spectral-encoding-of-categorical-features.ipynb | mlarionov/machine_learning_POC | 52cdece108f285a4e67212fd289f6dbf9035dca0 | [
"Apache-2.0"
] | 13 | 2019-06-03T17:29:49.000Z | 2022-01-05T01:41:13.000Z | 131.53436 | 18,684 | 0.861136 | [
[
[
"# Spectral encoding of categorical features\n\nAbout a year ago I was working on a regression model, which had over a million features. Needless to say, the training was super slow, and the model was overfitting a lot. After investigating this issue, I realized that most of the features were created using 1-hot encoding of the categorical features, and some of them had tens of thousands of unique values. \n\nThe problem of mapping categorical features to lower-dimensional space is not new. Recently one of the popular way to deal with it is using entity embedding layers of a neural network. However that method assumes that neural networks are used. What if we decided to use tree-based algorithms instead? In tis case we can use Spectral Graph Theory methods to create low dimensional embedding of the categorical features.\n\nThe idea came from spectral word embedding, spectral clustering and spectral dimensionality reduction algorithms.\nIf you can define a similarity measure between different values of the categorical features, we can use spectral analysis methods to find the low dimensional representation of the categorical feature. \n\nFrom the similarity function (or kernel function) we can construct an Adjacency matrix, which is a symmetric matrix, where the ij element is the value of the kernel function between category values i and j:\n\n$$ A_{ij} = K(i,j) \\tag{1}$$\n\nIt is very important that I only need a Kernel function, not a high-dimensional representation. This means that 1-hot encoding step is not necessary here. Also for the kernel-base machine learning methods, the categorical variable encoding step is not necessary as well, because what matters is the kernel function between two points, which can be constructed using the individual kernel functions.\n\nOnce the adjacency matrix is constructed, we can construct a degree matrix:\n\n$$ D_{ij} = \\delta_{ij} \\sum_{k}{A_{ik}} \\tag{2} $$\n\nHere $\\delta$ is the Kronecker delta symbol. The Laplacian matrix is the difference between the two:\n\n$$ L = D - A \\tag{3} $$\n\nAnd the normalize Laplacian matrix is defined as:\n\n$$ \\mathscr{L} = D^{-\\frac{1}{2}} L D^{-\\frac{1}{2}} \\tag{4} $$\n\nFollowing the Spectral Graph theory, we proceed with eigendecomposition of the normalized Laplacian matrix. The number of zero eigenvalues correspond to the number of connected components. In our case, let's assume that our categorical feature has two sets of values that are completely dissimilar. This means that the kernel function $K(i,j)$ is zero if $i$ and $j$ belong to different groups. In this case we will have two zero eigenvalues of the normalized Laplacian matrix.\n\nIf there is only one connected component, we will have only one zero eigenvalue. Normally it is uninformative and is dropped to prevent multicollinearity of features. However we can keep it if we are planning to use tree-based models.\n\nThe lower eigenvalues correspond to \"smooth\" eigenvectors (or modes), that are following the similarity function more closely. We want to keep only these eigenvectors and drop the eigenvectors with higher eigenvalues, because they are more likely represent noise. It is very common to look for a gap in the matrix spectrum and pick the eigenvalues below the gap. The resulting truncated eigenvectors can be normalized and represent embeddings of the categorical feature values. \n\nAs an example, let's consider the Day of Week. 1-hot encoding assumes every day is similar to any other day ($K(i,j) = 1$). 
This is not a likely assumption, because we know that days of the week are different. For example, the bar attendance spikes on Fridays and Saturdays (at least in USA) because the following day is a weekend. Label encoding is also incorrect, because it will make the \"distance\" between Monday and Wednesday twice higher than between Monday and Tuesday. And the \"distance\" between Sunday and Monday will be six times higher, even though the days are next to each other. By the way, the label encoding corresponds to the kernel $K(i, j) = exp(-\\gamma |i-j|)$\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nnp.set_printoptions(linewidth=130)",
"_____no_output_____"
],
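[
"# (Added illustration, not in the original post) the label-encoding kernel\n# K(i,j) = exp(-gamma*|i-j|) mentioned above, evaluated for the 7 days of the\n# week; gamma_demo = 0.5 is an arbitrary value chosen for display\ngamma_demo = 0.5\nidx = np.arange(7)\nK_label = np.exp(-gamma_demo * np.abs(idx[:, None] - idx[None, :]))\nK_label.round(3)",
"_____no_output_____"
],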
[
"def normalized_laplacian(A):\n 'Compute normalized Laplacian matrix given the adjacency matrix'\n d = A.sum(axis=0)\n D = np.diag(d)\n L = D-A\n D_rev_sqrt = np.diag(1/np.sqrt(d))\n return D_rev_sqrt @ L @ D_rev_sqrt",
"_____no_output_____"
]
],
[
[
"We will consider an example, where weekdays are similar to each other, but differ a lot from the weekends. ",
"_____no_output_____"
]
],
[
[
"#The adjacency matrix for days of the week\nA_dw = np.array([[0,10,9,8,5,2,1],\n [0,0,10,9,5,2,1],\n [0,0,0,10,8,2,1],\n [0,0,0,0,10,2,1],\n [0,0,0,0,0,5,3],\n [0,0,0,0,0,0,10],\n [0,0,0,0,0,0,0]])\nA_dw = A_dw + A_dw.T\nA_dw",
"_____no_output_____"
],
[
"#The normalized Laplacian matrix for days of the week\nL_dw_noem = normalized_laplacian(A_dw)\nL_dw_noem",
"_____no_output_____"
],
[
"#The eigendecomposition of the normalized Laplacian matrix\nsz, sv = np.linalg.eig(L_dw_noem)\nsz",
"_____no_output_____"
]
],
[
[
"Notice, that the eigenvalues are not ordered here. Let's plot the eigenvalues, ignoring the uninformative zero.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nsns.stripplot(data=sz[1:], jitter=False, );\n",
"_____no_output_____"
]
],
[
[
"We can see a pretty substantial gap between the first eigenvalue and the rest of the eigenvalues. If this does not give enough model performance, you can include the second eigenvalue, because the gap between it and the higher eigenvalues is also quite substantial. \n\nLet's print all eigenvectors:",
"_____no_output_____"
]
],
[
[
"sv",
"_____no_output_____"
]
],
[
[
"Look at the second eigenvector. The weekend values have a different size than the weekdays and Friday is close to zero. This proves the transitional role of Friday, that, being a day of the week, is also the beginning of the weekend.\n\nIf we are going to pick two lowest non-zero eigenvalues, our categorical feature encoding will result in these category vectors:",
"_____no_output_____"
]
],
[
[
"#Picking only two eigenvectors\ncategory_vectors = sv[:,[1,3]]\ncategory_vectors",
"_____no_output_____"
],
[
"category_vector_frame=pd.DataFrame(category_vectors, index=['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun'], \n columns=['col1', 'col2']).reset_index()\nsns.scatterplot(data=category_vector_frame, x='col1', y='col2', hue='index');",
"_____no_output_____"
]
],
[
[
"In the plot above we see that Monday and Tuesday, and also Saturday and Sunday are clustered close together, while Wednesday, Thursday and Friday are far apart. ",
"_____no_output_____"
],
[
"## Learning the kernel function\nIn the previous example we assumed that the similarity function is given. Sometimes this is the case, where it can be defined based on the business rules. However it may be possible to learn it from data.\n\nOne of the ways to compute the Kernel is using [Wasserstein distance](https://en.wikipedia.org/wiki/Wasserstein_metric). It is a good way to tell how far apart two distributions are. \n\nThe idea is to estimate the data distribution (including the target variable, but excluding the categorical variable) for each value of the categorical variable. If for two values the distributions are similar, then the divergence will be small and the similarity value will be large. As a measure of similarity I choose the RBF kernel (Gaussian radial basis function):\n\n$$ A_{ij} = exp(-\\gamma W(i, j)^2) \\tag{5}$$\n\n\n\nWhere $W(i,j)$ is the Wasserstein distance between the data distributions for the categories i and j, and $\\gamma$ is a hyperparameter that has to be tuned\n",
"_____no_output_____"
],
[
"To try this approach will will use [liquor sales data set](https://www.kaggle.com/residentmario/iowa-liquor-sales/downloads/iowa-liquor-sales.zip/1). To keep the file small I removed some columns and aggregated the data.\n",
"_____no_output_____"
]
],
[
[
"liq = pd.read_csv('Iowa_Liquor_agg.csv', dtype={'Date': 'str', 'Store Number': 'str', 'Category': 'str', 'orders': 'int', 'sales': 'float'},\n parse_dates=True)\nliq.Date = pd.to_datetime(liq.Date)\nliq.head()",
"_____no_output_____"
]
],
[
[
"Since we care about sales, let's encode the day of week using the information from the sales column\nLet's check the histogram first:",
"_____no_output_____"
]
],
[
[
"sns.distplot(liq.sales, kde=False);",
"_____no_output_____"
]
],
[
[
"We see that the distribution is very skewed, so let's try to use log of sales columns instead",
"_____no_output_____"
]
],
[
[
"sns.distplot(np.log10(1+liq.sales), kde=False);",
"_____no_output_____"
]
],
[
[
"This is much better. So we will use a log for our distribution",
"_____no_output_____"
]
],
[
[
"liq[\"log_sales\"] = np.log10(1+liq.sales)",
"_____no_output_____"
]
],
[
[
"Here we will follow [this blog](https://amethix.com/entropy-in-machine-learning/) for computation of the Kullback-Leibler divergence.\nAlso note, that since there are no liquor sales on Sunday, we consider only six days in a week",
"_____no_output_____"
]
],
[
[
"from scipy.stats import wasserstein_distance\nfrom numpy import histogram\nfrom scipy.stats import iqr\ndef dw_data(i):\n return liq[liq.Date.dt.dayofweek == i].log_sales\n\ndef wass_from_data(i,j):\n return wasserstein_distance(dw_data(i), dw_data(j)) if i > j else 0.0 \n\ndistance_matrix = np.fromfunction(np.vectorize(wass_from_data), (6,6))\ndistance_matrix += distance_matrix.T\ndistance_matrix",
"_____no_output_____"
]
],
[
[
"As we already mentioned, the hyperparameter $\\gamma$ has to be tuned. Here we just pick the value that will give a plausible result",
"_____no_output_____"
]
],
[
[
"gamma = 100\nkernel = np.exp(-gamma * distance_matrix**2)\nnp.fill_diagonal(kernel, 0)\nkernel",
"_____no_output_____"
],
[
"norm_lap = normalized_laplacian(kernel)",
"_____no_output_____"
],
[
"sz, sv = np.linalg.eig(norm_lap)\nsz",
"_____no_output_____"
],
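[
"# (Added sketch) turning this spectrum into embeddings, mirroring the synthetic\n# day-of-week example above: sort the eigenvalues and keep the two smallest\n# non-zero modes as 2-D category vectors for the six trading days\norder = np.argsort(sz)\nday_embeddings = sv[:, order[1:3]]\nday_embeddings",
"_____no_output_____"
],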
[
"sns.stripplot(data=sz[1:], jitter=False, );",
"_____no_output_____"
]
],
[
[
"Ignoring the zero eigenvalue, we can see that there is a bigger gap between the first eigenvalue and the rest of the eigenvalues, even though the values are all in the range between 1 and 1.3. Looking at the eigenvectors,",
"_____no_output_____"
]
],
[
[
"sv",
"_____no_output_____"
]
],
[
[
"Ultimately the number of eigenvectors to use is another hyperparameter, that should be optimized on a supervised learning task. The Category field is another candidate to do spectral analysis, and is, probably, a better choice since it has more unique values",
"_____no_output_____"
]
],
[
[
"len(liq.Category.unique())",
"_____no_output_____"
],
[
"unique_categories = liq.Category.unique()\ndef dw_data_c(i):\n return liq[liq.Category == unique_categories[int(i)]].log_sales\n\ndef wass_from_data_c(i,j):\n return wasserstein_distance(dw_data_c(i), dw_data_c(j)) if i > j else 0.0\n\n\n#WARNING: THIS WILL TAKE A LONG TIME\ndistance_matrix = np.fromfunction(np.vectorize(wass_from_data_c), (107,107))\ndistance_matrix += distance_matrix.T\ndistance_matrix",
"_____no_output_____"
],
[
"def plot_eigenvalues(gamma):\n \"Eigendecomposition of the kernel and plot of the eigenvalues\"\n kernel = np.exp(-gamma * distance_matrix**2)\n np.fill_diagonal(kernel, 0)\n norm_lap = normalized_laplacian(kernel)\n sz, sv = np.linalg.eig(norm_lap)\n sns.stripplot(data=sz[1:], jitter=True, );",
"_____no_output_____"
],
[
"plot_eigenvalues(100);",
"_____no_output_____"
]
],
[
[
"We can see, that a lot of eigenvalues are grouped around the 1.1 mark. The eigenvalues that are below that cluster can be used for encoding the Category feature.\nPlease also note that this method is highly sensitive on selection of hyperparameter $\\gamma$. For illustration let me pick a higher and a lower gamma",
"_____no_output_____"
]
],
[
[
"plot_eigenvalues(500);",
"_____no_output_____"
],
[
"plot_eigenvalues(10)",
"_____no_output_____"
]
],
[
[
"## Conclusion and next steps\n\nWe presented a way to encode the categorical features as a low dimensional vector that preserves most of the feature similarity information. For this we use methods of Spectral analysis on the values of the categorical feature. In order to find the kernel function we can either use heuristics, or learn it using a variety of methods, for example, using Kullback–Leibler divergence of the data distribution conditional on the category value. To select the subset of the eigenvectors we used gap analysis, but what we really need is to validate this methods by analyzing a variety of datasets and both classification and regression problems. We also need to compare it with other encoding methods, for example, entity embedding using Neural Networks. The kernel function we used can also include the information about category frequency, which will help us deal with high information, but low frequency values.\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d00b3cfee679cc700cfef207f30721ac60f8e9df | 7,030 | ipynb | Jupyter Notebook | Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb | vivekparasharr/Learn-Programming | 1ae07ef5143bff3c504978e1d375698820f59af0 | [
"MIT"
] | null | null | null | Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb | vivekparasharr/Learn-Programming | 1ae07ef5143bff3c504978e1d375698820f59af0 | [
"MIT"
] | null | null | null | Python-Programming/Python-3-Bootcamp/00-Python Object and Data Structure Basics/06-Tuples.ipynb | vivekparasharr/Learn-Programming | 1ae07ef5143bff3c504978e1d375698820f59af0 | [
"MIT"
] | null | null | null | 26.329588 | 398 | 0.51266 | [
[
[
"# Tuples\n\nIn Python tuples are very similar to lists, however, unlike lists they are *immutable* meaning they can not be changed. You would use tuples to present things that shouldn't be changed, such as days of the week, or dates on a calendar. \n\nIn this section, we will get a brief overview of the following:\n\n 1.) Constructing Tuples\n 2.) Basic Tuple Methods\n 3.) Immutability\n 4.) When to Use Tuples\n\nYou'll have an intuition of how to use tuples based on what you've learned about lists. We can treat them very similarly with the major distinction being that tuples are immutable.\n\n## Constructing Tuples\n\nThe construction of a tuples use () with elements separated by commas. For example:",
"_____no_output_____"
]
],
[
[
"# Create a tuple\nt = (1,2,3)",
"_____no_output_____"
],
[
"# Check len just like a list\nlen(t)",
"_____no_output_____"
],
[
"# Can also mix object types\nt = ('one',2)\n\n# Show\nt",
"_____no_output_____"
],
[
"# Use indexing just like we did in lists\nt[0]",
"_____no_output_____"
],
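[
"# Slicing also works just like a list (added example)\nt[0:2]",
"_____no_output_____"
],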
[
"# Slicing just like a list\nt[-1]",
"_____no_output_____"
]
],
[
[
"## Basic Tuple Methods\n\nTuples have built-in methods, but not as many as lists do. Let's look at two of them:",
"_____no_output_____"
]
],
[
[
"# Use .index to enter a value and return the index\nt.index('one')",
"_____no_output_____"
],
[
"# Use .count to count the number of times a value appears\nt.count('one')",
"_____no_output_____"
]
],
[
[
"## Immutability\n\nIt can't be stressed enough that tuples are immutable. To drive that point home:",
"_____no_output_____"
]
],
[
[
"t[0]= 'change'",
"_____no_output_____"
]
],
[
[
"Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.",
"_____no_output_____"
]
],
[
[
"t.append('nope')",
"_____no_output_____"
]
],
[
[
"## When to use Tuples\n\nYou may be wondering, \"Why bother using tuples when they have fewer available methods?\" To be honest, tuples are not used as often as lists in programming, but are used when immutability is necessary. If in your program you are passing around an object and need to make sure it does not get changed, then a tuple becomes your solution. It provides a convenient source of data integrity.\n\nYou should now be able to create and use tuples in your programming as well as have an understanding of their immutability.\n\nUp next Sets and Booleans!!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d00b611a7c7484253aca0e6fb044bae4fb48d08f | 20,595 | ipynb | Jupyter Notebook | pipeline/mvenloc.ipynb | seriousbamboo/xqtl-pipeline | c40bcefb1477190f3daa8a4adc3aac1b2f5c19c8 | [
"MIT"
] | 2 | 2022-02-01T19:24:09.000Z | 2022-02-15T14:26:08.000Z | pipeline/mvenloc.ipynb | seriousbamboo/xqtl-pipeline | c40bcefb1477190f3daa8a4adc3aac1b2f5c19c8 | [
"MIT"
] | 113 | 2021-12-20T15:17:51.000Z | 2022-03-30T15:04:54.000Z | pipeline/mvenloc.ipynb | seriousbamboo/xqtl-pipeline | c40bcefb1477190f3daa8a4adc3aac1b2f5c19c8 | [
"MIT"
] | 10 | 2021-12-17T19:45:33.000Z | 2022-02-24T21:38:05.000Z | 28.485477 | 589 | 0.512017 | [
[
[
"# Multivariate SuSiE and ENLOC model",
"_____no_output_____"
],
[
"## Aim",
"_____no_output_____"
],
[
"This notebook aims to demonstrate a workflow of generating posterior inclusion probabilities (PIPs) from GWAS summary statistics using SuSiE regression and construsting SNP signal clusters from global eQTL analysis data obtained from multivariate SuSiE models.",
"_____no_output_____"
],
[
"## Methods overview\n\nThis procedure assumes that molecular phenotype summary statistics and GWAS summary statistics are aligned and harmonized to have consistent allele coding (see [this module](../../misc/summary_stats_merger.html) for implementation details). Both molecular phenotype QTL and GWAS should be fine-mapped beforehand using mvSusiE or SuSiE. We further assume (and require) that molecular phenotype and GWAS data come from the same population ancestry. Violations from this assumption may not cause an error in the analysis computational workflow but the results obtained may not be valid.",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"1) GWAS Summary Statistics with the following columns:\n - chr: chromosome number\n - bp: base pair position\n - a1: effect allele\n - a2: other allele\n - beta: effect size\n - se: standard error of beta\n - z: z score\n\n2) eQTL data from multivariate SuSiE model with the following columns:\n - chr: chromosome number\n - bp: base pair position\n - a1: effect allele\n - a2: other allele\n - pip: posterior inclusion probability\n\n3) LD correlation matrix",
"_____no_output_____"
],
[
"## Output",
"_____no_output_____"
],
[
"Intermediate files:\n\n1) GWAS PIP file with the following columns\n - var_id\n - ld_block \n - snp_pip\n - block_pip\n\n2) eQTL annotation file with the following columns\n - chr\n - bp\n - var_id\n - a1\n - a2\n - annotations, in the format: `gene:cs_num@tissue=snp_pip[cs_pip:cs_total_snps]`\n\n\nFinal Outputs:\n\n1) Enrichment analysis result prefix.enloc.enrich.rst: estimated enrichment parameters and standard errors.\n\n2) Signal-level colocalization result prefix.enloc.sig.out: the main output from the colocalization analysis with the following format\n - column 1: signal cluster name (from eQTL analysis)\n - column 2: number of member SNPs\n - column 3: cluster PIP of eQTLs\n - column 4: cluster PIP of GWAS hits (without eQTL prior)\n - column 5: cluster PIP of GWAS hits (with eQTL prior)\n - column 6: regional colocalization probability (RCP)\n\n3) SNP-level colocalization result prefix.enloc.snp.out: SNP-level colocalization output with the following form at\n - column 1: signal cluster name\n - column 2: SNP name\n - column 3: SNP-level PIP of eQTLs\n - column 4: SNP-level PIP of GWAS (without eQTL prior)\n - column 5: SNP-level PIP of GWAS (with eQTL prior)\n - column 6: SNP-level colocalization probability\n\n4) Sorted list of colocalization signals",
"_____no_output_____"
],
[
"Takes into consideration 3 situations: \n\n1) \"Major\" and \"minor\" alleles flipped\n\n2) Different strand but same variant\n\n3) Remove variants with A/T and C/G alleles due to ambiguity",
"_____no_output_____"
],
[
"## Minimal working example",
"_____no_output_____"
]
],
[
[
"sos run mvenloc.ipynb merge \\\n --cwd output \\\n --eqtl-sumstats .. \\\n --gwas-sumstats ..",
"_____no_output_____"
],
[
"sos run mvenloc.ipynb eqtl \\\n --cwd output \\\n --sumstats-file .. \\\n --ld-region ..",
"_____no_output_____"
],
[
"sos run mvenloc.ipynb gwas \\\n --cwd output \\\n --sumstats-file .. \\\n --ld-region ..",
"_____no_output_____"
],
[
"sos run mvenloc.ipynb enloc \\\n --cwd output \\\n --eqtl-pip .. \\\n --gwas-pip ..",
"_____no_output_____"
]
],
[
[
"### Summary",
"_____no_output_____"
]
],
[
[
"head enloc.enrich.out",
"_____no_output_____"
],
[
"head enloc.sig.out",
"_____no_output_____"
],
[
"head enloc.snp.out",
"_____no_output_____"
]
],
[
[
"## Command interface",
"_____no_output_____"
]
],
[
[
"sos run mvenloc.ipynb -h",
"_____no_output_____"
]
],
[
[
"## Implementation",
"_____no_output_____"
]
],
[
[
"[global]\nparameter: cwd = path\nparameter: container = \"\"",
"_____no_output_____"
]
],
[
[
"### Step 0: data formatting\n\n#### Extract common SNPS between the GWAS summary statistics and eQTL data ",
"_____no_output_____"
]
],
[
[
"[merger]\n# eQTL summary statistics as a list of RData\nparameter: eqtl_sumstats = path \n# GWAS summary stats in gz format\nparameter: gwas_sumstats = path \ninput: eqtl_sumstats, gwas_sumstats\noutput: f\"{cwd:a}/{eqtl_sumstats:bn}.standardized.gz\", f\"{cwd:a}/{gwas_sumstats:bn}.standardized.gz\"\nR: expand = \"${ }\"\n\n ###\n # functions\n ###\n\n allele.qc = function(a1,a2,ref1,ref2) {\n # a1 and a2 are the first data-set\n # ref1 and ref2 are the 2nd data-set\n # Make all the alleles into upper-case, as A,T,C,G:\n a1 = toupper(a1)\n a2 = toupper(a2)\n ref1 = toupper(ref1)\n ref2 = toupper(ref2)\n # Strand flip, to change the allele representation in the 2nd data-set\n strand_flip = function(ref) {\n flip = ref\n flip[ref == \"A\"] = \"T\"\n flip[ref == \"T\"] = \"A\"\n flip[ref == \"G\"] = \"C\"\n flip[ref == \"C\"] = \"G\"\n flip\n }\n flip1 = strand_flip(ref1)\n flip2 = strand_flip(ref2)\n snp = list()\n # Remove strand ambiguous SNPs (scenario 3)\n snp[[\"keep\"]] = !((a1==\"A\" & a2==\"T\") | (a1==\"T\" & a2==\"A\") | (a1==\"C\" & a2==\"G\") | (a1==\"G\" & a2==\"C\"))\n # Remove non-ATCG coding\n snp[[\"keep\"]][ a1 != \"A\" & a1 != \"T\" & a1 != \"G\" & a1 != \"C\" ] = F\n snp[[\"keep\"]][ a2 != \"A\" & a2 != \"T\" & a2 != \"G\" & a2 != \"C\" ] = F\n # as long as scenario 1 is involved, sign_flip will return TRUE\n snp[[\"sign_flip\"]] = (a1 == ref2 & a2 == ref1) | (a1 == flip2 & a2 == flip1)\n # as long as scenario 2 is involved, strand_flip will return TRUE\n snp[[\"strand_flip\"]] = (a1 == flip1 & a2 == flip2) | (a1 == flip2 & a2 == flip1)\n # remove other cases, eg, tri-allelic, one dataset is A C, the other is A G, for example.\n exact_match = (a1 == ref1 & a2 == ref2) \n snp[[\"keep\"]][!(exact_match | snp[[\"sign_flip\"]] | snp[[\"strand_flip\"]])] = F\n return(snp)\n }\n\n # Extract information from RData\n eqtl.split = function(eqtl){\n rows = length(eqtl)\n chr = vector(length = rows)\n pos = vector(length = rows)\n a1 = vector(length = rows)\n a2 = vector(length = rows)\n for (i in 1:rows){\n split1 = str_split(eqtl[i], \":\")\n split2 = str_split(split1[[1]][2], \"_\")\n chr[i]= split1[[1]][1]\n pos[i] = split2[[1]][1]\n a1[i] = split2[[1]][2]\n a2[i] = split2[[1]][3]\n\n }\n eqtl.df = data.frame(eqtl,chr,pos,a1,a2)\n }\n\n remove.dup = function(df){\n df = df %>% arrange(PosGRCh37, -N)\n df = df[!duplicated(df$PosGRCh37),]\n return(df)\n }\n \n ###\n # Code\n ###\n \n # gene regions: \n # 1 = ENSG00000203710\n # 2 = ENSG00000064687\n # 3 = ENSG00000203710\n \n # eqtl\n gene.name = scan(${_input[0]:r}, what='character')\n\n # initial filter of gwas variants that are in eqtl\n gwas = gwas_sumstats\n gwas_filter = gwas[which(gwas$id %in% var),]\n\n # create eqtl df\n eqtl.df = eqtl.split(eqtl$var)\n\n # allele flip\n f_gwas = gwas %>% filter(chr %in% eqtl.df$chr & PosGRCh37 %in% eqtl.df$pos)\n eqtl.df.f = eqtl.df %>% filter(pos %in% f_gwas$PosGRCh37)\n\n # check if there are duplicate pos\n length(unique(f_gwas$PosGRCh37))\n\n # multiple snps with same pos\n dup.pos = f_gwas %>% group_by(PosGRCh37) %>% filter(n() > 1) \n\n f_gwas = remove.dup(f_gwas)\n\n qc = allele.qc(f_gwas$testedAllele, f_gwas$otherAllele, eqtl.df.f$a1, eqtl.df.f$a2)\n keep = as.data.frame(qc$keep)\n sign = as.data.frame(qc$sign_flip)\n strand = as.data.frame(qc$strand_flip)\n\n # sign flip\n f_gwas$z[qc$sign_flip] = -1 * f_gwas$z[qc$sign_flip]\n f_gwas$testedAllele[qc$sign_flip] = eqtl.df.f$a1[qc$sign_flip]\n f_gwas$otherAllele[qc$sign_flip] = eqtl.df.f$a2[qc$sign_flip]\n\n 
# strand flip\n    f_gwas$testedAllele[qc$strand_flip] = eqtl.df.f$a1[qc$strand_flip]\n    f_gwas$otherAllele[qc$strand_flip] = eqtl.df.f$a2[qc$strand_flip]\n\n    # remove strand-ambiguous variants\n    if ( sum(!qc$keep) > 0 ) {\n    eqtl.df.f = eqtl.df.f[qc$keep,]\n    f_gwas = f_gwas[qc$keep,]\n    }",
"_____no_output_____"
]
],
[
[
"#### Extract common SNPS between the summary statistics and LD",
"_____no_output_____"
]
],
[
[
"[eqtl_1, gwas_1 (filter LD file and sumstat file)]\nparameter: sumstat_file = path\n# LD and region information: chr, start, end, LD file\nparamter: ld_region = path\ninput: sumstat_file, for_each = 'ld_region'\noutput: f\"{cwd:a}/{sumstat_file:bn}_{region[0]}_{region[1]}_{region[2]}.z.rds\",\n f\"{cwd:a}/{sumstat_file:bn}_{region[0]}_{region[1]}_{region[2]}.ld.rds\"\nR:\n # FIXME: need to filter both ways for sumstats and for LD\n # lds filtered\n \n eqtl_id = which(var %in% eqtl.df.f$eqtl)\n ld_f = ld[eqtl_id, eqtl_id]\n\n # ld missing\n miss = which(is.na(ld_f), arr.ind=TRUE)\n miss_r = unique(as.data.frame(miss)$row)\n miss_c = unique(as.data.frame(miss)$col)\n\n total_miss = unique(union(miss_r,miss_c))\n # FIXME: LD should not have missing data if properly processed by our pipeline\n # In the future we should throw an error when it happens\n\n if (length(total_miss)!=0){\n ld_f2 = ld_f[-total_miss,]\n ld_f2 = ld_f2[,-total_miss]\n dim(ld_f2)\n }else{ld_f2 = ld_f}\n \n f_gwas.f = f_gwas %>% filter(id %in% eqtl_id.f$eqtl)\n",
"_____no_output_____"
]
],
[
[
"### Step 1: fine-mapping",
"_____no_output_____"
]
],
[
[
"[eqtl_2, gwas_2 (finemapping)]\n# FIXME: RDS file should have included region information\noutput: f\"{_input[0]:nn}.susieR.rds\", f\"{_input[0]:nn}.susieR_plot.rds\"\nR:\n susie_results = susieR::susie_rss(z = f_gwas.f$z,R = ld_f2, check_prior = F)\n susieR::susie_plot(susie_results,\"PIP\")\n susie_results$z = f_gwas.f$z\n susieR::susie_plot(susie_results,\"z_original\")",
"_____no_output_____"
]
],
[
[
"### Step 2: fine-mapping results processing",
"_____no_output_____"
],
[
"#### Construct eQTL annotation file using eQTL SNP PIPs and credible sets ",
"_____no_output_____"
]
],
[
[
"[eqtl_3 (create signal cluster using CS)]\noutput: f\"{_input[0]:nn}.enloc_annot.gz\"\nR:\n cs = eqtl[[\"sets\"]][[\"cs\"]][[\"L1\"]]\n o_id = which(var %in% eqtl_id.f$eqtl)\n pip = eqtl$pip[o_id]\n eqtl_annot = cbind(eqtl_id.f, pip) %>% mutate(gene = gene.name,cluster = -1, cluster_pip = 0, total_snps = 0)\n\n for(snp in cs){\n eqtl_annot$cluster[snp] = 1\n eqtl_annot$cluster_pip[snp] = eqtl[[\"sets\"]][[\"coverage\"]]\n eqtl_annot$total_snps[snp] = length(cs)\n }\n\n eqtl_annot1 = eqtl_annot %>% filter(cluster != -1)%>% \n mutate(annot = sprintf(\"%s:%d@=%e[%e:%d]\",gene,cluster,pip,cluster_pip,total_snps)) %>%\n select(c(chr,pos,eqtl,a1,a2,annot))\n \n \n # FIXME: repeats whole process (extracting+fine-mapping+cs creation) 3 times before this next step \n eqtl_annot_comb = rbind(eqtl_annot3, eqtl_annot1, eqtl_annot2)\n\n # FIXME: write to a zip file\n write.table(eqtl_annot_comb, file = \"eqtl.annot.txt\", col.names = T, row.names = F, quote = F)",
"_____no_output_____"
]
],
[
[
"#### Export GWAS PIP",
"_____no_output_____"
]
],
[
[
"[gwas_3 (format PIP into enloc GWAS input)]\noutput: f\"{_input[0]:nn}.enloc_gwas.gz\"\nR:\n gwas_annot1 = f_gwas.f %>% mutate(pip = susie_results$pip)\n \n # FIXME: repeat whole process (extracting common snps + fine-mapping) 3 times before the next steps\n \n gwas_annot_comb = rbind(gwas_annot3, gwas_annot1, gwas_annot2)\n gwas_loc_annot = gwas_annot_comb %>% select(id, chr, PosGRCh37,z)\n write.table(gwas_loc_annot, file = \"loc.gwas.txt\", col.names = F, row.names = F, quote = F)\n\nbash: \n perl format2torus.pl loc.gwas.txt > loc2.gwas.txt\n \nR:\n loc = data.table::fread(\"loc2.gwas.txt\")\n loc = loc[[\"V2\"]]\n \n gwas_annot_comb2 = gwas_annot_comb %>% select(id, chr, PosGRCh37,pip)\n gwas_annot_comb2 = cbind(gwas_annot_comb2, loc) %>% select(id, loc, pip)\n \n write.table(gwas_annot_comb2, file = \"gwas.pip.txt\", col.names = F, row.names = F, quote = F)\n\nbash:\n perl format2torus.pl gwas.pip.txt | gzip --best > gwas.pip.gz",
"_____no_output_____"
]
],
[
[
"### Step 3: Colocalization with FastEnloc",
"_____no_output_____"
]
],
[
[
"[enloc]\n# eQTL summary statistics as a list of RData\n# FIXME: to replace later\nparameter: eqtl_pip = path \n# GWAS summary stats in gz format\nparameter: gwas_pip = path \ninput: eqtl_pip, gwas_pip\noutput: f\"{cwd:a}/{eqtl_pip:bnn}.{gwas_pip:bnn}.xx.gz\"\nbash:\n fastenloc -eqtl eqtl.annot.txt.gz -gwas gwas.pip.txt.gz\n sort -grk6 prefix.enloc.sig.out | gzip --best > prefix.enloc.sig.sorted.gz\n rm -f prefix.enloc.sig.out",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00b6f5587b8e24824dac4da896c76594e664df5 | 10,714 | ipynb | Jupyter Notebook | notebooks/strain_transmission.ipynb | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | notebooks/strain_transmission.ipynb | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | notebooks/strain_transmission.ipynb | garudlab/mother_infant | 98a27c83bf5ece9497d5a030c6c9396a8c514781 | [
"BSD-2-Clause"
] | null | null | null | 35.713333 | 223 | 0.591936 | [
[
[
"from utils import config, parse_midas_data, sample_utils as su, temporal_changes_utils, stats_utils, midas_db_utils, parse_patric\nfrom collections import defaultdict\nimport math, random, numpy as np\nimport pickle, sys, bz2\nimport matplotlib.pyplot as plt\n\n# Cohort list\ncohorts = ['backhed', 'ferretti', 'yassour', 'shao', 'olm', 'hmp']\n\n# Plot directory\nplot_dir = \"%s/\" % (config.analysis_directory)\n\n# Species list\ngood_species_list = parse_midas_data.load_pickled_good_species_list()\n\n# Sample-subject-order maps\nsys.stderr.write(\"Loading sample metadata...\\n\")\nsubject_sample_map = su.parse_subject_sample_map()\nsample_order_map = su.parse_sample_order_map()\nsample_subject_map = su.parse_sample_subject_map()\nsame_mi_pair_dict = su.get_same_mi_pair_dict(subject_sample_map)\nsys.stderr.write(\"Done!\\n\")\n\n# Timepoint pair types\ntp_pair_names = ['MM', 'MI', 'II', 'AA']\n\n# Cohorts\ncohorts = ['backhed', 'ferretti', 'yassour', 'shao', 'hmp']\nmi_cohorts = ['backhed', 'ferretti', 'yassour', 'shao']\n\n# Samples for each cohort\nsamples = {cohort: su.get_sample_names(cohort) for cohort in cohorts}\nhmp_samples = su.get_sample_names('hmp')\nmother_samples = su.get_sample_names('mother')\ninfant_samples = su.get_sample_names('infant')\nolm_samples = su.get_sample_names('olm')\ninfant_samples_no_olm = [sample for sample in infant_samples if sample not in olm_samples]\nmi_samples_no_olm = [sample for sample in (mother_samples + infant_samples) if sample not in olm_samples]\n\n# Sample-cohort map\nsample_cohort_map = su.parse_sample_cohort_map()\n\n# Sample-timepoint map\nmi_sample_day_dict = su.get_mi_sample_day_dict(exclude_cohorts=['olm'])\nmi_tp_sample_dict = su.get_mi_tp_sample_dict(exclude_cohorts=['olm']) # no binning\nmi_tp_sample_dict_binned, mi_tp_binned_labels = su.get_mi_tp_sample_dict(exclude_cohorts=['olm'], binned=True)",
"Loading sample metadata...\nDone!\n"
],
[
"# ======================================================================\n# Load pickled data\n# ======================================================================\n\n# Parameters\nsweep_type = 'full' # assume full for now\npp_prev_cohort = 'all'\nmin_coverage = 0\n\nddir = config.data_directory\npdir = \"%s/pickles/cov%i_prev_%s/\" % (ddir, min_coverage, pp_prev_cohort)\n\nsnp_changes = pickle.load(open('%s/big_snp_changes_%s.pkl' % (pdir, sweep_type), 'rb'))\ngene_changes = pickle.load(open('%s/big_gene_changes_%s.pkl' % (pdir, sweep_type), 'rb'))\nsnp_change_freqs = pickle.load(open('%s/snp_change_freqs_%s.pkl' % (pdir, sweep_type), 'rb'))\nsnp_change_null_freqs = pickle.load(open('%s/snp_change_null_freqs_%s.pkl' % (pdir, sweep_type), 'rb'))\ngene_gain_freqs = pickle.load(open('%s/gene_gain_freqs_%s.pkl' % (pdir, sweep_type), 'rb'))\ngene_loss_freqs = pickle.load(open('%s/gene_loss_freqs_%s.pkl' % (pdir, sweep_type), 'rb'))\ngene_loss_null_freqs = pickle.load(open('%s/gene_loss_null_freqs_%s.pkl' % (pdir, sweep_type), 'rb'))\nbetween_snp_change_counts = pickle.load(open('%s/between_snp_change_counts_%s.pkl' % (pdir, sweep_type), 'rb'))\nbetween_gene_change_counts = pickle.load(open('%s/between_gene_change_counts_%s.pkl' % (pdir, sweep_type), 'rb'))",
"_____no_output_____"
],
[
"# Relative abundance file\nrelab_fpath = \"%s/species/relative_abundance.txt.bz2\" % (config.data_directory)\nrelab_file = open(relab_fpath, 'r')\ndecompressor = bz2.BZ2Decompressor()\nraw = decompressor.decompress(relab_file.read())\ndata = [row.split('\\t') for row in raw.split('\\n')]\ndata.pop() # Get rid of extra element due to terminal newline\nheader = su.parse_merged_sample_names(data[0]) # species_id, samples...\n\n# Load species presence/absence information\nsample_species_dict = defaultdict(set)\n\nfor row in data[1:]:\n species = row[0]\n for relab_str, sample in zip(row[1:], header[1:]):\n relab = float(relab_str)\n if relab > 0:\n sample_species_dict[sample].add(species)",
"_____no_output_____"
],
[
"# Custom sample pair cohorts [not just sample!]\n\nis_mi = lambda sample_i, sample_j: ((sample_i in mother_samples and sample_j in infant_samples_no_olm) and mi_sample_day_dict[sample_i] >= 0 and mi_sample_day_dict[sample_i] <= 7 and mi_sample_day_dict[sample_j] <= 7)",
"_____no_output_____"
],
[
"num_transmission = 0 # Number of MI QP pairs which are strain transmissions\nnum_transmission_shared_species = []\nnum_replacement = 0 # Number of MI QP pairs which are strain replacements\nnum_replacement_shared_species = []\nnum_total = 0 # Total number of MI QP pairs (sanity check)\nnum_shared_species_per_dyad = {}\nshared_highcov_species_per_dyad = defaultdict(set)\nexisting_hosts = set()\n\n# For every mother-infant QP pair, also count number of shared species\nfor species in snp_changes:\n for sample_i, sample_j in snp_changes[species]:\n \n # Only consider mother-infant QP pairs\n if not is_mi(sample_i, sample_j):\n continue\n \n # Make sure only one sample pair per host\n host = sample_order_map[sample_i][0][:-2]\n if host in existing_hosts:\n continue\n existing_hosts.add(host)\n \n num_total += 1\n \n # Get number of shared species\n shared_species = sample_species_dict[sample_i].intersection(sample_species_dict[sample_j])\n num_shared_species = len(shared_species)\n \n num_shared_species_per_dyad[(sample_i, sample_j)] = num_shared_species\n shared_highcov_species_per_dyad[(sample_i, sample_j)].add(species)\n \n # Get number of SNP differences\n val = snp_changes[species][(sample_i, sample_j)]\n \n if (type(val) == type(1)): # Replacement\n num_replacement += 1\n num_replacement_shared_species.append(num_shared_species)\n else: # Modification or no change\n num_transmission += 1\n num_transmission_shared_species.append(num_shared_species)",
"_____no_output_____"
],
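[
"# A hedged sketch (not in the original notebook): express the counts computed above\n# as simple rates over all mother-infant QP pairs.\nif num_total > 0:\n    print('transmission rate: %.2f' % (float(num_transmission) / num_total))\n    print('replacement rate: %.2f' % (float(num_replacement) / num_total))",
"_____no_output_____"
],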
[
"hosts = defaultdict(int)\nfor s1, s2 in num_shared_species_per_dyad:\n if sample_order_map[s1][0][:-2] != sample_order_map[s2][0][:-2]:\n print(\"Weird\")\n hosts[sample_order_map[s1][0][:-2]] += 1",
"_____no_output_____"
],
[
"print(\"%i transmissions\" % num_transmission)\nprint(\"%i shared species aggregated over transmissions\" % sum([nss for nss in num_transmission_shared_species]))\nprint(\"%i replacements\" % num_replacement)\nprint(\"%i shared species aggregated over replacements\" % sum([nss for nss in num_replacement_shared_species]))\nprint(\"%i total mother-infant QP pairs\" % num_total)\nprint(\"%i total shared species aggregated over dyads\" % sum(num_shared_species_per_dyad.values()))\nprint(\"%i dyads\" % len(shared_highcov_species_per_dyad))\nprint(\"%i total shared highcov species aggregated over dyads\" % sum([len(shared_highcov_species_per_dyad[dyad]) for dyad in shared_highcov_species_per_dyad]))",
"40 transmissions\n1397 shared species aggregated over transmissions\n6 replacements\n247 shared species aggregated over replacements\n46 total mother-infant QP pairs\n1644 total shared species aggregated over dyads\n46 dyads\n46 total shared highcov species aggregated over dyads\n"
],
[
"float(num_transmission)/(sum([len(nss) for nss in num_transmission_shared_species])*2)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00b7b9c7c0d9a0eef65345c6d1087c4a4501a3d | 18,094 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Caltech 256 - Dark Knowledge-checkpoint.ipynb | aliasvishnu/Keras-VGG16-TransferLearning | a49dfed3b1b5b1cd6cea201e95cbeed701ca8d46 | [
"MIT"
] | 6 | 2017-09-23T07:27:25.000Z | 2018-09-21T08:42:24.000Z | .ipynb_checkpoints/Caltech 256 - Dark Knowledge-checkpoint.ipynb | aliasvishnu/Keras-VGG16-TransferLearning | a49dfed3b1b5b1cd6cea201e95cbeed701ca8d46 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Caltech 256 - Dark Knowledge-checkpoint.ipynb | aliasvishnu/Keras-VGG16-TransferLearning | a49dfed3b1b5b1cd6cea201e95cbeed701ca8d46 | [
"MIT"
] | null | null | null | 47.615789 | 1,430 | 0.56411 | [
[
[
"import keras\nfrom keras.applications import VGG16\nfrom keras.models import Model\nfrom keras.layers import Dense, Dropout, Input\nfrom keras.regularizers import l2, activity_l2,l1\nfrom keras.utils import np_utils\nfrom keras.preprocessing.image import array_to_img, img_to_array, load_img\nfrom keras.applications.vgg16 import preprocess_input\nfrom PIL import Image\nfrom scipy import misc\nfrom keras.optimizers import SGD\n# from keras.utils.visualize_util import plot\nfrom os import listdir\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy",
"_____no_output_____"
],
[
"temperature=1\n\ndef softmaxTemp(x):\n return K.softmax(x/temperature)\n\ndef getModel( output_dim):\n # output_dim: the number of classes (int)\n # return: compiled model (keras.engine.training.Model)\n \n vgg_model = VGG16( weights='imagenet', include_top=True )\n vgg_out = vgg_model.layers[-1].output\n \n out = Dense( output_dim, activation='softmax')( vgg_out )\n \n tl_model = Model( input=vgg_model.input, output=out)\n \n tl_model.layers[-2].activation=softmaxTemp\n \n for layer in tl_model.layers[0:-1]:\n layer.trainable = False \n\n tl_model.compile(loss= \"categorical_crossentropy\", optimizer=\"adagrad\", metrics=[\"accuracy\"])\n tl_model.summary()\n return tl_model",
"_____no_output_____"
],
[
"# define functions to laod images\ndef loadBatchImages(path,s, nVal = 2):\n # return array of images\n catList = listdir(path)\n loadedImagesTrain = []\n loadedLabelsTrain = []\n loadedImagesVal = []\n loadedLabelsVal = []\n\n\n for cat in catList[0:256]:\n deepPath = path+cat+\"/\"\n # if cat == \".DS_Store\": continue\n imageList = listdir(deepPath)\n indx = 0\n for images in imageList[0:s + nVal]: \n img = load_img(deepPath + images)\n img = img_to_array(img)\n img = misc.imresize(img, (224,224))\n img = scipy.misc.imrotate(img,180)\n if indx < s:\n loadedLabelsTrain.append(int(images[0:3])-1)\n loadedImagesTrain.append(img)\n else:\n loadedLabelsVal.append(int(images[0:3])-1)\n loadedImagesVal.append(img)\n indx += 1\n \n# return np.asarray(loadedImages), np.asarray(loadedLabels)\n return loadedImagesTrain, np_utils.to_categorical(loadedLabelsTrain), loadedImagesVal, np_utils.to_categorical(loadedLabelsVal) \n \ndef shuffledSet(a, b):\n # shuffle the entire dataset\n assert np.shape(a)[0] == np.shape(b)[0]\n p = np.random.permutation(np.shape(a)[0])\n return (a[p], b[p])\n",
"_____no_output_____"
],
[
"path = \"/mnt/cube/VGG_/256_ObjectCategories/\" \nsamCat = 8 # number of samples per category\n\ndata, labels, dataVal, labelsVal = loadBatchImages(path,samCat, nVal = 2)\n\ndata = preprocess_input(np.float64(data))\ndata = data.swapaxes(1, 3).swapaxes(2, 3)\n\ndataVal = preprocess_input(np.float64(dataVal))\ndataVal = dataVal.swapaxes(1, 3).swapaxes(2, 3)\n\ntrain = shuffledSet(np.asarray(data),labels)\nval = shuffledSet(np.asarray(dataVal),labelsVal)",
"_____no_output_____"
],
[
"# plt.imshow(train[0][0][0])\n# plt.show()\nprint train[0].shape, val[0].shape",
"_____no_output_____"
],
[
"output_dim = 256\ntl_model = getModel(output_dim) ",
"____________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n====================================================================================================\ninput_8 (InputLayer) (None, 3, 224, 224) 0 \n____________________________________________________________________________________________________\nblock1_conv1 (Convolution2D) (None, 64, 224, 224) 1792 input_8[0][0] \n____________________________________________________________________________________________________\nblock1_conv2 (Convolution2D) (None, 64, 224, 224) 36928 block1_conv1[0][0] \n____________________________________________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 64, 112, 112) 0 block1_conv2[0][0] \n____________________________________________________________________________________________________\nblock2_conv1 (Convolution2D) (None, 128, 112, 112) 73856 block1_pool[0][0] \n____________________________________________________________________________________________________\nblock2_conv2 (Convolution2D) (None, 128, 112, 112) 147584 block2_conv1[0][0] \n____________________________________________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 128, 56, 56) 0 block2_conv2[0][0] \n____________________________________________________________________________________________________\nblock3_conv1 (Convolution2D) (None, 256, 56, 56) 295168 block2_pool[0][0] \n____________________________________________________________________________________________________\nblock3_conv2 (Convolution2D) (None, 256, 56, 56) 590080 block3_conv1[0][0] \n____________________________________________________________________________________________________\nblock3_conv3 (Convolution2D) (None, 256, 56, 56) 590080 block3_conv2[0][0] \n____________________________________________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 256, 28, 28) 0 block3_conv3[0][0] \n____________________________________________________________________________________________________\nblock4_conv1 (Convolution2D) (None, 512, 28, 28) 1180160 block3_pool[0][0] \n____________________________________________________________________________________________________\nblock4_conv2 (Convolution2D) (None, 512, 28, 28) 2359808 block4_conv1[0][0] \n____________________________________________________________________________________________________\nblock4_conv3 (Convolution2D) (None, 512, 28, 28) 2359808 block4_conv2[0][0] \n____________________________________________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 512, 14, 14) 0 block4_conv3[0][0] \n____________________________________________________________________________________________________\nblock5_conv1 (Convolution2D) (None, 512, 14, 14) 2359808 block4_pool[0][0] \n____________________________________________________________________________________________________\nblock5_conv2 (Convolution2D) (None, 512, 14, 14) 2359808 block5_conv1[0][0] \n____________________________________________________________________________________________________\nblock5_conv3 (Convolution2D) (None, 512, 14, 14) 2359808 block5_conv2[0][0] \n____________________________________________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 512, 7, 7) 0 block5_conv3[0][0] 
\n____________________________________________________________________________________________________\nflatten (Flatten) (None, 25088) 0 block5_pool[0][0] \n____________________________________________________________________________________________________\nfc1 (Dense) (None, 4096) 102764544 flatten[0][0] \n____________________________________________________________________________________________________\nfc2 (Dense) (None, 4096) 16781312 fc1[0][0] \n____________________________________________________________________________________________________\npredictions (Dense) (None, 1000) 4097000 fc2[0][0] \n____________________________________________________________________________________________________\ndense_8 (Dense) (None, 256) 256256 predictions[0][0] \n====================================================================================================\nTotal params: 138,613,800\nTrainable params: 256,256\nNon-trainable params: 138,357,544\n____________________________________________________________________________________________________\n"
],
[
"nb_epoch = 20\n\nhistory = tl_model.fit(train[0], train[1], batch_size = 16, nb_epoch = nb_epoch, validation_data = val, \n shuffle = True)\n\nkeras.callbacks.EarlyStopping(monitor='val_loss', min_delta = 0, patience = 2, verbose = 0, mode='auto')",
"Train on 2048 samples, validate on 512 samples\nEpoch 1/20\n2048/2048 [==============================] - 60s - loss: 5.5295 - acc: 0.0264 - val_loss: 5.5011 - val_acc: 0.0586\nEpoch 2/20\n2048/2048 [==============================] - 60s - loss: 5.4816 - acc: 0.0859 - val_loss: 5.4778 - val_acc: 0.0840\nEpoch 3/20\n2048/2048 [==============================] - 60s - loss: 5.4542 - acc: 0.1299 - val_loss: 5.4598 - val_acc: 0.1113\nEpoch 4/20\n2048/2048 [==============================] - 60s - loss: 5.4322 - acc: 0.1890 - val_loss: 5.4446 - val_acc: 0.1367\nEpoch 5/20\n2048/2048 [==============================] - 60s - loss: 5.4134 - acc: 0.2334 - val_loss: 5.4313 - val_acc: 0.1523\nEpoch 6/20\n2048/2048 [==============================] - 60s - loss: 5.3966 - acc: 0.2671 - val_loss: 5.4192 - val_acc: 0.1660\nEpoch 7/20\n2048/2048 [==============================] - 60s - loss: 5.3813 - acc: 0.2979 - val_loss: 5.4081 - val_acc: 0.1719\nEpoch 8/20\n2048/2048 [==============================] - 60s - loss: 5.3672 - acc: 0.3198 - val_loss: 5.3978 - val_acc: 0.1836\nEpoch 9/20\n2048/2048 [==============================] - 59s - loss: 5.3541 - acc: 0.3501 - val_loss: 5.3882 - val_acc: 0.1953\nEpoch 10/20\n2048/2048 [==============================] - 60s - loss: 5.3417 - acc: 0.3691 - val_loss: 5.3791 - val_acc: 0.2207\nEpoch 11/20\n2048/2048 [==============================] - 59s - loss: 5.3300 - acc: 0.3862 - val_loss: 5.3705 - val_acc: 0.2266\nEpoch 12/20\n2048/2048 [==============================] - 60s - loss: 5.3189 - acc: 0.4038 - val_loss: 5.3622 - val_acc: 0.2266\nEpoch 13/20\n2048/2048 [==============================] - 59s - loss: 5.3083 - acc: 0.4209 - val_loss: 5.3543 - val_acc: 0.2324\nEpoch 14/20\n2048/2048 [==============================] - 60s - loss: 5.2980 - acc: 0.4385 - val_loss: 5.3467 - val_acc: 0.2441\nEpoch 15/20\n2048/2048 [==============================] - 60s - loss: 5.2882 - acc: 0.4541 - val_loss: 5.3394 - val_acc: 0.2500\nEpoch 16/20\n2048/2048 [==============================] - 60s - loss: 5.2787 - acc: 0.4702 - val_loss: 5.3324 - val_acc: 0.2578\nEpoch 17/20\n2048/2048 [==============================] - 59s - loss: 5.2696 - acc: 0.4873 - val_loss: 5.3255 - val_acc: 0.2637\nEpoch 18/20\n2048/2048 [==============================] - 59s - loss: 5.2607 - acc: 0.4985 - val_loss: 5.3189 - val_acc: 0.2617\nEpoch 19/20\n2048/2048 [==============================] - 59s - loss: 5.2521 - acc: 0.5059 - val_loss: 5.3125 - val_acc: 0.2695\nEpoch 20/20\n2048/2048 [==============================] - 60s - loss: 5.2437 - acc: 0.5181 - val_loss: 5.3062 - val_acc: 0.2734\n"
],
[
"plt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss for %d samples per category' % samCat)\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='right left')\nplt.show()\n\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy for %d samples per category' % samCat)\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.show()",
"_____no_output_____"
],
[
"1 22.07\n2 19.82\n4 25.20\n8 18.36\n16 18.75\n",
"_____no_output_____"
],
[
"X=[2, 4, 8, 16, 64]\nY=[19.82, 25.20, 18.36, 18.75]\nplt.plot(X,Y)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00b9ff8279a7ba8da420ca1cb8f73ac1b9b02f7 | 7,306 | ipynb | Jupyter Notebook | Notebooks/Guided Investigation - Anomaly Lookup.ipynb | CrisRomeo/Azure-Sentinel | a8c7f8cf74bade06d92f5cc89132e25ef60583f6 | [
"MIT"
] | 4 | 2020-02-14T10:29:46.000Z | 2021-03-12T02:34:27.000Z | Notebooks/Guided Investigation - Anomaly Lookup.ipynb | CrisRomeo/Azure-Sentinel | a8c7f8cf74bade06d92f5cc89132e25ef60583f6 | [
"MIT"
] | 1 | 2022-01-22T10:38:31.000Z | 2022-01-22T10:38:31.000Z | Notebooks/Guided Investigation - Anomaly Lookup.ipynb | CrisRomeo/Azure-Sentinel | a8c7f8cf74bade06d92f5cc89132e25ef60583f6 | [
"MIT"
] | 3 | 2020-01-21T11:58:47.000Z | 2022-02-24T06:46:55.000Z | 43.748503 | 1,066 | 0.641254 | [
[
[
"# Guided Investigation - Anomaly Lookup\n\n__Notebook Version:__ 1.0<br>\n__Python Version:__ Python 3.6 (including Python 3.6 - AzureML)<br>\n__Required Packages:__ azure 4.0.0, azure-cli-profile 2.1.4<br>\n__Platforms Supported:__<br>\n - Azure Notebooks Free Compute\n - Azure Notebook on DSVM\n \n__Data Source Required:__<br>\n - Log Analytics tables \n \n### Description\nGain insights into the possible root cause of an alert by searching for related anomalies on the corresponding entities around the alert’s time. This notebook will provide valuable leads for an alert’s investigation, listing all suspicious increase in event counts or their properties around the time of the alert, and linking to the corresponding raw records in Log Analytics for the investigator to focus on and interpret.\n\n<font>When you switch between Azure Notebooks Free Compute and Data Science Virtual Machine (DSVM), you may need to select Python version: please select Python 3.6 for Free Compute, and Python 3.6 - AzureML for DSVM.</font>",
"_____no_output_____"
],
[
"## Table of Contents\n\n1. Initialize Azure Resource Management Clients\n2. Looking up for anomaly entities",
"_____no_output_____"
],
[
"## 1. Initialize Azure Resource Management Clients",
"_____no_output_____"
]
],
[
[
"# only run once\n!pip install --upgrade Azure-Sentinel-Utilities\n!pip install azure-cli-core",
"_____no_output_____"
],
[
"# User Input and Save to Environmental store\nimport os\nfrom SentinelWidgets import WidgetViewHelper\nenv_dir = %env\nhelper = WidgetViewHelper()",
"_____no_output_____"
],
[
"# Enter Tenant Domain\nhelper.set_env(env_dir, 'tenant_domain')",
"_____no_output_____"
],
[
"# Enter Azure Subscription Id\nhelper.set_env(env_dir, 'subscription_id')",
"_____no_output_____"
],
[
"# Enter Azure Resource Group\nhelper.set_env(env_dir, 'resource_group')",
"_____no_output_____"
],
[
"env_dir = %env\nif 'tenant_domain' in env_dir:\n tenant_domain = env_dir['tenant_domain']\nif 'subscription_id' in env_dir:\n subscription_id = env_dir['subscription_id']\nif 'resource_group' in env_dir:\n resource_group = env_dir['resource_group']",
"_____no_output_____"
],
[
"from azure.loganalytics import LogAnalyticsDataClient\nfrom azure.loganalytics.models import QueryBody\nfrom azure.mgmt.loganalytics import LogAnalyticsManagementClient\nimport SentinelAzure\nfrom SentinelAnomalyLookup import AnomalyFinder, AnomalyLookupViewHelper\n\nfrom pandas.io.json import json_normalize\nimport sys\nimport timeit\nimport datetime as dt\nimport pandas as pd\nimport copy\nfrom IPython.display import HTML",
"_____no_output_____"
],
[
"# Authentication to Log Analytics\nfrom azure.common.client_factory import get_client_from_cli_profile\nfrom azure.common.credentials import get_azure_cli_credentials\n# please enter your tenant domain below, for Microsoft, using: microsoft.onmicrosoft.com\n!az login --tenant $tenant_domain\nla_client = get_client_from_cli_profile(LogAnalyticsManagementClient, subscription_id = subscription_id)\nla = SentinelAzure.azure_loganalytics_helper.LogAnalyticsHelper(la_client)\ncreds, _ = get_azure_cli_credentials(resource=\"https://api.loganalytics.io\")\nla_data_client = LogAnalyticsDataClient(creds)",
"_____no_output_____"
]
],
[
[
"## 2. Looking up for anomaly entities",
"_____no_output_____"
]
],
[
[
"# Select a workspace\nselected_workspace = WidgetViewHelper.select_log_analytics_workspace(la)\ndisplay(selected_workspace)",
"_____no_output_____"
],
[
"import ipywidgets as widgets\nworkspace_id = la.get_workspace_id(selected_workspace.value)\n#DateTime format: 2019-07-15T07:05:20.000\nq_timestamp = widgets.Text(value='2019-09-15',description='DateTime: ')\ndisplay(q_timestamp)\n#Entity format: computer\nq_entity = widgets.Text(value='computer',description='Entity for search: ')\ndisplay(q_entity)",
"_____no_output_____"
],
[
"anomaly_lookup = AnomalyFinder(workspace_id, la_data_client)\nselected_tables = WidgetViewHelper.select_multiple_tables(anomaly_lookup)\ndisplay(selected_tables)",
"_____no_output_____"
],
[
"# This action may take a few minutes or more, please be patient.\nstart = timeit.default_timer()\nanomalies, queries = anomaly_lookup.run(q_timestamp.value, q_entity.value, list(selected_tables.value))\ndisplay(anomalies)\n\nif queries is not None:\n url = WidgetViewHelper.construct_url_for_log_analytics_logs(tenant_domain, subscription_id, resource_group, selected_workspace.value)\n WidgetViewHelper.display_html(WidgetViewHelper.copy_to_clipboard(url, queries, 'Add queries to clipboard and go to Log Analytics'))\n\nprint('==================')\nprint('Elapsed time: ', timeit.default_timer() - start, ' seconds')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d00ba1d0548013e44a060550b7f7ef520a726005 | 4,392 | ipynb | Jupyter Notebook | Exam_1_answers.ipynb | JOAQUINGR/Accelerated_Intro_WilkinsonExams | ed7747e154fddbf166f71722b55ff8bfb5f3dfc5 | [
"MIT"
] | null | null | null | Exam_1_answers.ipynb | JOAQUINGR/Accelerated_Intro_WilkinsonExams | ed7747e154fddbf166f71722b55ff8bfb5f3dfc5 | [
"MIT"
] | null | null | null | Exam_1_answers.ipynb | JOAQUINGR/Accelerated_Intro_WilkinsonExams | ed7747e154fddbf166f71722b55ff8bfb5f3dfc5 | [
"MIT"
] | null | null | null | 24 | 321 | 0.574909 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d00ba2518db6e41f7bacedd0036caa6b503e6742 | 29,982 | ipynb | Jupyter Notebook | 4 - Train models and make predictions.ipynb | oyiakoumis/tensorflow2-course | 935a3e5b1d14eb6009b06cea59da04e7061e235e | [
"Apache-2.0"
] | null | null | null | 4 - Train models and make predictions.ipynb | oyiakoumis/tensorflow2-course | 935a3e5b1d14eb6009b06cea59da04e7061e235e | [
"Apache-2.0"
] | null | null | null | 4 - Train models and make predictions.ipynb | oyiakoumis/tensorflow2-course | 935a3e5b1d14eb6009b06cea59da04e7061e235e | [
"Apache-2.0"
] | null | null | null | 59.488095 | 17,340 | 0.779668 | [
[
[
"# 4 - Train models and make predictions\n\n## Motivation\n- **`tf.keras`** API offers built-in functions for training, validation and prediction.\n- Those functions are easy to use and enable you to train any ML model.\n- They also give you a high level of customizability.\n\n## Objectives\n- Understand the common training workflow in TensorFlow.\n- Set an optimizer, a loss functions, and metrics with `Model.compile()`\n- Create custom losses and metrics from scratch\n- Train your model with `Model.fit()`\n- Evaluate your model with `Model.evaluate()`\n- Make predictions with `Model.predict()`\n- Discover useful callbacks during training like checkpointing and learning rate scheduling\n- Create custom callbacks to get 100% control on your training\n- Practise what you learned on a concrete example",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"tf.__version__",
"_____no_output_____"
]
],
[
[
"# Table of contents:\n* ### [Overview](#overview)\n* ### [Part 1: Setting an optimizer, a loss function, and metrics](#part-1)\n* ### [Part 2: Training models and make predictions](#part-2)\n* ### [Part 3: Using callbacks](#part-3)\n* ### [Part 4: Exercise](#part-4)\n* ### [Summary](#summary)\n* ### [Where to go next](#next)\n\n# Overview <a class=\"anchor\" id=\"overview\"></a>\n- Model training and evaluation works exactly the same whether your model is a Sequential model, a model built with the Functional API, or a model written from scratch via model subclassing.\n- Here's what the typical end-to-end workflow looks like:\n - Define optimizer, training loss, and evaluation metrics (via `Model.compile()`)\n - Train your model on your training data (via `Model.fit()`)\n - Validate on a holdout set generated from the original training data\n - Evaluate the model on the test data (via `Model.evaluate()`)\n \nIn the next sections we will use the **MNIST dataset** to explained in details how to train a model with the `keras.Model`'s methods listed above. As a reminder from chapter 2, the **MNIST dataset** is a large dataset of handwritten digits. Each image is a 28x28 matrix having values between 0 and 255.\n\n![mnist.png](./ressources/mnist.png)\n\nThe following code cells build a `tf.data` pipeline for the MNIST dataset, splitting the dataset into a training set, a validation set and a test set (resp. 60%, 20%, and 20%), and build a simple artificial neural network model (ANN) for classification.",
"_____no_output_____"
]
],
[
[
"# Load the MNIST dataset\ntrain, test = tf.keras.datasets.mnist.load_data()",
"_____no_output_____"
],
[
"# Overview of the dataset:\nimages, labels = train\nprint(type(images), type(labels))\nprint(images.shape, labels.shape)",
"<class 'numpy.ndarray'> <class 'numpy.ndarray'>\n(60000, 28, 28) (60000,)\n"
],
[
"# First 9 images of the training set:\nplt.figure(figsize=(3,3))\nfor i in range(9):\n plt.subplot(3,3,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(images[i], cmap=plt.cm.binary)\nplt.show()",
"_____no_output_____"
],
[
"# creates tf.data.Dataset\ntrain_ds = tf.data.Dataset.from_tensor_slices(train) \ntest_ds = tf.data.Dataset.from_tensor_slices(test)",
"_____no_output_____"
],
[
"# split train into train and validation\nnum_val = train_ds.cardinality().numpy() * 0.2\ntrain_ds = train_ds.skip(num_val)\nval_ds = train_ds.take(num_val)",
"_____no_output_____"
],
[
"def configure_dataset(ds, is_training=True):\n if is_training:\n ds = ds.shuffle(48000).repeat()\n ds = ds.batch(64) \n ds = ds.map(lambda image, label: (image/255, label), num_parallel_calls=tf.data.AUTOTUNE)\n ds = ds.prefetch(tf.data.AUTOTUNE)\n return ds",
"_____no_output_____"
],
[
"train_ds = configure_dataset(train_ds, is_training=True)\nval_ds = configure_dataset(val_ds, is_training=False)",
"_____no_output_____"
],
[
"# Build the model:\nmodel = keras.Sequential([\n keras.Input(shape=(28, 28), name=\"digits\"),\n layers.Flatten(),\n layers.Dense(64, activation=\"relu\", name=\"dense_1\"),\n layers.Dense(64, activation=\"relu\", name=\"dense_2\"),\n layers.Dense(10, activation=\"softmax\", name=\"predictions\"),\n])",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nflatten (Flatten) (None, 784) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 64) 50240 \n_________________________________________________________________\ndense_2 (Dense) (None, 64) 4160 \n_________________________________________________________________\npredictions (Dense) (None, 10) 650 \n=================================================================\nTotal params: 55,050\nTrainable params: 55,050\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"# Part 1: Setting an optimizer, a loss function, and metrics <a class=\"anchor\" id=\"part-1\"></a>\n## 1.1 The `compile()` method <a class=\"anchor\" id=\"1.1\"></a>",
"_____no_output_____"
],
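[
"# A minimal sketch (not part of the original notebook): compiling the MNIST model\n# defined above. The optimizer, loss and metric are illustrative choices; sparse\n# categorical cross-entropy matches the integer labels in the tf.data pipeline.\nmodel.compile(\n    optimizer=keras.optimizers.Adam(learning_rate=1e-3),\n    loss=keras.losses.SparseCategoricalCrossentropy(),\n    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n)",
"_____no_output_____"
],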
[
"## 1.2 The `compile()` method <a class=\"anchor\" id=\"1.2\"></a>",
"_____no_output_____"
],
[
"## 1.3 The `compile()` method <a class=\"anchor\" id=\"1.3\"></a>",
"_____no_output_____"
],
[
"## 1.4 The `compile()` method <a class=\"anchor\" id=\"1.4\"></a>",
"_____no_output_____"
],
[
"## 1.5 The `compile()` method <a class=\"anchor\" id=\"1.5\"></a>",
"_____no_output_____"
],
[
"# Part 2: Training models and make predictions <a class=\"anchor\" id=\"part-2\"></a>",
"_____no_output_____"
],
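[
"# A minimal sketch (assumptions: the compile() call above has been run, and\n# 48000 training samples with batch size 64 give 750 steps per epoch, which is\n# needed because train_ds repeats indefinitely).\nhistory = model.fit(train_ds, epochs=5, steps_per_epoch=750, validation_data=val_ds)\ntest_loss, test_acc = model.evaluate(configure_dataset(test_ds, is_training=False))\npredictions = model.predict(val_ds)  # array of shape (num_validation_samples, 10)",
"_____no_output_____"
],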
[
"# Part 3: Using callbacks <a class=\"anchor\" id=\"part-3\"></a>",
"_____no_output_____"
],
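[
"# A minimal sketch (assumption, not from the original notebook): two common\n# built-in callbacks passed to Model.fit() via the `callbacks` argument:\n# checkpointing the best model and stopping early when val_loss stalls.\ncallbacks = [\n    keras.callbacks.ModelCheckpoint(filepath=\"mnist_best.h5\", save_best_only=True),\n    keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=2),\n]\nmodel.fit(train_ds, epochs=20, steps_per_epoch=750, validation_data=val_ds,\n          callbacks=callbacks)",
"_____no_output_____"
],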
[
"# Part 4: Exercise <a class=\"anchor\" id=\"part-4\"></a>",
"_____no_output_____"
],
[
"# Summary <a class=\"anchor\" id=\"summary\"></a>\n\n# Where to go next <a class=\"anchor\" id=\"next\"></a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d00bb362b8f7e7e8f859c680a22db2ec22ac3d4e | 660 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/runningopt-checkpoint.ipynb | lwcook/horsetail-matching | f3d5f8d01249debbca978f412ce4eae017458119 | [
"MIT"
] | 2 | 2017-05-17T17:07:08.000Z | 2018-03-29T12:42:36.000Z | notebooks/.ipynb_checkpoints/runningopt-checkpoint.ipynb | lwcook/horsetail-matching | f3d5f8d01249debbca978f412ce4eae017458119 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/runningopt-checkpoint.ipynb | lwcook/horsetail-matching | f3d5f8d01249debbca978f412ce4eae017458119 | [
"MIT"
] | null | null | null | 17.837838 | 74 | 0.55 | [
[
[
"This tutorial shows you how to run a horsetail matching optimization",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d00bbca9bf44230265eccce9acc8973705a0067b | 188,338 | ipynb | Jupyter Notebook | demos/CLIP_GradCAM_Visualization.ipynb | AdMoR/clipit | 47bf2bfcceb8af8d3f4bd7e52496a100eae4d7cc | [
"MIT"
] | 1 | 2022-01-22T10:07:10.000Z | 2022-01-22T10:07:10.000Z | demos/CLIP_GradCAM_Visualization.ipynb | AdMoR/clipit | 47bf2bfcceb8af8d3f4bd7e52496a100eae4d7cc | [
"MIT"
] | null | null | null | demos/CLIP_GradCAM_Visualization.ipynb | AdMoR/clipit | 47bf2bfcceb8af8d3f4bd7e52496a100eae4d7cc | [
"MIT"
] | null | null | null | 621.577558 | 176,926 | 0.939975 | [
[
[
"<a href=\"https://colab.research.google.com/github/dribnet/clipit/blob/future/demos/CLIP_GradCAM_Visualization.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# CLIP GradCAM Colab\n\nThis Colab notebook uses [GradCAM](https://arxiv.org/abs/1610.02391) on OpenAI's [CLIP](https://openai.com/blog/clip/) model to produce a heatmap highlighting which regions in an image activate the most to a given caption.\n\n**Note:** Currently only works with the ResNet variants of CLIP. ViT support coming soon.",
"_____no_output_____"
]
],
[
[
"#@title Install dependencies\n\n#@markdown Please execute this cell by pressing the _Play_ button \n#@markdown on the left.\n\n#@markdown **Note**: This installs the software on the Colab \n#@markdown notebook in the cloud and not on your computer.\n\n%%capture\n!pip install ftfy regex tqdm matplotlib opencv-python scipy scikit-image\n!pip install git+https://github.com/openai/CLIP.git\n\nimport numpy as np\nimport torch\nimport os\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport cv2\nimport urllib.request\nimport matplotlib.pyplot as plt\nimport clip\nfrom PIL import Image\nfrom skimage import transform as skimage_transform\nfrom scipy.ndimage import filters",
"_____no_output_____"
],
[
"#@title Helper functions\n\n#@markdown Some helper functions for overlaying heatmaps on top\n#@markdown of images and visualizing with matplotlib.\n\ndef normalize(x: np.ndarray) -> np.ndarray:\n # Normalize to [0, 1].\n x = x - x.min()\n if x.max() > 0:\n x = x / x.max()\n return x\n\n# Modified from: https://github.com/salesforce/ALBEF/blob/main/visualization.ipynb\ndef getAttMap(img, attn_map, blur=True):\n if blur:\n attn_map = filters.gaussian_filter(attn_map, 0.02*max(img.shape[:2]))\n attn_map = normalize(attn_map)\n cmap = plt.get_cmap('jet')\n attn_map_c = np.delete(cmap(attn_map), 3, 2)\n attn_map = 1*(1-attn_map**0.7).reshape(attn_map.shape + (1,))*img + \\\n (attn_map**0.7).reshape(attn_map.shape+(1,)) * attn_map_c\n return attn_map\n\ndef viz_attn(img, attn_map, blur=True):\n fig, axes = plt.subplots(1, 2, figsize=(10, 5))\n axes[0].imshow(img)\n axes[1].imshow(getAttMap(img, attn_map, blur))\n for ax in axes:\n ax.axis(\"off\")\n plt.show()\n \ndef load_image(img_path, resize=None):\n image = Image.open(image_path).convert(\"RGB\")\n if resize is not None:\n image = image.resize((resize, resize))\n return np.asarray(image).astype(np.float32) / 255.",
"_____no_output_____"
],
[
"#@title GradCAM: Gradient-weighted Class Activation Mapping\n\n#@markdown Our gradCAM implementation registers a forward hook\n#@markdown on the model at the specified layer. This allows us\n#@markdown to save the intermediate activations and gradients\n#@markdown at that layer.\n\n#@markdown To visualize which parts of the image activate for\n#@markdown a given caption, we use the caption as the target\n#@markdown label and backprop through the network using the\n#@markdown image as the input.\n#@markdown In the case of CLIP models with resnet encoders,\n#@markdown we save the activation and gradients at the\n#@markdown layer before the attention pool, i.e., layer4.\n\nclass Hook:\n \"\"\"Attaches to a module and records its activations and gradients.\"\"\"\n\n def __init__(self, module: nn.Module):\n self.data = None\n self.hook = module.register_forward_hook(self.save_grad)\n \n def save_grad(self, module, input, output):\n self.data = output\n output.requires_grad_(True)\n output.retain_grad()\n \n def __enter__(self):\n return self\n \n def __exit__(self, exc_type, exc_value, exc_traceback):\n self.hook.remove()\n \n @property\n def activation(self) -> torch.Tensor:\n return self.data\n \n @property\n def gradient(self) -> torch.Tensor:\n return self.data.grad\n\n\n# Reference: https://arxiv.org/abs/1610.02391\ndef gradCAM(\n model: nn.Module,\n input: torch.Tensor,\n target: torch.Tensor,\n layer: nn.Module\n) -> torch.Tensor:\n # Zero out any gradients at the input.\n if input.grad is not None:\n input.grad.data.zero_()\n \n # Disable gradient settings.\n requires_grad = {}\n for name, param in model.named_parameters():\n requires_grad[name] = param.requires_grad\n param.requires_grad_(False)\n \n # Attach a hook to the model at the desired layer.\n assert isinstance(layer, nn.Module)\n with Hook(layer) as hook: \n # Do a forward and backward pass.\n output = model(input)\n output.backward(target)\n\n grad = hook.gradient.float()\n act = hook.activation.float()\n \n # Global average pool gradient across spatial dimension\n # to obtain importance weights.\n alpha = grad.mean(dim=(2, 3), keepdim=True)\n # Weighted combination of activation maps over channel\n # dimension.\n gradcam = torch.sum(act * alpha, dim=1, keepdim=True)\n # We only want neurons with positive influence so we\n # clamp any negative ones.\n gradcam = torch.clamp(gradcam, min=0)\n\n # Resize gradcam to input resolution.\n gradcam = F.interpolate(\n gradcam,\n input.shape[2:],\n mode='bicubic',\n align_corners=False)\n \n # Restore gradient settings.\n for name, param in model.named_parameters():\n param.requires_grad_(requires_grad[name])\n \n return gradcam",
"_____no_output_____"
],
[
"#@title Run\n\n#@markdown #### Image & Caption settings\nimage_url = 'https://images2.minutemediacdn.com/image/upload/c_crop,h_706,w_1256,x_0,y_64/f_auto,q_auto,w_1100/v1554995050/shape/mentalfloss/516438-istock-637689912.jpg' #@param {type:\"string\"}\n\nimage_caption = 'the cat' #@param {type:\"string\"}\n#@markdown ---\n#@markdown #### CLIP model settings\nclip_model = \"RN50\" #@param [\"RN50\", \"RN101\", \"RN50x4\", \"RN50x16\"]\nsaliency_layer = \"layer4\" #@param [\"layer4\", \"layer3\", \"layer2\", \"layer1\"]\n#@markdown ---\n#@markdown #### Visualization settings\nblur = True #@param {type:\"boolean\"}\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nmodel, preprocess = clip.load(clip_model, device=device, jit=False)\n\n# Download the image from the web.\nimage_path = 'image.png'\nurllib.request.urlretrieve(image_url, image_path)\n\nimage_input = preprocess(Image.open(image_path)).unsqueeze(0).to(device)\nimage_np = load_image(image_path, model.visual.input_resolution)\ntext_input = clip.tokenize([image_caption]).to(device)\n\nattn_map = gradCAM(\n model.visual,\n image_input,\n model.encode_text(text_input).float(),\n getattr(model.visual, saliency_layer)\n)\nattn_map = attn_map.squeeze().detach().cpu().numpy()\n\nviz_attn(image_np, attn_map, blur)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d00bca002721d27e28de17d98d6737c1944ff73f | 7,560 | ipynb | Jupyter Notebook | LECTURE 1.ipynb | ayushkr007/Python_Training | c87db7ebb83812f6840f5040161b0dbd8f5de041 | [
"MIT"
] | 1 | 2021-12-11T05:23:14.000Z | 2021-12-11T05:23:14.000Z | LECTURE 1.ipynb | Kushagra-2006/Python_Training | c87db7ebb83812f6840f5040161b0dbd8f5de041 | [
"MIT"
] | null | null | null | LECTURE 1.ipynb | Kushagra-2006/Python_Training | c87db7ebb83812f6840f5040161b0dbd8f5de041 | [
"MIT"
] | 1 | 2021-08-07T13:25:19.000Z | 2021-08-07T13:25:19.000Z | 15.241935 | 44 | 0.420238 | [
[
[
"first_num=5\nsecond_num=10\nthird_variable=first_num+second_num\nprint(third_variable)\n",
"15\n"
],
[
"first_num=5\nsecond_num=10\n\nprint(first_num+second_num)",
"15\n"
],
[
"first_num=5\nsecond_num=10\n\nprint(first_num*second_num)",
"50\n"
],
[
"first_num=5\nsecond_num=10\n\nprint(first_num-second_num)",
"-5\n"
],
[
"first_num=6\nsecond_num=10\n\nprint(second_num/first_num)",
"1.6666666666666667\n"
],
[
"first_num=5.0\nsecond_num=10\n\nprint(second_num/first_num)",
"2.0\n"
],
[
"first_num=6\nsecond_num=10\n\nprint(int(second_num/first_num))",
"1\n"
],
[
"first_num=6\nsecond_num=10\n\nprint(round(second_num/first_num))",
"2\n"
],
[
"#Printing odd and Even Number\nx=10\ny=2\nif 10%y==0:\n print(\"Even\")\nelse:\n print(\"Odd\")\n ",
"Even\n"
],
[
"exp=2**2\nprint(exp)",
"4\n"
],
[
"5//2",
"_____no_output_____"
],
[
"5/2",
"_____no_output_____"
],
[
"5%2",
"_____no_output_____"
],
[
"2%2",
"_____no_output_____"
],
[
"2|5|2\n -4\n-----\n 1/0 ",
"_____no_output_____"
],
[
"a=1\nprint(a)\na=1.0\nprint(a)\n",
"1\n1.0\n"
],
[
"a=1\nprint(a)\nb=a\nprint(a)\nprint(b)\na=2\nprint(a)\n\nprint(b)",
"1\n1\n1\n2\n1\n"
],
[
" bool has 2 values-- True False",
"_____no_output_____"
],
[
"print(10)",
"10\n"
],
[
"rohit=\"Rohit\"\nprint(rohit)\nprint(\"Rohit\")",
"Rohit\nRohit\n"
],
[
"10>5",
"_____no_output_____"
],
[
"bool_val=10>5\nprint(bool_val)",
"True\n"
],
[
"print(10>5)",
"True\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00bd6807501cae588414cef94cd7368cbc640f8 | 51,572 | ipynb | Jupyter Notebook | notebooks/capstone-flightDelay.ipynb | davicsilva/dsintensive | 73ff2015d14798f7a00bb316e9b00b897ac30cf0 | [
"Apache-2.0"
] | null | null | null | notebooks/capstone-flightDelay.ipynb | davicsilva/dsintensive | 73ff2015d14798f7a00bb316e9b00b897ac30cf0 | [
"Apache-2.0"
] | null | null | null | notebooks/capstone-flightDelay.ipynb | davicsilva/dsintensive | 73ff2015d14798f7a00bb316e9b00b897ac30cf0 | [
"Apache-2.0"
] | null | null | null | 32.932312 | 253 | 0.353642 | [
[
[
"# Capstone Project - Flight Delays\n# Does weather events have impact the delay of flights (Brazil)?",
"_____no_output_____"
],
[
"### It is important to see this notebook with the step-by-step of the dataset cleaning process:\n[https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb](https://github.com/davicsilva/dsintensive/blob/master/notebooks/flightDelayPrepData_v2.ipynb)",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\n\n# Pandas and NumPy\nimport pandas as pd\nimport numpy as np\n\n# Matplotlib for additional customization\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\n# Seaborn for plotting and styling\nimport seaborn as sns",
"_____no_output_____"
],
[
"# 1. Flight delay: any flight with (real_departure - planned_departure >= 15 minutes)\n# 2. The Brazilian Federal Agency for Civil Aviation (ANAC) does not define exactly what is a \"flight delay\" (in minutes)\n# 3. Anyway, the ANAC has a resolution for this subject: https://goo.gl/YBwbMy (last access: nov, 15th, 2017)\n# ---\n# DELAY, for this analysis, is defined as greater than 15 minutes (local flights only)\nDELAY = 15",
"_____no_output_____"
]
],
[
[
"### 1 - Local flights dataset. For now, only flights from January to September, 2017\n\n**A note about date columns on this dataset**\n* In the original dataset (CSV file from ANAC), the date was not in ISO8601 format (e.g. '2017-10-31 09:03:00')\n* To fix this I used regex (regular expression) to transform this column directly on CSV file\n* The original date was \"31/10/2017 09:03\" (october, 31, 2017 09:03)",
"_____no_output_____"
]
],
[
[
"#[flights] dataset_01 => all \"Active Regular Flights\" from 2017, from january to september\n#source: http://www.anac.gov.br/assuntos/dados-e-estatisticas/historico-de-voos\n#Last access this website: nov, 14th, 2017\nflights = pd.read_csv('data/arf2017ISO.csv', sep = ';', dtype = str)",
"_____no_output_____"
],
[
"flights['departure-est'] = flights[['departure-est']].apply(lambda row: row.str.replace(\"(?P<day>\\d{2})/(?P<month>\\d{2})/(?P<year>\\d{4}) (?P<HOUR>\\d{2}):(?P<MIN>\\d{2})\", \"\\g<year>/\\g<month>/\\g<day> \\g<HOUR>:\\g<MIN>:00\"), axis=1)",
"_____no_output_____"
],
[
"flights['departure-real'] = flights[['departure-real']].apply(lambda row: row.str.replace(\"(?P<day>\\d{2})/(?P<month>\\d{2})/(?P<year>\\d{4}) (?P<HOUR>\\d{2}):(?P<MIN>\\d{2})\", \"\\g<year>/\\g<month>/\\g<day> \\g<HOUR>:\\g<MIN>:00\"), axis=1)\nflights['arrival-est'] = flights[['arrival-est']].apply(lambda row: row.str.replace(\"(?P<day>\\d{2})/(?P<month>\\d{2})/(?P<year>\\d{4}) (?P<HOUR>\\d{2}):(?P<MIN>\\d{2})\", \"\\g<year>/\\g<month>/\\g<day> \\g<HOUR>:\\g<MIN>:00\"), axis=1)\nflights['arrival-real'] = flights[['arrival-real']].apply(lambda row: row.str.replace(\"(?P<day>\\d{2})/(?P<month>\\d{2})/(?P<year>\\d{4}) (?P<HOUR>\\d{2}):(?P<MIN>\\d{2})\", \"\\g<year>/\\g<month>/\\g<day> \\g<HOUR>:\\g<MIN>:00\"), axis=1)",
"_____no_output_____"
],
[
"# Departure and Arrival columns: from 'object' to 'date' format\nflights['departure-est'] = pd.to_datetime(flights['departure-est'], errors='ignore')\nflights['departure-real'] = pd.to_datetime(flights['departure-real'], errors='ignore')\nflights['arrival-est'] = pd.to_datetime(flights['arrival-est'], errors='ignore')\nflights['arrival-real'] = pd.to_datetime(flights['arrival-real'], errors='ignore')",
"_____no_output_____"
],
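[
"# A hedged sketch (not in the original notebook): with the departure columns now\n# in datetime format, compute the delay in minutes and flag delayed flights using\n# the DELAY threshold defined earlier (NaT rows, e.g. canceled flights, yield NaN).\nflights['delay_minutes'] = (flights['departure-real'] - flights['departure-est']).dt.total_seconds() / 60\nflights['delayed'] = flights['delay_minutes'] >= DELAY",
"_____no_output_____"
],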
[
"# translate the flight status from portuguese to english\nflights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace(\"REALIZADO\", \"ACCOMPLISHED\"), axis=1)\nflights['flight-status'] = flights[['flight-status']].apply(lambda row: row.str.replace(\"CANCELADO\", \"CANCELED\"), axis=1)",
"_____no_output_____"
],
[
"flights.head()",
"_____no_output_____"
],
[
"flights.size",
"_____no_output_____"
],
[
"flights.to_csv(\"flights_csv.csv\")",
"_____no_output_____"
]
],
[
[
"## Some EDA's tasks",
"_____no_output_____"
]
],
[
[
"# See: https://stackoverflow.com/questions/37287938/sort-pandas-dataframe-by-value \n#\ndf_departures = flights.groupby(['airport-A']).size().reset_index(name='number_departures')",
"_____no_output_____"
],
[
"df_departures.sort_values(by=['number_departures'], ascending=False, inplace=True)",
"_____no_output_____"
],
[
"df_departures",
"_____no_output_____"
]
],
[
[
"### 2 - Local airports (list with all the ~600 brazilian public airports)\n\nSource: https://goo.gl/mNFuPt (a XLS spreadsheet in portuguese; last access on nov, 15th, 2017)",
"_____no_output_____"
]
],
[
[
"# Airports dataset: all brazilian public airports (updated until october, 2017)\nairports = pd.read_csv('data/brazilianPublicAirports-out2017.csv', sep = ';', dtype= str)",
"_____no_output_____"
],
[
"airports.head()",
"_____no_output_____"
],
[
"# Merge \"flights\" dataset with \"airports\" in order to identify \n# local flights (origin and destination are in Brazil)\nflights = pd.merge(flights, airports, left_on=\"airport-A\", right_on=\"airport\", how='left')\nflights = pd.merge(flights, airports, left_on=\"airport-B\", right_on=\"airport\", how='left')",
"_____no_output_____"
],
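[
"# A hedged sketch (assumption: pandas added the default '_x'/'_y' suffixes to the\n# overlapping 'airport' column in the two merges above). Rows where both lookups\n# matched correspond to flights between Brazilian public airports.\nlocal_flights = flights.dropna(subset=['airport_x', 'airport_y'])",
"_____no_output_____"
],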
[
"flights.tail()",
"_____no_output_____"
]
],
[
[
"### 3 - List of codes (two letters) used when there was a flight delay (departure)\nI have found two lists that define two-letter codes used by the aircraft crew to justify the delay of the flights: a short and a long one.\n\nSource: https://goo.gl/vUC8BX (last access: nov, 15th, 2017)\n",
"_____no_output_____"
]
],
[
[
"# ------------------------------------------------------------------\n# List of codes (two letters) used to justify a delay on the flight\n# - delayCodesShortlist.csv: list with YYY codes\n# - delayCodesLongList.csv: list with XXX codes\n# ------------------------------------------------------------------\ndelaycodes = pd.read_csv('data/delayCodesShortlist.csv', sep = ';', dtype = str)\ndelaycodesLongList = pd.read_csv('data/delayCodesLonglist.csv', sep = ';', dtype = str)",
"_____no_output_____"
],
[
"delaycodes.head()",
"_____no_output_____"
]
],
[
[
"### 4 - The Weather data from https://www.wunderground.com/history\n\nFrom this website I captured a sample data from local airport (Campinas, SP, Brazil): January to September, 2017.\n\nThe website presents data like this (see [https://goo.gl/oKwzyH](https://goo.gl/oKwzyH)):",
"_____no_output_____"
]
],
[
[
"# Weather sample: load the CSV with weather historical data (from Campinas, SP, Brazil, 2017)\nweather = pd.read_csv('data/DataScience-Intensive-weatherAtCampinasAirport-2017-Campinas_Airport_2017Weather.csv', \\\n sep = ',', dtype = str)",
"_____no_output_____"
],
[
"weather[\"date\"] = weather[\"year\"].map(str) + \"-\" + weather[\"month\"].map(str) + \"-\" + weather[\"day\"].map(str) \nweather[\"date\"] = pd.to_datetime(weather['date'],errors='ignore')",
"_____no_output_____"
],
[
"weather.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00bd9963fda38b68478a1cd702608893e355856 | 99,534 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb | lvikt/ekostat_calculator | 499e3ad6c5c1ef757a854ab00b08a4a28d5866a8 | [
"MIT"
] | 1 | 2017-08-29T06:44:22.000Z | 2017-08-29T06:44:22.000Z | notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb | lvikt/ekostat_calculator | 499e3ad6c5c1ef757a854ab00b08a4a28d5866a8 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/mw_requests_flow-checkpoint.ipynb | lvikt/ekostat_calculator | 499e3ad6c5c1ef757a854ab00b08a4a28d5866a8 | [
"MIT"
] | 4 | 2017-08-23T14:08:35.000Z | 2019-06-13T12:09:30.000Z | 46.927864 | 426 | 0.581922 | [
[
[
"# Reload when code changed:\n%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2\n%pwd\nimport sys\nimport os\npath = \"../\"\nsys.path.append(path)\n#os.path.abspath(\"../\")\nprint(os.path.abspath(path))",
"D:\\git\\ekostat_calculator\n"
],
[
"import os \nimport core\nimport logging\nimport importlib\nimportlib.reload(core) \ntry:\n logging.shutdown()\n importlib.reload(logging)\nexcept:\n pass\nimport pandas as pd\nimport numpy as np\nimport json\nimport time\n\nfrom event_handler import EventHandler\nfrom event_handler import get_list_from_interval\nprint(core.__file__)\npd.__version__",
"..\\core\\__init__.py\n"
],
[
"user_id_1 = 'user_1'\nuser_id_2 = 'user_2'\nuser_1_ws_1 = 'mw1'\nprint(path)\npaths = {'user_id': user_id_1, \n 'workspace_directory': 'D:/git/ekostat_calculator/workspaces', \n 'resource_directory': path + '/resources', \n 'log_directory': path + '/log', \n 'test_data_directory': path + '/test_data', \n 'temp_directory': path + '/temp', \n 'cache_directory': path + '/cache'}\n\nekos = EventHandler(**paths)\nekos.test_timer()\n",
"2018-09-20 19:02:51,219\tlogger.py\t85\tadd_log\tDEBUG\t\n2018-09-20 19:02:51,223\tlogger.py\t86\tadd_log\tDEBUG\t========================================================================================================================\n2018-09-20 19:02:51,227\tlogger.py\t87\tadd_log\tDEBUG\t### Log added for log_id \"event_handler\" at locaton: ..\\log\\main_event_handler.log\n2018-09-20 19:02:51,231\tlogger.py\t88\tadd_log\tDEBUG\t------------------------------------------------------------------------------------------------------------------------\n2018-09-20 19:02:51,235\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:02:51,239\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n"
],
[
"ekos.mapping_objects['quality_element'].get_indicator_list_for_quality_element('secchi')",
"_____no_output_____"
],
[
"def update_workspace_uuid_in_test_requests(workspace_alias='New test workspace'):\n ekos = EventHandler(**paths)\n\n workspace_uuid = ekos.get_unique_id_for_alias(workspace_alias=workspace_alias)\n \n if workspace_uuid: \n print('Updating user {} with uuid: {}'.format(user_id_1, workspace_uuid))\n print('-'*70)\n ekos.update_workspace_uuid_in_test_requests(workspace_uuid)\n else:\n print('No workspaces for user: {}'.format(user_id_1))\n \n\n \ndef update_subset_uuid_in_test_requests(workspace_alias='New test workspace', \n subset_alias=False):\n ekos = EventHandler(**paths)\n\n workspace_uuid = ekos.get_unique_id_for_alias(workspace_alias=workspace_alias)\n \n if workspace_uuid: \n ekos.load_workspace(workspace_uuid)\n subset_uuid = ekos.get_unique_id_for_alias(workspace_alias=workspace_alias, subset_alias=subset_alias)\n print('Updating user {} with workspace_uuid {} and subset_uuid {}'.format(user_id_1, workspace_uuid, subset_uuid))\n print(workspace_uuid, subset_uuid)\n print('-'*70)\n ekos.update_subset_uuid_in_test_requests(subset_uuid=subset_uuid)\n else:\n print('No workspaces for user: {}'.format(user_id_1))\n \n\n \ndef print_boolean_structure(workspace_uuid): \n workspace_object = ekos.get_workspace(unique_id=workspace_uuid) \n workspace_object.index_handler.print_boolean_keys()",
"_____no_output_____"
],
[
"# update_workspace_uuid_in_test_requests()",
"_____no_output_____"
]
],
[
[
"### Request workspace add",
"_____no_output_____"
]
],
[
[
"t0 = time.time()\nekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_workspace_add_1']\nresponse_workspace_add = ekos.request_workspace_add(request)\nekos.write_test_response('request_workspace_add_1', response_workspace_add)\n\n# request = ekos.test_requests['request_workspace_add_2']\n# response_workspace_add = ekos.request_workspace_add(request)\n# ekos.write_test_response('request_workspace_add_2', response_workspace_add)\nprint('-'*50)\nprint('Time for request: {}'.format(time.time()-t0))",
"2018-09-20 19:02:54,811\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:02:54,814\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:02:55,637\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8230469226837158\n2018-09-20 19:02:55,671\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.8610491752624512\n2018-09-20 19:02:55,684\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_workspace_add\"\n2018-09-20 19:02:55,689\tevent_handler.py\t4257\trequest_workspace_add\tDEBUG\tStart: request_workspace_add\n2018-09-20 19:02:55,713\tevent_handler.py\t422\tcopy_workspace\tDEBUG\tTrying to copy workspace \"default_workspace\". Copy has alias \"New test workspace\"\n2018-09-20 19:02:55,844\tevent_handler.py\t2984\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n"
]
],
[
[
"#### Update workspace uuid in test requests ",
"_____no_output_____"
]
],
[
[
"update_workspace_uuid_in_test_requests()",
"2018-09-20 19:02:56,883\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:02:56,886\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:02:57,796\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.9100518226623535\n2018-09-20 19:02:57,854\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9720554351806641\n"
]
],
[
[
"### Request workspace import default data",
"_____no_output_____"
]
],
[
[
"# ekos = EventHandler(**paths)\n# # When copying data the first time all sources has status=0, i.e. no data will be loaded. \n# request = ekos.test_requests['request_workspace_import_default_data']\n# response_import_data = ekos.request_workspace_import_default_data(request)\n# ekos.write_test_response('request_workspace_import_default_data', response_import_data)\n",
"_____no_output_____"
]
],
[
[
"### Import data from sharkweb",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_sharkweb_import']\nresponse_sharkweb_import = ekos.request_sharkweb_import(request)\nekos.write_test_response('request_sharkweb_import', response_sharkweb_import)\n",
"2018-09-20 19:02:59,042\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:02:59,045\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:02:59,952\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.9070520401000977\n2018-09-20 19:02:59,991\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.950054407119751\n2018-09-20 19:02:59,996\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_sharkweb_import\"\n2018-09-20 19:03:00,054\tevent_handler.py\t2984\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n"
],
[
"ekos.data_params",
"_____no_output_____"
],
[
"ekos.selection_dicts",
"_____no_output_____"
],
[
"# ekos = EventHandler(**paths)\n# ekos.mapping_objects['sharkweb_mapping'].df",
"_____no_output_____"
]
],
[
[
"### Request data source list/edit",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_workspace_data_sources_list']\nresponse = ekos.request_workspace_data_sources_list(request) \nekos.write_test_response('request_workspace_data_sources_list', response) \n\nrequest = response\nrequest['data_sources'][0]['status'] = False \nrequest['data_sources'][1]['status'] = False \nrequest['data_sources'][2]['status'] = False \nrequest['data_sources'][3]['status'] = False \n# request['data_sources'][4]['status'] = True \n\n\n# Edit data source \nresponse = ekos.request_workspace_data_sources_edit(request) \nekos.write_test_response('request_workspace_data_sources_edit', response)\n\n",
"2018-09-20 19:31:23,369\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:31:23,373\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:31:24,259\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8860509395599365\n2018-09-20 19:31:24,295\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9250528812408447\n2018-09-20 19:31:24,301\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_workspace_data_sources_list\"\n2018-09-20 19:31:24,305\tevent_handler.py\t4480\trequest_workspace_data_sources_list\tDEBUG\tStart: request_workspace_data_sources_list\n2018-09-20 19:31:24,342\tevent_handler.py\t2991\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:31:24,845\tevent_handler.py\t3009\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n2018-09-20 19:31:24,858\tevent_handler.py\t54\tf\tDEBUG\tStop: \"request_workspace_data_sources_list\". Time for running method was 0.5530316829681396\n2018-09-20 19:31:24,864\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_workspace_data_sources_edit\"\n2018-09-20 19:31:24,869\tevent_handler.py\t4438\trequest_workspace_data_sources_edit\tDEBUG\tStart: request_workspace_data_sources_list\n2018-09-20 19:31:24,910\tevent_handler.py\t3002\tload_workspace\tDEBUG\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\" is already loaded. Set reload=True if you want to reload the workspace.\n2018-09-20 19:31:25,055\tworkspaces.py\t1842\tload_all_data\tDEBUG\tData has been loaded from existing all_data.pickle file.\n2018-09-20 19:31:25,058\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_workspace_data_sources_list\"\n2018-09-20 19:31:25,063\tevent_handler.py\t4480\trequest_workspace_data_sources_list\tDEBUG\tStart: request_workspace_data_sources_list\n2018-09-20 19:31:25,099\tevent_handler.py\t3002\tload_workspace\tDEBUG\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\" is already loaded. Set reload=True if you want to reload the workspace.\n"
]
],
[
[
"### Request subset add",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_subset_add_1']\nresponse_subset_add = ekos.request_subset_add(request)\nekos.write_test_response('request_subset_add_1', response_subset_add)\n",
"2018-09-20 19:05:14,633\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:05:14,636\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:05:15,527\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8900508880615234\n2018-09-20 19:05:15,566\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9330534934997559\n2018-09-20 19:05:15,572\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_add\"\n2018-09-20 19:05:15,576\tevent_handler.py\t3256\trequest_subset_add\tDEBUG\tStart: request_subset_add\n2018-09-20 19:05:15,617\tevent_handler.py\t2984\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:05:16,040\tevent_handler.py\t3002\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n2018-09-20 19:05:16,043\tevent_handler.py\t375\tcopy_subset\tDEBUG\tTrying to copy subset \"default_subset\"\n2018-09-20 19:05:16,211\tlogger.py\t85\tadd_log\tDEBUG\t\n2018-09-20 19:05:16,216\tlogger.py\t86\tadd_log\tDEBUG\t========================================================================================================================\n2018-09-20 19:05:16,219\tlogger.py\t87\tadd_log\tDEBUG\t### Log added for log_id \"59365b60-aadd-4974-8df0-ba7c0a3d98ef\" at locaton: D:\\git\\ekostat_calculator\\workspaces\\a377ee26-cd2d-411b-999c-073cd7a3dbd4\\log\\subset_59365b60-aadd-4974-8df0-ba7c0a3d98ef.log\n2018-09-20 19:05:16,224\tlogger.py\t88\tadd_log\tDEBUG\t------------------------------------------------------------------------------------------------------------------------\n2018-09-20 19:05:16,237\tevent_handler.py\t54\tf\tDEBUG\tStop: \"request_subset_add\". Time for running method was 0.6600375175476074\n"
],
[
"update_subset_uuid_in_test_requests(subset_alias='mw_subset')",
"2018-09-20 19:05:16,853\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:05:16,857\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:05:17,716\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8590488433837891\n2018-09-20 19:05:17,754\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9010515213012695\n2018-09-20 19:05:17,789\tevent_handler.py\t2984\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:05:18,278\tevent_handler.py\t3002\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n"
]
],
[
[
"### Request subset get data filter",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nupdate_subset_uuid_in_test_requests(subset_alias='mw_subset')\nrequest = ekos.test_requests['request_subset_get_data_filter']\nresponse_subset_get_data_filter = ekos.request_subset_get_data_filter(request)\nekos.write_test_response('request_subset_get_data_filter', response_subset_get_data_filter)\n\n",
"2018-09-20 19:21:34,611\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:21:34,614\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:21:35,519\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.9050517082214355\n2018-09-20 19:21:35,555\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9450538158416748\n2018-09-20 19:21:35,562\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:21:35,567\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:21:36,485\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.9180526733398438\n2018-09-20 19:21:36,519\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.958054780960083\n2018-09-20 19:21:36,554\tevent_handler.py\t2991\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:21:37,058\tevent_handler.py\t3009\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n2018-09-20 19:21:37,273\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_get_data_filter\"\n2018-09-20 19:21:37,276\tevent_handler.py\t3452\trequest_subset_get_data_filter\tDEBUG\tStart: request_subset_get_data_filter\n"
],
[
"# import re\n# string = \"\"\"{\n# \"workspace_uuid\": \"52725df4-b4a0-431c-a186-5e542fc6a3a4\",\n# \"data_sources\": [\n# {\n# \"status\": true,\n# \"loaded\": false,\n# \"filename\": \"physicalchemical_sharkweb_data_all_2013-2014_20180916.txt\",\n# \"datatype\": \"physicalchemical\"\n# }\n# ]\n# }\"\"\"\n\n# r = re.sub('\"workspace_uuid\": \".{36}\"', '\"workspace_uuid\": \"new\"', string)",
"_____no_output_____"
]
],
[
[
"### Request subset set data filter",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nupdate_subset_uuid_in_test_requests(subset_alias='mw_subset')\nrequest = ekos.test_requests['request_subset_set_data_filter']\nresponse_subset_set_data_filter = ekos.request_subset_set_data_filter(request)\nekos.write_test_response('request_subset_set_data_filter', response_subset_set_data_filter)\n",
"2018-09-20 13:54:00,112\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 13:54:00,112\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 13:54:00,912\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8000011444091797\n2018-09-20 13:54:00,942\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.8300011157989502\n2018-09-20 13:54:00,942\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 13:54:00,952\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 13:54:01,762\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8100011348724365\n2018-09-20 13:54:01,792\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.8500008583068848\n2018-09-20 13:54:01,822\tevent_handler.py\t2887\tload_workspace\tDEBUG\tTrying to load new workspace \"fccc7645-8501-4541-975b-bdcfb40a5092\" with alias \"New test workspace\"\n2018-09-20 13:54:02,262\tevent_handler.py\t2905\tload_workspace\tINFO\tWorkspace \"fccc7645-8501-4541-975b-bdcfb40a5092\" with alias \"New test workspace loaded.\"\n2018-09-20 13:54:02,452\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_set_data_filter\"\n2018-09-20 13:54:02,452\tevent_handler.py\t3249\trequest_subset_set_data_filter\tDEBUG\tStart: request_subset_get_indicator_settings\n"
]
],
[
[
"### Request subset get indicator settings ",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_subset_get_indicator_settings']\n# request = ekos.test_requests['request_subset_get_indicator_settings_no_areas']\n# print(request['subset']['subset_uuid'])\n# request['subset']['subset_uuid'] = 'fel'\n# print(request['subset']['subset_uuid'])\n\nresponse_subset_get_indicator_settings = ekos.request_subset_get_indicator_settings(request)\nekos.write_test_response('request_subset_get_indicator_settings', response_subset_get_indicator_settings)",
"2018-09-20 06:50:41,643\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 06:50:41,643\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 06:50:42,330\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.6864011287689209\n2018-09-20 06:50:42,361\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.7176012992858887\n2018-09-20 06:50:42,361\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_get_indicator_settings\"\n2018-09-20 06:50:42,361\tevent_handler.py\t3416\trequest_subset_get_indicator_settings\tDEBUG\tStart: request_subset_get_indicator_settings\n2018-09-20 06:50:42,392\tevent_handler.py\t2887\tload_workspace\tDEBUG\tTrying to load new workspace \"1a349dfd-5e08-4617-85a8-5bdde050a4ee\" with alias \"New test workspace\"\n2018-09-20 06:50:42,798\tevent_handler.py\t2905\tload_workspace\tINFO\tWorkspace \"1a349dfd-5e08-4617-85a8-5bdde050a4ee\" with alias \"New test workspace loaded.\"\n2018-09-20 06:50:42,829\tworkspaces.py\t1842\tload_all_data\tDEBUG\tData has been loaded from existing all_data.pickle file.\n"
]
],
[
[
"### Request subset set indicator settings",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_subset_set_indicator_settings']\nresponse_subset_set_indicator_settings = ekos.request_subset_set_indicator_settings(request)\nekos.write_test_response('request_subset_set_indicator_settings', response_subset_set_indicator_settings)",
"2018-09-20 12:09:08,454\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 12:09:08,454\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 12:09:09,234\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.780001163482666\n2018-09-20 12:09:09,264\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.8100008964538574\n2018-09-20 12:09:09,284\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_set_indicator_settings\"\n2018-09-20 12:09:09,294\tevent_handler.py\t3627\trequest_subset_set_indicator_settings\tDEBUG\tStart: request_subset_set_indicator_settings\n2018-09-20 12:09:09,324\tevent_handler.py\t2887\tload_workspace\tDEBUG\tTrying to load new workspace \"1a349dfd-5e08-4617-85a8-5bdde050a4ee\" with alias \"New test workspace\"\n2018-09-20 12:09:09,444\tlogger.py\t85\tadd_log\tDEBUG\t\n2018-09-20 12:09:09,444\tlogger.py\t86\tadd_log\tDEBUG\t========================================================================================================================\n2018-09-20 12:09:09,454\tlogger.py\t87\tadd_log\tDEBUG\t### Log added for log_id \"cc264e56-f958-4ec4-932d-bc0cc1d2caf8\" at locaton: D:\\git\\ekostat_calculator\\workspaces\\1a349dfd-5e08-4617-85a8-5bdde050a4ee\\log\\subset_cc264e56-f958-4ec4-932d-bc0cc1d2caf8.log\n2018-09-20 12:09:09,454\tlogger.py\t88\tadd_log\tDEBUG\t------------------------------------------------------------------------------------------------------------------------\n2018-09-20 12:09:09,544\tlogger.py\t85\tadd_log\tDEBUG\t\n2018-09-20 12:09:09,554\tlogger.py\t86\tadd_log\tDEBUG\t========================================================================================================================\n2018-09-20 12:09:09,554\tlogger.py\t87\tadd_log\tDEBUG\t### Log added for log_id \"default_subset\" at locaton: D:\\git\\ekostat_calculator\\workspaces\\1a349dfd-5e08-4617-85a8-5bdde050a4ee\\log\\subset_default_subset.log\n2018-09-20 12:09:09,564\tlogger.py\t88\tadd_log\tDEBUG\t------------------------------------------------------------------------------------------------------------------------\n"
]
],
[
[
"### Request subset calculate status",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths)\nrequest = ekos.test_requests['request_subset_calculate_status']\nresponse = ekos.request_subset_calculate_status(request)\nekos.write_test_response('request_subset_calculate_status', response)",
"2018-09-20 19:05:31,914\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:05:31,917\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:05:32,790\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8740499019622803\n2018-09-20 19:05:32,826\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9130520820617676\n2018-09-20 19:05:32,831\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_subset_calculate_status\"\n2018-09-20 19:05:32,837\tevent_handler.py\t3296\trequest_subset_calculate_status\tDEBUG\tStart: request_subset_calculate_status\n2018-09-20 19:05:32,871\tevent_handler.py\t2984\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:05:33,359\tevent_handler.py\t3002\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n2018-09-20 19:05:33,453\tworkspaces.py\t1842\tload_all_data\tDEBUG\tData has been loaded from existing all_data.pickle file.\n2018-09-20 19:05:33,493\tevent_handler.py\t2995\tload_workspace\tDEBUG\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\" is already loaded. Set reload=True if you want to reload the workspace.\n2018-09-20 19:05:33,530\tevent_handler.py\t2995\tload_workspace\tDEBUG\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\" is already loaded. Set reload=True if you want to reload the workspace.\n"
]
],
[
[
"### Request subset result get",
"_____no_output_____"
]
],
[
[
"ekos = EventHandler(**paths) \nrequest = ekos.test_requests['request_workspace_result']\nresponse_workspace_result = ekos.request_workspace_result(request)\nekos.write_test_response('request_workspace_result', response_workspace_result)",
"2018-09-20 19:13:01,347\tevent_handler.py\t117\t__init__\tDEBUG\tStart EventHandler: event_handler\n2018-09-20 19:13:01,351\tevent_handler.py\t152\t_load_mapping_objects\tDEBUG\tLoading mapping files from pickle file.\n2018-09-20 19:13:02,227\tevent_handler.py\t128\t__init__\tDEBUG\tTime for mapping: 0.8760499954223633\n2018-09-20 19:13:02,262\tevent_handler.py\t133\t__init__\tDEBUG\tTime for initiating EventHandler: 0.9140522480010986\n2018-09-20 19:13:02,266\tevent_handler.py\t50\tf\tDEBUG\tStart: \"request_workspace_result\"\n2018-09-20 19:13:02,306\tevent_handler.py\t2991\tload_workspace\tDEBUG\tTrying to load new workspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\"\n2018-09-20 19:13:02,812\tevent_handler.py\t3009\tload_workspace\tINFO\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace loaded.\"\n2018-09-20 19:13:02,872\tevent_handler.py\t3002\tload_workspace\tDEBUG\tWorkspace \"a377ee26-cd2d-411b-999c-073cd7a3dbd4\" with alias \"New test workspace\" is already loaded. Set reload=True if you want to reload the workspace.\n"
],
[
"response_workspace_result['subset']['a4e53080-2c68-40d5-957f-8cc4dbf77815']['result']['SE552170-130626']['result']['indicator_din_winter']['data']\n\n\n\n\n\n",
"_____no_output_____"
],
[
"workspace_uuid = 'fccc7645-8501-4541-975b-bdcfb40a5092'\nsubset_uuid = 'a4e53080-2c68-40d5-957f-8cc4dbf77815'\nresult = ekos.dict_data_timeseries(workspace_uuid=workspace_uuid, \n subset_uuid=subset_uuid, \n viss_eu_cd='SE575150-162700',\n element_id='indicator_din_winter')",
"_____no_output_____"
],
[
"print(result['datasets'][0]['x'])\nprint()\nprint(result['y'])",
"[9.2100000000000009, None, 11.92, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 9.7699999999999996, None, None, None, None, None, 11.09]\n\n['', '12-Jan', '', '12-Mar', '12-May', '12-Jul', '12-Sep', '12-Nov', '13-Jan', '', '', '13-Mar', '13-May', '13-Jul', '13-Sep', '13-Nov', '', '14-Jan', '14-Mar', '14-May', '14-Jul', '14-Sep', '14-Nov', '15-Jan', '', '15-Mar', '15-May', '15-Jul', '15-Sep', '15-Nov', '', '16-Jan', '', '16-Mar', '16-May', '16-Jul', '16-Sep', '16-Nov', '']\n"
],
[
"for k in range(len(result['datasets'])):\n print(result['datasets'][k]['x'])",
"[9.2100000000000009, None, 11.92, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 9.7699999999999996, None, None, None, None, None, 11.09]\n[None, None, None, None, None, None, None, None, None, None, 18.41, None, None, None, None, None, 11.69, None, None, None, None, None, None, None, 12.41, None, None, None, None, None, 9.6500000000000004, None, None, None, None, None, None, None, None]\n[10.640000000000001, None, 11.210000000000001, None, None, None, None, None, None, 13.69, None, None, None, None, None, None, 10.210000000000001, None, None, None, None, None, None, None, 13.07, None, None, None, None, None, 9.5800000000000001, None, 9.8499999999999996, None, None, None, None, None, 11.0]\n[2.9951999999999988, None, 2.4985999999999979, None, None, None, None, None, None, 4.7332999999999981, 3.4917999999999978, None, None, None, None, None, 2.7468999999999983, None, None, None, None, None, None, None, 3.9139099999999978, None, None, None, None, None, 2.2502999999999975, None, 2.0019999999999989, None, None, None, None, None, 2.4985999999999979]\n[3.7439999999999984, None, 3.1232499999999974, None, None, None, None, None, None, 5.9166249999999971, 4.3647499999999972, None, None, None, None, None, 3.4336249999999979, None, None, None, None, None, None, None, 4.8923874999999972, None, None, None, None, None, 2.8128749999999969, None, 2.5024999999999986, None, None, None, None, None, 3.1232499999999974]\n[4.4704477611940279, None, 3.7292537313432801, None, None, None, None, None, None, 7.0646268656716389, 5.2116417910447721, None, None, None, None, None, 4.099850746268654, None, None, None, None, None, None, None, 5.8416567164179067, None, None, None, None, None, 3.3586567164179066, None, 2.9880597014925354, None, None, None, None, None, 3.7292537313432801]\n[6.807272727272724, None, 5.6786363636363593, None, None, None, None, None, None, 10.757499999999995, 7.9359090909090861, None, None, None, None, None, 6.2429545454545421, None, None, None, None, None, None, None, 8.8952499999999954, None, None, None, None, None, 5.1143181818181764, None, 4.5499999999999972, None, None, None, None, None, 5.6786363636363593]\n[10.328275862068962, None, 8.6158620689655105, None, None, None, None, None, None, 16.321724137931028, 12.040689655172407, None, None, None, None, None, 9.4720689655172361, None, None, None, None, None, None, None, 13.496241379310337, None, None, None, None, None, 7.759655172413785, None, 6.9034482758620657, None, None, None, None, None, 8.6158620689655105]\n"
],
[
"import datetime \n\n# Extend date list \nstart_year = all_dates[0].year\nend_year = all_dates[-1].year+1\n\n\ndate_intervall = []\nfor year in range(start_year, end_year+1):\n for month in range(1, 13):\n d = datetime.datetime(year, month, 1) \n if d >= all_dates[0] and d <= all_dates[-1]:\n date_intervall.append(d)\n\nextended_dates = sorted(set(all_dates + date_intervall))\n\n\n# Loop dates and add/remove values \nnew_x = [] \nnew_y = dict((item, []) for item in date_to_y)\nfor date in extended_dates: \n if date in date_intervall:\n new_x.append(date.strftime('%y-%b'))\n else:\n new_x.append('')\n \n for i in new_y:\n new_y[i].append(date_to_y[i].get(date, None))\n \n \n\n\n# new_y = {}\n# for i in date_to_y:\n# new_y[i] = []\n# for date in all_dates:\n# d = date_to_y[i].get(date)\n# if d:\n# new_y[i].append(d)\n# else:\n# new_y[i].append(None)\n\n\n\n",
"_____no_output_____"
],
[
"new_y[0]",
"_____no_output_____"
],
[
"import datetime \nyear_list = range(2011, 2013+1) \nmonth_list = range(1, 13)\n\ndate_list = []\nfor year in year_list:\n for month in month_list:\n date_list.append(datetime.datetime(year, month, 1))\n\n",
"_____no_output_____"
],
[
"date_list",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"y[3][i]",
"_____no_output_____"
],
[
"sorted(pd.to_datetime(df['SDATE']))\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00bdc2898951b7860ed4266de0a80880f464277 | 11,114 | ipynb | Jupyter Notebook | theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb | CrtomirJuren/python-delavnica | db96470d2cb1870390545cfbe511552a9ef08720 | [
"MIT"
] | null | null | null | theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb | CrtomirJuren/python-delavnica | db96470d2cb1870390545cfbe511552a9ef08720 | [
"MIT"
] | null | null | null | theory/NumPy/01-NumPy-Indexing-and-Selection.ipynb | CrtomirJuren/python-delavnica | db96470d2cb1870390545cfbe511552a9ef08720 | [
"MIT"
] | null | null | null | 19.129088 | 396 | 0.476786 | [
[
[
"Notebook prirejen s strani http://www.pieriandata.com",
"_____no_output_____"
],
[
"# NumPy Indexing and Selection\n\nIn this lecture we will discuss how to select elements or groups of elements from an array.",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"#Creating sample array\narr = np.arange(0,11)",
"_____no_output_____"
],
[
"#Show\narr",
"_____no_output_____"
]
],
[
[
"## Bracket Indexing and Selection\nThe simplest way to pick one or some elements of an array looks very similar to python lists:",
"_____no_output_____"
]
],
[
[
"#Get a value at an index\narr[8]",
"_____no_output_____"
],
[
"#Get values in a range\narr[1:5]",
"_____no_output_____"
],
[
"#Get values in a range\narr[0:5]\n\n# l = ['a', 'b', 'c']\n# l[0:2]",
"_____no_output_____"
]
],
[
[
"## Broadcasting\n\nNumPy arrays differ from normal Python lists because of their ability to broadcast. With lists, you can only reassign parts of a list with new parts of the same size and shape. That is, if you wanted to replace the first 5 elements in a list with a new value, you would have to pass in a new 5 element list. With NumPy arrays, you can broadcast a single value across a larger set of values:",
"_____no_output_____"
]
],
[
[
"l = list(range(10))\nl\n",
"_____no_output_____"
],
[
"l[0:5] = [100,100,100,100,100]\nl",
"_____no_output_____"
],
[
"#Setting a value with index range (Broadcasting)\narr[0:5]=100\n\n#Show\narr",
"_____no_output_____"
],
[
"# Reset array, we'll see why I had to reset in a moment\narr = np.arange(0,11)\n\n#Show\narr",
"_____no_output_____"
],
[
"#Important notes on Slices\nslice_of_arr = arr[0:6]\n\n#Show slice\nslice_of_arr",
"_____no_output_____"
],
[
"#Change Slice\nslice_of_arr[:]=99\n\n#Show Slice again\nslice_of_arr",
"_____no_output_____"
]
],
[
[
"Now note the changes also occur in our original array!",
"_____no_output_____"
]
],
[
[
"arr",
"_____no_output_____"
]
],
[
[
"Data is not copied, it's a view of the original array! This avoids memory problems!",
"_____no_output_____"
]
],
[
[
"#To get a copy, need to be explicit\narr_copy = arr.copy()\n\narr_copy",
"_____no_output_____"
]
],
[
[
"## Indexing a 2D array (matrices)\n\nThe general format is **arr_2d[row][col]** or **arr_2d[row,col]**. I recommend using the comma notation for clarity.",
"_____no_output_____"
]
],
[
[
"arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))\n\n#Show\narr_2d",
"_____no_output_____"
],
[
"#Indexing row\narr_2d[1]",
"_____no_output_____"
],
[
"# Format is arr_2d[row][col] or arr_2d[row,col]\n\n# Getting individual element value\narr_2d[1][0]",
"_____no_output_____"
],
[
"# Getting individual element value\narr_2d[1,0]",
"_____no_output_____"
],
[
"# 2D array slicing\n\n#Shape (2,2) from top right corner\narr_2d[:2,1:]",
"_____no_output_____"
],
[
"#Shape bottom row\narr_2d[2]",
"_____no_output_____"
],
[
"#Shape bottom row\narr_2d[2,:]",
"_____no_output_____"
]
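,
[
"# Extra example (not in the original lecture): step sizes work in 2D slices too.\n# Every other row and every other column of the 3x3 arr_2d -> its four corner values\narr_2d[::2, ::2]",
"_____no_output_____"
]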
],
[
[
"## More Indexing Help\nIndexing a 2D matrix can be a bit confusing at first, especially when you start to add in step size. Try google image searching *NumPy indexing* to find useful images, like this one:\n\n<img src= 'numpy_indexing.png' width=500/> Image source: http://www.scipy-lectures.org/intro/numpy/numpy.html",
"_____no_output_____"
],
[
"## Conditional Selection\n\nThis is a very fundamental concept that will directly translate to pandas later on, make sure you understand this part!\n\nLet's briefly go over how to use brackets for selection based off of comparison operators.",
"_____no_output_____"
]
],
[
[
"arr = np.arange(1,11)\narr",
"_____no_output_____"
],
[
"arr > 4",
"_____no_output_____"
],
[
"bool_arr = arr>4",
"_____no_output_____"
],
[
"bool_arr",
"_____no_output_____"
],
[
"arr[bool_arr]",
"_____no_output_____"
],
[
"arr[arr>2]",
"_____no_output_____"
],
[
"x = 2\narr[arr>x]",
"_____no_output_____"
]
],
[
[
"# Great Job!\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d00bdc5c8bb9b07f48abd86c8db65a8c316526b8 | 556,680 | ipynb | Jupyter Notebook | stats_overview/04_LINEAR_MODELS.ipynb | minireference/noBSstatsnotebooks | 1037042a0e2747f65cdca463f58c3a6a18c02e64 | [
"MIT"
] | 13 | 2021-09-18T08:22:51.000Z | 2022-03-29T13:08:59.000Z | stats_overview/04_LINEAR_MODELS.ipynb | minireference/noBSstatsnotebooks | 1037042a0e2747f65cdca463f58c3a6a18c02e64 | [
"MIT"
] | null | null | null | stats_overview/04_LINEAR_MODELS.ipynb | minireference/noBSstatsnotebooks | 1037042a0e2747f65cdca463f58c3a6a18c02e64 | [
"MIT"
] | 2 | 2021-08-24T16:13:44.000Z | 2021-12-05T09:32:04.000Z | 425.59633 | 431,348 | 0.928934 | [
[
[
"# Chapter 4: Linear models\n\n[Link to outline](https://docs.google.com/document/d/1fwep23-95U-w1QMPU31nOvUnUXE2X3s_Dbk5JuLlKAY/edit#heading=h.9etj7aw4al9w)\n\nConcept map:\n![concepts_LINEARMODELS.png](attachment:c335ebb2-f116-486c-8737-22e517de3146.png)",
"_____no_output_____"
],
[
"#### Notebook setup",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport scipy as sp\nimport seaborn as sns\nfrom scipy.stats import uniform, norm\n\n\n# notebooks figs setup\n%matplotlib inline\nimport matplotlib.pyplot as plt\nsns.set(rc={'figure.figsize':(8,5)})\nblue, orange = sns.color_palette()[0], sns.color_palette()[1]\n\n# silence annoying warnings\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"## 4.1 Linear models for relationship between two numeric variables\n\n- def'n linear model: **y ~ m*x + b**, a.k.a. linear regression\n- Amy has collected a new dataset:\n - Instead of receiving a fixed amount of stats training (100 hours),\n **each employee now receives a variable amount of stats training (anywhere from 0 hours to 100 hours)**\n - Amy has collected ELV values after one year as previously\n - Goal find best fit line for relationship $\\textrm{ELV} \\sim \\beta_0 + \\beta_1\\!*\\!\\textrm{hours}$\n- Limitation: **we assume the change in ELV is proportional to number of hours** (i.e. linear relationship).\n Other types of hours-ELV relationship possible, but we will not be able to model them correctly (see figure below).",
"_____no_output_____"
],
[
"### New dataset\n\n - The `hours` column contains the `x` values (how many hours of statistics training did the employee receive),\n - The `ELV` column contains the `y` values (the employee ELV after one year)\n\n![excel_file_for_linear_models.png](attachment:71dfeb87-78ec-4523-94fa-7df9a6db4aec.png)\n\n\n",
"_____no_output_____"
]
],
[
[
"# Load data into a pandas dataframe\ndf2 = pd.read_excel(\"data/ELV_vs_hours.ods\", sheet_name=\"Data\")\n# df2",
"_____no_output_____"
],
[
"df2.describe()",
"_____no_output_____"
],
[
"# plot ELV vs. hours data\nsns.scatterplot(x='hours', y='ELV', data=df2)",
"_____no_output_____"
],
[
"# linear model plot (preview)\n# sns.lmplot(x='hours', y='ELV', data=df2, ci=False)",
"_____no_output_____"
]
],
[
[
"#### Types of linear relationship between input and output\n\nDifferent possible relationships between the number of hours of stats training and ELV gains:\n\n![figures/ELV_as_function_of_stats_hours.png](figures/ELV_as_function_of_stats_hours.png)",
"_____no_output_____"
],
[
"## 4.2 Fitting linear models\n\n- Main idea: use `fit` method from `statsmodels.ols` and a formula (approach 1)\n- Visual inspection\n- Results of linear model fit are:\n - `beta0` = $\\beta_0$ = baseline ELV (y-intercept)\n - `beta1` = $\\beta_1$ = increase in ELV for each additional hour of stats training (slope)\n- Five more alternative fitting methods (bonus material):\n 2. fit using statsmodels `OLS`\n 3. solution using `linregress` from `scipy`\n 4. solution using `optimize` from `scipy`\n 5. linear algebra solution using `numpy`\n 6. solution using `LinearRegression` model from scikit-learn",
"_____no_output_____"
],
[
"### Using statsmodels formula API\n\nThe `statsmodels` Python library offers a convenient way to specify statistics model as a \"formula\" that describes the relationship we're looking for.\n\nMathematically, the linear model is written:\n\n$\\large \\textrm{ELV} \\ \\ \\sim \\ \\ \\beta_0\\cdot 1 \\ + \\ \\beta_1\\cdot\\textrm{hours}$\n\nand the formula is:\n\n`ELV ~ 1 + hours`\n\nNote the variables $\\beta_0$ and $\\beta_1$ are omitted, since the whole point of fitting a linear model is to find these coefficients. The parameters of the model are:\n- Instead of $\\beta_0$, the constant parameter will be called `Intercept`\n- Instead of a new name $\\beta_1$, we'll call it `hours` coefficient (i.e. the coefficient associated with the `hours` variable in the model)\n",
"_____no_output_____"
]
],
[
[
"import statsmodels.formula.api as smf\n\nmodel = smf.ols('ELV ~ 1 + hours', data=df2)\nresult = model.fit()",
"_____no_output_____"
],
[
"# extact the best-fit model parameters\nbeta0, beta1 = result.params\nbeta0, beta1",
"_____no_output_____"
],
[
"# data points\nsns.scatterplot(x='hours', y='ELV', data=df2)\n\n# linear model for data\nx = df2['hours'].values # input = hours\nymodel = beta0 + beta1*x # output = ELV\nsns.lineplot(x, ymodel)",
"_____no_output_____"
],
[
"result.summary()",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
"### Alternative model fitting methods\n\n2. fit using statsmodels [`OLS`](https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLS.html)\n3. solution using [`linregress`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) from `scipy`\n4. solution using [`minimize`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) from `scipy`\n5. [linear algebra](https://numpy.org/doc/stable/reference/routines.linalg.html) solution using `numpy`\n6. solution using [`LinearRegression`](https://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares) model from scikit-learn",
"_____no_output_____"
],
[
"#### Data pre-processing\n\nThe `statsmodels` formula `ols` approach we used above was able to get the data\ndirectly from the dataframe `df2`, but some of the other model fitting methods\nrequire data to be provided as regular arrays: the x-values and the y-values.",
"_____no_output_____"
]
],
[
[
"# extract hours and ELV data from df2\nx = df2['hours'].values # hours data as an array\ny = df2['ELV'].values # ELV data as an array\n\nx.shape, y.shape\n# x",
"_____no_output_____"
]
],
[
[
"Two of the approaches required \"packaging\" the x-values along with a column of ones,\nto form a matrix (called a design matrix). Luckily `statsmodels` provides a convenient function for this:",
"_____no_output_____"
]
],
[
[
"import statsmodels.api as sm\n\n# add a column of ones to the x data\nX = sm.add_constant(x)\nX.shape\n# X",
"_____no_output_____"
]
],
[
[
"____\n\n#### 2. fit using statsmodels OLS\n\n",
"_____no_output_____"
]
],
[
[
"model2 = sm.OLS(y, X)\nresult2 = model2.fit()\n# result2.summary()\nresult2.params",
"_____no_output_____"
]
],
[
[
"____\n\n#### 3. solution using `linregress` from `scipy`",
"_____no_output_____"
]
],
[
[
"from scipy.stats import linregress\n\nresult3 = linregress(x, y)\nresult3.intercept, result3.slope",
"_____no_output_____"
]
],
[
[
"____\n\n#### 4. Using an optimization approach\n",
"_____no_output_____"
]
],
[
[
"from scipy.optimize import minimize\n\ndef sse(beta, x=x, y=y):\n \"\"\"Compute the sum-of-squared-errors objective function.\"\"\"\n sumse = 0.0\n for xi, yi in zip(x, y):\n yi_pred = beta[0] + beta[1]*xi\n ei = (yi_pred-yi)**2\n sumse += ei\n return sumse\n\nresult4 = minimize(sse, x0=[0,0])\nbeta0, beta1 = result4.x\nbeta0, beta1",
"_____no_output_____"
]
],
[
[
"____\n\n#### 5. Linear algebra solution\nWe obtain the least squares solution using the Moore–Penrose inverse formula:\n$$ \\large\n \\vec{\\beta} = (X^{\\sf T} X)^{-1}X^{\\sf T}\\; \\vec{y}\n$$",
"_____no_output_____"
]
],
[
[
"# 5. linear algebra solution using `numpy`\nimport numpy as np\n\nresult5 = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)\nbeta0, beta1 = result5\nbeta0, beta1",
"_____no_output_____"
]
],
[
[
"_____\n\n#### Using scikit-learn\n",
"_____no_output_____"
]
],
[
[
"# 6. solution using `LinearRegression` from scikit-learn\nfrom sklearn import linear_model\nmodel6 = linear_model.LinearRegression()\nmodel6.fit(x[:,np.newaxis], y)\nmodel6.intercept_, model6.coef_",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
"## 4.3 Interpreting linear models\n\n- model fit checks\n\n - $R^2$ [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination)\n = the proportion of the variation in the dependent variable that is predictable from the independent variable\n - plot of residuals\n - many other: see [scikit docs](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics)\n- hypothesis tests\n - is slope zero or nonzero? (and CI interval)\n - caution: cannot make any cause-and-effect claims; only a correlation\n- Predictions\n - given best-fir model obtained from data, we can make predictions (interpolations), \n e.g., what is the expected ELV after 50 hours of stats training?",
"_____no_output_____"
],
[
"### Interpreting the results\n\nLet's review some of the other data included in the `results.summary()` report for the linear model fit we did earlier.",
"_____no_output_____"
]
],
[
[
"result.summary()",
"_____no_output_____"
]
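,
[
"# Plot of residuals -- one of the model fit checks listed above.\n# A minimal sketch: `result` is the smf.ols fit from Section 4.2. Residuals\n# scattered around zero with no visible pattern support the linearity assumption.\nresiduals = result.resid\nsns.scatterplot(x=df2['hours'], y=residuals)\nplt.axhline(0, color='gray', linestyle='--')\nplt.ylabel('residual')",
"_____no_output_____"
]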
],
[
[
"### Model parameters",
"_____no_output_____"
]
],
[
[
"beta0, beta1 = result.params\nresult.params",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
"### The $R^2$ coefficient of determination\n\n$R^2 = 1$ corresponds to perfect prediction\n",
"_____no_output_____"
]
],
[
[
"result.rsquared",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
"### Hypothesis testing for slope coefficient\n\nIs there a non-zero slope coefficient?\n\n- **null hypothesis $H_0$**: `hours` has no effect on `ELV`,\n which is equivalent to $\\beta_1 = 0$:\n $$ \\large\n H_0: \\qquad \\textrm{ELV} \\sim \\mathcal{N}(\\color{red}{\\beta_0}, \\sigma^2) \\qquad \\qquad \\qquad\n $$\n\n- **alternative hypothesis $H_A$**: `hours` has an effect on `ELV`,\n and the slope is not zero, $\\beta_1 \\neq 0$:\n $$ \\large\n H_A: \\qquad \\textrm{ELV} \n \\sim\n \\mathcal{N}\\left(\n \\color{blue}{\\beta_0 + \\beta_1\\!\\cdot\\!\\textrm{hours}},\n \\ \\sigma^2\n \\right)\n $$",
"_____no_output_____"
]
],
[
[
"# p-value under the null hypotheis of zero slope or \"no effect of `hours` on `ELV`\"\nresult.pvalues.loc['hours']",
"_____no_output_____"
],
[
"# 95% confidence interval for the hours-slope parameter\n# result.conf_int()\nCI_hours = list(result.conf_int().loc['hours'])\nCI_hours",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"### Predictions using the model\n\nWe can use the model we obtained to predict (interpolate) the ELV for future employees.",
"_____no_output_____"
]
],
[
[
"sns.scatterplot(x='hours', y='ELV', data=df2)\nymodel = beta0 + beta1*x\nsns.lineplot(x, ymodel)",
"_____no_output_____"
]
],
[
[
"What ELV can we expect from a new employee that takes 50 hours of stats training?",
"_____no_output_____"
]
],
[
[
"result.predict({'hours':[50]})",
"_____no_output_____"
],
[
"result.predict({'hours':[100]})",
"_____no_output_____"
]
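,
[
"# Optional extra (a sketch): predictions with uncertainty. statsmodels'\n# get_prediction returns standard errors and interval bounds for new inputs.\npred = result.get_prediction({'hours': [50]})\npred.summary_frame(alpha=0.05)",
"_____no_output_____"
]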
],
[
[
"**WARNING**: it's not OK to extrapolate the validity of the model outside of the range of values where we have observed data.\n\nFor example, there is no reason to believe in the model's predictions about ELV for 200 or 2000 hours of stats training:",
"_____no_output_____"
]
],
[
[
"result.predict({'hours':[200]})",
"_____no_output_____"
]
],
[
[
" ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"## Discussion\n\nFurther topics that will be covered in the book:\n- Generalized linear models, e.g., [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression)\n- [Everything is a linear model](https://www.eigenfoo.xyz/tests-as-linear/) article\n- The verbs `fit` and `predict` will come up A LOT in machine learning, \n so it's worth learning linear models in detail to be prepared for further studies.\n",
"_____no_output_____"
],
[
"____\n",
"_____no_output_____"
],
[
"Congratulations on completing this overview of statistics! We covered a lot of topics and core ideas from the book. I know some parts seemed kind of complicated at first, but if you think about them a little you'll see there is nothing too difficult to learn. The good news is that the examples in these notebooks contain all the core ideas, and you won't be exposed to anything more complicated that what you saw here!\n\nIf you were able to handle these notebooks, you'll be able to handle the **No Bullshit Guide to Statistics** too! In fact the book will cover the topics in a much smoother way, and with better explanations. You'll have a lot of exercises and problems to help you practice statistical analysis.\n\n\n### Next steps\n\n- I encourage you to check out the [book outline shared gdoc](https://docs.google.com/document/d/1fwep23-95U-w1QMPU31nOvUnUXE2X3s_Dbk5JuLlKAY/edit) if you haven't seen it already. Please leave me a comment in the google document if you see something you don't like in the outline, or if you think some important statistics topics are missing. You can also read the [book proposal blog post](https://minireference.com/blog/no-bullshit-guide-to-statistics-progress-update/) for more info about the book.\n- Check out also the [concept map](https://minireference.com/static/excerpts/noBSstats/conceptmaps/BookSubjectsOverview.pdf). You can print it out and annotate with the concepts you heard about in these notebooks.\n- If you want to be involved in the stats book in the coming months, sign up to the [stats reviewers mailing list](https://confirmsubscription.com/h/t/A17516BF2FCB41B2) to receive chapter drafts as they are being prepared (Nov+Dec 2021). I'll appreciate your feedback on the text. The goal is to have the book finished in the Spring 2022, and feedback and \"user testing\" will be very helpful.\n\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d00be4ae55cb527c1ac51dc3799a99c9a2b051b0 | 435,801 | ipynb | Jupyter Notebook | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples | a134415a25b5dfd39d958d82e115a29cbcfda2b7 | [
"Apache-2.0"
] | 48 | 2018-10-06T23:09:28.000Z | 2022-02-22T23:50:10.000Z | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples | a134415a25b5dfd39d958d82e115a29cbcfda2b7 | [
"Apache-2.0"
] | 7 | 2019-01-10T18:54:57.000Z | 2021-11-29T08:49:20.000Z | SAS_contrib/Ask_the_Expert_Germany_2021.ipynb | mp675/saspy-examples | a134415a25b5dfd39d958d82e115a29cbcfda2b7 | [
"Apache-2.0"
] | 38 | 2018-10-06T23:09:29.000Z | 2022-01-11T16:05:16.000Z | 58.3636 | 70,814 | 0.622268 | [
[
[
"try:\n import saspy\nexcept ImportError as e:\n print('Installing saspy')\n %pip install saspy",
"_____no_output_____"
],
[
"import pandas as pd\n# The following imports are only necessary for automated sascfg_personal.py creation\nfrom pathlib import Path\nimport os\nfrom shutil import copyfile\nimport getpass",
"_____no_output_____"
],
[
"# Imports without the setup check codes\nimport saspy\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"# Set up your connection\n\nThe next cell contains code to check if you already have a sascfg_personal.py file in your current conda environment. If you do not one is created for you.\n\nNext [choose your access method](https://sassoftware.github.io/saspy/install.html#choosing-an-access-method) and the read through the configuration properties in sascfg_personal.py",
"_____no_output_____"
]
],
[
[
"# Setup for the configuration file - for running inside of a conda environment\nsaspyPfad = f\"C:\\\\Users\\\\{getpass.getuser()}\\\\.conda\\\\envs\\\\{os.environ['CONDA_DEFAULT_ENV']}\\\\Lib\\\\site-packages\\\\saspy\\\\\"\nsaspycfg_personal = Path(f'{saspyPfad}sascfg_personal.py')\nif saspycfg_personal.is_file():\n print('All setup and ready to go')\nelse:\n copyfile(f'{saspyPfad}sascfg.py', f'{saspyPfad}sascfg_personal.py')\n print('The configuration file was created for you, please setup your connection method')\n print(f'Find sascfg_personal.py here: {saspyPfad}')",
"All setup and ready to go\n"
]
],
[
[
"# Configuration\nprod = {\n\n 'iomhost': 'rfnk01-0068.exnet.sas.com', <-- SAS Host Name\n\n 'iomport': 8591, <-- SAS Workspace Server Port\n \n 'class_id': '440196d4-90f0-11d0-9f41-00a024bb830c', <-- static, if the value is wrong use proc iomoperate\n \n 'provider': 'sas.iomprovider', <-- static\n \n 'encoding': 'windows-1252' <-- Python encoding for SAS session encoding\n \n}",
"_____no_output_____"
]
],
[
[
"# If no configuration name is specified, you get a list of the configured ones\n# sas = saspy.SASsession(cfgname='prod')\nsas = saspy.SASsession()",
"Please enter the name of the SAS Config you wish to run. Available Configs are: ['prod', 'dev'] prod\nUsername: sasdemo\nPassword: ········\nSAS Connection established. Workspace UniqueIdentifier is 5A182D7A-E928-4CA9-8EC4-9BE60ECB2A79\n\n"
]
],
[
[
"# Explore some interactions with SAS\nGetting a feeling for what SASPy can do.",
"_____no_output_____"
]
],
[
[
"# Let's take a quick look at all the different methods and variables provided by SASSession object\ndir(sas)",
"_____no_output_____"
],
[
"# Get a list of all tables inside of the library sashelp\ntable_df = sas.list_tables(libref='sashelp', results='pandas')\n# Search for a table containing a capital C in its name\ntable_df[table_df['MEMNAME'].str.contains('C')]",
"_____no_output_____"
],
[
"# If teach_me_sas is true instead of executing the code, we get the generated code returned\nsas.teach_me_SAS(True)\nsas.list_tables(libref='sashelp', results='pandas')\n# Let's turn it off again to actually run the code\nsas.teach_me_SAS(False)",
"\n proc datasets dd=sashelp nodetails nolist noprint;\n contents memtype=(data view) nodetails \n dir out=work._saspy_lib_list(keep=memname memtype) data=_all_ noprint;\n run;\n \n proc sql;\n create table work._saspy_lib_list as select distinct * from work._saspy_lib_list;\n quit;\n \n"
],
[
"# Create a sasdata object, based on the table cars in the sashelp library\ncars = sas.sasdata('cars', 'sashelp')\n# Get information about the columns in the table\ncars.columnInfo()",
"_____no_output_____"
],
[
"# Creating a simple heat map \ncars.heatmap('Horsepower', 'EngineSize')",
"_____no_output_____"
],
[
"# Clean up for this section\ndel cars, table_df",
"_____no_output_____"
]
],
[
[
"# Reading in data from local disc with Pandas and uploading it to SAS\n1. First we are going to read in a local csv file\n2. Creating a copy of the base data file in SAS\n3. Append the local data to the data stored in SAS and sort it",
"_____no_output_____"
],
[
"The Opel data set:\n\nMake,Model,Type,Origin,DriveTrain,MSRP,Invoice,EngineSize,Cylinders,Horsepower,MPG_City,MPG_Highway,Weight,Wheelbase,Length\nOpel,Astra Edition,Sedan,Europe,Rear,28495,26155,3,6,22.5,16,23,4023,110,180\nOpel,Astra Design & Tech,Sedan,Europe,Rear,30795,28245,4.4,8,32.5,16,22,4824,111,184\nOpel,Astra Elegance,Sedan,Europe,Rear,37995,34800,2.5,6,18.4,20,29,3219,107,176\nOpel,Astra Ultimate,Sedan,Europe,Rear,42795,38245,2.5,6,18.4,20,29,3197,107,177\nOpel,Astra Business Edition,Sedan,Europe,Rear,28495,24800,2.5,6,18.4,19,27,3560,107,177\nOpel,Astra Elegance,Sedan,Europe,Rear,30245,27745,2.5,6,18.4,19,27,3461,107,176",
"_____no_output_____"
]
],
[
[
"# Read a local csv file with pandas and take a look\nopel = pd.read_csv('cars_opel.csv')\nopel.describe()",
"_____no_output_____"
],
[
"# Looks like the horsepower isn't right, let's fix that\nopel.loc[:, 'Horsepower'] *= 10\nopel.describe()",
"_____no_output_____"
],
[
"# Create a working copy of the cars data set\nsas.submitLOG('''data work.cars; set sashelp.cars; run;''')",
"\f16 The SAS System 14:40 Monday, April 19, 2021\n\n101 \n102 ods listing close;\n103 ods html5 (id=saspy_internal) options(bitmap_mode='inline')\n104 file=\"D:\\opt\\sasinside\\SASWORK\\_TD26692_SAS-AAP_\\Prc2\\saspy_results.html\"\n105 device=svg\n106 style=HTMLBlue;\nNOTE: Writing HTML5(SASPY_INTERNAL) Body file: D:\\opt\\sasinside\\SASWORK\\_TD26692_SAS-AAP_\\Prc2\\saspy_results.html\n107 ods graphics on / outputfmt=png;\n108 ;*';*\";*/;quit;run;\n108 ! data work.cars; set sashelp.cars; run;\n\nNOTE: There were 428 observations read from the data set SASHELP.CARS.\nNOTE: The data set WORK.CARS has 428 observations and 15 variables.\nNOTE: DATA statement used (Total process time):\n real time 0.01 seconds\n cpu time 0.00 seconds\n \n\n108 ! ;*';*\";*/;quit;run;\n109 ods html5 (id=saspy_internal) close;\n110 ods listing;\n111 \n\n"
],
[
"# Append the panda dataframe to the working copy of the cars data set in SAS\ncars = sas.sasdata('cars', 'work')\n# The pandas data frame is appended to the SAS data set\ncars.append(opel)\ncars.tail()",
"\f22 The SAS System 14:40 Monday, April 19, 2021\n\n135 ;*';*\";*/;quit;run;\n135 ! proc append base=work.'cars'n\n136 data=WORK.'_temp_df'n;\n137 run;\n\nNOTE: Appending WORK._TEMP_DF to WORK.CARS.\nWARNING: Variable Make has different lengths on BASE and DATA files (BASE 13 DATA 4).\nWARNING: Variable Model has different lengths on BASE and DATA files (BASE 40 DATA 22).\nWARNING: Variable Type has different lengths on BASE and DATA files (BASE 8 DATA 5).\nWARNING: Variable DriveTrain has different lengths on BASE and DATA files (BASE 5 DATA 4).\nNOTE: There were 6 observations read from the data set WORK._TEMP_DF.\nNOTE: 6 observations added.\nNOTE: The data set WORK.CARS has 434 observations and 15 variables.\nNOTE: PROCEDURE APPEND used (Total process time):\n real time 0.01 seconds\n cpu time 0.01 seconds\n \n137 ! ;*';*\";*/;quit;run;\n\n\n"
],
[
"# Sort the data set in SAS to restore the old order\ncars.sort('make model type')\ncars.tail()",
"_____no_output_____"
],
[
"# Confirm that Opel has been added\ncars.bar('Make')",
"_____no_output_____"
]
],
[
[
"# Reading in data from SAS and manipulating it with Pandas",
"_____no_output_____"
]
],
[
[
"# Short form is sd2df()\ndf = sas.sasdata2dataframe('cars', 'sashelp', dsopts={'where': 'make=\"BMW\"'})\ntype(df)",
"_____no_output_____"
]
],
[
[
"Now that the data set is available as a Pandas DataFrame you can use it in e.g. a sklearn pipeline",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
]
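,
[
"# A minimal sketch of the scikit-learn idea mentioned above. Column names\n# follow SASHELP.CARS; the feature/target choice here is illustrative.\nfrom sklearn.linear_model import LinearRegression\nX = df[['EngineSize', 'Weight']]\ny = df['Horsepower']\nlin_reg = LinearRegression().fit(X, y)\nlin_reg.intercept_, lin_reg.coef_",
"_____no_output_____"
]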
],
[
[
"# Creating a model\nThe data can be found [here](https://www.kaggle.com/gsr9099/best-model-for-credit-card-approval)",
"_____no_output_____"
]
],
[
[
"# Read two local csv files\ndf_applications = pd.read_csv('application_record.csv')\ndf_credit = pd.read_csv('credit_record.csv')",
"_____no_output_____"
],
[
"# Get a feel for the data\nprint(df_applications.columns)\nprint(df_applications.head(5))\ndf_applications.describe()",
"Index(['ID', 'CODE_GENDER', 'FLAG_OWN_CAR', 'FLAG_OWN_REALTY', 'CNT_CHILDREN',\n 'AMT_INCOME_TOTAL', 'NAME_INCOME_TYPE', 'NAME_EDUCATION_TYPE',\n 'NAME_FAMILY_STATUS', 'NAME_HOUSING_TYPE', 'DAYS_BIRTH',\n 'DAYS_EMPLOYED', 'FLAG_MOBIL', 'FLAG_WORK_PHONE', 'FLAG_PHONE',\n 'FLAG_EMAIL', 'OCCUPATION_TYPE', 'CNT_FAM_MEMBERS'],\n dtype='object')\n ID CODE_GENDER FLAG_OWN_CAR FLAG_OWN_REALTY CNT_CHILDREN \\\n0 5008804 M Y Y 0 \n1 5008805 M Y Y 0 \n2 5008806 M Y Y 0 \n3 5008808 F N Y 0 \n4 5008809 F N Y 0 \n\n AMT_INCOME_TOTAL NAME_INCOME_TYPE NAME_EDUCATION_TYPE \\\n0 427500.0 Working Higher education \n1 427500.0 Working Higher education \n2 112500.0 Working Secondary / secondary special \n3 270000.0 Commercial associate Secondary / secondary special \n4 270000.0 Commercial associate Secondary / secondary special \n\n NAME_FAMILY_STATUS NAME_HOUSING_TYPE DAYS_BIRTH DAYS_EMPLOYED \\\n0 Civil marriage Rented apartment -12005 -4542 \n1 Civil marriage Rented apartment -12005 -4542 \n2 Married House / apartment -21474 -1134 \n3 Single / not married House / apartment -19110 -3051 \n4 Single / not married House / apartment -19110 -3051 \n\n FLAG_MOBIL FLAG_WORK_PHONE FLAG_PHONE FLAG_EMAIL OCCUPATION_TYPE \\\n0 1 1 0 0 NaN \n1 1 1 0 0 NaN \n2 1 0 0 0 Security staff \n3 1 0 1 1 Sales staff \n4 1 0 1 1 Sales staff \n\n CNT_FAM_MEMBERS \n0 2.0 \n1 2.0 \n2 2.0 \n3 1.0 \n4 1.0 \n"
],
[
"# Join the two data sets together\ndf_application_credit = df_applications.join(df_credit, lsuffix='_applications', rsuffix='_credit')\nprint(df_application_credit.head())\ndf_application_credit.columns",
" ID_applications CODE_GENDER FLAG_OWN_CAR FLAG_OWN_REALTY CNT_CHILDREN \\\n0 5008804 M Y Y 0 \n1 5008805 M Y Y 0 \n2 5008806 M Y Y 0 \n3 5008808 F N Y 0 \n4 5008809 F N Y 0 \n\n AMT_INCOME_TOTAL NAME_INCOME_TYPE NAME_EDUCATION_TYPE \\\n0 427500.0 Working Higher education \n1 427500.0 Working Higher education \n2 112500.0 Working Secondary / secondary special \n3 270000.0 Commercial associate Secondary / secondary special \n4 270000.0 Commercial associate Secondary / secondary special \n\n NAME_FAMILY_STATUS NAME_HOUSING_TYPE ... DAYS_EMPLOYED FLAG_MOBIL \\\n0 Civil marriage Rented apartment ... -4542 1 \n1 Civil marriage Rented apartment ... -4542 1 \n2 Married House / apartment ... -1134 1 \n3 Single / not married House / apartment ... -3051 1 \n4 Single / not married House / apartment ... -3051 1 \n\n FLAG_WORK_PHONE FLAG_PHONE FLAG_EMAIL OCCUPATION_TYPE CNT_FAM_MEMBERS \\\n0 1 0 0 NaN 2.0 \n1 1 0 0 NaN 2.0 \n2 0 0 0 Security staff 2.0 \n3 0 1 1 Sales staff 1.0 \n4 0 1 1 Sales staff 1.0 \n\n ID_credit MONTHS_BALANCE STATUS \n0 5001711 0 X \n1 5001711 -1 0 \n2 5001711 -2 0 \n3 5001711 -3 0 \n4 5001712 0 C \n\n[5 rows x 21 columns]\n"
],
[
"# Upload the data to the SAS server\n# Here just a small sample, as the data set is quite large and the data is pre-loaded on SAS server\nsas.df2sd(df_application_credit[:10], table='application_credit_sample', libref='saspy')",
"_____no_output_____"
],
[
"# Create a training data set and test data set in SAS\napplication_credit_sas = sas.sasdata('application_credit', 'saspy')\napplication_credit_part = application_credit_sas.partition(fraction=.7, var='status')\napplication_credit_part.info()",
"_____no_output_____"
],
[
"# Creating a SAS/STAT object\nstat = sas.sasstat()\ndir(stat)",
"_____no_output_____"
],
[
"# Target\ntarget = 'status'\n# Class Variables \nvar_class = ['FLAG_OWN_CAR','FLAG_OWN_REALTY', 'OCCUPATION_TYPE', 'STATUS']",
"_____no_output_____"
]
],
[
[
"The HPSPLIT procedure is a high-performance procedure that builds tree-based statistical models for classification and regression. The procedure produces classification trees, which model a categorical response, and regression trees, which model a continuous response. Both types of trees are referred to as decision trees because the model is expressed as a series of if-then statements - [documentation](https://support.sas.com/documentation/onlinedoc/stat/141/hpsplit.pdf)",
"_____no_output_____"
]
],
[
[
"hpsplit_model = stat.hpsplit(data=application_credit_part, \n cls=var_class, \n model=\"status(event='N')= FLAG_OWN_CAR FLAG_OWN_REALTY OCCUPATION_TYPE MONTHS_BALANCE AMT_INCOME_TOTAL\", \n code='trescore.sas', \n procopts='assignmissing=similar', \n out = 'work.dt_score',\n id = \"ID\",\n partition=\"rolevar=_partind_(TRAIN='1' VALIDATE='0');\")",
"_____no_output_____"
],
[
"dir(hpsplit_model)",
"_____no_output_____"
],
[
"hpsplit_model.ROCPLOT",
"_____no_output_____"
],
[
"hpsplit_model.varimportance",
"_____no_output_____"
],
[
"sas.set_results('HTML')\nhpsplit_model.wholetreeplot",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
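"code",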
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00bf11cf7807283814727c46ef6e73a10ef496f | 828,890 | ipynb | Jupyter Notebook | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge | c6225d504f73c85a8b8c415fb936a431064b8246 | [
"ADSL"
] | null | null | null | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge | c6225d504f73c85a8b8c415fb936a431064b8246 | [
"ADSL"
] | null | null | null | VacationPy/VacationPy.ipynb | ineal12/python-api-challenge | c6225d504f73c85a8b8c415fb936a431064b8246 | [
"ADSL"
] | null | null | null | 50.933391 | 488 | 0.35325 | [
[
[
"# VacationPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport gmaps\nimport os\nimport json\n# Import API key\nfrom config import g_key\n",
"_____no_output_____"
]
],
[
[
"### Store Part I results into DataFrame\n* Load the csv exported in Part I to a DataFrame",
"_____no_output_____"
]
],
[
[
"#read in weather data\nweather_data = pd.read_csv('../cities.csv')\nweather_data.head()",
"_____no_output_____"
]
],
[
[
"### Humidity Heatmap\n* Configure gmaps.\n* Use the Lat and Lng as locations and Humidity as the weight.\n* Add Heatmap layer to map.",
"_____no_output_____"
]
],
[
[
"#Filter columns to be used in weather dataframe\ncols = [\"City\", \"Cloudiness\", \"Country\", \"Date\", \"Humidity\", \"Lat\", \"Lng\", \"Temp\", \"Wind Speed\"]\nweather_data = weather_data[cols]\n\n#configure gmaps\ngmaps.configure(api_key=g_key)\n\n\n#create coordinates \n\nlocations = weather_data[[\"Lat\", \"Lng\"]].astype(float)\n\nhumidity = weather_data[\"Humidity\"].astype(float)\n\nfig = gmaps.figure()\n\n#create heatmap to display humidity across globe\nheat_layer = gmaps.heatmap_layer(locations, weights=humidity, \n dissipating=False, max_intensity=100,\n point_radius = 1)\n\n#add heatmap layer\nfig.add_layer(heat_layer)\n\n#display heatmap\nfig\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"### Create new DataFrame fitting weather criteria\n* Narrow down the cities to fit weather conditions.\n* Drop any rows will null values.",
"_____no_output_____"
]
],
[
[
"weather_data = weather_data[weather_data[\"Temp\"].between(70,80,inclusive=True)]\nweather_data = weather_data[weather_data[\"Temp\"] > 70]\nweather_data = weather_data[weather_data[\"Wind Speed\"] < 10]\nweather_data = weather_data[weather_data[\"Cloudiness\"] == 0]",
"_____no_output_____"
],
[
"weather_data.head()",
"_____no_output_____"
]
],
[
[
"### Hotel Map\n* Store into variable named `hotel_df`.\n* Add a \"Hotel Name\" column to the DataFrame.\n* Set parameters to search for hotels with 5000 meters.\n* Hit the Google Places API for each city's coordinates.\n* Store the first Hotel result into the DataFrame.\n* Plot markers on top of the heatmap.",
"_____no_output_____"
]
],
[
[
"hotel_df = weather_data\nhotel_df[\"Hotel Name\"]= ''\nhotel_df",
"_____no_output_____"
],
[
"params = {\n \"types\": \"lodging\",\n \"radius\":5000,\n \"key\": g_key\n}\n\n# Use the lat/lng we recovered to identify airports\nfor index, row in hotel_df.iterrows():\n # get lat, lng from df\n lat = row[\"Lat\"]\n lng = row[\"Lng\"]\n\n # change location each iteration while leaving original params in place\n params[\"location\"] = f\"{lat},{lng}\"\n\n # Use the search term: \"International Airport\" and our lat/lng\n base_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json\"\n\n # make request and print url\n name_address = requests.get(base_url, params=params).json()\n\n print(json.dumps(name_address, indent=4, sort_keys=True))\n try:\n hotel_df.loc[index, \"Hotel Name\"] = name_address[\"results\"][0][\"name\"]\n except (KeyError, IndexError):\n print(\"Missing field/result... skipping.\")",
"{\n \"html_attributions\": [],\n \"results\": [\n {\n \"geometry\": {\n \"location\": {\n \"lat\": 18.4305559,\n \"lng\": -73.0802812\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": 18.4319047302915,\n \"lng\": -73.0789316697085\n },\n \"southwest\": {\n \"lat\": 18.4292067697085,\n \"lng\": -73.0816296302915\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"4a63cebc1f822ccda8d7b10ba000167c3131546c\",\n \"name\": \"Robsi Hotel\",\n \"photos\": [\n {\n \"height\": 3024,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/112475528530584998235\\\">April Isaac</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAX4yBBHUj6-xiZ15H0rX3D4_ghbO3M4NTfT1Tdeo_q38i5nuPZC6mNHEj2nKNgIHJsUHvz0wLmsKkZCMbJINP-wvXdg0Ipe1_pNekzeXvL2EhqLYyV81n65i40wl64aWfEhAjb-DoJqdsWtYz1hu9VAgWGhTAJWab1Jx80HUh9AgDOUaEBZ9bqQ\",\n \"width\": 4032\n }\n ],\n \"place_id\": \"ChIJ-Xg1PuQ_uI4RlENaTsUgR4w\",\n \"plus_code\": {\n \"compound_code\": \"CWJ9+6V Miragoane, Haiti\",\n \"global_code\": \"77C8CWJ9+6V\"\n },\n \"rating\": 3.1,\n \"reference\": \"ChIJ-Xg1PuQ_uI4RlENaTsUgR4w\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 63,\n \"vicinity\": \"Route Nationale 2\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": 18.446397,\n \"lng\": -73.117341\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": 18.4478685302915,\n \"lng\": -73.11598666970849\n },\n \"southwest\": {\n \"lat\": 18.44517056970849,\n \"lng\": -73.11868463029151\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"d41279fba5171503423b484c053e10cfd7e19d3f\",\n \"name\": \"Deronsley\",\n \"opening_hours\": {\n \"open_now\": false\n },\n \"place_id\": \"ChIJI85qqhYVuI4RJGmozL8fwpc\",\n \"plus_code\": {\n \"compound_code\": \"CVWM+H3 Reynolds Terminals, Haiti\",\n \"global_code\": \"77C8CVWM+H3\"\n },\n \"rating\": 3.4,\n \"reference\": \"ChIJI85qqhYVuI4RJGmozL8fwpc\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 5,\n \"vicinity\": \"Reynolds Terminals\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": 18.399652,\n \"lng\": -73.09567899999999\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": 18.4007584802915,\n \"lng\": -73.09500421970849\n },\n \"southwest\": {\n \"lat\": 18.3980605197085,\n \"lng\": -73.0977021802915\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"0e697e0b6190a7ccfc613ca8a699869bf828409f\",\n \"name\": \"Anne-sthany Gilles\",\n \"place_id\": \"ChIJE0WNlahruI4R950HVvzEEWw\",\n \"plus_code\": {\n \"compound_code\": \"9WX3+VP Chalon, Haiti\",\n \"global_code\": \"77C89WX3+VP\"\n },\n \"rating\": 3,\n \"reference\": \"ChIJE0WNlahruI4R950HVvzEEWw\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 1,\n \"vicinity\": \"Haiti\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": 18.4395252,\n \"lng\": -73.08627349999999\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": 18.4408741802915,\n \"lng\": -73.0849245197085\n },\n \"southwest\": {\n \"lat\": 18.4381762197085,\n \"lng\": -73.0876224802915\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"5295b672df5db8938dc12a414b06cc5c8e4f31cd\",\n \"name\": \"MMM K-TICHON\",\n \"place_id\": 
\"ChIJq_R56Lc_uI4RD1nCisorsLU\",\n \"reference\": \"ChIJq_R56Lc_uI4RD1nCisorsLU\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"vicinity\": \"Miragoane\"\n }\n ],\n \"status\": \"OK\"\n}\n{\n \"html_attributions\": [],\n \"next_page_token\": \"CqQCHgEAAEwyYIFqw-C-TLZaHip0KLxwXiI7Iv2DP_58UK_qiceCsna8k2xrcx_NcP5zFctnjVraDrf6HdTcrzv7INOtr028D547XZLdSpJcHgyyFxilweqdSwfbBYsyq3ocYhGkLxjWmOWEaN-oEBcJxaJDVthDmBqACHniADKjV-DNxXgrcA2stHc9ygsf39axL8B-pYXJQ0nHFzGEZZmMydyqOy7mSKI-1S1GIngkThprj1CsiD3zX5AdCO2gj8a_CjKjOILC2AYMEuczmfRrFi16M-VZ9a_Xcfnalqt-gv1LUp2T6wsIGLDhyXEYbTH1-5X2AKrvyvNbh_yA8dqlJXIikS3Y9OsdJ42XPdAon99WAV2TMowKGaHhG_UfhxJ8bVVHXBIQhOHCm7-4sW-Enq6Yyb_MpBoUBb6ZlJrGzWDEiInsikui2598_SY\",\n \"results\": [\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9533807,\n \"lng\": 115.8491911\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9521179197085,\n \"lng\": 115.8506315302915\n },\n \"southwest\": {\n \"lat\": -31.9548158802915,\n \"lng\": 115.8479335697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"fd7d95893428de354aa32383cebc05e0ea1618da\",\n \"name\": \"Rendezvous Hotel Perth Central\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 1363,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/114961405905941645555\\\">Rendezvous Hotel Perth Central</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAQK3Rf4nMgGMoY72Wo_8gj9UOnxZhtNCFireSeknkmm-jUfrWp5aPnMno9rwVknidBOTjyIYNPjjnp2gPhHvUpGwewOy39116s2pJpWFF1OL_fXjdGaGRCZ7MHxqvOD-0EhBrDHCQvUxjvavvS4JFZ-9JGhSzp6xccCTh17rbRvi0CRVBMCF9SQ\",\n \"width\": 2048\n }\n ],\n \"place_id\": \"ChIJOTf4dSulMioRHr2rdvpdwcU\",\n \"plus_code\": {\n \"compound_code\": \"2RWX+JM Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2RWX+JM\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJOTf4dSulMioRHr2rdvpdwcU\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"restaurant\",\n \"food\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 407,\n \"vicinity\": \"24 Mount Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.95436680000001,\n \"lng\": 115.8529193\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9529603697085,\n \"lng\": 115.8541777302915\n },\n \"southwest\": {\n \"lat\": -31.9556583302915,\n \"lng\": 115.8514797697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"3ec369f3852887d35c00b702d25fa4be79322288\",\n \"name\": \"Parmelia Hilton Perth\",\n \"photos\": [\n {\n \"height\": 799,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/111085316822416525956\\\">Parmelia Hilton Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAnCScwpcmmW6w84xyvyvXUKSQnnSLTzQv5etMpfBxCGGb1gM5l_YI_UfJljSRZa8zoX_JThIzrhmUxkkIQnkxSVyR7ffh1NZ3hRBYy0QvShSZzfQcryB2YTgCErKcqzemEhDEQ0ds_GYKMWHPawbQhiSLGhQM0xRBwr-NqFL006JZmueOF3IZ7Q\",\n \"width\": 1200\n }\n ],\n \"place_id\": \"ChIJQdM1JtW6MioRxAJto5QX2Qw\",\n \"plus_code\": {\n \"compound_code\": \"2VW3+75 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW3+75\"\n },\n \"rating\": 4.2,\n \"reference\": \"ChIJQdM1JtW6MioRxAJto5QX2Qw\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 1296,\n \"vicinity\": \"14 Mill Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n 
\"lat\": -31.94939419999999,\n \"lng\": 115.8517162\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9479618197085,\n \"lng\": 115.8531304302915\n },\n \"southwest\": {\n \"lat\": -31.95065978029149,\n \"lng\": 115.8504324697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"72722fd6ae85ff6dff6f490d37f65bbfa0f02681\",\n \"name\": \"Four Points by Sheraton Perth\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 4073,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/104024662815971438514\\\">Four Points by Sheraton Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAA0DqyBOPQTLr_TfcyIW8oLMToklXL1dT2qUhEbydM0hrOSVmVYBaomR-vyjTWexSF15a5Iur8fk_vZ4bkeMR0jo0o-RQlyZMdx8Zu3HzfsinGAzNi03hfDeJb39ljxXwzEhD2kCO1l6Tzd9YKr2l6kGpYGhTZ8rc4T-TIV7YGVcSsBw5yKfFbEA\",\n \"width\": 6109\n }\n ],\n \"place_id\": \"ChIJBWMI1iylMioR3UYkhAM3FxU\",\n \"plus_code\": {\n \"compound_code\": \"3V22+6M Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ3V22+6M\"\n },\n \"rating\": 4.1,\n \"reference\": \"ChIJBWMI1iylMioR3UYkhAM3FxU\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 926,\n \"vicinity\": \"707 Wellington Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9528936,\n \"lng\": 115.8613784\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.95150436970849,\n \"lng\": 115.8626042802915\n },\n \"southwest\": {\n \"lat\": -31.95420233029149,\n \"lng\": 115.8599063197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"f0115a8171b1e01482507d2801cf84bed57e47e0\",\n \"name\": \"Adina Apartment Hotel Perth Barrack Plaza\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 1920,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/107175931796469654758\\\">Adina Apartment Hotel Perth Barrack Plaza</a>\"\n ],\n \"photo_reference\": \"CmRaAAAA6_qcjgrLjqFS8YnJPQWZ-RVxH8IvHDzhtfkD9deR4KqpZACy14yxsGv_sJo-So2SDMav6mla8Y8Uvkzv0LOya1ni7n2F1uv0aVjs5a5E2b_ZAndqeo6gu_JBk64Y6EU9EhC-azDE-qHg0Ouq33JHmJV5GhSBcz7KVYg2x6OKwnIV_afQV7ZCag\",\n \"width\": 2880\n }\n ],\n \"place_id\": \"ChIJBUZtrNe6MioRwkIEJHp4NPk\",\n \"plus_code\": {\n \"compound_code\": \"2VW6+RH Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW6+RH\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJBUZtrNe6MioRwkIEJHp4NPk\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"restaurant\",\n \"food\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 315,\n \"vicinity\": \"138 Barrack Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.95381309999999,\n \"lng\": 115.8482513\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9525530197085,\n \"lng\": 115.8495622302915\n },\n \"southwest\": {\n \"lat\": -31.95525098029149,\n \"lng\": 115.8468642697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"d155dfc31b6d02cd15b67a67780804fc59fa6539\",\n \"name\": \"Mountway Holiday Apartments\",\n \"opening_hours\": {\n \"open_now\": false\n },\n \"photos\": [\n {\n \"height\": 532,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/103683177472726661988\\\">Mountway Holiday Apartments</a>\"\n ],\n \"photo_reference\": 
\"CmRaAAAA-Cnb0TlsHZjmM4CNc4CdI8HXhfYyfDFf4PQOFbHjPTE-rxONjd6soAD2MHEfrwmzDp3Ndm2c2N-9OX1ofWjmpzGr68LQWeHAfsOhy3RICcY1sgXqsWCah0IzL0CE04AgEhASbPKAw80NB14gGmHarSRhGhTjRHteX6YYnwgUoVdAi-xiiTyXPg\",\n \"width\": 800\n }\n ],\n \"place_id\": \"ChIJnWSCkSulMioR_zKcRl3Owxw\",\n \"plus_code\": {\n \"compound_code\": \"2RWX+F8 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2RWX+F8\"\n },\n \"rating\": 3,\n \"reference\": \"ChIJnWSCkSulMioR_zKcRl3Owxw\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 65,\n \"vicinity\": \"36 Mount Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9523681,\n \"lng\": 115.8565647\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9509315697085,\n \"lng\": 115.8579547302915\n },\n \"southwest\": {\n \"lat\": -31.9536295302915,\n \"lng\": 115.8552567697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"fac3de0c263c5788ecdf2c3fc76b054a3c64785e\",\n \"name\": \"Mantra on Murray Perth\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 2000,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/113603882250967720292\\\">Mantra on Murray Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAGw_dmNnbP1x9JyrNO7gsOgrJYKKJNbFoMItWAr-2tBUUr3knpmKzXC9lo8dI-kyOtP-p40w2fcP0228pNgy4kADO5QGgwnVPxPOftODc5g_8ijrRo8oLKykDT2IIZWJqEhDMWLF-dTsj335IaSOLOAhNGhSRzanU1oGhjrgKuuQ8hslPH4DGKg\",\n \"width\": 3000\n }\n ],\n \"place_id\": \"ChIJU2jzP9S6MioRCskxh4iZcMQ\",\n \"plus_code\": {\n \"compound_code\": \"2VX4+3J Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VX4+3J\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJU2jzP9S6MioRCskxh4iZcMQ\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 440,\n \"vicinity\": \"305 Murray Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9517161,\n \"lng\": 115.8557121\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9504381197085,\n \"lng\": 115.8570282802915\n },\n \"southwest\": {\n \"lat\": -31.9531360802915,\n \"lng\": 115.8543303197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"f9678024ce5befda5220e39befa5c05d1409adf6\",\n \"name\": \"ibis Perth\",\n \"photos\": [\n {\n \"height\": 6144,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/101604342793768393975\\\">ibis Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAA5q1xftAFLMNJE_Z7nX1daYHO8SIradDHHT-Yrt-DY9hMAe-KQqy7XRwP1TyjCa8Ib91S_RgzcJ15owk3mbSTu9yRBoT-TLAfY9Rel-cNvDLBuD_sZC-cjdsF2Q8MdcFDEhBr55osWxVSvr_CrNUISoebGhT183MF81tORJfgBRLoPhe1YeA7YA\",\n \"width\": 4096\n }\n ],\n \"place_id\": \"ChIJRaeMetS6MioRD8Gs1nMuFIw\",\n \"plus_code\": {\n \"compound_code\": \"2VX4+87 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VX4+87\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJRaeMetS6MioRD8Gs1nMuFIw\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 891,\n \"vicinity\": \"334 Murray Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.954359,\n \"lng\": 115.862821\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.95295841970851,\n \"lng\": 115.8640960302915\n },\n \"southwest\": {\n \"lat\": 
-31.9556563802915,\n \"lng\": 115.8613980697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"a02e63279ab346f85325ec7a3fb7670ede5134d4\",\n \"name\": \"Pensione Hotel Perth\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 3012,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/113104561067478579365\\\">Pensione Hotel Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAANxaj8qbQ-JTTBR9sC5GOA8ostFynd8pt2zMCEDJ0-FD54C-Xb4BzGUIjOSr76tlMz0eJPDM7czaTNL1Kus20k8rXTyv3dCdI38fmU3t99PRqJbVWCbudjn0RHf5izbH3EhCBuUyq3ftG-JUXdm7-2OIOGhRQhwjWYr4PeHyonyCczIAu3PmImg\",\n \"width\": 4614\n }\n ],\n \"place_id\": \"ChIJJz-f5te6MioRju7kni7eDtg\",\n \"plus_code\": {\n \"compound_code\": \"2VW7+74 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW7+74\"\n },\n \"rating\": 4.1,\n \"reference\": \"ChIJJz-f5te6MioRju7kni7eDtg\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 435,\n \"vicinity\": \"70 Pier Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.95464,\n \"lng\": 115.861885\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9535154697085,\n \"lng\": 115.8631261802915\n },\n \"southwest\": {\n \"lat\": -31.9562134302915,\n \"lng\": 115.8604282197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"ef39ad48f7a6c0d5059b8ea0d3e694ea9aefd0c8\",\n \"name\": \"Criterion Cafe\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 3024,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/110002192035895078042\\\">OLIE SCAPE</a>\"\n ],\n \"photo_reference\": \"CmRaAAAARNqWj62xzc62AbNeyAQQYjQ0SkJhM34lE0Fa8mNMmox0_m6CJP8CArVE2wjtqiV9U3Qon-mGl0MEyJbuJLuAvoDIGxKK_MuPckdZnvrsWvGiDIOXUOY5eaH3SJGT8BkWEhARji-Ikt0IVSoEwKI8eCVnGhQjBYQe1J1_NfaY4HIkofqDpQXtJg\",\n \"width\": 4032\n }\n ],\n \"place_id\": \"ChIJ7RtU1te6MioR9IExnZIRPI8\",\n \"plus_code\": {\n \"compound_code\": \"2VW6+4Q Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW6+4Q\"\n },\n \"price_level\": 1,\n \"rating\": 3.2,\n \"reference\": \"ChIJ7RtU1te6MioR9IExnZIRPI8\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"restaurant\",\n \"food\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 231,\n \"vicinity\": \"560 Hay Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.940312,\n \"lng\": 115.85953\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.93888726970849,\n \"lng\": 115.8607110302915\n },\n \"southwest\": {\n \"lat\": -31.94158523029149,\n \"lng\": 115.8580130697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"7ffd2a574a257b1cb19f2dccf8cb33b85b310c2b\",\n \"name\": \"The Witch's Hat\",\n \"photos\": [\n {\n \"height\": 600,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/110090071406381870698\\\">Mike P</a>\"\n ],\n \"photo_reference\": \"CmRaAAAATPjC2kVXchtKVp78aRtq4-7wtG5pygv5RvFifI15Z7-180wSZWyK-1ZiUgtDAp11RDRlU5v3ERCjftYupuumI_V4_MmKtxoKkLzZmNeueC0vQB9BtZQqRHcvNKgWmZUGEhDJuFGWH2jEZ1OZGK330k7pGhT-JFUnmI7CIKUKidBCOhcSuTGmgg\",\n \"width\": 800\n }\n ],\n \"place_id\": \"ChIJo3L4t866MioRIU3QmqE55HU\",\n \"plus_code\": {\n \"compound_code\": \"3V55+VR Perth, Western Australia, Australia\",\n \"global_code\": 
\"4PWQ3V55+VR\"\n },\n \"rating\": 4.3,\n \"reference\": \"ChIJo3L4t866MioRIU3QmqE55HU\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 101,\n \"vicinity\": \"148 Palmerston Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9540003,\n \"lng\": 115.853616\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9525723197085,\n \"lng\": 115.8550031302915\n },\n \"southwest\": {\n \"lat\": -31.9552702802915,\n \"lng\": 115.8523051697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"d055ee80935c0e2e05a8891f0701b2e9b5ce3939\",\n \"name\": \"Citadines St Georges Terrace Perth\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 1000,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/101716013807915246646\\\">Citadines St Georges Terrace Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAJPF2SUmE6b_Vu6lYqlj90a2vN4HdE0pQwjadHRIAaJ8AY8g3Lb5k0FkwAYB5-ELnoomKY-K1bt8YievNBtD29KxfzyDFkinxDpQ5vUtIxp0pO9XMtfdp9E58JsIinPHaEhBsxJa-FvYY7GL-iQv0UvKuGhRfEOwF_8UMf5pyIuZ30GhE44wloQ\",\n \"width\": 666\n }\n ],\n \"place_id\": \"ChIJoUC_39S6MioRFsRnOaCDdu0\",\n \"plus_code\": {\n \"compound_code\": \"2VW3+9C Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW3+9C\"\n },\n \"rating\": 4.1,\n \"reference\": \"ChIJoUC_39S6MioRFsRnOaCDdu0\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 180,\n \"vicinity\": \"185 Saint Georges Terrace, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.952984,\n \"lng\": 115.8559137\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9517349697085,\n \"lng\": 115.8572164302915\n },\n \"southwest\": {\n \"lat\": -31.9544329302915,\n \"lng\": 115.8545184697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"bb1079f284bc82b4f7ce6cec9388c7b1ad528dc3\",\n \"name\": \"Holiday Inn Perth City Centre\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 1348,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/100436086830913215587\\\">Holiday Inn Perth City Centre</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAPxHkb0KWWtE-q2r5rxha6clJ_RjqWoiS0IBvXAdmFS4dcrx5aDRFAivHo148tM2xzV03JxvJiC3KwXtq4xee0HTrlIwkI2JTaJWy5c0teCnRejBpy4T0TP9Ph3nZ7yE-EhBzLXlrbVvclKebcstNP7BQGhT4oqrw2_Ee2E9vTHDXS8pWfb4p2g\",\n \"width\": 899\n }\n ],\n \"place_id\": \"ChIJk9zmT9S6MioRCo4m9_hw8Zk\",\n \"plus_code\": {\n \"compound_code\": \"2VW4+R9 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VW4+R9\"\n },\n \"rating\": 4.3,\n \"reference\": \"ChIJk9zmT9S6MioRCo4m9_hw8Zk\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 529,\n \"vicinity\": \"778/788 Hay Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9556935,\n \"lng\": 115.8528176\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9544735197085,\n \"lng\": 115.8540931802915\n },\n \"southwest\": {\n \"lat\": -31.9571714802915,\n \"lng\": 115.8513952197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"e72a526e238d81991431f0e2b3a924734082b699\",\n \"name\": \"Adina Apartment Hotel Perth\",\n \"opening_hours\": {\n 
\"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 1920,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/114533277066539936327\\\">Adina Apartment Hotel Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAANkc7W5tDaQaPEV7M2nQAYqXw8UvPr4hyrcjgEnJnGK_RhPAvtJNxUlmqvp8VGVXi-Lwhr233KJuLw4BjC1yRVq_9pRoNi2ixBdNmMZTMmndu8djYACDyO4JxQdjEzA0XEhCfr9BMwjFdLt3JIoYYCwI_GhSfI3cWAaAN5zHCU35m_Hk9frbbyg\",\n \"width\": 2880\n }\n ],\n \"place_id\": \"ChIJSSdQSdW6MioRIrIPtPMtzs8\",\n \"plus_code\": {\n \"compound_code\": \"2VV3+P4 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VV3+P4\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJSSdQSdW6MioRIrIPtPMtzs8\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"restaurant\",\n \"food\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 423,\n \"vicinity\": \"33 Mounts Bay Road, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9497232,\n \"lng\": 115.8350976\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9484307197085,\n \"lng\": 115.8364211802915\n },\n \"southwest\": {\n \"lat\": -31.9511286802915,\n \"lng\": 115.8337232197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"c67c37cd6c730cee8391611738b2b2a81a46e680\",\n \"name\": \"Quest on Rheola\",\n \"opening_hours\": {\n \"open_now\": false\n },\n \"photos\": [\n {\n \"height\": 1365,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/112476878220574105831\\\">Quest on Rheola</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAFdVQYBq0xMdvSKgI_DdkTa_z9zlmNAdp6zeHT_Xlc-w-8s0n2qYBMPJez4syVIKrsNtxCIIHfW3tQJKz9ynH_lE0rbQB_Ak_XzLImhOJs4XEGoqPasNjceR0ObyW292nEhCnPmTPd2DixjjSXYqZiNueGhQ76cFC4XQn8dlw-9aWr80IvLHVQQ\",\n \"width\": 2048\n }\n ],\n \"place_id\": \"ChIJwf0EUDulMioRWIp5i3b1Sc0\",\n \"plus_code\": {\n \"compound_code\": \"3R2P+42 West Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ3R2P+42\"\n },\n \"rating\": 3.9,\n \"reference\": \"ChIJwf0EUDulMioRWIp5i3b1Sc0\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 20,\n \"vicinity\": \"18 Rheola Street, West Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9562139,\n \"lng\": 115.8633944\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9548430697085,\n \"lng\": 115.8646733802915\n },\n \"southwest\": {\n \"lat\": -31.95754103029149,\n \"lng\": 115.8619754197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"dfa926ded84e3c2e3d3c1b2310abfa83bc306908\",\n \"name\": \"Mercure Perth\",\n \"opening_hours\": {\n \"open_now\": true\n },\n \"photos\": [\n {\n \"height\": 768,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/115323625113146494529\\\">Mercure Perth</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAh9iD1aW9MOZ_SeVx4BqhlEX43S1iS_O9oyAfsdBqrgtcibwbJDbdfABEhzZAuVP17pWST5EyRLnfcOYcvJmj1rCUQE5l4Ot-25dp2x0jRxQqfD2sTqhNoYZvl5KPCpXfEhB_yQ4EFbFFYsJOkd5X-SLpGhQxNEOflg_KWr_v_FjEk-j5AYWPHg\",\n \"width\": 1024\n }\n ],\n \"place_id\": \"ChIJczH-DCi7MioRqJrsWgcNj00\",\n \"plus_code\": {\n \"compound_code\": \"2VV7+G9 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VV7+G9\"\n },\n \"rating\": 4,\n \"reference\": \"ChIJczH-DCi7MioRqJrsWgcNj00\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n 
\"user_ratings_total\": 1044,\n \"vicinity\": \"10 Irwin Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.949819,\n \"lng\": 115.863759\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9484988197085,\n \"lng\": 115.8651698302915\n },\n \"southwest\": {\n \"lat\": -31.9511967802915,\n \"lng\": 115.8624718697085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"1d6d1c5474ded0293ddd34dbb346bb6959ed3e37\",\n \"name\": \"The Emperor's Crown Hostel\",\n \"photos\": [\n {\n \"height\": 1334,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/107156169224748666742\\\">The Emperor's Crown Hostel</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAC8SAerhQFGXYQcztAT5q9v1LQan83TZ5lBigcFWSrKMIn4-Z-0pkOgbQmMFekxQF9mWqOBkdvusqy-RNDcV9btK85VSuGzWkh_JLwtnUR6_Tt-KkyWMvQJ_9VSbti0y9EhCLg1FtwKH96fAJ6rENlCrRGhTQutG0j7uiNqgmfRIsvcLLwHAyvw\",\n \"width\": 750\n }\n ],\n \"place_id\": \"ChIJNd9DUde6MioRFXKeOdKeVc8\",\n \"plus_code\": {\n \"compound_code\": \"3V27+3G Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ3V27+3G\"\n },\n \"rating\": 3.9,\n \"reference\": \"ChIJNd9DUde6MioRFXKeOdKeVc8\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 227,\n \"vicinity\": \"85 Stirling Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9417277,\n \"lng\": 115.8608031\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9403556697085,\n \"lng\": 115.8620995802915\n },\n \"southwest\": {\n \"lat\": -31.9430536302915,\n \"lng\": 115.8594016197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"ba209bce7819c04578af92274c378602ac9c9917\",\n \"name\": \"Hotel Northbridge\",\n \"photos\": [\n {\n \"height\": 2639,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/114412105782350929744\\\">Derek Graham</a>\"\n ],\n \"photo_reference\": \"CmRaAAAA-nykeOliuvkaEZcv_LKplepRRGdeZVkGm5dulpTgh2DHJ0CK7qp4tt9uGr9QyePuyLzF-C-GB-wVZsxDEsfrnbfANYHgfVusbkUNuZ9tfJCImghvKzpo5PNXUgRUv-Y5EhAuijjY2zanWPDatSYccZzRGhR5fVVrMfMyyLAaA_Tk_VNhBmG_Xw\",\n \"width\": 5824\n }\n ],\n \"place_id\": \"ChIJZYMoG866MioRO70qigLyhOc\",\n \"plus_code\": {\n \"compound_code\": \"3V56+88 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ3V56+88\"\n },\n \"rating\": 4.1,\n \"reference\": \"ChIJZYMoG866MioRO70qigLyhOc\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 315,\n \"vicinity\": \"210 Lake Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9418736,\n \"lng\": 115.8614531\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.94061756970851,\n \"lng\": 115.8627426802915\n },\n \"southwest\": {\n \"lat\": -31.9433155302915,\n \"lng\": 115.8600447197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"333d5067ad89a7318a03a05be7653671472d82b0\",\n \"name\": \"Coolibah Lodge Backpackers\",\n \"opening_hours\": {\n \"open_now\": false\n },\n \"photos\": [\n {\n \"height\": 1536,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/103514313706860804778\\\">Coolibah Lodge Backpackers</a>\"\n ],\n \"photo_reference\": 
\"CmRaAAAAV7WOkfNdmXHaZBWtKmF2dppEadHqe8Q_ZGyEHeS2XpCsA2sec0KmLVmC5bjcvvZ_UXGfHSIOgfCJDjuFiko5kAjnSoeho0wzbzUsnSpmc_C4GeAZoHF0yvrT7fiiFaJyEhBLWyV-7o4P6iEYUW1eN1XpGhRLwBSQbWB6NxDiHqPKkyrFYij2tA\",\n \"width\": 2048\n }\n ],\n \"place_id\": \"ChIJb00eAc-6MioRpa1uguusyy4\",\n \"plus_code\": {\n \"compound_code\": \"3V56+7H Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ3V56+7H\"\n },\n \"rating\": 4.2,\n \"reference\": \"ChIJb00eAc-6MioRpa1uguusyy4\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 75,\n \"vicinity\": \"194 Brisbane Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9512873,\n \"lng\": 115.8578095\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9499536197085,\n \"lng\": 115.8592453802915\n },\n \"southwest\": {\n \"lat\": -31.9526515802915,\n \"lng\": 115.8565474197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"df9338f6c25905c61a2e1db35db9fa40d76459b3\",\n \"name\": \"The Royal Hotel\",\n \"photos\": [\n {\n \"height\": 3024,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/104773266897188323800\\\">Dianne McNevin</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAAQ29kQJC2vtzwoVxNL3tTTckq2sQWLjhHJChHtBQ8WP3hmfRdZbCFwx-uRN0BP9gef1gYxpmb4UWTbGFkrHdRbAM0JRdCOwtFyBaTSUKxQHmqc3Q1ZeFGHYWxnD3ApiHEhDm4YXkx9Y4YWUbQJGNCF3aGhQzTXhyCfjlx9YJUREuqEhjEvz58Q\",\n \"width\": 4032\n }\n ],\n \"place_id\": \"ChIJ4d-0HNS6MioR9fcT9iuzUhE\",\n \"plus_code\": {\n \"compound_code\": \"2VX5+F4 Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VX5+F4\"\n },\n \"price_level\": 2,\n \"rating\": 4.1,\n \"reference\": \"ChIJ4d-0HNS6MioR9fcT9iuzUhE\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 37,\n \"vicinity\": \"531 Wellington Street, Perth\"\n },\n {\n \"geometry\": {\n \"location\": {\n \"lat\": -31.9599179,\n \"lng\": 115.8669156\n },\n \"viewport\": {\n \"northeast\": {\n \"lat\": -31.9586639197085,\n \"lng\": 115.8682488802915\n },\n \"southwest\": {\n \"lat\": -31.9613618802915,\n \"lng\": 115.8655509197085\n }\n }\n },\n \"icon\": \"https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png\",\n \"id\": \"95c01f2aeb43950bb16cc898cc570dac4aac141e\",\n \"name\": \"City Waters Motel\",\n \"photos\": [\n {\n \"height\": 2352,\n \"html_attributions\": [\n \"<a href=\\\"https://maps.google.com/maps/contrib/117912872433455124085\\\">Hasnadi Supathir</a>\"\n ],\n \"photo_reference\": \"CmRaAAAAQP3lENEgnTxO61kVWByrHJDe0TQobT4zVtVtX6s9WuLO8h1VllYG6-SqRTdEhmzNZm8CcfhYCimnicZtrXthLuDFAAa70_YH-7iIW9RhHM4sSuGAD5cZ2huhueIRckfgEhAIvAKZKlDxbq2liwuewV0wGhSrQhYyogk7bjuoG4kOUQ4j3VMhjg\",\n \"width\": 4160\n }\n ],\n \"place_id\": \"ChIJ22-QLCS7MioRKrnTcxeLt2U\",\n \"plus_code\": {\n \"compound_code\": \"2VR8+2Q Perth, Western Australia, Australia\",\n \"global_code\": \"4PWQ2VR8+2Q\"\n },\n \"rating\": 3.9,\n \"reference\": \"ChIJ22-QLCS7MioRKrnTcxeLt2U\",\n \"scope\": \"GOOGLE\",\n \"types\": [\n \"lodging\",\n \"point_of_interest\",\n \"establishment\"\n ],\n \"user_ratings_total\": 157,\n \"vicinity\": \"118 Terrace Road, Perth\"\n }\n ],\n \"status\": \"OK\"\n}\n"
],
[
"hotel_df\n",
"_____no_output_____"
],
[
"# NOTE: Do not change any of the code in this cell\n\n# Using the template add the hotel marks to the heatmap\ninfo_box_template = \"\"\"\n<dl>\n<dt>Name</dt><dd>{Hotel Name}</dd>\n<dt>City</dt><dd>{City}</dd>\n<dt>Country</dt><dd>{Country}</dd>\n</dl>\n\"\"\"\n# Store the DataFrame Row\n# NOTE: be sure to update with your DataFrame name\nhotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]\nlocations = hotel_df[[\"Lat\", \"Lng\"]]",
"_____no_output_____"
],
[
"# Add marker layer ontop of heat map\nmarker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)\n\n# Display Map\nfig.add_layer(marker_layer) \nfig",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00bfdb8ca5c268c4dbd67243a0f81e808b3938d | 26,941 | ipynb | Jupyter Notebook | SVM.ipynb | bbrighttaer/data_science_nbs | 21c1b088e758b0cf801bc9c8da87dfd916561163 | [
"MIT"
] | null | null | null | SVM.ipynb | bbrighttaer/data_science_nbs | 21c1b088e758b0cf801bc9c8da87dfd916561163 | [
"MIT"
] | null | null | null | SVM.ipynb | bbrighttaer/data_science_nbs | 21c1b088e758b0cf801bc9c8da87dfd916561163 | [
"MIT"
] | null | null | null | 39.502933 | 10,172 | 0.638655 | [
[
[
"# Support Vector Machine (SVM) Tutorial",
"_____no_output_____"
],
[
"Follow from: [link](https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47)",
"_____no_output_____"
],
[
"- SVM can be used for both regression and classification problems.\n- The goal of SVM models is to find a hyperplane in an N-dimensional space that distinctly classifies the data points.\n- The hyperplane must be the one with the maximum margin.\n- Support vectors are data points that are closer to the hyperplane and influence the position and orientation of the hyperplane.\n- In SVM, if the output of the model is greater than 1, it is identified with one class and if it is -1, it is identified with the other class $[-1, 1]$.",
"_____no_output_____"
]
],
[
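[
"# Tiny, self-contained sketch of the decision rule described above.\n# The weight vector w and bias b below are made-up values for illustration only.\nimport numpy as np\nw, b = np.array([0.4, 0.8]), -2.0\nx_new = np.array([5.0, 1.5])\nscore = np.dot(w, x_new) + b\nprint('predicted class:', 1 if score >= 0 else -1, '(score = %.2f)' % score)",
"_____no_output_____"
],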
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.utils import shuffle\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score",
"/home/bbrighttaer/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n return f(*args, **kwds)\n/home/bbrighttaer/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88\n return f(*args, **kwds)\n"
],
[
"%matplotlib inline\nplt.style.use('seaborn')",
"_____no_output_____"
],
[
"df = pd.read_csv('data/Iris.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df = df.drop(['Id'], axis=1)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"target = df['Species']",
"_____no_output_____"
],
[
"s = list(set(target))",
"_____no_output_____"
],
[
"rows = list(range(100, 150))",
"_____no_output_____"
],
[
"# Since the Iris dataset has three classes, we remove the third class. This then results in a binary classification problem\ndf = df.drop(df.index[rows])",
"_____no_output_____"
],
[
"x, y = df['SepalLengthCm'], df['PetalLengthCm']",
"_____no_output_____"
],
[
"setosa_x, setosa_y = x[:50], y[:50]\nversicolor_x, versicolor_y = x[50:], y[50:]",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,6))\nplt.scatter(setosa_x, setosa_y, marker='.', color='green')\nplt.scatter(versicolor_x, versicolor_y, marker='*', color='red')\nplt.show()",
"_____no_output_____"
],
[
"df = df.drop(['SepalWidthCm', 'PetalWidthCm'], axis=1)",
"_____no_output_____"
],
[
"Y = []",
"_____no_output_____"
],
[
"target = df['Species']",
"_____no_output_____"
],
[
"for val in target:\n if(val == 'Iris-setosa'):\n Y.append(-1)\n else:\n Y.append(1)",
"_____no_output_____"
],
[
"df = df.drop(['Species'], axis=1)",
"_____no_output_____"
],
[
"X = df.values.tolist()",
"_____no_output_____"
],
[
"# Shuffle and split the data\nX, Y = shuffle(X, Y)\nx_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=0.9)",
"/home/bbrighttaer/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_split.py:2179: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n FutureWarning)\n"
],
[
"x_train = np.array(x_train)\ny_train = np.array(y_train).reshape(90, 1)\nx_test = np.array(x_test)\ny_test = np.array(y_test).reshape(10, 1)",
"_____no_output_____"
]
],
[
[
"## SVM implementation with Numpy",
"_____no_output_____"
]
],
[
[
"train_f1 = x_train[:, 0].reshape(90, 1)\ntrain_f2 = x_train[:, 1].reshape(90, 1)",
"_____no_output_____"
],
[
"w1, w2 = np.zeros((90, 1)), np.zeros((90, 1))",
"_____no_output_____"
],
[
"epochs = 1\nalpha = 1e-4",
"_____no_output_____"
],
[
"while epochs < 10000:\n y = w1 * train_f1 + w2 * train_f2\n prod = y * y_train\n count = 0\n for val in prod:\n if val >= 1:\n cost = 0\n w1 = w1 - alpha * (2 * 1/epochs * w1)\n w2 = w2 - alpha * (2 * 1/epochs * w2)\n else:\n cost = 1 - val\n w1 = w1 + alpha * (train_f1[count] * y_train[count] - 2 * 1/epochs * w1)\n w2 = w2 + alpha * (train_f2[count] * y_train[count] - 2 * 1/epochs * w2)\n count += 1\n epochs += 1",
"_____no_output_____"
]
],
[
[
"### Evaluation",
"_____no_output_____"
]
],
[
[
"index = list(range(10, 90))",
"_____no_output_____"
],
[
"w1 = np.delete(w1, index).reshape(10, 1)\nw2 = np.delete(w2, index).reshape(10, 1)",
"_____no_output_____"
],
[
"## Extract the test data features \ntest_f1 = x_test[:,0].reshape(10, 1)\ntest_f2 = x_test[:,1].reshape(10, 1)",
"_____no_output_____"
],
[
"## Predict\ny_pred = w1 * test_f1 + w2 * test_f2\npredictions = []\nfor val in y_pred:\n if val > 1:\n predictions.append(1)\n else:\n predictions.append(-1)\n\nprint(accuracy_score(y_test,predictions))",
"0.9\n"
]
],
[
[
"## SVM via sklearn",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC",
"_____no_output_____"
],
[
"clf = SVC(kernel='linear')",
"_____no_output_____"
],
[
"clf.fit(x_train, y_train)",
"/home/bbrighttaer/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:761: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
],
[
"y_pred = clf.predict(x_test)",
"_____no_output_____"
],
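[
"# Hypothetical extra step: visualize the linear boundary w.x + b = 0 learned by SVC.\n# coef_ and intercept_ are available because we chose kernel='linear'.\nw = clf.coef_[0]\nb = clf.intercept_[0]\nxx = np.linspace(x_train[:, 0].min(), x_train[:, 0].max(), 10)\nyy = -(w[0] * xx + b) / w[1]\nplt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), cmap='bwr')\nplt.plot(xx, yy, 'k-')\nplt.show()",
"_____no_output_____"
],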
[
"print(accuracy_score(y_test, y_pred))",
"1.0\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
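"code",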
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00c063b026a13f4d31b4cbb0703f34ec9cb4e02 | 1,649 | ipynb | Jupyter Notebook | papermill/tests/notebooks/broken1.ipynb | 69guitar1015/expmill | 16d4785f2dd249959acd897cbc31898098bf3c97 | [
"BSD-3-Clause"
] | 4 | 2020-05-22T17:35:43.000Z | 2022-02-02T10:29:48.000Z | papermill/tests/notebooks/broken1.ipynb | 69guitar1015/expmill | 16d4785f2dd249959acd897cbc31898098bf3c97 | [
"BSD-3-Clause"
] | 1 | 2022-03-21T09:23:29.000Z | 2022-03-21T09:23:29.000Z | papermill/tests/notebooks/broken1.ipynb | 69guitar1015/expmill | 16d4785f2dd249959acd897cbc31898098bf3c97 | [
"BSD-3-Clause"
] | 1 | 2020-02-12T13:51:52.000Z | 2020-02-12T13:51:52.000Z | 16.826531 | 36 | 0.471195 | [
[
[
"print(\"We're good.\")",
"_____no_output_____"
],
[
"assert False",
"_____no_output_____"
],
[
"print \"Shouldn't get here.\"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d00c13e1301a92e61bc77c2539e3129db275b117 | 12,453 | ipynb | Jupyter Notebook | TruthTables/TruthTables.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | 1 | 2020-05-20T14:02:15.000Z | 2020-05-20T14:02:15.000Z | TruthTables/TruthTables.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | null | null | null | TruthTables/TruthTables.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | null | null | null | 33.03183 | 254 | 0.572794 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d00c2510c69b6b4f18bfdf6790734ca9a477c3f5 | 13,364 | ipynb | Jupyter Notebook | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial | 56dee9dddd9b4ec69f830bad8992f889ff98b556 | [
"BSD-3-Clause"
] | null | null | null | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial | 56dee9dddd9b4ec69f830bad8992f889ff98b556 | [
"BSD-3-Clause"
] | null | null | null | notebooks/03.2-Regression-Forests.ipynb | DininduSenanayake/sklearn_tutorial | 56dee9dddd9b4ec69f830bad8992f889ff98b556 | [
"BSD-3-Clause"
] | null | null | null | 30.863741 | 333 | 0.600644 | [
[
[
"<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>",
"_____no_output_____"
],
[
"# Supervised Learning In-Depth: Random Forests",
"_____no_output_____"
],
[
"Previously we saw a powerful discriminative classifier, **Support Vector Machines**.\nHere we'll take a look at motivating another powerful algorithm. This one is a *non-parametric* algorithm called **Random Forests**.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\nplt.style.use('seaborn')",
"_____no_output_____"
]
],
[
[
"## Motivating Random Forests: Decision Trees",
"_____no_output_____"
],
[
"Random forests are an example of an *ensemble learner* built on decision trees.\nFor this reason we'll start by discussing decision trees themselves.\n\nDecision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification:",
"_____no_output_____"
]
],
[
[
"import fig_code\nfig_code.plot_example_decision_tree()",
"_____no_output_____"
]
],
[
[
"The binary splitting makes this extremely efficient.\nAs always, though, the trick is to *ask the right questions*.\nThis is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or \"splits\") contain the most information.\n\n### Creating a Decision Tree\n\nHere's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import make_blobs\n\nX, y = make_blobs(n_samples=300, centers=4,\n random_state=0, cluster_std=1.0)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');",
"_____no_output_____"
]
],
[
[
"We have some convenience functions in the repository that help ",
"_____no_output_____"
]
],
[
[
"from fig_code import visualize_tree, plot_tree_interactive",
"_____no_output_____"
]
],
[
[
"Now using IPython's ``interact`` (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:",
"_____no_output_____"
]
],
[
[
"plot_tree_interactive(X, y);",
"_____no_output_____"
]
],
[
[
"Notice that at each increase in depth, every node is split in two **except** those nodes which contain only a single class.\nThe result is a very fast **non-parametric** classification, and can be extremely useful in practice.\n\n**Question: Do you see any problems with this?**",
"_____no_output_____"
],
[
"### Decision Trees and over-fitting\n\nOne issue with decision trees is that it is very easy to create trees which **over-fit** the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\nclf = DecisionTreeClassifier()\n\nplt.figure()\nvisualize_tree(clf, X[:200], y[:200], boundaries=False)\nplt.figure()\nvisualize_tree(clf, X[-200:], y[-200:], boundaries=False)",
"_____no_output_____"
]
],
[
[
"The details of the classifications are completely different! That is an indication of **over-fitting**: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.",
"_____no_output_____"
],
[
"## Ensembles of Estimators: Random Forests\n\nOne possible way to address over-fitting is to use an **Ensemble Method**: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!\n\nOne of the most common ensemble methods is the **Random Forest**, in which the ensemble is made up of many decision trees which are in some way perturbed.\n\nThere are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:",
"_____no_output_____"
]
],
[
[
"def fit_randomized_tree(random_state=0):\n X, y = make_blobs(n_samples=300, centers=4,\n random_state=0, cluster_std=2.0)\n clf = DecisionTreeClassifier(max_depth=15)\n \n rng = np.random.RandomState(random_state)\n i = np.arange(len(y))\n rng.shuffle(i)\n visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,\n xlim=(X[:, 0].min(), X[:, 0].max()),\n ylim=(X[:, 1].min(), X[:, 1].max()))\n \nfrom ipywidgets import interact\ninteract(fit_randomized_tree, random_state=(0, 100));",
"_____no_output_____"
]
],
[
[
"See how the details of the model change as a function of the sample, while the larger characteristics remain the same!\nThe random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nclf = RandomForestClassifier(n_estimators=100, random_state=0)\nvisualize_tree(clf, X, y, boundaries=False);",
"_____no_output_____"
]
],
[
[
"By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!\n\n*(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the [scikit-learn documentation](http://scikit-learn.org/stable/modules/ensemble.html#forest)*)",
"_____no_output_____"
],
[
"## Quick Example: Moving to Regression\n\nAbove we were considering random forests within the context of classification.\nRandom forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is ``sklearn.ensemble.RandomForestRegressor``.\n\nLet's quickly demonstrate how this can be used:",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\n\nx = 10 * np.random.rand(100)\n\ndef model(x, sigma=0.3):\n fast_oscillation = np.sin(5 * x)\n slow_oscillation = np.sin(0.5 * x)\n noise = sigma * np.random.randn(len(x))\n\n return slow_oscillation + fast_oscillation + noise\n\ny = model(x)\nplt.errorbar(x, y, 0.3, fmt='o');",
"_____no_output_____"
],
[
"xfit = np.linspace(0, 10, 1000)\nyfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])\nytrue = model(xfit, 0)\n\nplt.errorbar(x, y, 0.3, fmt='o')\nplt.plot(xfit, yfit, '-r');\nplt.plot(xfit, ytrue, '-k', alpha=0.5);",
"_____no_output_____"
]
],
[
[
"As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!",
"_____no_output_____"
],
[
"## Example: Random Forest for Classifying Digits\n\nWe previously saw the **hand-written digits** data. Let's use that here to test the efficacy of the SVM and Random Forest classifiers.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_digits\ndigits = load_digits()\ndigits.keys()",
"_____no_output_____"
],
[
"X = digits.data\ny = digits.target\nprint(X.shape)\nprint(y.shape)",
"_____no_output_____"
]
],
[
[
"To remind us what we're looking at, we'll visualize the first few data points:",
"_____no_output_____"
]
],
[
[
"# set up the figure\nfig = plt.figure(figsize=(6, 6)) # figure size in inches\nfig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n\n# plot the digits: each image is 8x8 pixels\nfor i in range(64):\n ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])\n ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')\n \n # label the image with the target value\n ax.text(0, 7, str(digits.target[i]))",
"_____no_output_____"
]
],
[
[
"We can quickly classify the digits using a decision tree as follows:",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nfrom sklearn import metrics\n\nXtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)\nclf = DecisionTreeClassifier(max_depth=11)\nclf.fit(Xtrain, ytrain)\nypred = clf.predict(Xtest)",
"_____no_output_____"
]
],
[
[
"We can check the accuracy of this classifier:",
"_____no_output_____"
]
],
[
[
"metrics.accuracy_score(ypred, ytest)",
"_____no_output_____"
]
],
[
[
"and for good measure, plot the confusion matrix:",
"_____no_output_____"
]
],
[
[
"plt.imshow(metrics.confusion_matrix(ypred, ytest),\n interpolation='nearest', cmap=plt.cm.binary)\nplt.grid(False)\nplt.colorbar()\nplt.xlabel(\"predicted label\")\nplt.ylabel(\"true label\");",
"_____no_output_____"
]
],
[
[
"### Exercise\n1. Repeat this classification task with ``sklearn.ensemble.RandomForestClassifier``. How does the ``max_depth``, ``max_features``, and ``n_estimators`` affect the results?\n2. Try this classification with ``sklearn.svm.SVC``, adjusting ``kernel``, ``C``, and ``gamma``. Which classifier performs optimally?\n3. Try a few sets of parameters for each model and check the F1 score (``sklearn.metrics.f1_score``) on your results. What's the best F1 score you can reach?",
"_____no_output_____"
]
],
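[
[
"# A possible starting point for the exercise (a sketch, not the definitive answer).\n# Try varying max_depth, max_features and n_estimators, and compare F1 scores.\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import f1_score\n\nrf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)\nrf.fit(Xtrain, ytrain)\nprint(f1_score(ytest, rf.predict(Xtest), average='macro'))",
"_____no_output_____"
]
]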
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d00c4ee4e0f52fbdf1c754c9fe98923ef533b61d | 13,324 | ipynb | Jupyter Notebook | my_classes/NumericTypes/floats_internal_repres.ipynb | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/NumericTypes/floats_internal_repres.ipynb | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | my_classes/NumericTypes/floats_internal_repres.ipynb | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | [
"Unlicense"
] | null | null | null | 24.904673 | 295 | 0.430051 | [
[
[
"help(float)",
"Help on class float in module builtins:\n\nclass float(object)\n | float(x=0, /)\n | \n | Convert a string or number to a floating point number, if possible.\n | \n | Methods defined here:\n | \n | __abs__(self, /)\n | abs(self)\n | \n | __add__(self, value, /)\n | Return self+value.\n | \n | __bool__(self, /)\n | self != 0\n | \n | __divmod__(self, value, /)\n | Return divmod(self, value).\n | \n | __eq__(self, value, /)\n | Return self==value.\n | \n | __float__(self, /)\n | float(self)\n | \n | __floordiv__(self, value, /)\n | Return self//value.\n | \n | __format__(self, format_spec, /)\n | Formats the float according to format_spec.\n | \n | __ge__(self, value, /)\n | Return self>=value.\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __getnewargs__(self, /)\n | \n | __gt__(self, value, /)\n | Return self>value.\n | \n | __hash__(self, /)\n | Return hash(self).\n | \n | __int__(self, /)\n | int(self)\n | \n | __le__(self, value, /)\n | Return self<=value.\n | \n | __lt__(self, value, /)\n | Return self<value.\n | \n | __mod__(self, value, /)\n | Return self%value.\n | \n | __mul__(self, value, /)\n | Return self*value.\n | \n | __ne__(self, value, /)\n | Return self!=value.\n | \n | __neg__(self, /)\n | -self\n | \n | __pos__(self, /)\n | +self\n | \n | __pow__(self, value, mod=None, /)\n | Return pow(self, value, mod).\n | \n | __radd__(self, value, /)\n | Return value+self.\n | \n | __rdivmod__(self, value, /)\n | Return divmod(value, self).\n | \n | __repr__(self, /)\n | Return repr(self).\n | \n | __rfloordiv__(self, value, /)\n | Return value//self.\n | \n | __rmod__(self, value, /)\n | Return value%self.\n | \n | __rmul__(self, value, /)\n | Return value*self.\n | \n | __round__(self, ndigits=None, /)\n | Return the Integral closest to x, rounding half toward even.\n | \n | When an argument is passed, work like built-in round(x, ndigits).\n | \n | __rpow__(self, value, mod=None, /)\n | Return pow(value, self, mod).\n | \n | __rsub__(self, value, /)\n | Return value-self.\n | \n | __rtruediv__(self, value, /)\n | Return value/self.\n | \n | __sub__(self, value, /)\n | Return self-value.\n | \n | __truediv__(self, value, /)\n | Return self/value.\n | \n | __trunc__(self, /)\n | Return the Integral closest to x between 0 and x.\n | \n | as_integer_ratio(self, /)\n | Return integer ratio.\n | \n | Return a pair of integers, whose ratio is exactly equal to the original float\n | and with a positive denominator.\n | \n | Raise OverflowError on infinities and a ValueError on NaNs.\n | \n | >>> (10.0).as_integer_ratio()\n | (10, 1)\n | >>> (0.0).as_integer_ratio()\n | (0, 1)\n | >>> (-.25).as_integer_ratio()\n | (-1, 4)\n | \n | conjugate(self, /)\n | Return self, the complex conjugate of any float.\n | \n | hex(self, /)\n | Return a hexadecimal representation of a floating-point number.\n | \n | >>> (-0.1).hex()\n | '-0x1.999999999999ap-4'\n | >>> 3.14159.hex()\n | '0x1.921f9f01b866ep+1'\n | \n | is_integer(self, /)\n | Return True if the float is an integer.\n | \n | ----------------------------------------------------------------------\n | Class methods defined here:\n | \n | __getformat__(typestr, /) from builtins.type\n | You probably don't want to use this function.\n | \n | typestr\n | Must be 'double' or 'float'.\n | \n | It exists mainly to be used in Python's test suite.\n | \n | This function returns whichever of 'unknown', 'IEEE, big-endian' or 'IEEE,\n | little-endian' best describes the format of floating point numbers used by the\n | C 
type named by typestr.\n | \n | __set_format__(typestr, fmt, /) from builtins.type\n | You probably don't want to use this function.\n | \n | typestr\n | Must be 'double' or 'float'.\n | fmt\n | Must be one of 'unknown', 'IEEE, big-endian' or 'IEEE, little-endian',\n | and in addition can only be one of the latter two if it appears to\n | match the underlying C reality.\n | \n | It exists mainly to be used in Python's test suite.\n | \n | Override the automatic determination of C-level floating point type.\n | This affects how floats are converted to and from binary strings.\n | \n | fromhex(string, /) from builtins.type\n | Create a floating-point number from a hexadecimal string.\n | \n | >>> float.fromhex('0x1.ffffp10')\n | 2047.984375\n | >>> float.fromhex('-0x1p-1074')\n | -5e-324\n | \n | ----------------------------------------------------------------------\n | Static methods defined here:\n | \n | __new__(*args, **kwargs) from builtins.type\n | Create and return a new object. See help(type) for accurate signature.\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | imag\n | the imaginary part of a complex number\n | \n | real\n | the real part of a complex number\n\n"
],
[
"float(10)",
"_____no_output_____"
],
[
"float(10.4)",
"_____no_output_____"
],
[
"float('12.5')",
"_____no_output_____"
],
[
"float('22/7')",
"_____no_output_____"
],
[
"from fractions import Fraction",
"_____no_output_____"
],
[
"a = Fraction('22/7')",
"_____no_output_____"
],
[
"float(a)",
"_____no_output_____"
],
[
"print(0.1)",
"0.1\n"
],
[
"format(0.1, '.15f')",
"_____no_output_____"
],
[
"format(0.1, '.25f')",
"_____no_output_____"
],
[
"1/8",
"_____no_output_____"
],
[
"format(0.125, '.25f')",
"_____no_output_____"
],
[
"a = 0.1 + 0.1 + 0.1",
"_____no_output_____"
],
[
"b = 0.3",
"_____no_output_____"
],
[
"a == b",
"_____no_output_____"
],
[
"format(a, ' .25f')",
"_____no_output_____"
],
[
"format(b, '.25f')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00c56d3fd5f9c369fd6d5ddd216cf805a06b657 | 16,284 | ipynb | Jupyter Notebook | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io | 2d12693657f5bde119cf29c00632d544aa136739 | [
"MIT"
] | null | null | null | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io | 2d12693657f5bde119cf29c00632d544aa136739 | [
"MIT"
] | null | null | null | markdown_generator/talks.ipynb | krcalvert/krcalvert.github.io | 2d12693657f5bde119cf29c00632d544aa136739 | [
"MIT"
] | null | null | null | 40.009828 | 427 | 0.518177 | [
[
[
"# Talks markdown generator for academicpages\n\nTakes a TSV of talks with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `talks.py`. Run either from the `markdown_generator` folder after replacing `talks.tsv` with one containing your data.\n\nTODO: Make this work with BibTex and other databases, rather than Stuart's non-standard TSV format and citation style.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport os",
"_____no_output_____"
]
],
[
[
"## Data format\n\nThe TSV needs to have the following columns: title, type, url_slug, venue, date, location, talk_url, description, with a header at the top. Many of these fields can be blank, but the columns must be in the TSV.\n\n- Fields that cannot be blank: `title`, `url_slug`, `date`. All else can be blank. `type` defaults to \"Talk\" \n- `date` must be formatted as YYYY-MM-DD.\n- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. \n - The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/talks/YYYY-MM-DD-[url_slug]`\n - The combination of `url_slug` and `date` must be unique, as it will be the basis for your filenames\n\nThis is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).",
"_____no_output_____"
]
],
[
[
"!type talks.tsv",
"title\ttype\turl_slug\tvenue\tdate\tlocation\ttalk_url\tdescription\nClosing the Loop on Collections Review\tConference presentation\ttalk-1\tNorth Carolina Serials Conference\t2020-03-01\tChapel Hill, NC\t\t\nBreaking expectations for technical services assessment: outcomes over output\tConference presentation\ttalk-2\tSoutheastern Library Assessment Conference\t2019-11-01\tAtlanta, GA\thttps://scholarworks.gsu.edu/southeasternlac/2019/2019/1/\t\nYou, too, can be a library administrator (and enjoy it).\tConference presentation\ttalk-3\t62nd North Carolina Library Association Biennial Conference\t2017-10-01\tWinston-Salem, NC\thttps://www.slideshare.net/KristinCalvert1/you-too-can-be-a-library-administrator-and-enjoy-it \t\nTechnical services and public services: collaborative decision making\tConference presentation\ttalk-4\tRole of Professional Librarian in Technical Services Interest Group, American Library Association Midwinter Meeting\t2017-01-01\tAtlanta, GA\thttp://libres.uncg.edu/ir/wcu/listing.aspx?id=21773 \t\nThe weighted allocation formula and the association between academic discipline and research cited by faculty\tConference presentation\ttalk-5\tSeventh Annual Collection Management & Development Research Forum, American Library Association Annual Meeting\t2016-06-01\tOrlando, FL\t\t\nFrom Spreadsheets to SUSHI: Five Years of Assessing Use of E-Resources\tConference presentation\ttalk-6\tCharleston Conference: Issues in Book and Serial Acquisitions\t2013-11-01\tCharleston, SC. \thttps://www.slideshare.net/KristinCalvert1/from-spreadsheets-to-sushi \t\nGone, but not forgotten: an assessment framework for collection reviews\tConference presentation\ttalk-7\tACRL 2021 Conference\t2021-04-15\tVirtual\n"
]
],
[
[
"## Import TSV\n\nPandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\\t`.\n\nI found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.",
"_____no_output_____"
]
],
[
[
"talks = pd.read_csv(\"talks.tsv\", sep=\"\\t\", header=0)\ntalks",
"_____no_output_____"
]
],
[
[
"## Escape special characters\n\nYAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.",
"_____no_output_____"
]
],
[
[
"html_escape_table = {\n \"&\": \"&\",\n '\"': \""\",\n \"'\": \"'\"\n }\n\ndef html_escape(text):\n if type(text) is str:\n return \"\".join(html_escape_table.get(c,c) for c in text)\n else:\n return \"False\"",
"_____no_output_____"
]
],
[
[
"## Creating the markdown files\n\nThis is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatentate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.",
"_____no_output_____"
]
],
[
[
"loc_dict = {}\n\nfor row, item in talks.iterrows():\n \n md_filename = str(item.date) + \"-\" + item.url_slug + \".md\"\n html_filename = str(item.date) + \"-\" + item.url_slug \n year = item.date[:4]\n \n md = \"---\\ntitle: \\\"\" + item.title + '\"\\n'\n md += \"collection: talks\" + \"\\n\"\n \n if len(str(item.type)) > 3:\n md += 'type: \"' + item.type + '\"\\n'\n else:\n md += 'type: \"Talk\"\\n'\n \n md += \"permalink: /talks/\" + html_filename + \"\\n\"\n \n if len(str(item.venue)) > 3:\n md += 'venue: \"' + item.venue + '\"\\n'\n \n if len(str(item.location)) > 3:\n md += \"date: \" + str(item.date) + \"\\n\"\n \n if len(str(item.location)) > 3:\n md += 'location: \"' + str(item.location) + '\"\\n'\n \n md += \"---\\n\"\n \n \n if len(str(item.talk_url)) > 3:\n md += \"\\n[More information here](\" + item.talk_url + \")\\n\" \n \n \n if len(str(item.description)) > 3:\n md += \"\\n\" + html_escape(item.description) + \"\\n\"\n \n \n md_filename = os.path.basename(md_filename)\n #print(md)\n \n with open(\"../_talks/\" + md_filename, 'w') as f:\n f.write(md)",
"_____no_output_____"
]
],
[
[
"These files are in the talks directory, one directory below where we're working from.",
"_____no_output_____"
]
],
[
[
"!ls ../_talks",
"2012-03-01-talk-1.md\t 2014-02-01-talk-2.md\r\n2013-03-01-tutorial-1.md 2014-03-01-talk-3.md\r\n"
],
[
"!cat ../_talks/2013-03-01-tutorial-1.md",
"---\r\ntitle: \"Tutorial 1 on Relevant Topic in Your Field\"\r\ncollection: talks\r\ntype: \"Tutorial\"\r\npermalink: /talks/2013-03-01-tutorial-1\r\nvenue: \"UC-Berkeley Institute for Testing Science\"\r\ndate: 2013-03-01\r\nlocation: \"Berkeley CA, USA\"\r\n---\r\n\r\n[More information here](http://exampleurl.com)\r\n\r\nThis is a description of your tutorial, note the different field in type. This is a markdown files that can be all markdown-ified like any other post. Yay markdown!\r\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d00c595bbcb0d3b21bbdb5106bd870652c00f34f | 38,856 | ipynb | Jupyter Notebook | scripts/confusion_matrix.ipynb | samcw/pytorch-CycleGAN-and-pix2pix | 03e511c24ab9cb77ea11a429b98cb465825be828 | [
"BSD-3-Clause"
] | null | null | null | scripts/confusion_matrix.ipynb | samcw/pytorch-CycleGAN-and-pix2pix | 03e511c24ab9cb77ea11a429b98cb465825be828 | [
"BSD-3-Clause"
] | null | null | null | scripts/confusion_matrix.ipynb | samcw/pytorch-CycleGAN-and-pix2pix | 03e511c24ab9cb77ea11a429b98cb465825be828 | [
"BSD-3-Clause"
] | null | null | null | 260.778523 | 34,920 | 0.922921 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport itertools\n\nfrom openpyxl import load_workbook\n\n# plt.rcParams['font.sans-serif'] = ['SimHei']\n# plt.rcParams['axes.unicode_minus'] = False\nplt.rcParams['figure.figsize'] = (7.0, 5.0)",
"_____no_output_____"
],
[
"def plot_Matrix(cm, classes, title=None, cmap=plt.cm.Blues):\n \n fig, ax = plt.subplots()\n im = ax.imshow(cm, interpolation = 'nearest', cmap = cmap)\n ax.figure.colorbar(im, ax = ax) # 侧边的颜色条带\n \n ax.set(xticks=np.arange(cm.shape[1]),\n yticks=np.arange(cm.shape[0]),\n xticklabels=classes, yticklabels=classes,\n title=title,\n ylabel='真实值',\n xlabel='预测值')\n\n # 通过绘制格网,模拟每个单元格的边框\n ax.set_xticks(np.arange(cm.shape[1] + 1) - .5, minor=True)\n ax.set_yticks(np.arange(cm.shape[0] + 1) - .5, minor=True)\n# ax.grid(which = \"minor\", color = \"gray\", linestyle = '-', linewidth = 0.2)\n ax.tick_params(which=\"minor\", bottom=False, left=False)\n\n # 将x轴上的lables旋转45度\n plt.setp(ax.get_xticklabels(), rotation=45, ha=\"right\",\n rotation_mode=\"anchor\")\n\n # 标注百分比信息\n fmt = 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j]), \n horizontalalignment=\"center\",\n verticalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n plt.tight_layout()\n plt.ylabel('真实值')\n plt.xlabel('预测值')\n plt.savefig('confusion_matrix.svg', dpi=256)",
"_____no_output_____"
],
[
"wb = load_workbook('../confusion_matrix.xlsx')\nsheet = wb['Sheet1']\nwb.close()\narray = []\nfor row in sheet['A1:K11']:\n _row = []\n for cell in row:\n _row.append(cell.value)\n array.append(_row)\n# 类别\nclasses = array.pop(0)\nclasses.pop(0)\n# 去除矩阵其他值\ncm = []\nfor row in array:\n row.pop(0)\n _row = []\n for cell in row:\n _row.append(float(cell))\n cm.append(_row)\ncm = np.array(cm)\n\nplot_Matrix(cm, classes, title=\"Confusion matrix\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d00c6cf71bdffc5e1414b4ece1a89cb27eb58159 | 53,907 | ipynb | Jupyter Notebook | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks | e87df00fefc33e6753c48b6bcdb63ab51f42cbca | [
"Apache-2.0"
] | 16 | 2019-04-29T23:56:17.000Z | 2022-02-15T12:47:03.000Z | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks | e87df00fefc33e6753c48b6bcdb63ab51f42cbca | [
"Apache-2.0"
] | 12 | 2019-04-23T18:00:16.000Z | 2020-07-06T22:49:46.000Z | MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb | rpatil524/Community-Notebooks | e87df00fefc33e6753c48b6bcdb63ab51f42cbca | [
"Apache-2.0"
] | 11 | 2019-05-29T17:46:58.000Z | 2022-01-25T18:13:06.000Z | 42.919586 | 1,068 | 0.407869 | [
[
[
"<a href=\"https://colab.research.google.com/github/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# How to build an RNA-seq logistic regression classifier with BigQuery ML\nCheck out other notebooks at our [Community Notebooks Repository](https://github.com/isb-cgc/Community-Notebooks)!\n\n- **Title:** How to build an RNA-seq logistic regression classifier with BigQuery ML\n- **Author:** John Phan\n- **Created:** 2021-07-19\n- **Purpose:** Demonstrate use of BigQuery ML to predict a cancer endpoint using gene expression data.\n- **URL:** https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier_with_BigQuery_ML.ipynb\n- **Note:** This example is based on the work published by [Bosquet et al.](https://molecular-cancer.biomedcentral.com/articles/10.1186/s12943-016-0548-9)\n\n\nThis notebook builds upon the [scikit-learn notebook](https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier.ipynb) and demonstrates how to build a machine learning model using BigQuery ML to predict ovarian cancer treatment outcome. BigQuery is used to create a temporary data table that contains both training and testing data. These datasets are then used to fit and evaluate a Logistic Regression classifier. ",
"_____no_output_____"
],
[
"# Import Dependencies",
"_____no_output_____"
]
],
[
[
"# GCP libraries\nfrom google.cloud import bigquery\nfrom google.colab import auth",
"_____no_output_____"
]
],
[
[
"## Authenticate\n\nBefore using BigQuery, we need to get authorization for access to BigQuery and the Google Cloud. For more information see ['Quick Start Guide to ISB-CGC'](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html). Alternative authentication methods can be found [here](https://googleapis.dev/python/google-api-core/latest/auth.html)",
"_____no_output_____"
]
],
[
[
"# if you're using Google Colab, authenticate to gcloud with the following\nauth.authenticate_user()\n\n# alternatively, use the gcloud SDK\n#!gcloud auth application-default login",
"_____no_output_____"
]
],
[
[
"## Parameters\n\nCustomize the following parameters based on your notebook, execution environment, or project. BigQuery ML must create and store classification models, so be sure that you have write access to the locations stored in the \"bq_dataset\" and \"bq_project\" variables. ",
"_____no_output_____"
]
],
[
[
"# set the google project that will be billed for this notebook's computations\ngoogle_project = 'google-project' ## CHANGE ME\n\n# bq project for storing ML model\nbq_project = 'bq-project' ## CHANGE ME\n\n# bq dataset for storing ML model\nbq_dataset = 'scratch' ## CHANGE ME\n\n# name of temporary table for data\nbq_tmp_table = 'tmp_data'\n\n# name of ML model\nbq_ml_model = 'tcga_ov_therapy_ml_lr_model'\n\n# in this example, we'll be using the Ovarian cancer TCGA dataset\ncancer_type = 'TCGA-OV'\n\n# genes used for prediction model, taken from Bosquet et al.\ngenes = \"'RHOT1','MYO7A','ZBTB10','MATK','ST18','RPS23','GCNT1','DROSHA','NUAK1','CCPG1',\\\n'PDGFD','KLRAP1','MTAP','RNF13','THBS1','MLX','FAP','TIMP3','PRSS1','SLC7A11',\\\n'OLFML3','RPS20','MCM5','POLE','STEAP4','LRRC8D','WBP1L','ENTPD5','SYNE1','DPT',\\\n'COPZ2','TRIO','PDPR'\"\n\n# clinical data table\nclinical_table = 'isb-cgc-bq.TCGA_versioned.clinical_gdc_2019_06'\n\n# RNA seq data table\nrnaseq_table = 'isb-cgc-bq.TCGA.RNAseq_hg38_gdc_current'\n",
"_____no_output_____"
]
],
[
[
"## BigQuery Client\n\nCreate the BigQuery client.",
"_____no_output_____"
]
],
[
[
"# Create a client to access the data within BigQuery\nclient = bigquery.Client(google_project)",
"_____no_output_____"
]
],
[
[
"## Create a Table with a Subset of the Gene Expression Data\n\nPull RNA-seq gene expression data from the TCGA RNA-seq BigQuery table, join it with clinical labels, and pivot the table so that it can be used with BigQuery ML. In this example, we will label the samples based on therapy outcome. \"Complete Remission/Response\" will be labeled as \"1\" while all other therapy outcomes will be labeled as \"0\". This prepares the data for binary classification. \n\nPrediction modeling with RNA-seq data typically requires a feature selection step to reduce the dimensionality of the data before training a classifier. However, to simplify this example, we will use a pre-identified set of 33 genes (Bosquet et al. identified 34 genes, but PRSS2 and its aliases are not available in the hg38 RNA-seq data). \n\nCreation of a BQ table with only the data of interest reduces the size of the data passed to BQ ML and can significantly reduce the cost of running BQ ML queries. This query also randomly splits the dataset into \"training\" and \"testing\" sets using the \"FARM_FINGERPRINT\" hash function in BigQuery. \"FARM_FINGERPRINT\" generates an integer from the input string. More information can be found [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/hash_functions).",
"_____no_output_____"
]
],
[
[
"tmp_table_query = client.query((\"\"\"\n BEGIN\n CREATE OR REPLACE TABLE `{bq_project}.{bq_dataset}.{bq_tmp_table}` AS\n SELECT * FROM (\n SELECT\n labels.case_barcode as sample,\n labels.data_partition as data_partition,\n labels.response_label AS label,\n ge.gene_name AS gene_name,\n -- Multiple samples may exist per case, take the max value\n MAX(LOG(ge.HTSeq__FPKM_UQ+1)) AS gene_expression\n FROM `{rnaseq_table}` AS ge\n INNER JOIN (\n SELECT\n *\n FROM (\n SELECT\n case_barcode,\n primary_therapy_outcome_success,\n CASE\n -- Complete Reponse --> label as 1\n -- All other responses --> label as 0\n WHEN primary_therapy_outcome_success = 'Complete Remission/Response' THEN 1\n WHEN (primary_therapy_outcome_success IN (\n 'Partial Remission/Response','Progressive Disease','Stable Disease'\n )) THEN 0\n END AS response_label,\n CASE \n WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) < 5 THEN 'training'\n WHEN MOD(ABS(FARM_FINGERPRINT(case_barcode)), 10) >= 5 THEN 'testing'\n END AS data_partition\n FROM `{clinical_table}`\n WHERE\n project_short_name = '{cancer_type}'\n AND primary_therapy_outcome_success IS NOT NULL\n )\n ) labels\n ON labels.case_barcode = ge.case_barcode\n WHERE gene_name IN ({genes})\n GROUP BY sample, label, data_partition, gene_name\n )\n PIVOT (\n MAX(gene_expression) FOR gene_name IN ({genes})\n );\n END;\n\"\"\").format(\n bq_project=bq_project,\n bq_dataset=bq_dataset,\n bq_tmp_table=bq_tmp_table,\n rnaseq_table=rnaseq_table,\n clinical_table=clinical_table,\n cancer_type=cancer_type,\n genes=genes\n)).result()\n\nprint(tmp_table_query)",
"<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3894001250>\n"
]
],
[
[
"Let's take a look at this subset table. The data has been pivoted such that each of the 33 genes is available as a column that can be \"SELECTED\" in a query. In addition, the \"label\" and \"data_partition\" columns simplify data handling for classifier training and evaluation. ",
"_____no_output_____"
]
],
[
[
"tmp_table_data = client.query((\"\"\"\n SELECT\n * --usually not recommended to use *, but in this case, we want to see all of the 33 genes\n FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`\n\"\"\").format(\n bq_project=bq_project,\n bq_dataset=bq_dataset,\n bq_tmp_table=bq_tmp_table\n)).result().to_dataframe()\n\nprint(tmp_table_data.info())\ntmp_table_data",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 264 entries, 0 to 263\nData columns (total 36 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 sample 264 non-null object \n 1 data_partition 264 non-null object \n 2 label 264 non-null int64 \n 3 RHOT1 264 non-null float64\n 4 MYO7A 264 non-null float64\n 5 ZBTB10 264 non-null float64\n 6 MATK 264 non-null float64\n 7 ST18 264 non-null float64\n 8 RPS23 264 non-null float64\n 9 GCNT1 264 non-null float64\n 10 DROSHA 264 non-null float64\n 11 NUAK1 264 non-null float64\n 12 CCPG1 264 non-null float64\n 13 PDGFD 264 non-null float64\n 14 KLRAP1 264 non-null float64\n 15 MTAP 264 non-null float64\n 16 RNF13 264 non-null float64\n 17 THBS1 264 non-null float64\n 18 MLX 264 non-null float64\n 19 FAP 264 non-null float64\n 20 TIMP3 264 non-null float64\n 21 PRSS1 264 non-null float64\n 22 SLC7A11 264 non-null float64\n 23 OLFML3 264 non-null float64\n 24 RPS20 264 non-null float64\n 25 MCM5 264 non-null float64\n 26 POLE 264 non-null float64\n 27 STEAP4 264 non-null float64\n 28 LRRC8D 264 non-null float64\n 29 WBP1L 264 non-null float64\n 30 ENTPD5 264 non-null float64\n 31 SYNE1 264 non-null float64\n 32 DPT 264 non-null float64\n 33 COPZ2 264 non-null float64\n 34 TRIO 264 non-null float64\n 35 PDPR 264 non-null float64\ndtypes: float64(33), int64(1), object(2)\nmemory usage: 74.4+ KB\nNone\n"
]
],
[
[
"# Train the Machine Learning Model\n\nNow we can train a classifier using BigQuery ML with the data stored in the subset table. This model will be stored in the location specified by the \"bq_ml_model\" variable, and can be reused to predict samples in the future.\n\nWe pass three options to the BQ ML model: model_type, auto_class_weights, and input_label_cols. Model_type specifies the classifier model type. In this case, we use \"LOGISTIC_REG\" to train a logistic regression classifier. Other classifier options are documented [here](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create). Auto_class_weights indicates whether samples should be weighted to balance the classes. For example, if the dataset happens to have more samples labeled as \"Complete Response\", those samples would be less weighted to ensure that the model is not biased towards predicting those samples. Input_label_cols tells BigQuery that the \"label\" column should be used to determine each sample's label. \n\n**Warning**: BigQuery ML models can be very time-consuming and expensive to train. Please check your data size before running BigQuery ML commands. Information about BigQuery ML costs can be found [here](https://cloud.google.com/bigquery-ml/pricing).",
"_____no_output_____"
]
],
[
[
"# create ML model using BigQuery\nml_model_query = client.query((\"\"\"\n CREATE OR REPLACE MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`\n OPTIONS\n (\n model_type='LOGISTIC_REG',\n auto_class_weights=TRUE,\n input_label_cols=['label']\n ) AS\n SELECT * EXCEPT(sample, data_partition) -- when training, we only the labels and feature columns\n FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`\n WHERE data_partition = 'training' -- using training data only\n\"\"\").format(\n bq_project=bq_project,\n bq_dataset=bq_dataset,\n bq_ml_model=bq_ml_model,\n bq_tmp_table=bq_tmp_table\n)).result()\nprint(ml_model_query)\n\n# now get the model metadata\nml_model = client.get_model('{}.{}.{}'.format(bq_project, bq_dataset, bq_ml_model))\nprint(ml_model)",
"<google.cloud.bigquery.table._EmptyRowIterator object at 0x7f3893663810>\nModel(reference=ModelReference(project='isb-project-zero', dataset_id='jhp_scratch', project_id='tcga_ov_therapy_ml_lr_model'))\n"
]
],
[
[
"# Evaluate the Machine Learning Model\nOnce the model has been trained and stored, we can evaluate the model's performance using the \"testing\" dataset from our subset table. Evaluating a BQ ML model is generally less expensive than training. \n\nUse the following query to evaluate the BQ ML model. Note that we're using the \"data_partition = 'testing'\" clause to ensure that we're only evaluating the model with test samples from the subset table. \n\nBigQuery's ML.EVALUATE function returns several performance metrics: precision, recall, accuracy, f1_score, log_loss, and roc_auc. More details about these performance metrics are available from [Google's ML Crash Course](https://developers.google.com/machine-learning/crash-course/classification/video-lecture). Specific topics can be found at the following URLs: [precision and recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall), [accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy), [ROC and AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc). ",
"_____no_output_____"
]
],
[
[
"ml_eval = client.query((\"\"\"\nSELECT * FROM ML.EVALUATE (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`, \n (\n SELECT * EXCEPT(sample, data_partition)\n FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`\n WHERE data_partition = 'testing'\n )\n)\n\"\"\").format(\n bq_project=bq_project,\n bq_dataset=bq_dataset,\n bq_ml_model=bq_ml_model,\n bq_tmp_table=bq_tmp_table\n)).result().to_dataframe()",
"_____no_output_____"
],
[
"# Display the table of evaluation results\nml_eval",
"_____no_output_____"
]
],
[
[
"# Predict Outcome for One or More Samples\nML.EVALUATE evaluates a model's performance, but does not produce actual predictions for each sample. In order to do that, we need to use the ML.PREDICT function. The syntax is similar to that of the ML.EVALUATE function and returns \"label\", \"predicted_label\", \"predicted_label_probs\", and all feature columns. Since the feature columns are unchanged from the input dataset, we select only the original label, predicted label, and probabilities for each sample. \n\nNote that the input dataset can include one or more samples, and must include the same set of features as the training dataset. ",
"_____no_output_____"
]
],
[
[
"ml_predict = client.query((\"\"\"\nSELECT\n label,\n predicted_label,\n predicted_label_probs\nFROM ML.PREDICT (MODEL `{bq_project}.{bq_dataset}.{bq_ml_model}`, \n (\n SELECT * EXCEPT(sample, data_partition)\n FROM `{bq_project}.{bq_dataset}.{bq_tmp_table}`\n WHERE data_partition = 'testing' -- Use the testing dataset\n )\n)\n\"\"\").format(\n bq_project=bq_project,\n bq_dataset=bq_dataset,\n bq_ml_model=bq_ml_model,\n bq_tmp_table=bq_tmp_table\n)).result().to_dataframe()",
"_____no_output_____"
],
[
"# Display the table of prediction results\nml_predict",
"_____no_output_____"
],
[
"# Calculate the accuracy of prediction, which should match the result of ML.EVALUATE\naccuracy = 1-sum(abs(ml_predict['label']-ml_predict['predicted_label']))/len(ml_predict)\nprint('Accuracy: ', accuracy)",
"Accuracy: 0.6230769230769231\n"
]
],
[
[
"# Next Steps\nThe BigQuery ML logistic regression model trained in this notebook is comparable to the scikit-learn model developed in our [companion notebook](https://github.com/isb-cgc/Community-Notebooks/blob/master/MachineLearning/How_to_build_an_RNAseq_logistic_regression_classifier.ipynb). BigQuery ML simplifies the model building and evaluation process by enabling bioinformaticians to use machine learning within the BigQuery ecosystem. However, it is often necessary to optimize performance by evaluating several types of models (i.e., other than logistic regression), and tuning model parameters. Due to the cost of BigQuery ML for training, such iterative model fine-tuning may be cost prohibitive. In such cases, a combination of scikit-learn (or other libraries such as Keras and TensorFlow) and BigQuery ML may be appropriate. E.g., models can be fine-tuned using scikit-learn and published as a BigQuery ML model for production applications. In future notebooks, we will explore methods for model selection, optimization, and publication with BigQuery ML. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d00c6d44074aee2c8974703e21fdaff2f0374eae | 14,838 | ipynb | Jupyter Notebook | Ops MGMT.ipynb | FireCARES/firecares | aa708d441790263206dd3a0a480eb6ca9031439d | [
"MIT"
] | 12 | 2016-01-30T02:28:35.000Z | 2019-05-29T15:49:56.000Z | Ops MGMT.ipynb | FireCARES/firecares | aa708d441790263206dd3a0a480eb6ca9031439d | [
"MIT"
] | 455 | 2015-07-27T20:21:56.000Z | 2022-03-11T23:26:20.000Z | Ops MGMT.ipynb | FireCARES/firecares | aa708d441790263206dd3a0a480eb6ca9031439d | [
"MIT"
] | 14 | 2015-07-29T09:45:53.000Z | 2020-10-21T20:03:17.000Z | 36.278729 | 409 | 0.555803 | [
[
[
"# FireCARES ops management notebook\n\n### Using this notebook\n\nIn order to use this notebook, a single production/test web node will need to be bootstrapped w/ ipython and django-shell-plus python libraries. After bootstrapping is complete and while forwarding a local port to the port that the ipython notebook server will be running on the node, you can open the ipython notebook using the token provided in the SSH session after ipython notebook server start.\n\n#### Bootstrapping a prod/test node\n\nTo bootstrap a specific node for use of this notebook, you'll need to ssh into the node and forward a local port # to localhost:8888 on the node.\n\ne.g. `ssh firecares-prod -L 8890:localhost:8888` to forward the local port 8890 to 8888 on the web node, assumes that the \"firecares-prod\" SSH config is listed w/ the correct webserver IP in your `~/.ssh/config`\n\n- `sudo chown -R firecares: /run/user/1000` as the `ubuntu` user\n- `sudo su firecares`\n- `workon firecares`\n- `pip install -r dev_requirements.txt`\n- `python manage.py shell_plus --notebook --no-browser --settings=firecares.settings.local`\n\nAt this point, there will be a mention of \"The jupyter notebook is running at: http://localhost:8888/?token=XXXX\". Copy the URL, but be sure to use the local port that you're forwarding instead for the connection vs the default of 8888 if necessary.\n\nSince the ipython notebook server supports django-shell-plus, all of the FireCARES models will automatically be imported. From here any command that you execute in the notebook will run on the remote web node immediately.",
"_____no_output_____"
],
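[
"An illustrative `~/.ssh/config` entry for the example above (the host alias matches, but the IP, user, and key path are placeholders):\n\n    Host firecares-prod\n        HostName <webserver-ip>\n        User ubuntu\n        IdentityFile ~/.ssh/id_rsa",
"_____no_output_____"
],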
[
"## Fire department management",
"_____no_output_____"
],
[
"### Re-generate performance score for a specific fire department\n\nUseful for when a department's FDID has been corrected. Will do the following:\n\n1. Pull NFIRS counts for the department (cached in FireCARES database)\n1. Generate fires heatmap\n1. Update department owned census tracts geom\n1. Regenerate structure hazard counts in jurisdiction\n1. Regenerate population_quartiles materialized view to get safe grades for department\n1. Re-run performance score for the department",
"_____no_output_____"
]
],
[
[
"import psycopg2\nfrom firecares.tasks import update\nfrom firecares.utils import dictfetchall\nfrom django.db import connections\nfrom django.conf import settings\nfrom django.core.management import call_command\nfrom IPython.display import display\nimport pandas as pd",
"_____no_output_____"
],
[
"fd = {'fdid': '18M04', 'state': 'WA'}\nnfirs = connections['nfirs']\ndepartment = FireDepartment.objects.filter(**fd).first()\nfid = department.id\nprint 'FireCARES id: %s' % fid\nprint 'https://firecares.org/departments/%s' % fid",
"FireCARES id: 92616\nhttps://firecares.org/departments/92616\n"
],
[
"%%time\n# Get raw fire incident counts (prior to intersection with )\n\nwith nfirs.cursor() as cur:\n cur.execute(\"\"\"\n select count(1), fdid, state, extract(year from inc_date) as year\n from fireincident where fdid=%(fdid)s and state=%(state)s\n group by fdid, state, year\n order by year\"\"\", fd)\n fire_years = dictfetchall(cur)\n display(fire_years)\n print 'Total fires: %s\\n' % sum([x['count'] for x in fire_years])",
"_____no_output_____"
],
[
"%%time\n# Get building fire counts after structure hazard level calculations\nsql = update.STRUCTURE_FIRES\n\nprint sql\n\nwith nfirs.cursor() as cur:\n cur.execute(sql, dict(fd, years=tuple([x['year'] for x in fire_years])))\n fires_by_hazard_level = dictfetchall(cur)\n display(fires_by_hazard_level)\n print 'Total geocoded fires: %s\\n' % sum([x['count'] for x in fires_by_hazard_level])",
"SELECT count(1) as count, extract(year from a.alarm) as year, COALESCE(y.risk_category, 'N/A') as risk_level\nFROM joint_buildingfires a\nLEFT JOIN\n (SELECT state, fdid, inc_date, inc_no, exp_no, x.parcel_id, x.risk_category\n FROM ( SELECT *\n FROM joint_incidentaddress a\n LEFT JOIN parcel_risk_category_local using (parcel_id)\n ) AS x\n ) AS y\nUSING (state, fdid, inc_date, inc_no, exp_no)\nWHERE a.state = %(state)s AND a.fdid = %(fdid)s AND extract(year FROM a.inc_date) IN %(years)s\nGROUP BY y.risk_category, extract(year from a.alarm)\nORDER BY extract(year from a.alarm) DESC\n"
],
[
"sql = \"\"\"\nselect alarm, a.inc_type, alarms,ff_death, oth_death, ST_X(geom) as x, st_y(geom) as y, COALESCE(y.risk_category, 'Unknown') as risk_category\nfrom buildingfires a\nLEFT JOIN (\n SELECT state, fdid, inc_date, inc_no, exp_no, x.geom, x.parcel_id, x.risk_category\n FROM (\n SELECT * FROM incidentaddress a\n LEFT JOIN parcel_risk_category_local using (parcel_id)\n ) AS x\n) AS y\n USING (state, fdid, inc_date, inc_no, exp_no)\nWHERE a.state = %(state)s and a.fdid = %(fdid)s\"\"\"\n\nwith nfirs.cursor() as cur:\n cur.execute(sql, fd)\n rows = dictfetchall(cur)\n \nout_name = '{id}-building-fires.csv'.format(id=fid)\nfull_path = '/tmp/' + out_name\n\nwith open(full_path, 'w') as f:\n writer = csv.DictWriter(f, fieldnames=[x.name for x in cur.description])\n writer.writeheader()\n writer.writerows(rows)\n\n# Push building fires to S3\n!aws s3 cp $full_path s3://firecares-test/$out_name --acl=\"public-read\"",
"Completed 12.9 KiB/12.9 KiB (125.9 KiB/s) with 1 file(s) remaining\rupload: ../../../tmp/92616-building-fires.csv to s3://firecares-test/92616-building-fires.csv\r\n"
],
[
"update.update_nfirs_counts(fid)",
"updating NFIRS counts for 92616\n...updated NFIRS counts for 92616\n"
],
[
"update.calculate_department_census_geom(fid)",
"No census geom - Poulsbo Fire Department (Kitsap Fire District 18) (92616)\n"
],
[
"# Fire counts by hazard level over all years, keep in mind that the performance score model will currently ONLY work\n# hazard levels w/ \ndisplay(pd.DataFrame(fires_by_hazard_level).groupby(['risk_level']).sum()['count'])\n\nupdate.update_performance_score(fid)",
"_____no_output_____"
]
],
[
[
"## User management",
"_____no_output_____"
],
[
"### Whitelist",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d00c7cb7bd7578ff9e2704d4c3ceb6af34ab8b4e | 3,499 | ipynb | Jupyter Notebook | Alexa.ipynb | architgwl2000/Alexa-Shut-Down-Computer | 772658721a2935c2e963e8829b2ff1e3ecc5310e | [
"MIT"
] | null | null | null | Alexa.ipynb | architgwl2000/Alexa-Shut-Down-Computer | 772658721a2935c2e963e8829b2ff1e3ecc5310e | [
"MIT"
] | null | null | null | Alexa.ipynb | architgwl2000/Alexa-Shut-Down-Computer | 772658721a2935c2e963e8829b2ff1e3ecc5310e | [
"MIT"
] | null | null | null | 27.551181 | 97 | 0.474421 | [
[
[
"import time\nimport os\nimport pyttsx3\nimport speech_recognition as sr\nsrc = sr.Recognizer()",
"_____no_output_____"
],
[
"def speak(text):\n engine = pyttsx3.init('sapi5')\n voices = engine.getProperty('voices')\n engine.setProperty('voice', voices[1].id) \n engine.say(text)\n engine.runAndWait()",
"_____no_output_____"
],
[
"def listen():\n with sr.Microphone() as source:\n print(\"Input => \",end = \"\")\n src.pause_threshold = 0.7 \n inp_audio = src.listen(source)\n try:\n inp_text = src.recognize_google(inp_audio, language = 'en-in')\n print(inp_text)\n inp_text = inp_text.lower().split()\n except Exception as e:\n print(e)\n print(\"Alexa => Sorry, I couldn't hear you. Can you please Repeat Again.\")\n speak(\"Sorry, I couldn't hear you. Can you please Repeat Again.\")\n return \"None\"\n return inp_text",
"_____no_output_____"
],
[
"def quit():\n print(\"Alexa => Are you sure you want to shutdown this system ?\")\n speak(\"Are you sure you want to shutdown this system ?\")\n command = listen()\n if command[0] == \"yes\":\n print(\"Alexa => Shutiing Down in 3 seconds.\")\n speak(\"Shutiing Down in 3 seconds.\")\n time.sleep(3)\n print(\"Alexa => Shutting Down.\")\n speak(\"Shutting Down.\")\n os.system(\"shutdown /s /t 1\")\n elif command[0] == \"no\":\n print(\"Alexa => OK! Cancelling the Request.\")\n speak(\"OK! Cancelling the Request.\")",
"_____no_output_____"
],
[
"def Alexa():\n inp_text = listen()\n if 'alexa' in inp_text:\n alexa_id = inp_text.index('alexa')\n if inp_text[alexa_id - 1] == 'hey' or inp_text[alexa_id - 1] == 'hi':\n print(\"Alexa => Activated\")\n speak(\"Activated\")\n print(\"Alexa => What can I do for you?\")\n speak(\"What can I do for you?\")\n shut = listen()\n if 'shutdown' in shut:\n quit()",
"_____no_output_____"
],
[
"Alexa()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00c8212cd810d5ce24ef24aec89153faa1fd27f | 8,481 | ipynb | Jupyter Notebook | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
] | null | null | null | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
] | null | null | null | 04-annealing-applications/Vertex-Cover.ipynb | a-capra/Intro-QC-TRIUMF | 9738e6a49f226367247cf7bc05a00751f7bf2fe5 | [
"MIT"
] | 1 | 2021-04-26T05:20:11.000Z | 2021-04-26T05:20:11.000Z | 27.358065 | 313 | 0.578705 | [
[
[
"# Solving vertex cover with a quantum annealer",
"_____no_output_____"
],
[
"The problem of vertex cover is, given an undirected graph $G = (V, E)$, colour the smallest amount of vertices such that each edge $e \\in E$ is connected to a coloured vertex.\n\nThis notebooks works through the process of creating a random graph, translating to an optimization problem, and eventually finding the ground state using a quantum annealer.",
"_____no_output_____"
],
[
"### Graph setup",
"_____no_output_____"
],
[
"The first thing we will do is create an instance of the problem, by constructing a small, random undirected graph. We are going to use the `networkx` package, which should already be installed if you have installed if you are using Anaconda.",
"_____no_output_____"
]
],
[
[
"import dimod\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"n_vertices = 5\nn_edges = 6\n\nsmall_graph = nx.gnm_random_graph(n_vertices, n_edges)",
"_____no_output_____"
],
[
"nx.draw(small_graph, with_labels=True)",
"_____no_output_____"
]
],
[
[
"### Constructing the Hamiltonian",
"_____no_output_____"
],
[
"I showed in class that the objective function for vertex cover looks like this:\n\\begin{equation}\n \\sum_{(u,v) \\in E} (1 - x_u) (1 - x_v) + \\gamma \\sum_{v \\in V} x_v\n\\end{equation}\nWe want to find an assignment of the $x_u$ of 1 (coloured) or 0 (uncoloured) that _minimizes_ this function. The first sum tries to force us to choose an assignment that makes sure every edge gets attached to a coloured vertex. The second sum is essentially just counting the number of coloured vertices.\n\n**Task**: Expand out the QUBO above to see how you can convert it to a more 'traditional' looking QUBO:\n\\begin{equation}\n \\sum_{(u,v) \\in E} x_u x_v + \\sum_{v \\in V} (\\gamma - \\hbox{deg}(x_v)) x_v\n\\end{equation}\nwhere deg($x_v$) indicates the degree of vertex $x_v$ in the graph.",
"_____no_output_____"
]
],
[
[
"γ = 0.8\nQ = {x : 1 for x in small_graph.edges()}\nr = {x : (γ - small_graph.degree[x]) for x in small_graph.nodes}",
"_____no_output_____"
]
],
[
[
"Let's convert it to the appropriate data structure, and solve using the exact solver. ",
"_____no_output_____"
]
],
[
[
"bqm = dimod.BinaryQuadraticModel(r, Q, 0, dimod.BINARY)\nresponse = dimod.ExactSolver().sample(bqm)\nprint(f\"Sample energy = {next(response.data(['energy']))[0]}\")",
"_____no_output_____"
]
],
[
[
"Let's print the graph with proper colours included",
"_____no_output_____"
]
],
[
[
"colour_assignments = next(response.data(['sample']))[0]\ncolours = ['grey' if colour_assignments[x] == 0 else 'red' for x in range(len(colour_assignments))]\n\nnx.draw(small_graph, with_labels=True, node_color=colours)",
"_____no_output_____"
]
],
[
[
"### Scaling up...",
"_____no_output_____"
],
[
"That one was easy enough to solve by hand. Let's try a much larger instance...",
"_____no_output_____"
]
],
[
[
"n_vertices = 20\nn_edges = 60\n\nlarge_graph = nx.gnm_random_graph(n_vertices, n_edges)\nnx.draw(large_graph, with_labels=True)",
"_____no_output_____"
],
[
"# Create h, J and put it into the exact solver\nγ = 0.8\nQ = {x : 1 for x in large_graph.edges()}\nr = {x : (γ - large_graph.degree[x]) for x in large_graph.nodes}\n\nbqm = dimod.BinaryQuadraticModel(r, Q, 0, dimod.BINARY)\nresponse = dimod.ExactSolver().sample(bqm)\nprint(f\"Sample energy = {next(response.data(['energy']))[0]}\")\n \ncolour_assignments = next(response.data(['sample']))[0]\ncolours = ['grey' if colour_assignments[x] == 0 else 'red' for x in range(len(colour_assignments))]\n\nnx.draw(large_graph, with_labels=True, node_color=colours)\nprint(f\"Coloured {list(colour_assignments.values()).count(1)}/{n_vertices} vertices.\")",
"_____no_output_____"
]
],
[
[
"### Running on the D-Wave",
"_____no_output_____"
],
[
"You'll only be able to run the next few cells if you have D-Wave access. We will send the same graph as before to the D-Wave QPU and see what kind of results we get back!",
"_____no_output_____"
]
],
[
[
"from dwave.system.samplers import DWaveSampler\nfrom dwave.system.composites import EmbeddingComposite\n\nsampler = EmbeddingComposite(DWaveSampler())",
"_____no_output_____"
],
[
"ising_conversion = bqm.to_ising()\nh, J = ising_conversion[0], ising_conversion[1]\nresponse = sampler.sample_ising(h, J, num_reads = 1000)",
"_____no_output_____"
],
[
"best_solution =np.sort(response.record, order='energy')[0]",
"_____no_output_____"
],
[
"print(f\"Sample energy = {best_solution['energy']}\")\n \ncolour_assignments_qpu = {x : best_solution['sample'][x] for x in range(n_vertices)}\nfor x in range(n_vertices):\n if colour_assignments_qpu[x] == -1:\n colour_assignments_qpu[x] = 0\ncolours = ['grey' if colour_assignments_qpu[x] == 0 else 'red' for x in range(len(colour_assignments_qpu))]\n\nnx.draw(large_graph, with_labels=True, node_color=colours)\nprint(f\"Coloured {list(colour_assignments_qpu.values()).count(1)}/{n_vertices} vertices.\")",
"_____no_output_____"
],
[
"print(\"Node\\tExact\\tQPU\")\nfor x in range(n_vertices):\n print(f\"{x}\\t{colour_assignments[x]}\\t{colour_assignments_qpu[x]}\")",
"_____no_output_____"
]
],
[
[
"Here is a scatter plot of all the different energies we got out, against the number of times each solution occurred. ",
"_____no_output_____"
]
],
[
[
"plt.scatter(response.record['energy'], response.record['num_occurrences'])",
"_____no_output_____"
],
[
"response.record['num_occurrences']",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d00ca58193f90735165eac6a5d1356bc0c6597cd | 4,049 | ipynb | Jupyter Notebook | notebooks/notebook_template.ipynb | knu2xs/la-covid-challenge | 18f2ecffd5f33d7d6a022270db5ec39882107e62 | [
"Apache-2.0"
] | 1 | 2020-06-05T15:39:30.000Z | 2020-06-05T15:39:30.000Z | notebooks/notebook_template.ipynb | knu2xs/la-covid-challenge | 18f2ecffd5f33d7d6a022270db5ec39882107e62 | [
"Apache-2.0"
] | null | null | null | notebooks/notebook_template.ipynb | knu2xs/la-covid-challenge | 18f2ecffd5f33d7d6a022270db5ec39882107e62 | [
"Apache-2.0"
] | 1 | 2020-06-05T15:39:31.000Z | 2020-06-05T15:39:31.000Z | 31.146154 | 342 | 0.613485 | [
[
[
"# Notebook Template\n\nThis Notebook is stubbed out with some project paths, loading of enviroment variables, and common package imports to speed up the process of starting a new project.\n\nIt is highly recommended you copy and rename this notebook following the naming convention outlined in the readme of naming notebooks with a double number such as `01_first_thing`, and `02_next_thing`. This way the order of notebooks is apparent, and each notebook does not need to be needlesssly long, complex, and difficult to follow.",
"_____no_output_____"
]
],
[
[
"import importlib\nimport os\nfrom pathlib import Path\nimport sys\n\nfrom arcgis.features import GeoAccessor, GeoSeriesAccessor\nfrom arcgis.gis import GIS\nfrom dotenv import load_dotenv, find_dotenv\nimport pandas as pd\n\n# import arcpy if available\nif importlib.util.find_spec(\"arcpy\") is not None:\n import arcpy",
"_____no_output_____"
],
[
"# load environment variables from .env\nload_dotenv(find_dotenv())\n\n# paths to common data locations - NOTE: to convert any path to a raw string, simply use str(path_instance)\nproject_parent = Path('./').absolute().parent\n\ndata_dir = project_parent/'data'\n\ndata_raw = data_dir/'raw'\ndata_ext = data_dir/'external'\ndata_int = data_dir/'interim'\ndata_out = data_dir/'processed'\n\ngdb_raw = data_raw/'raw.gdb'\ngdb_int = data_int/'interim.gdb'\ngdb_out = data_out/'processed.gdb'\n\n# import the project package from the project package path\nsys.path.append(str(project_parent/'src'))\nimport la_challenge\n\n# load the \"autoreload\" extension so that code can change, & always reload modules so that as you change code in src, it gets loaded\n%load_ext autoreload\n%autoreload 2\n\n# create a GIS object instance; if you did not enter any information here, it defaults to anonymous access to ArcGIS Online\ngis = GIS(\n url=os.getenv('ESRI_GIS_URL'), \n username=os.getenv('ESRI_GIS_USERNAME'),\n password=os.getenv('ESRI_GIS_PASSWORD')\n)\n\ngis",
"_____no_output_____"
]
],
[
[
"Licensing\n\nCopyright 2020 Esri\n\nLicensed under the Apache License, Version 2.0 (the \"License\"); You\nmay not use this file except in compliance with the License. You may\nobtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\nimplied. See the License for the specific language governing\npermissions and limitations under the License.\n\nA copy of the license is available in the repository's\nLICENSE file.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d00cca954e657d58d579fc04b99100c9eab700fb | 149,647 | ipynb | Jupyter Notebook | notebook/DCinversion.ipynb | sgkang/DamGeophysics | c35cac2f33f1c84c99f6234da9af33fcd2eda88f | [
"MIT"
] | 2 | 2021-07-16T04:40:03.000Z | 2022-01-05T08:12:30.000Z | notebook/DCinversion.ipynb | sgkang/DamGeophysics | c35cac2f33f1c84c99f6234da9af33fcd2eda88f | [
"MIT"
] | null | null | null | notebook/DCinversion.ipynb | sgkang/DamGeophysics | c35cac2f33f1c84c99f6234da9af33fcd2eda88f | [
"MIT"
] | 2 | 2016-08-25T05:10:01.000Z | 2019-09-28T08:18:14.000Z | 267.226786 | 30,516 | 0.910356 | [
[
[
"import sys\nsys.path.append(\"/Users/sgkang/Projects/DamGeophysics/codes/\")\nfrom Readfiles import getFnames\nfrom DCdata import readReservoirDC\n%pylab inline",
"/Users/sgkang/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n"
],
[
"from SimPEG.EM.Static import DC\nfrom SimPEG import EM\nfrom SimPEG import Mesh",
"_____no_output_____"
]
],
[
[
"Read DC data",
"_____no_output_____"
]
],
[
[
"fname = \"../data/ChungCheonDC/20150101000000.apr\"\nsurvey = readReservoirDC(fname)\ndobsAppres = survey.dobs\nfig, ax = plt.subplots(1,1, figsize = (10, 2))\ndat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='volt', sameratio=False)\ncb = dat[2]\ncb.set_label(\"Apprent resistivity (ohm-m)\")\ngeom = np.hstack(dat[3])\ndobsDC = dobsAppres * geom ",
"_____no_output_____"
],
[
"# problem = DC.Problem2D_CC(mesh)\ncs = 2.5\nnpad = 6\nhx = [(cs,npad, -1.3),(cs,160),(cs,npad, 1.3)]\nhy = [(cs,npad, -1.3),(cs,20)]\nmesh = Mesh.TensorMesh([hx, hy])\nmesh = Mesh.TensorMesh([hx, hy],x0=[-mesh.hx[:6].sum()-0.25, -mesh.hy.sum()])",
"_____no_output_____"
],
[
"def from3Dto2Dsurvey(survey):\n srcLists2D = []\n nSrc = len(survey.srcList)\n\n for iSrc in range (nSrc):\n src = survey.srcList[iSrc]\n locsM = np.c_[src.rxList[0].locs[0][:,0], np.ones_like(src.rxList[0].locs[0][:,0])*-0.75] \n locsN = np.c_[src.rxList[0].locs[1][:,0], np.ones_like(src.rxList[0].locs[1][:,0])*-0.75] \n rx = DC.Rx.Dipole_ky(locsM, locsN)\n locA = np.r_[src.loc[0][0], -0.75]\n locB = np.r_[src.loc[1][0], -0.75]\n src = DC.Src.Dipole([rx], locA, locB)\n srcLists2D.append(src)\n survey2D = DC.Survey_ky(srcLists2D)\n return survey2D",
"_____no_output_____"
],
[
"from SimPEG import (Mesh, Maps, Utils, DataMisfit, Regularization,\n Optimization, Inversion, InvProblem, Directives)",
"_____no_output_____"
],
[
"mapping = Maps.ExpMap(mesh)\nsurvey2D = from3Dto2Dsurvey(survey)\nproblem = DC.Problem2D_N(mesh, mapping=mapping)\nproblem.pair(survey2D)\nm0 = np.ones(mesh.nC)*np.log(1e-2)",
"_____no_output_____"
],
[
"from ipywidgets import interact\nnSrc = len(survey2D.srcList)\ndef foo(isrc):\n figsize(10, 5)\n mesh.plotImage(np.ones(mesh.nC)*np.nan, gridOpts={\"color\":\"k\", \"alpha\":0.5}, grid=True)\n# isrc=0\n src = survey2D.srcList[isrc]\n plt.plot(src.loc[0][0], src.loc[0][1], 'bo')\n plt.plot(src.loc[1][0], src.loc[1][1], 'ro')\n locsM = src.rxList[0].locs[0]\n locsN = src.rxList[0].locs[1]\n plt.plot(locsM[:,0], locsM[:,1], 'ko')\n plt.plot(locsN[:,0], locsN[:,1], 'go')\n plt.gca().set_aspect('equal', adjustable='box')\n \ninteract(foo, isrc=(0, nSrc-1, 1))",
"_____no_output_____"
],
[
"pred = survey2D.dpred(m0)",
"_____no_output_____"
],
[
"# data_anal = []\n# nSrc = len(survey.srcList)\n# for isrc in range(nSrc):\n# src = survey.srcList[isrc] \n# locA = src.loc[0]\n# locB = src.loc[1]\n# locsM = src.rxList[0].locs[0]\n# locsN = src.rxList[0].locs[1]\n# rxloc=[locsM, locsN]\n# a = EM.Analytics.DCAnalyticHalf(locA, rxloc, 1e-3, earth_type=\"halfspace\")\n# b = EM.Analytics.DCAnalyticHalf(locB, rxloc, 1e-3, earth_type=\"halfspace\")\n# data_anal.append(a-b)\n# data_anal = np.hstack(data_anal)",
"_____no_output_____"
],
[
"survey.dobs = pred\nfig, ax = plt.subplots(1,1, figsize = (10, 2))\ndat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False, scale=\"linear\", clim=(0, 200))",
"_____no_output_____"
],
[
"out = hist(np.log10(abs(dobsDC)), bins = 100)",
"_____no_output_____"
],
[
"weight = 1./abs(mesh.gridCC[:,1])**1.5",
"_____no_output_____"
],
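[
"# The cell weighting above is a depth-weighting heuristic that emphasizes\n# shallow cells; a quick look at its dynamic range (a minimal sketch):\nprint(weight.min())\nprint(weight.max())",
"_____no_output_____"
],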
[
"mesh.plotImage(np.log10(weight))",
"_____no_output_____"
],
[
"survey2D.dobs = dobsDC\nsurvey2D.eps = 10**(-2.3)\nsurvey2D.std = 0.02\ndmisfit = DataMisfit.l2_DataMisfit(survey2D)\nregmap = Maps.IdentityMap(nP=int(mesh.nC))\nreg = Regularization.Simple(mesh,mapping=regmap,cell_weights=weight)\nopt = Optimization.InexactGaussNewton(maxIter=5)\ninvProb = InvProblem.BaseInvProblem(dmisfit, reg, opt)\n# Create an inversion object\nbeta = Directives.BetaSchedule(coolingFactor=5, coolingRate=2)\nbetaest = Directives.BetaEstimate_ByEig(beta0_ratio=1e0)\ninv = Inversion.BaseInversion(invProb, directiveList=[beta, betaest])\nproblem.counter = opt.counter = Utils.Counter()\nopt.LSshorten = 0.5\nopt.remember('xc')\nmopt = inv.run(m0)",
"SimPEG.InvProblem will set Regularization.mref to m0.\nSimPEG.InvProblem is setting bfgsH0 to the inverse of the eval2Deriv.\n ***Done using same Solver and solverOpts as the problem***\n============================ Inexact Gauss Newton ============================\n # beta phi_d phi_m f |proj(x-g)-x| LS Comment \n-----------------------------------------------------------------------------\n 0 5.06e+02 1.27e+04 0.00e+00 1.27e+04 5.19e+03 0 \n 1 5.06e+02 1.33e+03 3.39e+00 3.05e+03 3.69e+02 0 \n 2 1.01e+02 1.03e+03 3.87e+00 1.42e+03 8.45e+02 0 Skip BFGS \n 3 1.01e+02 2.21e+02 7.04e+00 9.34e+02 3.98e+01 0 \n 4 2.02e+01 2.11e+02 7.11e+00 3.55e+02 2.69e+02 0 \n 5 2.02e+01 1.46e+02 8.35e+00 3.15e+02 2.22e+02 0 \n------------------------- STOP! -------------------------\n1 : |fc-fOld| = 3.9939e+01 <= tolF*(1+|f0|) = 1.2708e+03\n1 : |xc-x_last| = 2.6661e+00 <= tolX*(1+|x0|) = 3.0896e+01\n0 : |proj(x-g)-x| = 2.2174e+02 <= tolG = 1.0000e-01\n0 : |proj(x-g)-x| = 2.2174e+02 <= 1e3*eps = 1.0000e-02\n1 : maxIter = 5 <= iter = 5\n------------------------- DONE! -------------------------\n"
],
[
"xc = opt.recall(\"xc\")",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1,1, figsize = (10, 1.5))\nsigma = mapping*mopt\ndat = mesh.plotImage(1./sigma, clim=(10, 150),grid=False, ax=ax, pcolorOpts={\"cmap\":\"jet\"})\nax.set_ylim(-50, 0)\nax.set_xlim(-10, 290)",
"_____no_output_____"
],
[
"print np.log10(sigma).min(), np.log10(sigma).max()",
"-2.16155569809 -1.52329859013\n"
],
[
"survey.dobs = invProb.dpred\nfig, ax = plt.subplots(1,1, figsize = (10, 2))\ndat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False, clim=(40, 170))",
"_____no_output_____"
],
[
"survey.dobs = dobsDC\nfig, ax = plt.subplots(1,1, figsize = (10, 2))\ndat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='appr', sameratio=False, clim=(40, 170))",
"_____no_output_____"
],
[
"survey.dobs = abs(dmisfit.Wd*(dobsDC-invProb.dpred))\nfig, ax = plt.subplots(1,1, figsize = (10, 2))\ndat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax, dtype='volt', sameratio=False, clim=(0, 2))",
"_____no_output_____"
],
[
"# sigma = np.ones(mesh.nC)\nmodelname = \"sigma0101.npy\"\nnp.save(modelname, sigma)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00cd098c0558cd2115c61aa67ab6b5dbe895b33 | 373,805 | ipynb | Jupyter Notebook | notebooks/dataset-visualization.ipynb | yenchenlin/ravens | b7b97bb30fc1926dad6543112d8f4132841014e4 | [
"Apache-2.0"
] | null | null | null | notebooks/dataset-visualization.ipynb | yenchenlin/ravens | b7b97bb30fc1926dad6543112d8f4132841014e4 | [
"Apache-2.0"
] | null | null | null | notebooks/dataset-visualization.ipynb | yenchenlin/ravens | b7b97bb30fc1926dad6543112d8f4132841014e4 | [
"Apache-2.0"
] | null | null | null | 2,560.308219 | 370,756 | 0.958719 | [
[
[
"%matplotlib inline\n\n\nimport matplotlib.pyplot as plt\nimport pickle\nfrom pathlib import Path\n\n\nDATA_DIR = Path('../sweeping-piles-train')",
"_____no_output_____"
],
[
"color_dir = DATA_DIR / 'color'\nwith open(color_dir / '000000-0.pkl', 'rb') as f:\n data = pickle.load(f)\n \naction_dir = DATA_DIR / 'action'\nwith open(color_dir / '000000-0.pkl', 'rb') as f:\n data = pickle.load(f)",
"_____no_output_____"
],
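[
"# Inspect what was loaded (a minimal sketch; assumes the cell above ran)\nprint(type(data), getattr(data, 'shape', None))\nprint(type(actions), getattr(actions, 'shape', None))",
"_____no_output_____"
],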
[
"n_steps = data.shape[0]\nn_cameras = data.shape[1]\n\nfig, axs = plt.subplots(n_steps, n_cameras)\nfig.set_size_inches(30, 20)\n\nfor i in range(n_steps):\n for j in range(n_cameras):\n axs[i, j].imshow(data[i, j])",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"action_dir = DATA_DIR / 'action'\n\nfor i in range(10):\n with open(color_dir / f'{i:06d}-{i*2}.pkl', 'rb') as f:\n data = pickle.load(f)\n print(data.shape)",
"(10, 3, 480, 640, 3)\n(10, 3, 480, 640, 3)\n(13, 3, 480, 640, 3)\n(16, 3, 480, 640, 3)\n(12, 3, 480, 640, 3)\n(12, 3, 480, 640, 3)\n(15, 3, 480, 640, 3)\n(10, 3, 480, 640, 3)\n(14, 3, 480, 640, 3)\n(9, 3, 480, 640, 3)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00ce2821bf103ad2ad0bfeb6fc8a8c128bccef4 | 9,290 | ipynb | Jupyter Notebook | examples/Notebooks/flopy3_multi-component_SSM.ipynb | jdlarsen-UA/flopy | bf2c59aaa689de186bd4c80685532802ac7149cd | [
"CC0-1.0",
"BSD-3-Clause"
] | 351 | 2015-01-03T15:18:48.000Z | 2022-03-31T09:46:43.000Z | examples/Notebooks/flopy3_multi-component_SSM.ipynb | smasky/flopy | 81b17fa93df67f938c2d1b1bea34e8292359208d | [
"CC0-1.0",
"BSD-3-Clause"
] | 1,256 | 2015-01-15T21:10:42.000Z | 2022-03-31T22:43:06.000Z | examples/Notebooks/flopy3_multi-component_SSM.ipynb | smasky/flopy | 81b17fa93df67f938c2d1b1bea34e8292359208d | [
"CC0-1.0",
"BSD-3-Clause"
] | 553 | 2015-01-31T22:46:48.000Z | 2022-03-31T17:43:35.000Z | 26.31728 | 359 | 0.505597 | [
[
[
"# FloPy\n\n## Using FloPy to simplify the use of the MT3DMS ```SSM``` package\n\nA multi-component transport demonstration",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport numpy as np\n\n# run installed version of flopy or add local path\ntry:\n import flopy\nexcept:\n fpth = os.path.abspath(os.path.join('..', '..'))\n sys.path.append(fpth)\n import flopy\n\nprint(sys.version)\nprint('numpy version: {}'.format(np.__version__))\nprint('flopy version: {}'.format(flopy.__version__))",
"3.8.10 (default, May 19 2021, 11:01:55) \n[Clang 10.0.0 ]\nnumpy version: 1.19.2\nflopy version: 3.3.4\n"
]
],
[
[
"First, we will create a simple model structure",
"_____no_output_____"
]
],
[
[
"nlay, nrow, ncol = 10, 10, 10\nperlen = np.zeros((10), dtype=float) + 10\nnper = len(perlen)\n\nibound = np.ones((nlay,nrow,ncol), dtype=int)\n\nbotm = np.arange(-1,-11,-1)\ntop = 0.",
"_____no_output_____"
]
],
[
[
"## Create the ```MODFLOW``` packages",
"_____no_output_____"
]
],
[
[
"model_ws = 'data'\nmodelname = 'ssmex'\nmf = flopy.modflow.Modflow(modelname, model_ws=model_ws)\ndis = flopy.modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol, \n perlen=perlen, nper=nper, botm=botm, top=top, \n steady=False)\nbas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=top)\nlpf = flopy.modflow.ModflowLpf(mf, hk=100, vka=100, ss=0.00001, sy=0.1)\noc = flopy.modflow.ModflowOc(mf)\npcg = flopy.modflow.ModflowPcg(mf)\nrch = flopy.modflow.ModflowRch(mf)",
"_____no_output_____"
]
],
[
[
"We'll track the cell locations for the ```SSM``` data using the ```MODFLOW``` boundary conditions.\n\n\nGet a dictionary (```dict```) that has the ```SSM``` ```itype``` for each of the boundary types.",
"_____no_output_____"
]
],
[
[
"itype = flopy.mt3d.Mt3dSsm.itype_dict()\nprint(itype)\nprint(flopy.mt3d.Mt3dSsm.get_default_dtype())\nssm_data = {}",
"{'CHD': 1, 'BAS6': 1, 'PBC': 1, 'WEL': 2, 'DRN': 3, 'RIV': 4, 'GHB': 5, 'MAS': 15, 'CC': -1}\n[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8')]\n"
]
],
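[
[
"# A minimal sketch of how one SSM record maps onto the dtype printed above:\n# (layer k, row i, column j, css, itype, cssm(01), cssm(02))\nexample_rec = (4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)\nprint(example_rec)",
"_____no_output_____"
]
],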
[
[
"Add a general head boundary (```ghb```). The general head boundary head (```bhead```) is 0.1 for the first 5 stress periods with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then ```bhead``` is increased to 0.25 and comp_1 concentration is reduced to 0.5 and comp_2 concentration is increased to 200.0",
"_____no_output_____"
]
],
[
[
"ghb_data = {}\nprint(flopy.modflow.ModflowGhb.get_default_dtype())\nghb_data[0] = [(4, 4, 4, 0.1, 1.5)]\nssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)]\nghb_data[5] = [(4, 4, 4, 0.25, 1.5)]\nssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]\n\nfor k in range(nlay):\n for i in range(nrow):\n ghb_data[0].append((k, i, 0, 0.0, 100.0))\n ssm_data[0].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))\n \nghb_data[5] = [(4, 4, 4, 0.25, 1.5)]\nssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]\nfor k in range(nlay):\n for i in range(nrow):\n ghb_data[5].append((k, i, 0, -0.5, 100.0))\n ssm_data[5].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))",
"[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('bhead', '<f4'), ('cond', '<f4')]\n"
]
],
[
[
"Add an injection ```well```. The injection rate (```flux```) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the ```SSM``` data in stress period 6, we need to add the well to the ssm_data for stress period 6.",
"_____no_output_____"
]
],
[
[
"wel_data = {}\nprint(flopy.modflow.ModflowWel.get_default_dtype())\nwel_data[0] = [(0, 4, 8, 10.0)]\nssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))\nssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))",
"[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('flux', '<f4')]\n"
]
],
[
[
"Add the ```GHB``` and ```WEL``` packages to the ```mf``` ```MODFLOW``` object instance.",
"_____no_output_____"
]
],
[
[
"ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=ghb_data)\nwel = flopy.modflow.ModflowWel(mf, stress_period_data=wel_data)",
"_____no_output_____"
]
],
[
[
"## Create the ```MT3DMS``` packages",
"_____no_output_____"
]
],
[
[
"mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws)\nbtn = flopy.mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0)\nadv = flopy.mt3d.Mt3dAdv(mt)\nssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)\ngcg = flopy.mt3d.Mt3dGcg(mt)",
"found 'rch' in modflow model, resetting crch to 0.0\nSSM: setting crch for component 2 to zero. kwarg name crch2\n"
]
],
[
[
"Let's verify that ```stress_period_data``` has the right ```dtype```",
"_____no_output_____"
]
],
[
[
"print(ssm.stress_period_data.dtype)",
"[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8'), ('cssm(01)', '<f4'), ('cssm(02)', '<f4')]\n"
]
],
[
[
"## Create the ```SEAWAT``` packages",
"_____no_output_____"
]
],
[
[
"swt = flopy.seawat.Seawat(modflowmodel=mf, mt3dmodel=mt, \n modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws)\nvdf = flopy.seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1)",
"_____no_output_____"
],
[
"mf.write_input()\nmt.write_input()\nswt.write_input()",
"_____no_output_____"
]
],
[
[
"And finally, modify the ```vdf``` package to fix ```indense```.",
"_____no_output_____"
]
],
[
[
"fname = modelname + '.vdf'\nf = open(os.path.join(model_ws, fname),'r')\nlines = f.readlines()\nf.close()\nf = open(os.path.join(model_ws, fname),'w')\nfor line in lines:\n f.write(line)\nfor kper in range(nper):\n f.write(\"-1\\n\")\nf.close() \n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00ce8529bf5973ebb8c4575120b726c359a0ada | 2,606 | ipynb | Jupyter Notebook | gcv/notes/clean.ipynb | fuzzyklein/gcv-lab | 9d2c552b8226350dd6f4d5c38a42d1b90d3c3ca7 | [
"MIT"
] | null | null | null | gcv/notes/clean.ipynb | fuzzyklein/gcv-lab | 9d2c552b8226350dd6f4d5c38a42d1b90d3c3ca7 | [
"MIT"
] | null | null | null | gcv/notes/clean.ipynb | fuzzyklein/gcv-lab | 9d2c552b8226350dd6f4d5c38a42d1b90d3c3ca7 | [
"MIT"
] | null | null | null | 23.061947 | 139 | 0.487721 | [
[
[
"# Clean the Project Directory",
"_____no_output_____"
]
],
[
[
"import glob\nimport os\nfrom pathlib import Path\nimport shutil",
"_____no_output_____"
],
[
"exec(Path('startup.py').read_text())",
"_____no_output_____"
],
[
"DEBUG=False\nVERBOSE=True",
"_____no_output_____"
],
[
"def clean(d='../', pats=['.ipynb*','__pycache__']):\n \"\"\" Clean the working directory or a directory given by d.\n \"\"\"\n if DEBUG: print(\"debugging clean\")\n if VERBOSE: print(\"running `clean` in `VERBOSE` mode\")\n for p in pats:\n F = [Path(f) for f in Path(d).rglob(p)]\n if VERBOSE:\n print(f\"files matching '{p}':\")\n print(F)\n for f in F:\n if VERBOSE: print(f\"removing {f}\")\n if f.is_dir():\n shutil.rmtree(f)\n else:\n f.unlink()",
"_____no_output_____"
],
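[
"# Dry-run sketch (assumption: you want to preview what clean() would delete\n# before running it):\nfor p in ['.ipynb*', '__pycache__']:\n    print(p, '->', [str(f) for f in Path('../').rglob(p)])",
"_____no_output_____"
],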
[
"clean()",
"running `clean` in `VERBOSE` mode\nfiles matching '.ipynb*':\n[WindowsPath('../etc/.ipynb_checkpoints'), WindowsPath('../gcv/.ipynb_checkpoints'), WindowsPath('../notes/.ipynb_checkpoints')]\nremoving ..\\etc\\.ipynb_checkpoints\nremoving ..\\gcv\\.ipynb_checkpoints\nremoving ..\\notes\\.ipynb_checkpoints\nfiles matching '__pycache__':\n[WindowsPath('../gcv/__pycache__')]\nremoving ..\\gcv\\__pycache__\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00d0019ce29391b3d310b347bfa0a77ccb3d23e | 627,175 | ipynb | Jupyter Notebook | location_analysis.ipynb | kgraghav/Location_Finder | 6bcde1b10407160a5778d2f55cca8bdf8c78a3b5 | [
"MIT"
] | null | null | null | location_analysis.ipynb | kgraghav/Location_Finder | 6bcde1b10407160a5778d2f55cca8bdf8c78a3b5 | [
"MIT"
] | null | null | null | location_analysis.ipynb | kgraghav/Location_Finder | 6bcde1b10407160a5778d2f55cca8bdf8c78a3b5 | [
"MIT"
] | null | null | null | 236.759154 | 149,103 | 0.90873 | [
[
[
"# Home2\nYour home away from home <br>\nThe best location for your needs, anywhere in the world <br>\n### Inputs: \n Addresses (eg. 'Pune, Maharashtra')\n Category List (eg. 'Food', 'Restaurant', 'Gym', 'Trails', 'School', 'Train Station')\n Limit of Results to return (eg. 75)\n Radius of search in metres (eg. 10,000)\n Radiums of hotels to search\n### Outputs:\n Cluster of venues coded by criteria\n map of cluster\n Centroid latitude and longitude for these venues\n Address near centroid\n Hotels near centroid",
"_____no_output_____"
],
[
"## User Input",
"_____no_output_____"
]
],
[
[
"# Addresses to analyze venues around and obtain best location\naddresses=['Bend, Oregon'] \n# 4square Venue categories of interest (https://developer.foursquare.com/docs/build-with-foursquare/categories)\ncategories=['4bf58dd8d48988d181941735','4bf58dd8d48988d149941735','4bf58dd8d48988d176941735','53e510b7498ebcb1801b55d4',\n '52e81612bcbc57f1066b7a21','5bae9231bedf3950379f89d0','4bf58dd8d48988d159941735'] \n# Limit of search results \nLIMIT=500\n# Radius of search in metres (maximum 100000)\nradius=20000\n# Radius in metres to search hotels around from final optimum location (centroid). This list is also sorted by likes \nhotel_radius=20000 \n# Remove Outliers? 'Y' or 'N'\nremove_outliers='Y'",
"_____no_output_____"
]
],
[
[
"## Import Libraries",
"_____no_output_____"
]
],
[
[
"import numpy as np # library to handle data in a vectorized manner\n\nimport pandas as pd # library for data analsysis\npd.set_option('display.max_columns', None)\npd.set_option('display.max_rows', None)\n\nimport math\n\nimport json # library to handle JSON files\n\n!conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab\nfrom geopy.geocoders import Nominatim # convert an address into latitude and longitude values\n\nimport requests # library to handle requests\nfrom pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe\n\n# Matplotlib and associated plotting modules\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport matplotlib.colors as colors\n\n# import k-means from clustering stage\nfrom sklearn.cluster import KMeans\n\n#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab\nimport folium # map rendering library\n \n! pip install lxml\n\nprint('Libraries imported.')",
"Collecting package metadata (current_repodata.json): done\nSolving environment: done\n\n# All requested packages already installed.\n\nRequirement already satisfied: lxml in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (4.6.2)\nLibraries imported.\n"
]
],
[
[
"## Obtain Location and Venue information in a Dataframe",
"_____no_output_____"
],
[
"### Create Geolocator using Nominatim and Obtain Location info. for the addresses",
"_____no_output_____"
]
],
[
[
"geoagent=\"explorer\"\n\nlat=[]\nlong=[]\nfor address in addresses:\n geolocator = Nominatim(user_agent=geoagent)\n loc = geolocator.geocode(address)\n lat.append(loc.latitude)\n long.append(loc.longitude)\n print('The geograpical coordinate of '+address +' are {}, {}.'.format(lat[-1], long[-1]))\n\n",
"The geograpical coordinate of Bend, Oregon are 44.0581728, -121.3153096.\n"
],
[
"df_loc=pd.DataFrame({'Name': addresses,'Latitude': lat, 'Longitude':long})\ndf_loc",
"_____no_output_____"
]
],
[
[
"### Foursquare Credentials",
"_____no_output_____"
]
],
[
[
"CLIENT_ID = '' # your Foursquare ID\nCLIENT_SECRET = '' # your Foursquare Secret\nVERSION = '' # Foursquare API version\n\nprint('Your credentails:')\nprint('CLIENT_ID: ' + CLIENT_ID)\nprint('CLIENT_SECRET:' + CLIENT_SECRET)",
"Your credentails:\nCLIENT_ID: \nCLIENT_SECRET:\n"
]
],
[
[
"### Make list of category names by ID",
"_____no_output_____"
]
],
[
[
"categories_url='https://api.foursquare.com/v2/venues/categories?&client_id={}&client_secret={}&v={}'.format(CLIENT_ID,\n CLIENT_SECRET,VERSION)\nresult=requests.get(categories_url).json()\ncat_id_list=[]\ncat_name_list=[]\n\ntry:\n len_major_cat=len(result['response']['categories'])\n for i in range(len_major_cat-1):\n cat_id_list.append(result['response']['categories'][i]['id'])\n cat_name_list.append(result['response']['categories'][i]['name'])\n len_sub_cat=len(result['response']['categories'][i]['categories'])\n for j in range(len_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['name'])\n len_sub_sub_cat=len(result['response']['categories'][i]['categories'][j]['categories'])\n for k in range(len_sub_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['name'])\n len_sub_sub_sub_cat=len(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'])\n for l in range(len_sub_sub_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['name'])\n len_sub_sub_sub_sub_cat=len(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'])\n for m in range(len_sub_sub_sub_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['name'])\n len_sub_sub_sub_sub_sub_cat=len(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m])\n for n in range(len_sub_sub_sub_sub_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['categories'][n]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['categories'][n]['name'])\n len_sub_sub_sub_sub_sub_sub_cat=len(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['categories'][n])\n for o in range(len_sub_sub_sub_sub_sub_sub_cat-1):\n cat_id_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['categories'][n]['categories'][o]['id'])\n cat_name_list.append(result['response']['categories'][i]['categories'][j]['categories'][k]['categories'][l]['categories'][m]['categories'][n]['categories'][o]['name'])\nexcept:\n pass\ncat_dict={}\nfor i in range (len(cat_name_list)):\n cat_dict[cat_id_list[i]] = cat_name_list[i]\n ",
"_____no_output_____"
],
[
"for cat in categories:\n print(cat_dict[cat])",
"Museum\nThai Restaurant\nGym\nNight Market\nNational Park\nState / Provincial Park\nTrail\n"
],
[
"# General Search URL string\nurl_str='https://api.foursquare.com/v2/venues/search?categoryId={}&client_id={}&client_secret={}&ll={},{}&v={}&radius={}&limit={}'\n# Zipcode Search URL string\nurl_str_zip='https://api.foursquare.com/v2/venues/search?&client_id={}&client_secret={}&ll={},{}&v={}&radius={}&limit={}'",
"_____no_output_____"
]
],
[
[
"### Explore nearby venues",
"_____no_output_____"
],
[
"<b> Function to get nearby venues matching the \"categories\" given address information </b>",
"_____no_output_____"
]
],
[
[
"def getNearbyVenues(names, latitudes, longitudes,url_link,categories, radius):\n ''' Create the venue search url and lookup nearby venues and return as dataframe'''\n venues_list=[]\n for i in range(0,len(names)):\n name=names[i]\n lat=latitudes[i]\n lng = longitudes[i]\n for category in categories: \n # create the API request URL\n url = url_link.format(\n category,\n CLIENT_ID, \n CLIENT_SECRET, \n lat, \n lng, \n VERSION,\n radius,\n LIMIT)\n\n \n\n # make the GET request\n results = requests.get(url).json()\n\n # return only relevant information for each nearby venue\n for j in range(0,len(results['response']['venues'])):\n venues_list.append([\n name, \n results['response']['venues'][j]['name'],\n results['response']['venues'][j]['id'],\n results['response']['venues'][j]['location']['lat'], \n results['response']['venues'][j]['location']['lng'], \n results['response']['venues'][j]['categories'][0]['name'],\n category, \n results['response']['venues'][j]['location']['distance'],\n ])\n \n nearby_venues = pd.DataFrame(venues_list)\n nearby_venues.columns = ['Address', \n 'Venue', \n 'Venue_id',\n 'Venue Latitude', \n 'Venue Longitude', \n 'Venue Category',\n 'Category ID',\n 'Distance [m]']\n \n return(nearby_venues)\n\ndef getVenueLikes(venue_ids):\n ''' Obtain the list of number of likes for the venues in \"venue_ids\"'''\n likes_list=[]\n for i in range(0,len(venue_ids)): \n # create the API request URL\n url_link='https://api.foursquare.com/v2/venues/{}?&client_id={}&client_secret={}&v={}'\n url = url_link.format(venue_ids[i],\n CLIENT_ID, \n CLIENT_SECRET, \n VERSION)\n\n\n # make the GET request\n results = requests.get(url).json()\n \n likes_list.append(results['response']['venue']['likes']['count'])\n\n \n return(likes_list)",
"_____no_output_____"
]
],
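[
[
"# A minimal sketch of the request URL the helper assembles for one category\n# (uses the credentials and parameters above; no request is issued here)\nexample_url = url_str.format(categories[0], CLIENT_ID, CLIENT_SECRET,\n                             df_loc['Latitude'][0], df_loc['Longitude'][0],\n                             VERSION, radius, LIMIT)\nprint(example_url[:80] + '...')",
"_____no_output_____"
]
],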
[
[
"Dataframe of Venues for each address, matching the \"categories\"",
"_____no_output_____"
]
],
[
[
"# Implement 'getNearbyVenues'\nloc_venues=getNearbyVenues(df_loc['Name'], df_loc['Latitude'],df_loc['Longitude'],url_str,categories,radius)\n\nloc_venues.head()",
"_____no_output_____"
]
],
[
[
"## Pre-Processing",
"_____no_output_____"
],
[
"Drop NaN values, set 'Venue' as index column since we are dealing with venues.",
"_____no_output_____"
]
],
[
[
"loc_venues.set_index('Venue',inplace=True)\nloc_venues.dropna(inplace=True)\nloc_venues.head()\nloc_venues.shape",
"_____no_output_____"
]
],
[
[
"Print number of categories for each address",
"_____no_output_____"
]
],
[
[
"for i in range(0,len(addresses)):\n print('There are {} uniques categories for '\n .format(len(loc_venues.loc[loc_venues['Address']==addresses[i],'Venue Category'].unique()))+addresses[i])",
"There are 20 uniques categories for Bend, Oregon\n"
]
],
[
[
"## Exploratory Data Analysis",
"_____no_output_____"
],
[
"### Make Folium plot to show venues",
"_____no_output_____"
]
],
[
[
"maps={}\nloc_results_lat=[]\nloc_results_long=[]\nzip_results={}\n\ni=0\nfor address in addresses:\n # create map\n clustered=loc_venues[loc_venues['Address']==address]\n lat_array=clustered['Venue Latitude']\n long_array=clustered['Venue Longitude']\n venue_name=clustered.index\n \n # Calculate mean latitude and longitude\n latitude=lat_array.mean() \n longitude=long_array.mean()\n \n # Update results latitude and longitude arrays\n loc_results_lat.append(latitude)\n loc_results_long.append(longitude)\n \n # Obtain Zipcode\n url_zip=url_str_zip.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude, VERSION,500,1)\n zip_result=requests.get(url_zip).json()\n try:\n zip_results[address]=zip_result['response']['venues'][0]['location']['formattedAddress']\n except:\n zip_results[address]='0'\n \n print('Centroid for '+str(address)+' at: '+str(round(latitude,5))+', '+str(round(longitude,5))\n +', Address:',zip_results[address][0])\n\n map_clusters = folium.Map(location=[latitude, longitude],zoom_start=10)\n\n \n\n # add markers to the map\n markers_colors = []\n for lat, lon, name in zip(lat_array, long_array, \n venue_name ):\n label = folium.Popup(name, parse_html=True)\n folium.CircleMarker(\n [lat, lon],\n radius=5,\n popup=label,\n color='blue',\n fill=True,\n fill_color='blue',\n fill_opacity=0.7,\n ).add_to(map_clusters)\n\n folium.RegularPolygonMarker(location=[latitude, longitude], popup='Centroid',\n fill_color='yellow', radius=10).add_to(map_clusters)\n maps[address]=map_clusters\n i=i+1\n\nlat1=latitude\nlong1=longitude\nmaps[addresses[0]]\n",
"Centroid for Bend, Oregon at: 44.05927, -121.33016, Address: 1010 NW 14th St\n"
]
],
[
[
"### Make Venue Longitude and Latitude box plots",
"_____no_output_____"
]
],
[
[
"fnum=1\n\nunique_cat=len(loc_venues['Category ID'].unique())+1\nbp={} # Box plot object dict.\n\nfor i in range(0,unique_cat):\n plt.figure()\n plt.subplot(1,5,1)\n Y=loc_venues.loc[loc_venues['Category ID']==categories[i],'Venue Latitude']\n bp[categories[i]+'.Latitude']=(plt.boxplot(Y))\n plt.xlabel('Latitude')\n plt.xticks([])\n plt.title(str(cat_dict[categories[i]]))\n\n plt.subplot(1,5,5)\n Y=loc_venues.loc[loc_venues['Category ID']==categories[i],'Venue Longitude']\n bp[categories[i]+'.Longitude']=(plt.boxplot(Y))\n plt.xlabel('Longitude')\n plt.xticks([])\n plt.title(str(cat_dict[categories[i]]))\n\nfnum=fnum+1",
"_____no_output_____"
]
],
[
[
"Remove the outliers from data, by referencing the category ID and latitude/longitude values",
"_____no_output_____"
]
],
[
[
"if remove_outliers=='Y':\n flag=1\n while flag==1:\n flag=0\n for category in categories:\n\n Y=loc_venues.loc[loc_venues['Category ID']==category,'Venue Latitude']\n bp[category+'.Latitude']=plt.boxplot(Y)\n Y=loc_venues.loc[loc_venues['Category ID']==category,'Venue Longitude']\n bp[category+'.Longitude']=plt.boxplot(Y)\n\n outliers_lat=bp[category+'.Latitude']['fliers'][0].get_data()[1]\n outliers_long=bp[category+'.Longitude']['fliers'][0].get_data()[1]\n if len(outliers_lat)>0 or len(outliers_long)>0:\n flag=1\n for outlier_lat in outliers_lat:\n idx1=loc_venues['Category ID']==category\n idx2=loc_venues['Venue Latitude']==outlier_lat\n idx1=idx1[idx1==True].index\n idx2=idx2[idx2==True].index\n loc_venues.drop(idx1.intersection(idx2),axis=0,inplace=True)\n for outlier_long in outliers_long:\n idx1=loc_venues['Category ID']==category\n idx2=loc_venues['Venue Longitude']==outlier_long\n idx1=idx1[idx1==True].index\n idx2=idx2[idx2==True].index\n loc_venues.drop(idx1.intersection(idx2),axis=0,inplace=True)\nloc_venues.shape",
"_____no_output_____"
]
],
[
[
"Re-Plot the box plots to check that there are no outliers remaining",
"_____no_output_____"
]
],
[
[
"fnum=1\n\nunique_cat=len(loc_venues['Category ID'].unique())+1\nbp={} # Box plot object dict.\n\nfor i in range(0,unique_cat):\n plt.figure()\n plt.subplot(1,5,1)\n Y=loc_venues.loc[loc_venues['Category ID']==categories[i],'Venue Latitude']\n bp[categories[i]+'.Latitude']=(plt.boxplot(Y))\n plt.xlabel('Latitude')\n plt.xticks([])\n plt.title(str(cat_dict[categories[i]]))\n\n plt.subplot(1,5,5)\n Y=loc_venues.loc[loc_venues['Category ID']==categories[i],'Venue Longitude']\n bp[categories[i]+'.Longitude']=(plt.boxplot(Y))\n plt.xlabel('Longitude')\n plt.xticks([])\n plt.title(str(cat_dict[categories[i]]))\n\nfnum=fnum+1",
"_____no_output_____"
]
],
[
[
"Re-plot the folium plot",
"_____no_output_____"
]
],
[
[
"maps={}\nloc_results_lat=[]\nloc_results_long=[]\nzip_results={}\n\ni=0\nfor address in addresses:\n # create map\n clustered=loc_venues[loc_venues['Address']==address]\n lat_array=clustered['Venue Latitude']\n long_array=clustered['Venue Longitude']\n venue_name=clustered.index\n \n # Calculate mean latitude and longitude\n latitude=lat_array.mean() \n longitude=long_array.mean()\n \n # Update results latitude and longitude arrays\n loc_results_lat.append(latitude)\n loc_results_long.append(longitude)\n \n # Obtain Zipcode\n url_zip=url_str_zip.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude, VERSION,500,1)\n zip_result=requests.get(url_zip).json()\n try:\n zip_results[address]=zip_result['response']['venues'][0]['location']['formattedAddress']\n except:\n zip_results[address]='0'\n \n print('Centroid for '+str(address)+' at: '+str(round(latitude,5))+', '+str(round(longitude,5))\n +', Address:',zip_results[address][0])\n\n map_clusters = folium.Map(location=[latitude, longitude],zoom_start=10)\n\n \n\n # add markers to the map\n markers_colors = []\n for lat, lon, name in zip(lat_array, long_array, \n venue_name ):\n label = folium.Popup(name, parse_html=True)\n folium.CircleMarker(\n [lat, lon],\n radius=5,\n popup=label,\n color='blue',\n fill=True,\n fill_color='blue',\n fill_opacity=0.7,\n ).add_to(map_clusters)\n\n folium.RegularPolygonMarker(location=[latitude, longitude], popup='Centroid',\n fill_color='yellow', radius=10).add_to(map_clusters)\n maps[address]=map_clusters\n i=i+1\nlat2=latitude\nlong2=longitude\nmaps[addresses[-1]]",
"Centroid for Bend, Oregon at: 44.04606, -121.32722, Address: 345 SW Cyber Dr\n"
]
],
[
[
"### New dataframe for the venues with outliers removed and distance calculated from new centroid",
"_____no_output_____"
]
],
[
[
"loc_venues_new=getNearbyVenues(df_loc['Name'], loc_results_lat,loc_results_long,url_str,categories,radius)\n\nloc_venues_new.head()",
"_____no_output_____"
],
[
"# Find common venues between initial and outlier adjusted venue list\nset_loc_venues=set(loc_venues['Venue_id'].tolist())\nset_loc_venues_new=set(loc_venues_new['Venue_id'].tolist())\nlist_common_venues=list(set_loc_venues.intersection(set_loc_venues_new))",
"_____no_output_____"
],
[
"#Filter loc_venues_new to common venue ids\nloc_venues_new=loc_venues_new.set_index(['Venue_id']).loc[list_common_venues,:].reset_index()\nloc_venues_new.head()",
"_____no_output_____"
],
[
"#Sum of distances between centroid before and after removing outliers\ndist_1=loc_venues['Distance [m]'].mean()/1000\nprint('Mean distance between venues and centroid is {} km.'.format(dist_1))\nif remove_outliers=='Y':\n dist_2=loc_venues.set_index(['Venue_id']).loc[list_common_venues,:]['Distance [m]'].mean()/1000\n dist_3=loc_venues_new['Distance [m]'].mean()/1000\n # Print values\n print('Mean distance between non-outlier venues and centroid before removing outliers is {} km.'.format(dist_1))\n print('Mean distance between non-outlier venues and centroid after removing outliers is {} km.'.format(dist_2))\n print('Difference in mean distance between non-outlier venues and centroid before and after removing outliers is {} km'.\n format(dist_2-dist_3))\n print('Difference in mean distance between venues and centroid before and after removing outliers is {} km'.\n format(dist_1-dist_3))",
"Mean distance between venues and centroid is 4.5278311688311685 km.\nMean distance between non-outlier venues and centroid before removing outliers is 4.5278311688311685 km.\nMean distance between non-outlier venues and centroid after removing outliers is 4.272197368421052 km.\nDifference in mean distance between non-outlier venues and centroid before and after removing outliers is 0.10176315789473644 km\nDifference in mean distance between venues and centroid before and after removing outliers is 0.35739695830485285 km\n"
]
],
[
[
"### Geo-distance shift in coordinates",
"_____no_output_____"
],
[
"Define function to calculate distance based on geo-coordinates",
"_____no_output_____"
]
],
[
[
"def geodistance (coord1,coord2):\n R = 6373.0 ## Radius of Earth in kms.\n latd1 = math.radians(coord1[0]) # Latitude of coord1 calculated in radians\n lon1 = math.radians(coord1[1]) # Longitude of coord1 calculated in radians\n latd2 = math.radians(coord2[0]) # Latitude of coord2 calculated in radians\n lon2 = math.radians(coord2[1]) # Longitude of coord2 calculated in radians\n\n dlon = lon2 - lon1\n dlat = latd2 - latd1\n\n a = math.sin(dlat / 2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2)**2\n c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n distance = R * c * 1000 # Geo-distance in kms.\n return distance",
"_____no_output_____"
]
],
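[
[
"# Quick sanity check of geodistance (a minimal sketch): zero for identical\n# points, and roughly 111 km for a one-degree latitude shift\nprint(geodistance([44.0, -121.3], [44.0, -121.3]))\nprint(round(geodistance([45.0, -121.3], [44.0, -121.3]) / 1000, 1), 'km (~111 expected)')",
"_____no_output_____"
]
],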
[
[
"Distance shift in centroid after omitting the outliers:",
"_____no_output_____"
]
],
[
[
"shift_outlier=geodistance([lat1, long1],[lat2,long2])\nprint('Geo-Distance shift in Centroid before and after removing outliers is: '+str(round(shift_outlier/1000,2))+'kms.' )\nfor i in range(0,len(addresses)):\n shift_original=geodistance([df_loc.loc[i,'Latitude'],df_loc.loc[i,'Longitude']],[lat2,long2])\n print('Geo-Distance shift in Centroid for {} before and after processing is: {}'\n .format(addresses[i],str(round(shift_original/1000,2))+'kms.'))\n ",
"Geo-Distance shift in Centroid before and after removing outliers is: 1.5kms.\nGeo-Distance shift in Centroid for Bend, Oregon before and after processing is: 1.89kms.\n"
]
],
[
[
"## Encoding and Clustering",
"_____no_output_____"
],
[
"### Encoding by user specified category",
"_____no_output_____"
]
],
[
[
"# one hot encoding\nloc_encoded=pd.get_dummies(loc_venues_new[['Category ID']], prefix=\"\", prefix_sep=\"\") # Dataframe of encoding\nloc_venues_encoded=pd.concat([loc_venues_new,loc_encoded],axis=1) # Encoded dataframe with venue details\nloc_venues_encoded.head()",
"_____no_output_____"
]
],
[
[
"### Clustering by category encoding",
"_____no_output_____"
]
],
[
[
"# set number of clusters\nkclusters = len(categories)\n\n# run k-means clustering\nkmeans = KMeans(n_clusters=kclusters, random_state=0).fit(loc_encoded)\n# check cluster labels generated for each row in the dataframe\n(kmeans.labels_)\n# add clustering labels\nloc_venues_clustered=loc_venues_encoded\nloc_venues_clustered['Cluster Labels']= kmeans.labels_\n\n# Display clustered dataframe\nloc_venues_clustered.head(2)",
"/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/sklearn/cluster/k_means_.py:971: ConvergenceWarning: Number of distinct clusters (6) found smaller than n_clusters (7). Possibly due to duplicate points in X.\n return_n_iter=True)\n"
]
],
[
[
"kclusters=5\nloc_df=loc_venues[['Distance [m]','Venue Latitude','Venue Longitude']]\nloc_df.reset_index(inplace=True)\nloc_df['Venue']=loc_df.index\nkmeans_loc = KMeans(n_clusters=kclusters, random_state=0).fit(loc_df[['Venue','Distance [m]']])\nloc_df_kmeans=loc_df.copy()\nloc_df_kmeans['Labels']=kmeans_loc.labels_\nloc_df_kmeans['Venue']=loc_venues.index\nloc_df_kmeans.head(10)",
"_____no_output_____"
],
[
"loc_venues_clustered['Cluster Labels']=loc_df_kmeans['Labels'].tolist()\nloc_venues_clustered.head()",
"_____no_output_____"
]
],
[
[
"## Display Results",
"_____no_output_____"
],
[
"### Generate analyzed map array for each address using folium, and display centroid location for each address",
"_____no_output_____"
]
],
[
[
"maps={}\nloc_results_lat=[]\nloc_results_long=[]\nzip_results={}\n\ni=0\nfor address in addresses:\n # create map\n clustered=loc_venues_clustered[loc_venues_clustered['Address']==address]\n lat_array=clustered['Venue Latitude']\n long_array=clustered['Venue Longitude']\n \n # Calculate mean latitude and longitude\n latitude=lat_array.mean() \n longitude=long_array.mean()\n \n # Update results latitude and longitude arrays\n loc_results_lat.append(latitude)\n loc_results_long.append(longitude)\n \n # Obtain Zipcode\n url_zip=url_str_zip.format(CLIENT_ID, CLIENT_SECRET, latitude, longitude, VERSION,500,1)\n zip_result=requests.get(url_zip).json()\n try:\n zip_results[address]=zip_result['response']['venues'][0]['location']['formattedAddress']\n except:\n zip_results[address]='0'\n \n print('Centroid for '+str(address)+' at: '+str(round(latitude,5))+', '+str(round(longitude,5))\n +', Address:',zip_results[address][0])\n\n map_clusters = folium.Map(location=[latitude, longitude],zoom_start=10)\n\n # set color scheme for the clusters\n x = np.arange(kclusters)\n ys = [j + x + (j*x)**2 for j in range(kclusters)]\n colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))\n rainbow = [colors.rgb2hex(j) for j in colors_array]\n\n # add markers to the map\n markers_colors = []\n for lat, lon, poi, cluster, category in zip(lat_array, long_array, \n clustered.index, clustered['Cluster Labels'], clustered['Category ID']):\n label = folium.Popup(str(poi) + ' Cluster: ' + str(cat_dict[category]), parse_html=True)\n folium.CircleMarker(\n [lat, lon],\n radius=5,\n popup=label,\n color=rainbow[cluster],\n fill=True,\n fill_color=rainbow[cluster],\n fill_opacity=0.7,\n ).add_to(map_clusters)\n\n folium.RegularPolygonMarker(location=[latitude, longitude], popup='Centroid',\n fill_color='yellow', radius=10).add_to(map_clusters)\n maps[address]=map_clusters\n i=i+1",
"Centroid for Bend, Oregon at: 44.04313, -121.32815, Address: 1000 SW Reed Market Rd (at Brookswood Blvd)\n"
]
],
[
[
"### Plot map",
"_____no_output_____"
]
],
[
[
"maps[addresses[-1]]",
"_____no_output_____"
],
[
"hotel_category=['4bf58dd8d48988d1fa931735'] # Category for Hotels\n\n# Enter Hotel Search URL string\nurl_str_hotels='https://api.foursquare.com/v2/venues/search?categoryId={}&client_id={}&client_secret={}&ll={},{}&v={}&radius={}&limit={}'\n\ndf_hotels=df_loc.copy()\ndf_hotels['Latitude']=latitude\ndf_hotels['Longitude']=longitude\n # Radius in metres \nhotels=getNearbyVenues(names=df_hotels['Name'], latitudes=df_hotels['Latitude'],longitudes=df_hotels['Longitude'],\n url_link=url_str_hotels,categories=hotel_category,radius=hotel_radius)\nhotel_ids=hotels['Venue_id'].tolist()\nlikes_list=getVenueLikes(hotel_ids)\n\nhotels['Likes']=likes_list\n\nhotels.drop(columns=['Category ID','Address','Venue_id'],inplace=True)\nhotels.sort_values(by=['Likes','Distance [m]'],ascending=[False,True],inplace=True)\nhotels.reset_index(inplace=True,drop=True)\n\nhotels",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw",
"raw"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d00d0679376381a3998f85e74e99f97d2d9d3d5f | 76,418 | ipynb | Jupyter Notebook | modularity/mod_kvals_lr.ipynb | ehbeam/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 15 | 2020-07-17T07:10:26.000Z | 2022-02-18T05:51:45.000Z | modularity/mod_kvals_lr.ipynb | YifeiCAO/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 2 | 2022-01-14T09:10:12.000Z | 2022-01-28T17:32:42.000Z | modularity/mod_kvals_lr.ipynb | YifeiCAO/neuro-knowledge-engine | 9dc56ade0bbbd8d14f0660774f787c3f46d7e632 | [
"MIT"
] | 4 | 2021-12-22T13:27:32.000Z | 2022-02-18T05:51:47.000Z | 112.379412 | 28,876 | 0.836321 | [
[
[
"# Introduction\n\nIn a prior notebook, documents were partitioned by assigning them to the domain with the highest Dice similarity of their term and structure occurrences. The occurrences of terms and structures in each domain is what we refer to as the domain \"archetype.\" Here, we'll assess whether the observed similarity between documents and the archetype is greater than expected by chance. This would indicate that information in the framework generalizes well to individual documents.\n\n# Load the data",
"_____no_output_____"
]
],
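[
[
"import numpy as np\n\n# A minimal sketch of the Dice similarity used to assign documents to domains\n# (assumption: binary occurrence vectors, like the matrices loaded below)\ndef dice_sim(a, b):\n    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)\n    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))\n\nprint(dice_sim([1, 0, 1, 1], [1, 1, 1, 0]))  # 2*2/(3+3) ~= 0.667",
"_____no_output_____"
]
],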
[
[
"import os\nimport pandas as pd\nimport numpy as np\n\nimport sys\nsys.path.append(\"..\")\nimport utilities\nfrom ontology import ontology\nfrom style import style",
"_____no_output_____"
],
[
"version = 190325 # Document-term matrix version\nclf = \"lr\" # Classifier used to generate the framework\nsuffix = \"_\" + clf # Suffix for term lists\nn_iter = 1000 # Iterations for null distribution\ncircuit_counts = range(2, 51) # Range of k values",
"_____no_output_____"
]
],
[
[
"## Brain activation coordinates",
"_____no_output_____"
]
],
[
[
"act_bin = utilities.load_coordinates()\nprint(\"Document N={}, Structure N={}\".format(\n act_bin.shape[0], act_bin.shape[1]))",
"Document N=18155, Structure N=118\n"
]
],
[
[
"## Document-term matrix",
"_____no_output_____"
]
],
[
[
"dtm_bin = utilities.load_doc_term_matrix(version=version, binarize=True)\nprint(\"Document N={}, Term N={}\".format(\n dtm_bin.shape[0], dtm_bin.shape[1]))",
"Document N=18155, Term N=4107\n"
]
],
[
[
"## Document splits",
"_____no_output_____"
]
],
[
[
"splits = {}\n# splits[\"train\"] = [int(pmid.strip()) for pmid in open(\"../data/splits/train.txt\")]\nsplits[\"validation\"] = [int(pmid.strip()) for pmid in open(\"../data/splits/validation.txt\")]\nsplits[\"test\"] = [int(pmid.strip()) for pmid in open(\"../data/splits/test.txt\")]\nfor split, split_pmids in splits.items():\n print(\"{:12s} N={}\".format(split.title(), len(split_pmids)))",
"Validation N=3631\nTest N=1816\n"
],
[
"pmids = dtm_bin.index.intersection(act_bin.index)",
"_____no_output_____"
]
],
[
[
"## Document assignments and distances\n\nIndexing by min:max will be faster in subsequent computations",
"_____no_output_____"
]
],
[
[
"from collections import OrderedDict\nfrom scipy.spatial.distance import cdist",
"_____no_output_____"
],
[
"def load_doc2dom(k, clf=\"lr\"):\n doc2dom_df = pd.read_csv(\"../partition/data/doc2dom_k{:02d}_{}.csv\".format(k, clf), \n header=None, index_col=0)\n doc2dom = {int(pmid): str(dom.values[0]) for pmid, dom in doc2dom_df.iterrows()}\n return doc2dom\n\ndef load_dom2docs(k, domains, splits, clf=\"lr\"):\n doc2dom = load_doc2dom(k, clf=clf)\n dom2docs = {dom: {split: [] for split, _ in splits.items()} for dom in domains}\n for doc, dom in doc2dom.items():\n for split, split_pmids in splits.items():\n if doc in splits[split]:\n dom2docs[dom][split].append(doc)\n return dom2docs",
"_____no_output_____"
],
[
"sorted_pmids, doc_dists, dom_idx = {}, {}, {}\nfor k in circuit_counts:\n \n print(\"Processing k={:02d}\".format(k))\n sorted_pmids[k], doc_dists[k], dom_idx[k] = {}, {}, {}\n \n for split, split_pmids in splits.items():\n \n lists, circuits = ontology.load_ontology(k, path=\"../ontology/\", suffix=suffix)\n words = sorted(list(set(lists[\"TOKEN\"])))\n structures = sorted(list(set(act_bin.columns)))\n domains = list(OrderedDict.fromkeys(lists[\"DOMAIN\"]))\n\n dtm_words = dtm_bin.loc[pmids, words]\n act_structs = act_bin.loc[pmids, structures]\n docs = dtm_words.copy()\n docs[structures] = act_structs.copy()\n\n doc2dom = load_doc2dom(k, clf=clf)\n dom2docs = load_dom2docs(k, domains, splits, clf=clf)\n\n ids = []\n for dom in domains:\n ids += [pmid for pmid, sys in doc2dom.items() if sys == dom and pmid in split_pmids]\n sorted_pmids[k][split] = ids\n\n doc_dists[k][split] = pd.DataFrame(cdist(docs.loc[ids], docs.loc[ids], metric=\"dice\"),\n index=ids, columns=ids)\n\n dom_idx[k][split] = {}\n for dom in domains:\n dom_idx[k][split][dom] = {}\n dom_pmids = dom2docs[dom][split]\n if len(dom_pmids) > 0:\n dom_idx[k][split][dom][\"min\"] = sorted_pmids[k][split].index(dom_pmids[0])\n dom_idx[k][split][dom][\"max\"] = sorted_pmids[k][split].index(dom_pmids[-1]) + 1\n else:\n dom_idx[k][split][dom][\"min\"] = 0\n dom_idx[k][split][dom][\"max\"] = 0",
"Processing k=02\nProcessing k=03\nProcessing k=04\nProcessing k=05\nProcessing k=06\nProcessing k=07\nProcessing k=08\nProcessing k=09\nProcessing k=10\nProcessing k=11\nProcessing k=12\nProcessing k=13\nProcessing k=14\nProcessing k=15\nProcessing k=16\nProcessing k=17\nProcessing k=18\nProcessing k=19\nProcessing k=20\nProcessing k=21\nProcessing k=22\nProcessing k=23\nProcessing k=24\nProcessing k=25\nProcessing k=26\nProcessing k=27\nProcessing k=28\nProcessing k=29\nProcessing k=30\nProcessing k=31\nProcessing k=32\nProcessing k=33\nProcessing k=34\nProcessing k=35\nProcessing k=36\nProcessing k=37\nProcessing k=38\nProcessing k=39\nProcessing k=40\nProcessing k=41\nProcessing k=42\nProcessing k=43\nProcessing k=44\nProcessing k=45\nProcessing k=46\nProcessing k=47\nProcessing k=48\nProcessing k=49\nProcessing k=50\n"
]
],
[
[
"# Index by PMID and sort by structure",
"_____no_output_____"
]
],
[
[
"structures = sorted(list(set(act_bin.columns)))\nact_structs = act_bin.loc[pmids, structures]",
"_____no_output_____"
]
],
[
[
"# Compute domain modularity\n\n## Observed values\n\n## Distances internal and external to articles in each domain",
"_____no_output_____"
]
],
[
[
"dists_int, dists_ext = {}, {} \nfor k in circuit_counts:\n\n dists_int[k], dists_ext[k] = {}, {}\n lists, circuits = ontology.load_ontology(k, path=\"../ontology/\", suffix=suffix)\n domains = list(OrderedDict.fromkeys(lists[\"DOMAIN\"]))\n\n for split, split_pmids in splits.items():\n dists_int[k][split], dists_ext[k][split] = {}, {}\n \n for dom in domains:\n\n dom_min, dom_max = dom_idx[k][split][dom][\"min\"], dom_idx[k][split][dom][\"max\"]\n dom_dists = doc_dists[k][split].values[:,dom_min:dom_max][dom_min:dom_max,:]\n dists_int[k][split][dom] = dom_dists\n\n other_dists_lower = doc_dists[k][split].values[:,dom_min:dom_max][:dom_min,:]\n other_dists_upper = doc_dists[k][split].values[:,dom_min:dom_max][dom_max:,:]\n other_dists = np.concatenate((other_dists_lower, other_dists_upper))\n dists_ext[k][split][dom] = other_dists",
"_____no_output_____"
]
],
[
[
"## Domain-averaged ratio of external to internal distances",
"_____no_output_____"
]
],
[
[
"means = {split: np.empty((len(circuit_counts),)) for split in splits.keys()}\n\nfor k_i, k in enumerate(circuit_counts):\n \n file_obs = \"data/kvals/mod_obs_k{:02d}_{}_{}.csv\".format(k, clf, split)\n if not os.path.isfile(file_obs):\n \n print(\"Processing k={:02d}\".format(k))\n \n lists, circuits = ontology.load_ontology(k, path=\"../ontology/\", suffix=suffix)\n domains = list(OrderedDict.fromkeys(lists[\"DOMAIN\"]))\n dom2docs = load_dom2docs(k, domains, splits, clf=clf)\n \n pmid_list, split_list, dom_list, obs_list = [], [], [], []\n for split, split_pmids in splits.items():\n for dom in domains:\n\n n_dom_docs = dists_int[k][split][dom].shape[0]\n if n_dom_docs > 0:\n\n mean_dist_int = np.nanmean(dists_int[k][split][dom], axis=0)\n mean_dist_ext = np.nanmean(dists_ext[k][split][dom], axis=0)\n ratio = mean_dist_ext / mean_dist_int\n ratio[ratio == np.inf] = np.nan \n\n pmid_list += dom2docs[dom][split]\n dom_list += [dom] * len(ratio)\n split_list += [split] * len(ratio)\n obs_list += list(ratio)\n\n df_obs = pd.DataFrame({\"PMID\": pmid_list, \"SPLIT\": split_list, \n \"DOMAIN\": dom_list, \"OBSERVED\": obs_list})\n df_obs.to_csv(file_obs, index=None) \n \n else:\n df_obs = pd.read_csv(file_obs)\n \n for split, split_pmids in splits.items():\n dom_means = []\n for dom in set(df_obs[\"DOMAIN\"]):\n dom_vals = df_obs.loc[(df_obs[\"SPLIT\"] == split) & (df_obs[\"DOMAIN\"] == dom), \"OBSERVED\"]\n dom_means.append(np.nanmean(dom_vals))\n means[split][k_i] = np.nanmean(dom_means)",
"Processing k=02\nProcessing k=03\nProcessing k=04\nProcessing k=05\nProcessing k=06\nProcessing k=07\nProcessing k=08\nProcessing k=09\nProcessing k=10\nProcessing k=11\nProcessing k=12\nProcessing k=13\nProcessing k=14\n"
]
],
[
[
"## Null distributions",
"_____no_output_____"
]
],
[
[
"nulls = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()}\n\nfor split, split_pmids in splits.items():\n for k_i, k in enumerate(circuit_counts):\n\n file_null = \"data/kvals/mod_null_k{:02d}_{}_{}iter.csv\".format(k, split, n_iter)\n if not os.path.isfile(file_null):\n\n print(\"Processing k={:02d}\".format(k))\n\n lists, circuits = ontology.load_ontology(k, path=\"../ontology/\", suffix=suffix)\n domains = list(OrderedDict.fromkeys(lists[\"DOMAIN\"]))\n\n n_docs = len(split_pmids)\n df_null = np.empty((len(domains), n_iter))\n for i, dom in enumerate(domains):\n\n n_dom_docs = dists_int[k][split][dom].shape[0]\n if n_dom_docs > 0:\n dist_int_ext = np.concatenate((dists_int[k][split][dom], dists_ext[k][split][dom]))\n for n in range(n_iter):\n\n null = np.random.choice(range(n_docs), size=n_docs, replace=False)\n dist_int_ext_null = dist_int_ext[null,:]\n\n mean_dist_int = np.nanmean(dist_int_ext_null[:n_dom_docs,:], axis=0)\n mean_dist_ext = np.nanmean(dist_int_ext_null[n_dom_docs:,:], axis=0)\n ratio = mean_dist_ext / mean_dist_int\n ratio[ratio == np.inf] = np.nan \n\n df_null[i,n] = np.nanmean(ratio)\n else:\n df_null[i,:] = np.nan\n\n df_null = pd.DataFrame(df_null, index=domains, columns=range(n_iter))\n df_null.to_csv(file_null)\n\n else:\n df_null = pd.read_csv(file_null, index_col=0, header=0)\n\n nulls[split][k_i,:] = np.nanmean(df_null, axis=0)",
"_____no_output_____"
]
],
[
[
"## Bootstrap distributions",
"_____no_output_____"
]
],
[
[
"boots = {split: np.empty((len(circuit_counts),n_iter)) for split in splits.keys()}\n\nfor split, split_pmids in splits.items():\n for k_i, k in enumerate(circuit_counts):\n\n file_boot = \"data/kvals/mod_boot_k{:02d}_{}_{}iter.csv\".format(k, split, n_iter)\n if not os.path.isfile(file_boot):\n\n print(\"Processing k={:02d}\".format(k))\n\n lists, circuits = ontology.load_ontology(k, path=\"../ontology/\", suffix=suffix)\n domains = list(OrderedDict.fromkeys(lists[\"DOMAIN\"]))\n\n df_boot = np.empty((len(domains), n_iter))\n for i, dom in enumerate(domains):\n\n n_dom_docs = dists_int[k][split][dom].shape[0]\n if n_dom_docs > 0:\n for n in range(n_iter):\n\n boot = np.random.choice(range(n_dom_docs), size=n_dom_docs, replace=True)\n\n mean_dist_int = np.nanmean(dists_int[k][split][dom][:,boot], axis=0)\n mean_dist_ext = np.nanmean(dists_ext[k][split][dom][:,boot], axis=0)\n ratio = mean_dist_ext / mean_dist_int\n ratio[ratio == np.inf] = np.nan \n\n df_boot[i,n] = np.nanmean(ratio)\n else:\n df_boot[i,:] = np.nan\n\n df_boot = pd.DataFrame(df_boot, index=domains, columns=range(n_iter))\n df_boot.to_csv(file_boot)\n\n else:\n df_boot = pd.read_csv(file_boot, index_col=0, header=0)\n\n boots[split][k_i,:] = np.nanmean(df_boot, axis=0)",
"_____no_output_____"
]
],
[
[
"# Plot results over k",
"_____no_output_____"
]
],
[
[
"from matplotlib import rcParams\n%matplotlib inline",
"_____no_output_____"
],
[
"rcParams[\"axes.linewidth\"] = 1.5",
"_____no_output_____"
],
[
"for split in splits.keys():\n print(split.upper())\n utilities.plot_stats_by_k(means, nulls, boots, circuit_counts, metric=\"mod\",\n split=split, op_k=6, clf=clf, interval=0.999, \n ylim=[0.8,1.4], yticks=[0.8, 0.9,1,1.1,1.2,1.3,1.4])",
"VALIDATION\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00d0db51c58919de2cc69780212e856a8fffe62 | 2,350 | ipynb | Jupyter Notebook | courses/ml/regularization.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 1 | 2020-10-29T11:26:00.000Z | 2020-10-29T11:26:00.000Z | courses/ml/regularization.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 5 | 2021-03-18T21:33:45.000Z | 2022-03-11T23:34:50.000Z | courses/ml/regularization.ipynb | obs145628/ml-notebooks | 08a64962e106ec569039ab204a7ae4c900783b6b | [
"MIT"
] | 1 | 2019-12-23T21:50:02.000Z | 2019-12-23T21:50:02.000Z | 20.258621 | 124 | 0.54 | [
[
[
"import sys\nsys.path.append('../../pyutils')\n\nimport numpy as np\nimport scipy.linalg\nimport torch\n\nimport metrics\nimport revdiff as rd\nimport utils\n\nnp.random.seed(12)",
"_____no_output_____"
]
],
[
[
"# Regularization",
"_____no_output_____"
],
[
"## Norm Penalties",
"_____no_output_____"
],
[
"$$\\tilde{J}(\\theta) = J(\\theta) + \\alpha \\Omega(\\theta)$$ \n\n$\\Omega(\\theta)$: parameter norm penalty \n$\\alpha$: hyperparameter indicating the importance of the penalty term relative to the loss function",
"_____no_output_____"
],
[
"They help limiting the capicity of the model, and thus reducing overfitting",
"_____no_output_____"
],
[
"### L1 and L2 regularizations\n\nsee notebook linear_regression_regularization",
"_____no_output_____"
],
[
"### Constrained optimisations",
"_____no_output_____"
],
[
"It's also possible to directly put some constraints on the parameters. \nThanks to the generelized Lagrange function, this problem can be turned into an unconstrained one with a penalty term.",
"_____no_output_____"
],
[
"## Dataset Augmentation",
"_____no_output_____"
],
[
"- small transalations / rotations / scaling for images\n- noise injection",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d00d11ee20eae8e4247145bff75e9782a1ec8e0e | 12,664 | ipynb | Jupyter Notebook | Sandbox/Notebooks/DataGathering/Sandbox/PRAW.ipynb | LorenzoNajt/ErdosInstitute-SIG_Project | 7dc82434eb20c6ed673a365a67ea8f3653997f64 | [
"MIT"
] | 2 | 2021-05-06T22:18:38.000Z | 2021-05-07T19:53:17.000Z | Sandbox/Notebooks/DataGathering/Sandbox/PRAW.ipynb | LorenzoNajt/ErdosInstitute-SIG_Project | 7dc82434eb20c6ed673a365a67ea8f3653997f64 | [
"MIT"
] | null | null | null | Sandbox/Notebooks/DataGathering/Sandbox/PRAW.ipynb | LorenzoNajt/ErdosInstitute-SIG_Project | 7dc82434eb20c6ed673a365a67ea8f3653997f64 | [
"MIT"
] | null | null | null | 34.69589 | 158 | 0.549984 | [
[
[
"import pandas as pd \nimport praw \nimport re \nimport datetime as dt\nimport seaborn as sns\nimport requests\nimport json\nimport sys\nimport time\n## acknowledgements\n'''\nhttps://stackoverflow.com/questions/48358837/pulling-reddit-comments-using-python-praw-and-creating-a-dataframe-with-the-resu\nhttps://www.reddit.com/r/redditdev/comments/2e2q2l/praw_downvote_count_always_zero/\nhttps://towardsdatascience.com/an-easy-tutorial-about-sentiment-analysis-with-deep-learning-and-keras-2bf52b9cba91\n\nFor navigating pushshift: https://github.com/Watchful1/Sketchpad/blob/master/postDownloader.py\n\n# traffic = reddit.subreddit(subreddit).traffic() is not available to us, sadly.\n'''\n\nwith open(\"../API.env\") as file:\n exec(file.read())\n\nreddit = praw.Reddit(\n client_id = client_id,\n client_secret = client_secret,\n user_agent = user_agent\n)\n\n\n'''\nSome helper functions for the reddit API.\n'''\n\ndef extract_num_rewards(awardings_data):\n return sum( x[\"count\"] for x in awardings_data)\n\ndef extract_data(submission, comments = False):\n postlist = []\n\n # extracts top level comments\n\n if comments:\n submission.comments.replace_more(limit=0)\n for comment in submission.comments: \n post = vars(comment)\n postlist.append(post)\n\n content = vars(submission)\n \n content[\"total_awards\"] = extract_num_rewards(content[\"all_awardings\"])\n return content",
"_____no_output_____"
],
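[
"# Hypothetical illustration (added; not from the original run): the\n# 'all_awardings' field on a submission is assumed to be a list of dicts,\n# each carrying a 'count' key, which is what extract_num_rewards() sums over.\nsample_awardings = [{'count': 2, 'name': 'Silver'}, {'count': 1, 'name': 'Gold'}]\nextract_num_rewards(sample_awardings)  # expected: 3",
"_____no_output_____"
],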
[
"'''\nSample num_samples random submissions, and get the top num_samples submissions, and put them into dataframes.\n\nOpted instead to scrape the entire thing.\n'''\n\ndef random_sample(num_samples, subreddit):\n sample = []\n for i in range(num_samples):\n submission = reddit.subreddit(subreddit).random() \n sample.append(extract_data(submission))\n return(pd.DataFrame(sample))\n\ndef sample(source):\n submissions = []\n for submission in source:\n submissions.append(extract_data(submission))\n print(f\"Got {len(submissions)} submissions. (This can be less than num_samples.)\")\n return(pd.DataFrame(submissions))\n\ndef top_sample(num_samples, subreddit):\n return sample(reddit.subreddit(subreddit).top(limit=num_samples) )\n\ndef rising_sample(num_samples, subreddit):\n return sample(reddit.subreddit(subreddit).rising(limit=num_samples))\n\ndef controversial_sample(num_samples, subreddit):\n return sample(reddit.subreddit(subreddit).controversial(limit=num_samples) )\n",
"_____no_output_____"
],
[
"\nnum_samples = 10\nsubreddit ='wallstreetbets'\n\n\n\n#random_wsb = random_sample(num_samples, subreddit)\n#top_wsb = top_sample(num_samples,subreddit)\n#rising_wsb = rising_sample(num_samples, subreddit)\n#controversial_wsb = controversial_sample(num_samples, subreddit)\n\n#random_wsb.to_pickle(\"random_wsb.pkl\")\n#top_wsb.to_pickle(\"top_wsb.pkl\")\n#rising_wsb.to_pickle(\"rising_wsb.pkl\")\n#controversial_wsb.to_pickle(\"controversial_wsb.pkl\")\n\n# other commands here: https://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html#praw.models.Subreddit.rising\n# NB: The subreddit stream option seems useful.\n# NB: There is also rising_random\n",
"Got 10 submissions. (This can be less than num_samples.)\n"
],
[
"submission = reddit.subreddit(subreddit).random() ",
"_____no_output_____"
],
[
"submission.approved_at_utc",
"_____no_output_____"
],
[
"vars(submission)",
"_____no_output_____"
],
[
"str(submission.flair)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d15ce7d1d9ea092ed23d592ac4f6e02e10bc3 | 24,784 | ipynb | Jupyter Notebook | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc | 672b8345503d6d23906d196ab575041e95feec73 | [
"MIT"
] | null | null | null | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc | 672b8345503d6d23906d196ab575041e95feec73 | [
"MIT"
] | null | null | null | Hello, scikit-learn World!.ipynb | InterruptSpeed/mnist-svc | 672b8345503d6d23906d196ab575041e95feec73 | [
"MIT"
] | null | null | null | 61.498759 | 6,616 | 0.761338 | [
[
[
"#pip install sklearn numpy scipy matplotlib",
"_____no_output_____"
],
[
"from sklearn import datasets\niris = datasets.load_iris()\ndigits = datasets.load_digits()",
"_____no_output_____"
],
[
"print(digits.data)",
"[[ 0. 0. 5. ... 0. 0. 0.]\n [ 0. 0. 0. ... 10. 0. 0.]\n [ 0. 0. 0. ... 16. 9. 0.]\n ...\n [ 0. 0. 1. ... 6. 0. 0.]\n [ 0. 0. 2. ... 12. 0. 0.]\n [ 0. 0. 10. ... 12. 1. 0.]]\n"
],
[
"digits.target",
"_____no_output_____"
],
[
"digits.images[0]",
"_____no_output_____"
]
],
[
[
"create a support vector classifier and manually set the gamma",
"_____no_output_____"
]
],
[
[
"from sklearn import svm, metrics\nclf = svm.SVC(gamma=0.001, C=100.)\n",
"_____no_output_____"
]
],
[
[
"fit the classifier to the model and use all the images in our dataset except the last one",
"_____no_output_____"
]
],
[
[
"\nclf.fit(digits.data[:-1], digits.target[:-1]) \nsvm.SVC(C=100.0, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',\n max_iter=-1, probability=False, random_state=None, shrinking=True,\n tol=0.001, verbose=False)",
"_____no_output_____"
],
[
"clf.predict(digits.data[-1:])",
"_____no_output_____"
]
],
[
[
"reshape the image data into a 8x8 array prior to rendering it",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nplt.imshow(digits.data[-1:].reshape(8,8), cmap=plt.cm.gray_r)\nplt.show()",
"_____no_output_____"
]
],
[
[
"persist the model using pickle and load it again to ensure it works",
"_____no_output_____"
]
],
[
[
"import pickle\ns = pickle.dumps(clf)\nwith open(b\"digits.model.obj\", \"wb\") as f:\n pickle.dump(clf, f)\nclf2 = pickle.loads(s)\nclf2.predict(digits.data[0:1])",
"_____no_output_____"
],
[
"plt.imshow(digits.data[0:1].reshape(8,8), cmap=plt.cm.gray_r)\nplt.show()",
"_____no_output_____"
]
],
[
[
"alternately use joblib.dump",
"_____no_output_____"
]
],
[
[
"from sklearn.externals import joblib\njoblib.dump(clf, 'digits.model.pkl') ",
"_____no_output_____"
]
],
[
[
"example from http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html#sphx-glr-auto-examples-classification-plot-digits-classification-py",
"_____no_output_____"
]
],
[
[
"images_and_labels = list(zip(digits.images, digits.target))\nfor index, (image, label) in enumerate(images_and_labels[:4]):\n plt.subplot(2, 4, index + 1)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Training: %i' % label)\n \n# To apply a classifier on this data, we need to flatten the image, to\n# turn the data in a (samples, feature) matrix:\nn_samples = len(digits.images)\ndata = digits.images.reshape((n_samples, -1))\n\n# Create a classifier: a support vector classifier\nclassifier = svm.SVC(gamma=0.001)\n\n# We learn the digits on the first half of the digits\nclassifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])\n\n# Now predict the value of the digit on the second half:\nexpected = digits.target[n_samples // 2:]\npredicted = classifier.predict(data[n_samples // 2:])\n\nprint(\"Classification report for classifier %s:\\n%s\\n\"\n % (classifier, metrics.classification_report(expected, predicted)))\nprint(\"Confusion matrix:\\n%s\" % metrics.confusion_matrix(expected, predicted))\n\nimages_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))\nfor index, (image, prediction) in enumerate(images_and_predictions[:4]):\n plt.subplot(2, 4, index + 5)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Prediction: %i' % prediction)\n \nplt.show()",
"Classification report for classifier SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',\n max_iter=-1, probability=False, random_state=None, shrinking=True,\n tol=0.001, verbose=False):\n precision recall f1-score support\n\n 0 1.00 0.99 0.99 88\n 1 0.99 0.97 0.98 91\n 2 0.99 0.99 0.99 86\n 3 0.98 0.87 0.92 91\n 4 0.99 0.96 0.97 92\n 5 0.95 0.97 0.96 91\n 6 0.99 0.99 0.99 91\n 7 0.96 0.99 0.97 89\n 8 0.94 1.00 0.97 88\n 9 0.93 0.98 0.95 92\n\navg / total 0.97 0.97 0.97 899\n\n\nConfusion matrix:\n[[87 0 0 0 1 0 0 0 0 0]\n [ 0 88 1 0 0 0 0 0 1 1]\n [ 0 0 85 1 0 0 0 0 0 0]\n [ 0 0 0 79 0 3 0 4 5 0]\n [ 0 0 0 0 88 0 0 0 0 4]\n [ 0 0 0 0 0 88 1 0 0 2]\n [ 0 1 0 0 0 0 90 0 0 0]\n [ 0 0 0 0 0 1 0 88 0 0]\n [ 0 0 0 0 0 0 0 0 88 0]\n [ 0 0 0 1 0 1 0 0 0 90]]\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00d1a2ccbaa4a874119ba3c14912de355f3aba6 | 58,628 | ipynb | Jupyter Notebook | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization | 67f3d36c1bccc8c87ba8b17041b4619512c4f1b3 | [
"MIT"
] | 1 | 2020-06-28T19:07:59.000Z | 2020-06-28T19:07:59.000Z | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization | 67f3d36c1bccc8c87ba8b17041b4619512c4f1b3 | [
"MIT"
] | null | null | null | 4. Convolutional Neural Networks/Residual Networks v2a.ipynb | MohamedAskar/Deep-Learning-Specialization | 67f3d36c1bccc8c87ba8b17041b4619512c4f1b3 | [
"MIT"
] | null | null | null | 60.069672 | 1,599 | 0.625844 | [
[
[
"# Residual Networks\n\nWelcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.\n\n**In this assignment, you will:**\n- Implement the basic building blocks of ResNets. \n- Put together these building blocks to implement and train a state-of-the-art neural network for image classification. ",
"_____no_output_____"
],
[
"## <font color='darkblue'>Updates</font>\n\n#### If you were working on the notebook before this update...\n* The current notebook is version \"2a\".\n* You can find your original work saved in the notebook with the previous version name (\"v2\") \n* To view the file directory, go to the menu \"File->Open\", and this will open a new tab that shows the file directory.\n\n#### List of updates\n* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.\n* Refers to \"shallower\" layers as those layers closer to the input, and \"deeper\" layers as those closer to the output (Using \"shallower\" layers instead of \"lower\" or \"earlier\").\n* Added/updated instructions.\n",
"_____no_output_____"
],
[
"This assignment will be done in Keras. \n\nBefore jumping into the problem, let's run the cell below to load the required packages.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom keras import layers\nfrom keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D\nfrom keras.models import Model, load_model\nfrom keras.preprocessing import image\nfrom keras.utils import layer_utils\nfrom keras.utils.data_utils import get_file\nfrom keras.applications.imagenet_utils import preprocess_input\nimport pydot\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.utils import plot_model\nfrom resnets_utils import *\nfrom keras.initializers import glorot_uniform\nimport scipy.misc\nfrom matplotlib.pyplot import imshow\n%matplotlib inline\n\nimport keras.backend as K\nK.set_image_data_format('channels_last')\nK.set_learning_phase(1)",
"_____no_output_____"
]
],
[
[
"## 1 - The problem of very deep neural networks\n\nLast week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.\n\n* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output). \n* However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow. \n* More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and \"explode\" to take very large values). \n* During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds: ",
"_____no_output_____"
],
[
"<img src=\"images/vanishing_grad_kiank.png\" style=\"width:450px;height:220px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the shallower layers as the network trains </center></caption>\n\nYou are now going to solve this problem by building a Residual Network!",
"_____no_output_____"
],
[
"## 2 - Building a Residual Network\n\nIn ResNets, a \"shortcut\" or a \"skip connection\" allows the model to skip layers: \n\n<img src=\"images/skip_connection_kiank.png\" style=\"width:650px;height:200px;\">\n<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>\n\nThe image on the left shows the \"main path\" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network. \n\nWe also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. \n \n(There is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more so than skip connections helping with vanishing gradients).\n\nTwo main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them: the \"identity block\" and the \"convolutional block.\"",
"_____no_output_____"
],
[
"### 2.1 - The identity block\n\nThe identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:\n\n<img src=\"images/idblock2_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection \"skips over\" 2 layers. </center></caption>\n\nThe upper path is the \"shortcut path.\" The lower path is the \"main path.\" In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras! \n\nIn this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection \"skips over\" 3 hidden layers rather than 2 layers. It looks like this: \n\n<img src=\"images/idblock3_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection \"skips over\" 3 layers.</center></caption>",
"_____no_output_____"
],
[
"Here are the individual steps.\n\nFirst component of main path: \n- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization. \n- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nSecond component of main path:\n- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is \"same\" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization. \n- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nThird component of main path:\n- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization. \n- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. \n- Note that there is **no** ReLU activation function in this component. \n\nFinal step: \n- The `X_shortcut` and the output from the 3rd layer `X` are added together.\n- **Hint**: The syntax will look something like `Add()([var1,var2])`\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\n**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest. \n- To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/#conv2d)\n- To implement BatchNorm: [BatchNormalization](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the 'channels' axis))\n- For the activation, use: `Activation('relu')(X)`\n- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/#add)",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: identity_block\n\ndef identity_block(X, f, filters, stage, block):\n \"\"\"\n Implementation of the identity block as defined in Figure 4\n \n Arguments:\n X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)\n f -- integer, specifying the shape of the middle CONV's window for the main path\n filters -- python list of integers, defining the number of filters in the CONV layers of the main path\n stage -- integer, used to name the layers, depending on their position in the network\n block -- string/character, used to name the layers, depending on their position in the network\n \n Returns:\n X -- output of the identity block, tensor of shape (n_H, n_W, n_C)\n \"\"\"\n \n # defining name basis\n conv_name_base = 'res' + str(stage) + block + '_branch'\n bn_name_base = 'bn' + str(stage) + block + '_branch'\n \n # Retrieve Filters\n F1, F2, F3 = filters\n \n # Save the input value. You'll need this later to add back to the main path. \n X_shortcut = X\n \n # First component of main path\n X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)\n X = Activation('relu')(X)\n \n ### START CODE HERE ###\n \n # Second component of main path (≈3 lines)\n X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)\n X = Activation('relu')(X)\n \n \n\n # Third component of main path (≈2 lines)\n X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)\n\n # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)\n X = Add()([X, X_shortcut])\n X = Activation('relu')(X)\n \n ### END CODE HERE ###\n \n return X",
"_____no_output_____"
],
[
"tf.reset_default_graph()\n\nwith tf.Session() as test:\n np.random.seed(1)\n A_prev = tf.placeholder(\"float\", [3, 4, 4, 6])\n X = np.random.randn(3, 4, 4, 6)\n A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')\n test.run(tf.global_variables_initializer())\n out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})\n print(\"out = \" + str(out[0][1][1][0]))",
"out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **out**\n </td>\n <td>\n [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"## 2.2 - The convolutional block\n\nThe ResNet \"convolutional block\" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path: \n\n<img src=\"images/convblock_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>\n\n* The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) \n* For example, to reduce the activation dimensions's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. \n* The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step. \n\nThe details of the convolutional block are as follows. \n\nFirst component of main path:\n- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is \"valid\" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.\n- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nSecond component of main path:\n- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is \"same\" and it's name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.\n- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nThird component of main path:\n- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and it's name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.\n- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component. \n\nShortcut path:\n- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is \"valid\" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.\n- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`. \n\nFinal step: \n- The shortcut and the main path values are added together.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n \n**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.\n- [Conv2D](https://keras.io/layers/convolutional/#conv2d)\n- [BatchNormalization](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))\n- For the activation, use: `Activation('relu')(X)`\n- [Add](https://keras.io/layers/merge/#add)",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: convolutional_block\n\ndef convolutional_block(X, f, filters, stage, block, s = 2):\n \"\"\"\n Implementation of the convolutional block as defined in Figure 4\n \n Arguments:\n X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)\n f -- integer, specifying the shape of the middle CONV's window for the main path\n filters -- python list of integers, defining the number of filters in the CONV layers of the main path\n stage -- integer, used to name the layers, depending on their position in the network\n block -- string/character, used to name the layers, depending on their position in the network\n s -- Integer, specifying the stride to be used\n \n Returns:\n X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)\n \"\"\"\n \n # defining name basis\n conv_name_base = 'res' + str(stage) + block + '_branch'\n bn_name_base = 'bn' + str(stage) + block + '_branch'\n \n # Retrieve Filters\n F1, F2, F3 = filters\n \n # Save the input value\n X_shortcut = X\n\n\n ##### MAIN PATH #####\n # First component of main path \n X = Conv2D(F1, (1, 1), strides = (s,s), padding='valid' , name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)\n X = Activation('relu')(X)\n \n ### START CODE HERE ###\n\n X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)\n X = Activation('relu')(X)\n\n # Third component of main path (≈2 lines)\n X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)\n\n ##### SHORTCUT PATH #### (≈2 lines)\n X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)\n X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)\n\n # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)\n X = Add()([X, X_shortcut])\n X = Activation('relu')(X)\n\n\n \n ### END CODE HERE ###\n \n return X",
"_____no_output_____"
],
[
"tf.reset_default_graph()\n\nwith tf.Session() as test:\n np.random.seed(1)\n A_prev = tf.placeholder(\"float\", [3, 4, 4, 6])\n X = np.random.randn(3, 4, 4, 6)\n A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')\n test.run(tf.global_variables_initializer())\n out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})\n print(\"out = \" + str(out[0][1][1][0]))",
"out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **out**\n </td>\n <td>\n [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"## 3 - Building your first ResNet model (50 layers)\n\nYou now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. \"ID BLOCK\" in the diagram stands for \"Identity block,\" and \"ID BLOCK x3\" means you should stack 3 identity blocks together.\n\n<img src=\"images/resnet_kiank.png\" style=\"width:850px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>\n\nThe details of this ResNet-50 model are:\n- Zero-padding pads the input with a pad of (3,3)\n- Stage 1:\n - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is \"conv1\".\n - BatchNorm is applied to the 'channels' axis of the input.\n - MaxPooling uses a (3,3) window and a (2,2) stride.\n- Stage 2:\n - The convolutional block uses three sets of filters of size [64,64,256], \"f\" is 3, \"s\" is 1 and the block is \"a\".\n - The 2 identity blocks use three sets of filters of size [64,64,256], \"f\" is 3 and the blocks are \"b\" and \"c\".\n- Stage 3:\n - The convolutional block uses three sets of filters of size [128,128,512], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 3 identity blocks use three sets of filters of size [128,128,512], \"f\" is 3 and the blocks are \"b\", \"c\" and \"d\".\n- Stage 4:\n - The convolutional block uses three sets of filters of size [256, 256, 1024], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 5 identity blocks use three sets of filters of size [256, 256, 1024], \"f\" is 3 and the blocks are \"b\", \"c\", \"d\", \"e\" and \"f\".\n- Stage 5:\n - The convolutional block uses three sets of filters of size [512, 512, 2048], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 2 identity blocks use three sets of filters of size [512, 512, 2048], \"f\" is 3 and the blocks are \"b\" and \"c\".\n- The 2D Average Pooling uses a window of shape (2,2) and its name is \"avg_pool\".\n- The 'flatten' layer doesn't have any hyperparameters or name.\n- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.\n\n**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above. \n\nYou'll need to use this function: \n- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)\n\nHere are some other functions we used in the code below:\n- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)\n- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))\n- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)\n- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)\n- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)\n- Addition: [See reference](https://keras.io/layers/merge/#add)",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: ResNet50\n\ndef ResNet50(input_shape = (64, 64, 3), classes = 6):\n \"\"\"\n Implementation of the popular ResNet50 the following architecture:\n CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3\n -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER\n\n Arguments:\n input_shape -- shape of the images of the dataset\n classes -- integer, number of classes\n\n Returns:\n model -- a Model() instance in Keras\n \"\"\"\n \n # Define the input as a tensor with shape input_shape\n X_input = Input(input_shape)\n\n \n # Zero-Padding\n X = ZeroPadding2D((3, 3))(X_input)\n \n # Stage 1\n X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)\n X = Activation('relu')(X)\n X = MaxPooling2D((3, 3), strides=(2, 2))(X)\n\n # Stage 2\n X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)\n X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')\n X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')\n\n ### START CODE HERE ###\n\n # Stage 3 (≈4 lines)\n X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')\n\n # Stage 4 (≈6 lines)\n X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')\n\n # Stage 5 (≈3 lines)\n X = X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)\n X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')\n X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')\n\n # AVGPOOL (≈1 line). Use \"X = AveragePooling2D(...)(X)\"\n X = AveragePooling2D(pool_size=(2, 2))(X)\n\n ### END CODE HERE ###\n\n # output layer\n X = Flatten()(X)\n X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)\n \n \n # Create model\n model = Model(inputs = X_input, outputs = X, name='ResNet50')\n\n return model",
"_____no_output_____"
]
],
[
[
"Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.",
"_____no_output_____"
]
],
[
[
"model = ResNet50(input_shape = (64, 64, 3), classes = 6)",
"_____no_output_____"
]
],
[
[
"As seen in the Keras Tutorial Notebook, prior training a model, you need to configure the learning process by compiling the model.",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"The model is now ready to be trained. The only thing you need is a dataset.",
"_____no_output_____"
],
[
"Let's load the SIGNS Dataset.\n\n<img src=\"images/signs_data_kiank.png\" style=\"width:450px;height:250px;\">\n<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>\n",
"_____no_output_____"
]
],
[
[
"X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()\n\n# Normalize image vectors\nX_train = X_train_orig/255.\nX_test = X_test_orig/255.\n\n# Convert training and test labels to one hot matrices\nY_train = convert_to_one_hot(Y_train_orig, 6).T\nY_test = convert_to_one_hot(Y_test_orig, 6).T\n\nprint (\"number of training examples = \" + str(X_train.shape[0]))\nprint (\"number of test examples = \" + str(X_test.shape[0]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))",
"number of training examples = 1080\nnumber of test examples = 120\nX_train shape: (1080, 64, 64, 3)\nY_train shape: (1080, 6)\nX_test shape: (120, 64, 64, 3)\nY_test shape: (120, 6)\n"
]
],
[
[
"Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch. ",
"_____no_output_____"
]
],
[
[
"model.fit(X_train, Y_train, epochs = 2, batch_size = 32)",
"Epoch 1/2\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n ** Epoch 1/2**\n </td>\n <td>\n loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.\n </td>\n </tr>\n <tr>\n <td>\n ** Epoch 2/2**\n </td>\n <td>\n loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"Let's see how this model (trained on only two epochs) performs on the test set.",
"_____no_output_____"
]
],
[
[
"preds = model.evaluate(X_test, Y_test)\nprint (\"Loss = \" + str(preds[0]))\nprint (\"Test Accuracy = \" + str(preds[1]))",
"_____no_output_____"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **Test Accuracy**\n </td>\n <td>\n between 0.16 and 0.25\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performances. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well.",
"_____no_output_____"
],
[
"After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU. \n\nUsing a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.",
"_____no_output_____"
]
],
[
[
"model = load_model('ResNet50.h5') ",
"_____no_output_____"
],
[
"preds = model.evaluate(X_test, Y_test)\nprint (\"Loss = \" + str(preds[0]))\nprint (\"Test Accuracy = \" + str(preds[1]))",
"_____no_output_____"
]
],
[
[
"ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to perform state-of-the-art accuracy.\n\nCongratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system! ",
"_____no_output_____"
],
[
"## 4 - Test on your own image (Optional/Ungraded)",
"_____no_output_____"
],
[
"If you wish, you can also take a picture of your own hand and see the output of the model. To do this:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. Run the code and check if the algorithm is right! ",
"_____no_output_____"
]
],
[
[
"img_path = 'images/my_image.jpg'\nimg = image.load_img(img_path, target_size=(64, 64))\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\nx = x/255.0\nprint('Input image shape:', x.shape)\nmy_image = scipy.misc.imread(img_path)\nimshow(my_image)\nprint(\"class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = \")\nprint(model.predict(x))",
"_____no_output_____"
]
],
[
[
"You can also print a summary of your model by running the following code.",
"_____no_output_____"
]
],
[
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to \"File -> Open...-> model.png\".",
"_____no_output_____"
]
],
[
[
"plot_model(model, to_file='model.png')\nSVG(model_to_dot(model).create(prog='dot', format='svg'))",
"_____no_output_____"
]
],
[
[
"## What you should remember\n- Very deep \"plain\" networks don't work in practice because they are hard to train due to vanishing gradients. \n- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function. \n- There are two main types of blocks: The identity block and the convolutional block. \n- Very deep Residual Networks are built by stacking these blocks together.",
"_____no_output_____"
],
[
"### References \n\nThis notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the GitHub repository of Francois Chollet: \n\n- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)\n- Francois Chollet's GitHub repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d00d1c1a6f47a49f520d04dc961ce04325c51313 | 240,623 | ipynb | Jupyter Notebook | code/sandbox-Blue-O.ipynb | MattPat1981/new_space_race_nlp | ced47926b1a4fe7f83e0f1a460e456f4bf5f6b0e | [
"CC0-1.0"
] | 1 | 2021-06-26T21:28:32.000Z | 2021-06-26T21:28:32.000Z | code/sandbox-Blue-O.ipynb | MattPat1981/new_space_race_nlp | ced47926b1a4fe7f83e0f1a460e456f4bf5f6b0e | [
"CC0-1.0"
] | null | null | null | code/sandbox-Blue-O.ipynb | MattPat1981/new_space_race_nlp | ced47926b1a4fe7f83e0f1a460e456f4bf5f6b0e | [
"CC0-1.0"
] | 1 | 2022-02-11T00:30:58.000Z | 2022-02-11T00:30:58.000Z | 39.485231 | 12,076 | 0.496436 | [
[
[
"# Project 3 Sandbox-Blue-O, NLP using webscraping to create the dataset\n\n## Objective: Determine if posts are in the SpaceX Subreddit or the Blue Origin Subreddit\n\nWe'll utilize the RESTful API from pushshift.io to scrape subreddit posts from r/blueorigin and r/spacex and see if we cannot use the Bag-of-words algorithm to predict which posts are from where.",
"_____no_output_____"
],
[
"Author: Matt Paterson, [email protected]\n\nThis notebook is the SANDBOX and should be used to play around. The formal presentation will be in a different notebook\n",
"_____no_output_____"
]
],
[
[
"import requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n\nimport lebowski as dude\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport re, regex\n",
"_____no_output_____"
],
[
"# Establish a connection to the API and search for a specific keyword. Maybe we'll add this function to the\n# lebowski library? Or maybe make a new and slicker Library called spaceman or something\n# CREDIT: code below adapted from Riley Dallas Lesson on webscraping\n\n# keyword = 'propulsion'\n# url_boeing = 'https://api.pushshift.io/reddit/search/comment/?q=' + keyword + '&subreddit=boeing'\n# res = requests.get(url_boeing)\n# res.status_code\n",
"_____no_output_____"
],
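[
"# lebowski.create_lexicon() lives in an external helper module, so the code\n# below is only a sketch (an assumption, not the actual implementation) of the\n# kind of pushshift paging loop it presumably wraps: pull batches of up to 100\n# posts, then step the 'before' parameter back using the oldest timestamp seen.\nimport time\n\ndef fetch_subreddit_posts(subreddit, n_rows=200, post_type='submission'):\n    url = f'https://api.pushshift.io/reddit/search/{post_type}/'\n    rows, before = [], None\n    while len(rows) < n_rows:\n        params = {'subreddit': subreddit, 'size': 100}\n        if before is not None:\n            params['before'] = before\n        res = requests.get(url, params=params)\n        if res.status_code != 200:\n            break  # bail out on a bad response\n        batch = res.json().get('data', [])\n        if not batch:\n            break  # no more posts to scrape\n        rows.extend(batch)\n        before = batch[-1]['created_utc']  # page backwards in time\n        time.sleep(1)  # be polite to the API\n    return pd.DataFrame(rows)",
"_____no_output_____"
],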
[
"# instantiate a Beautiful Soup object for Boeing\n#boeing = BeautifulSoup(res.content, 'lxml')",
"_____no_output_____"
],
[
"#boeing.find(\"body\")",
"_____no_output_____"
],
[
"spacex = dude.create_lexicon('spacex', 5000)",
"connection nominal\nAdded in 100 new rows\nList contains 100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 5000 rows\nSucceffully created a list of 5000 total rows from r/spacex\nCreating dataframe\nPrinting dataframe shape:\n(5000, 88)\n"
],
[
"blueorigin = dude.create_lexicon('blueorigin', 5000)",
"connection nominal\nAdded in 100 new rows\nList contains 100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 800 rows\nconnection nominal\nAdded in 98 new rows\nList contains 898 rows\nconnection nominal\nAdded in 100 new rows\nList contains 998 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1098 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1198 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1298 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1398 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1498 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1598 rows\nconnection nominal\nAdded in 46 new rows\nList contains 1644 rows\nconnection nominal\nAdded in 0 new rows\nList contains 1644 rows\nAn error occurred, possibly there are not enough posts to scrape, \n but maybe something completely different\nCreating dataframe\nPrinting dataframe shape:\n(1644, 95)\n"
],
[
"spacex.head()",
"_____no_output_____"
],
[
"blueorigin.head()",
"_____no_output_____"
],
[
"spacex[['subreddit', 'selftext', 'title']].head() # predict the subreddit column",
"_____no_output_____"
],
[
"blueorigin[['subreddit', 'selftext', 'title']].head() # predict the subreddit column",
"_____no_output_____"
],
[
"print('Soux City Sarsparilla?') # silly print statement to check progress of long print",
"Soux City Sarsparilla?\n"
],
[
"spacex_comments = dude.create_lexicon('spacex', 5000, post_type='comment')",
"connection nominal\nAdded in 100 new rows\nList contains 100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 3900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 4900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 5000 rows\nSucceffully created a list of 5000 total rows from r/spacex\nCreating dataframe\nPrinting dataframe shape:\n(5000, 37)\n"
],
[
"spacex_comments.head()",
"_____no_output_____"
],
[
"spacex_comments[['subreddit', 'body']].head() # predict the subreddit column",
"_____no_output_____"
],
[
"blueorigin_comments = dude.create_lexicon('blueorigin', 5000, post_type='comment')",
"connection nominal\nAdded in 100 new rows\nList contains 100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1600 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1700 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1800 rows\nconnection nominal\nAdded in 100 new rows\nList contains 1900 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2000 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2100 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2200 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2300 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2400 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2500 rows\nconnection nominal\nAdded in 100 new rows\nList contains 2600 rows\nconnection nominal\nAn error occurred, possibly there are not enough posts to scrape, \n but maybe something completely different\nCreating dataframe\nPrinting dataframe shape:\n(2600, 36)\n"
],
[
"blueorigin_comments[['subreddit', 'body']].head() # predict the subreddit column",
"_____no_output_____"
],
[
"blueorigin_comments.columns",
"_____no_output_____"
]
],
[
[
"There's not a \"title\" column in the comments dataframe, so how is the comment tied to the original post? ",
"_____no_output_____"
]
],
[
[
"# View the first entry in the dataframe and see if you can find that answer\n# permalink?\nblueorigin_comments.iloc[0]",
"_____no_output_____"
]
],
[
[
"IN EDA below, we find: \"We have empty rows in 'body' in many columns. It's likely that all of those are postings, not comments, and we should actually map the postings to the body for those before merging the datafrmes.\"",
"_____no_output_____"
]
],
[
[
"def strip_and_rep(word):\n if len(str(word).strip().replace(\" \", \"\")) < 1:\n return 'replace_me'\n else:\n return word\n",
"_____no_output_____"
],
[
"blueorigin['selftext'] = blueorigin['selftext'].map(strip_and_rep)\nspacex['selftext'] = spacex['selftext'].map(strip_and_rep)",
"_____no_output_____"
],
[
"spacex.selftext.isna().sum()",
"_____no_output_____"
],
[
"blueorigin.selftext.isna().sum()",
"_____no_output_____"
],
[
"blueorigin.selftext.head()",
"_____no_output_____"
],
[
"spacex.iloc[2300:2320]",
"_____no_output_____"
],
[
"blo_coms = blueorigin_comments[['subreddit', 'body', 'permalink']]\n\nblo_posts = blueorigin[['subreddit', 'selftext', 'permalink']].copy()\n\nspx_coms = spacex_comments[['subreddit', 'body', 'permalink']]\n\nspx_posts = spacex[['subreddit', 'selftext', 'permalink']].copy()",
"_____no_output_____"
],
[
"#blueorigin['selftext'][len(blueorigin['selftext'])>0]\ntype(blueorigin.selftext.iloc[1])",
"_____no_output_____"
],
[
"blo_posts.rename(columns={'selftext': 'body'}, inplace=True)",
"_____no_output_____"
],
[
"spx_posts.rename(columns={'selftext': 'body'}, inplace=True)",
"_____no_output_____"
],
[
"# result = pd.concat(frames)\n\nspace_wars_2 = pd.concat([blo_coms, blo_posts, spx_coms, spx_posts])",
"_____no_output_____"
],
[
"space_wars_2.shape",
"_____no_output_____"
],
[
"space_wars_2.head()",
"_____no_output_____"
],
[
"dude.show_details(space_wars_2)",
"Accessing quick look at {dataframe}\n{dataframe}.head(2)\n>>> subreddit body \\\n0 BlueOrigin I don't know why they would want to waste prop... \n1 BlueOrigin Haha what if we stole one of his houses? \n\n permalink \n0 /r/BlueOrigin/comments/hoc1in/is_there_a_blue_... \n1 /r/BlueOrigin/comments/hus5ng/just_posted_a_ne... \n\n\n{dataframe}.isna().sum()\n>>> subreddit 0\nbody 40\npermalink 0\ndtype: int64\n<class 'pandas.core.frame.DataFrame'>\nInt64Index: 14244 entries, 0 to 4999\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 subreddit 14244 non-null object\n 1 body 14204 non-null object\n 2 permalink 14244 non-null object\ndtypes: object(3)\nmemory usage: 445.1+ KB\n\n\n{dataframe}.info()\n>>> None\n\n{dataframe}.shape (14244, 3)\n"
]
],
[
[
"We have empty rows in 'body' in many columns. It's likely that all of those are postings, not comments, and we should actually map the postings to the body for those before merging the datafrmes. \n\nHowever, when trying that above, we ended up with more null values. Mapping 'replace_me' in to empty fileds kept the number of null values low. We'll add that token to our stop_words dictionary when creating the BOW from this corpus.",
"_____no_output_____"
]
],
[
[
"space_wars_2.dropna(inplace=True)",
"_____no_output_____"
],
[
"space_wars_2.isna().sum()",
"_____no_output_____"
],
[
"space_wars.to_csv('./data/betaset.csv', index=False)",
"_____no_output_____"
]
],
[
[
"# Before we split up the training and testing sets, establish our X and y. If you need to reset the dataframe, run the next cell FIRST\nkeyword = RESET",
"_____no_output_____"
]
],
[
[
"space_wars_2 = pd.read_csv('./data/betaset.csv')\nspace_wars_2.columns",
"_____no_output_____"
]
],
[
[
"I believe that the 'permalink' will be almost as indicative as the 'subreddit' that we are trying to predict, so the X will only include the words...",
"_____no_output_____"
]
],
[
[
"space_wars_2.head()",
"_____no_output_____"
],
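[
"# Why 'permalink' would leak the target: the permalink string literally\n# contains the subreddit name, e.g. '/r/BlueOrigin/comments/...'.\n# (Illustrative check; assumes the 'permalink' column survived the reload.)\nspace_wars_2['permalink'].str.contains('/r/spacex', case=False).mean()",
"_____no_output_____"
]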
],
[
[
"## Convert target column to binary before moving forward\n\nWe want to predict whether this post is Spacex, 1, or is not Spacex, 0",
"_____no_output_____"
]
],
[
[
"space_wars_2['subreddit'].value_counts()",
"_____no_output_____"
],
[
"space_wars_2['subreddit'] = space_wars_2['subreddit'].map({'spacex': 1, 'BlueOrigin': 0})",
"_____no_output_____"
],
[
"space_wars_2['subreddit'].value_counts()",
"_____no_output_____"
],
[
"X = space_wars_2.body\ny = space_wars_2.subreddit",
"_____no_output_____"
]
],
[
[
"Calculate our baseline split",
"_____no_output_____"
]
],
[
[
"space_wars_2.subreddit.value_counts(normalize=True)",
"_____no_output_____"
],
[
"base_set = space_wars_2.subreddit.value_counts(normalize=True)\nbaseline = 0.0\nif base_set[0] > base_set[1]:\n baseline = base_set[0]\nelse:\n baseline = base_set[1]\nbaseline",
"_____no_output_____"
]
],
[
[
"Before we sift out stopwords, etc, let's just run a logistic regression on the words, as well as a decision tree:",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score",
"_____no_output_____"
]
],
[
[
"## Before we can fit the models we need to convert the data to numbers...we can use CountVectorizer or TF-IDF for this",
"_____no_output_____"
]
],
[
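[
"# A minimal illustration on a toy corpus (not the project data) of what\n# CountVectorizer produces: one row per document, one column per vocabulary\n# word, holding raw token counts.\nfrom sklearn.feature_extraction.text import CountVectorizer\n\ntoy_docs = ['the rocket landed', 'the rocket exploded', 'new rocket engine']\ntoy_vec = CountVectorizer()\nprint(toy_vec.fit_transform(toy_docs).toarray())\nprint(toy_vec.get_feature_names())",
"_____no_output_____"
],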
[
"# from https://stackoverflow.com/questions/5511708/adding-words-to-nltk-stoplist\n# add certain words to the stop_words library\nimport nltk\nstopwords = nltk.corpus.stopwords.words('english')\nnew_words=('replace_me', 'removed', 'deleted', '0','1', '2', '3', '4', '5', '6', '7', '8','9', '00', '000')\nfor i in new_words:\n stopwords.append(i)\nprint(stopwords)\n\n",
"['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', \"you're\", \"you've\", \"you'll\", \"you'd\", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', \"she's\", 'her', 'hers', 'herself', 'it', \"it's\", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', \"that'll\", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', \"don't\", 'should', \"should've\", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', \"aren't\", 'couldn', \"couldn't\", 'didn', \"didn't\", 'doesn', \"doesn't\", 'hadn', \"hadn't\", 'hasn', \"hasn't\", 'haven', \"haven't\", 'isn', \"isn't\", 'ma', 'mightn', \"mightn't\", 'mustn', \"mustn't\", 'needn', \"needn't\", 'shan', \"shan't\", 'shouldn', \"shouldn't\", 'wasn', \"wasn't\", 'weren', \"weren't\", 'won', \"won't\", 'wouldn', \"wouldn't\", 'replace_me', 'removed', 'deleted', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '00', '000']\n"
],
[
"space_wars_2.isna().sum()\n",
"_____no_output_____"
],
[
"space_wars_2.dropna(inplace=True)",
"_____no_output_____"
],
[
"# This section, next number of cells, borrowed from Noelle's lesson on NLP EDA\n# Instantiate the \"CountVectorizer\" object, which is sklearn's\n# bag of words tool.\ncnt_vec = CountVectorizer(analyzer = \"word\",\n tokenizer = None,\n preprocessor = None,\n stop_words = stopwords,\n max_features = 5000) ",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X,\n y,\n test_size=.20,\n random_state=42,\n stratify=y)",
"_____no_output_____"
]
],
[
[
"Keyword = CHANGELING",
"_____no_output_____"
]
],
[
[
"y_test",
"_____no_output_____"
],
[
"# This section, next number of cells, borrowed from Noelle's lesson on NLP EDA\n\n# fit_transform() does two things: First, it fits the model and \n# learns the vocabulary; second, it transforms our training data\n# into feature vectors. The input to fit_transform should be a \n# list of strings.\n\ntrain_data_features = cnt_vec.fit_transform(X_train, y_train)\n\ntest_data_features = cnt_vec.transform(X_test)",
"_____no_output_____"
],
[
"train_data_features.shape",
"_____no_output_____"
],
[
"train_data_df = pd.DataFrame(train_data_features)",
"_____no_output_____"
],
[
"test_data_features.shape",
"_____no_output_____"
],
[
"test_data_df = pd.DataFrame(test_data_features)",
"_____no_output_____"
],
[
"test_data_df['subreddit']",
"_____no_output_____"
],
[
"lr = LogisticRegression( max_iter = 10_000)\n\nlr.fit(train_data_features, y_train)",
"_____no_output_____"
],
[
"train_data_features.shape\n",
"_____no_output_____"
],
[
"dt = DecisionTreeClassifier()\n\ndt.fit(train_data_features, y_train)",
"_____no_output_____"
],
[
"print('Logistic Regression without doing anything, really:', lr.score(train_data_features, y_train))\nprint('Decision Tree without doing anything, really:', dt.score(train_data_features, y_train))\nprint('*'*80)\n\nprint('Logistic Regression Test Score without doing anything, really:', lr.score(test_data_features, y_test))\nprint('Decision Tree Test Score without doing anything, really:', dt.score(test_data_features, y_test))\nprint('*'*80)\n\nprint(f'The baseline split is {baseline}')",
"Logistic Regression without doing anything, really: 0.8318177373618852\nDecision Tree without doing anything, really: 0.8375867800919136\n********************************************************************************\nLogistic Regression Test Score without doing anything, really: 0.7876417676965194\nDecision Tree Test Score without doing anything, really: 0.7442315213140399\n********************************************************************************\nThe baseline split is 0.5682102628285357\n"
]
],
[
[
"So we see that we are above our baseline of 57% accuracy by only guessing a single subreddit without trying to predict. We also see that our initial runs without any GridSearch or HPO tuning gives us a fairly overfit model for either mode. \n\n**Let's see next what happens when we sift through our data with stopwords, etc, to really clean up the dataset and also let's do some comparative EDA including comparing lengths of posts, etc. Finally we can create a sepatate dataframe with engineered features and try running a Logistic Regression model using only descriptors in the dataset such as post lenth, word length, most common words, etc.**",
"_____no_output_____"
],
[
"## Deep EDA of our words",
"_____no_output_____"
]
],
[
[
"space_wars.shape",
"_____no_output_____"
],
[
"space_wars.describe()",
"_____no_output_____"
]
],
[
[
"## Feature Engineering",
"_____no_output_____"
],
[
"Map word count and character length funcitons on to the 'body' column to see a difference in each.",
"_____no_output_____"
]
],
[
[
"def word_count(string):\n '''\n returns the number of words or tokens in a string literal, splitting on spaces,\n regardless of word lenth. This function will include space-separated\n punctuation as a word, such as \" : \" where the colon would be counted\n string, a string\n '''\n str_list = string.split()\n return len(str_list)\n\ndef count_chars(string):\n '''\n returns the total number of characters including spaces in a string literal\n string, a string\n '''\n count=0\n for s in string:\n count+=1\n return count",
"_____no_output_____"
],
[
"import lebowski as dude\n\nspace_wars['word_count'] = space_wars['body'].map(word_count)\nspace_wars['word_count'].value_counts().head()",
"_____no_output_____"
],
[
"# code from https://stackoverflow.com/questions/39132742/groupby-value-counts-on-the-dataframe-pandas\n#df.groupby(['id', 'group', 'term']).size().unstack(fill_value=0)\n\nspace_wars.groupby(['subreddit', 'word_count']).size().head()\n",
"_____no_output_____"
],
[
"space_wars['post_length'] = space_wars['body'].map(count_chars)\nspace_wars['post_length'].value_counts().head()",
"_____no_output_____"
],
[
"space_wars.columns",
"_____no_output_____"
],
[
"import seaborn as sns\nimport matplotlib.pyplot as plt\n\n",
"_____no_output_____"
],
[
"sns.distplot(space_wars['word_count'])\n",
"_____no_output_____"
],
[
"# Borrowing from Noelle's nlp II lesson, import the following, \n# and think about what you want to use in the presentation\n\n# imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import confusion_matrix, plot_confusion_matrix\n\n# Import CountVectorizer and TFIDFVectorizer from feature_extraction.text.\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer",
"_____no_output_____"
]
],
[
[
"## Text Feature Extraction",
"_____no_output_____"
],
[
"## Follow along in the NLP EDA II video and do some analysis",
"_____no_output_____"
]
],
[
[
"X_train_df = pd.DataFrame(train_data_features.toarray(),\n columns=cntv.get_feature_names())\nX_train_df",
"_____no_output_____"
],
[
"X_train_df['subreddit']",
"_____no_output_____"
],
[
"# get count of top-occurring words\n\n# empty dictionary\ntop_words = {}\n\n# loop through columns\nfor i in X_train_df.columns:\n # save sum of each column in dictionary\n top_words[i] = X_train_df[i].sum()\n \n# top_words to dataframe sorted by highest occurance\nmost_freq = pd.DataFrame(sorted(top_words.items(), key = lambda x: x[1], reverse = True))",
"_____no_output_____"
],
[
"most_freq.head()",
"_____no_output_____"
],
[
"# Make a different CountVectorizer\n\ncount_v = CountVectorizer(analyzer='word',\n stop_words = stopwords,\n max_features = 1_000,\n min_df = 50,\n max_df = .80,\n ngram_range=(2,3),\n )",
"_____no_output_____"
],
[
"# Redefine the training and testing sets\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size = .1,\n stratify = y,\n random_state=42)",
"_____no_output_____"
],
[
"baseline",
"_____no_output_____"
]
],
[
[
"## Implement Naive Bayes because it's in the project instructions\nMultinomial Naive Bayes often outperforms other models despite text data being non-independent data",
"_____no_output_____"
]
],
[
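[
"# Standalone sketch of what the pipeline below chains together: vectorize,\n# then fit MultinomialNB, which models token counts per class with a\n# multinomial distribution. Illustrative only; the grid search below is the\n# real experiment.\nnb_vec = CountVectorizer(stop_words=stopwords, max_features=5000)\nnb_demo = MultinomialNB().fit(nb_vec.fit_transform(X_train), y_train)\nnb_demo.score(nb_vec.transform(X_test), y_test)",
"_____no_output_____"
],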
[
"pipe = Pipeline([\n ('count_v', CountVectorizer()),\n ('nb', MultinomialNB())\n])",
"_____no_output_____"
],
[
"pipe_params = {\n 'count_v__max_features': [2000, 5000, 9000],\n 'count_v__stop_words': [stopwords],\n 'count_v__min_df': [2, 3, 10],\n 'count_v__max_df': [.9, .8, .7],\n 'count_v__ngram_range': [(1, 1), (1, 2)]\n}",
"_____no_output_____"
],
[
"gs = GridSearchCV(pipe,\n pipe_params,\n cv = 5,\n n_jobs=6\n)",
"_____no_output_____"
],
[
"%%time\ngs.fit(X_train, y_train)",
"Wall time: 51 s\n"
],
[
"gs.best_params_",
"_____no_output_____"
],
[
"print(gs.best_score_)",
"0.7892220773576706\n"
],
[
"gs.score(X_train, y_train)",
"_____no_output_____"
],
[
"gs.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"So far, the Multinomial Naive Bayes Algorithm is the top function at 79.28% Accuracy. The confusion matrix below is very simiar to that of other models",
"_____no_output_____"
]
],
[
[
"# Get predictions\npreds = gs.predict(X_test)\n\n# Save confusion matrix values\ntn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()\n\n# View confusion matrix\n\nplot_confusion_matrix(gs, X_test, y_test, cmap='Blues', values_format='d');",
"_____no_output_____"
],
[
"# Calculate the specificity\n\nspec = tn / (tn + fp)\n\nprint('Specificity:', spec)",
"Specificity: 0.5670289855072463\n"
]
],
[
[
"None of the 1620 different models we tried in this pipeline performed noticibly better than the thrown-together Logistic Regression Classifier that we started out with. Let's try TF-IDF, then Random Cut Forest, and finally Vector Machines. Our last run brought the best accuracy score to 79.3%",
"_____no_output_____"
],
[
"# TF-IDF",
"_____no_output_____"
],
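[
"A quick note on what `TfidfVectorizer` computes (sklearn's default, smoothed formulation): $\\mathrm{idf}(t) = \\ln\\frac{1 + n}{1 + \\mathrm{df}(t)} + 1$, where $n$ is the number of documents and $\\mathrm{df}(t)$ is the number of documents containing term $t$; then $\\text{tf-idf}(t, d) = \\mathrm{tf}(t, d) \\cdot \\mathrm{idf}(t)$, with each document vector L2-normalized afterwards. The effect is that words appearing in nearly every post are down-weighted relative to subreddit-specific vocabulary.",
"_____no_output_____"
]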
],
[
[
"# Redefine the training and testing sets\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, \n test_size = .1,\n stratify = y,\n random_state=42)",
"_____no_output_____"
],
[
"tvec = TfidfVectorizer(stop_words=stopwords)",
"_____no_output_____"
],
[
"df = pd.DataFrame(tvec.fit_transform(X_train).toarray(),\n columns=tvec.get_feature_names())\ndf.head()",
"_____no_output_____"
],
[
"# get count of top-occurring words\ntop_words_tf = {}\nfor i in df.columns:\n top_words_tf[i] = df[i].sum()\n \n# top_words to dataframe sorted by highest occurance\nmost_freq_tf = pd.DataFrame(sorted(top_words_tf.items(), key = lambda x: x[1], reverse = True))\n\nplt.figure(figsize = (10, 5))\n\n# visualize top 10 words\nplt.bar(most_freq_tf[0][:10], most_freq_tf[1][:10]);",
"_____no_output_____"
],
[
"pipe_tvec = Pipeline([\n ('tvec', TfidfVectorizer()),\n ('nb', MultinomialNB())\n])",
"_____no_output_____"
],
[
"pipe_params_tvec = {\n 'tvec__max_features': [2000, 9000],\n 'tvec__stop_words' : [None, stopwords],\n 'tvec__ngram_range': [(1, 1), (1, 2)]\n}",
"_____no_output_____"
],
[
"gs_tvec = GridSearchCV(pipe_tvec, pipe_params_tvec, cv = 5)",
"_____no_output_____"
],
[
"%%time\ngs_tvec.fit(X_train, y_train)",
"Wall time: 39.5 s\n"
],
[
"gs_tvec.best_params_",
"_____no_output_____"
],
[
"gs_tvec.score(X_train, y_train)",
"_____no_output_____"
],
[
"gs_tvec.score(X_test, y_test)",
"_____no_output_____"
],
[
"# Get predictions\npreds = gs_tvec.predict(X_test)\n\n# Save confusion matrix values\ntn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()\n\n# View confusion matrix\n\nplot_confusion_matrix(gs_tvec, X_test, y_test, cmap='Blues', values_format='d');",
"_____no_output_____"
],
[
"# Calculate the specificity\n\nspec = tn / (tn + fp)\n\nprint('Specificity:', spec)",
"Specificity: 0.5489130434782609\n"
]
],
[
[
"## Random Cut Forest, Bagging, and Support Vector Machines ",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier",
"_____no_output_____"
]
],
[
[
"Before we run the decision tree model or RandomForestClassifier(), we need to convert all of the data to numeric data",
"_____no_output_____"
]
],
[
[
"rf = RandomForestClassifier()",
"_____no_output_____"
],
[
"et = ExtraTreesClassifier()",
"_____no_output_____"
],
[
"cross_val_score(rf, train_data_features, X_train_df['subreddit']).mean()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:667: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5.\n % (min_groups, self.n_splits)), UserWarning)\n"
],
[
"cross_val_score(et, train_data_features, X_train_df['subreddit']).mean()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_split.py:667: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5.\n % (min_groups, self.n_splits)), UserWarning)\n"
],
[
"#cross_val_score(rf, test_data_features, y_test).mean()",
"_____no_output_____"
]
],
[
[
"## Make sure that we are using X and y data that are completely numeric and free of nulls",
"_____no_output_____"
]
],
[
[
"space_wars.head(1)",
"_____no_output_____"
],
[
"space_wars.shape",
"_____no_output_____"
],
[
"pipe_rf = Pipeline([\n ('count_v', CountVectorizer()),\n ('rf', RandomForestClassifier()),\n])\n\npipe_ef = Pipeline([\n ('count_v', CountVectorizer()),\n ('ef', ExtraTreesClassifier()),\n])",
"_____no_output_____"
],
[
"pipe_params =\n 'count_v__max_features': [2000, 5000, 9000],\n 'count_v__stop_words': [stopwords],\n 'count_v__min_df': [2, 3, 10],\n 'count_v__max_df': [.9, .8, .7],\n 'count_v__ngram_range': [(1, 1), (1, 2)]\n}",
"_____no_output_____"
],
[
"%%time\ngs_rf = GridSearchCV(pipe_rf,\n pipe_params,\n cv = 5,\n n_jobs=6)\ngs_rf.fit(X_train, y_train)\nprint(gs_rf.best_score_)\ngs_rf.best_params_",
"0.7620165145588874\nWall time: 5min 27s\n"
],
[
"gs_rf.score(X_train, y_train)",
"_____no_output_____"
],
[
"gs_rf.score(X_test, y_test)",
"_____no_output_____"
],
[
"# %%time\n# gs_ef = GridSearchCV(pipe_ef,\n# pipe_params,\n# cv = 5,\n# n_jobs=6)\n# gs_ef.fit(X_train, y_train)\n# print(gs_ef.best_score_)\n# gs_ef.best_params_",
"_____no_output_____"
],
[
"#gs_ef.score(X_train, y_train)",
"_____no_output_____"
],
[
"#gs_ef.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"## Now run through Gradient Boosting and SVM",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, VotingClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.neighbors import KNeighborsClassifier\n",
"_____no_output_____"
]
],
[
[
"Using samples from Riley's Lessons:\n",
"_____no_output_____"
]
],
[
[
"AdaBoostClassifier()",
"_____no_output_____"
],
[
"GradientBoostingClassifier()",
"_____no_output_____"
],
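[
"# The section header above mentions SVM, but none is fit in this notebook.\n# A minimal sketch (assumes the same X_train/y_train text data; LinearSVC is\n# the usual choice for high-dimensional sparse text):\nfrom sklearn.svm import LinearSVC\n\nsvm_pipe = Pipeline([\n    ('count_v', CountVectorizer(stop_words=stopwords)),\n    ('svc', LinearSVC())\n])\n# cross_val_score(svm_pipe, X_train, y_train, cv=5).mean()",
"_____no_output_____"
]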
],
[
[
"Use the CountVectorizer to convert the data to numeric data prior to running it through the below VotingClassifier",
"_____no_output_____"
]
],
[
[
"'count_v__max_df': 0.9,\n 'count_v__max_features': 9000,\n 'count_v__min_df': 2,\n 'count_v__ngram_range': (1, 1),",
"_____no_output_____"
],
[
"knn_pipe = Pipeline([\n ('ss', StandardScaler()),\n ('knn', KNeighborsClassifier())\n])",
"_____no_output_____"
],
[
"%%time\n\nvote = VotingClassifier([\n ('ada', AdaBoostClassifier(base_estimator=DecisionTreeClassifier())),\n ('grad_boost', GradientBoostingClassifier()),\n ('tree', DecisionTreeClassifier()),\n ('knn_pipe', knn_pipe)\n])\nparams = {}\n# 'ada__n_estimators': [50, 51],\n# 'grad_boost__n_estimators': [10, 11],\n# 'knn_pipe__knn__n_neighbors': [5],\n# 'ada__base_estimator__max_depth': [1, 2],\n# 'weights': [[.25] * 4, [.3, .3, .3, .1]]\n# }\ngs = GridSearchCV(vote, param_grid=params, cv=3)\ngs.fit(X_train, y_train)\nprint(gs.best_score_)\ngs.best_params_",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00d1fd31d99a85620063e299bc079d92cc907c1 | 31,222 | ipynb | Jupyter Notebook | classify_papers.ipynb | jouterleys/BiomchBERT | cb17b62224fdbc18ee2cc15b46fc27596d6f1fcc | [
"Apache-2.0"
] | null | null | null | classify_papers.ipynb | jouterleys/BiomchBERT | cb17b62224fdbc18ee2cc15b46fc27596d6f1fcc | [
"Apache-2.0"
] | null | null | null | classify_papers.ipynb | jouterleys/BiomchBERT | cb17b62224fdbc18ee2cc15b46fc27596d6f1fcc | [
"Apache-2.0"
] | null | null | null | 31,222 | 31,222 | 0.653514 | [
[
[
"Uses Fine-Tuned BERT network to classify biomechanics papers from PubMed",
"_____no_output_____"
]
],
[
[
"# Check date\n!rm /etc/localtime\n!ln -s /usr/share/zoneinfo/America/Los_Angeles /etc/localtime\n!date\n# might need to restart runtime if timezone didn't change",
"Thu Mar 24 06:59:32 PDT 2022\n"
],
[
"## Install & load libraries\n\n!pip install tensorflow==2.7.0\n\ntry:\n from official.nlp import optimization\nexcept:\n !pip install -q -U tf-models-official==2.4.0\n from official.nlp import optimization\ntry:\n from Bio import Entrez\nexcept:\n !pip install -q -U biopython\n from Bio import Entrez\ntry:\n import tensorflow_text as text\nexcept:\n !pip install -q -U tensorflow_text==2.7.3\n import tensorflow_text as text\n \nimport pandas as pd\nimport numpy as np\nimport nltk\nnltk.download('stopwords')\nfrom nltk.corpus import stopwords\nimport tensorflow as tf # probably have to lock version\nimport string\nimport datetime\nfrom bs4 import BeautifulSoup\nfrom sklearn.preprocessing import LabelEncoder\nfrom tensorflow.keras.models import load_model\nimport tensorflow_hub as hub\nfrom google.colab import drive\nimport datetime as dt\n\n#Define date range\ntoday = dt.date.today()\nyesterday = today - dt.timedelta(days=1)\nweek_ago = yesterday - dt.timedelta(days=7) # ensure overlap in pubmed search\ndays_ago_6 = yesterday - dt.timedelta(days=6) # for text output\n\n# Mount Google Drive for model and csv up/download\ndrive.mount('/content/gdrive')\nprint(today)",
"Collecting tensorflow==2.7.0\n Downloading tensorflow-2.7.0-cp37-cp37m-manylinux2010_x86_64.whl (489.6 MB)\n\u001b[K |████████████████████████████████| 489.6 MB 24 kB/s \n\u001b[?25hRequirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.44.0)\nRequirement already satisfied: keras-preprocessing>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.1.2)\nRequirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.1.0)\nRequirement already satisfied: flatbuffers<3.0,>=1.12 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (2.0)\nRequirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.15.0)\nRequirement already satisfied: tensorflow-io-gcs-filesystem>=0.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.24.0)\nRequirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.21.5)\nRequirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.6.3)\nCollecting gast<0.5.0,>=0.2.1\n Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)\nRequirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.14.0)\nCollecting keras<2.8,>=2.7.0rc0\n Downloading keras-2.7.0-py2.py3-none-any.whl (1.3 MB)\n\u001b[K |████████████████████████████████| 1.3 MB 43.1 MB/s \n\u001b[?25hRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.17.3)\nRequirement already satisfied: wheel<1.0,>=0.32.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.37.1)\nRequirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (0.2.0)\nRequirement already satisfied: tensorboard~=2.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (2.8.0)\nRequirement already satisfied: libclang>=9.0.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (13.0.0)\nRequirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.10.0.2)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.1.0)\nRequirement already satisfied: absl-py>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (1.0.0)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.7.0) (3.3.0)\nCollecting tensorflow-estimator<2.8,~=2.7.0rc0\n Downloading tensorflow_estimator-2.7.0-py2.py3-none-any.whl (463 kB)\n\u001b[K |████████████████████████████████| 463 kB 48.7 MB/s \n\u001b[?25hRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py>=2.9.0->tensorflow==2.7.0) (1.5.2)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (57.4.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.0.1)\nRequirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.35.0)\nRequirement already satisfied: markdown>=2.6.8 in 
/usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (3.3.6)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (0.6.1)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (2.23.0)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (1.8.1)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.6->tensorflow==2.7.0) (0.4.6)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (4.2.4)\nRequirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (4.8)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow==2.7.0) (1.3.1)\nRequirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.7.0) (4.11.3)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.6->tensorflow==2.7.0) (3.7.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.6->tensorflow==2.7.0) (0.4.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (2021.10.8)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.6->tensorflow==2.7.0) (1.24.3)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.6->tensorflow==2.7.0) (3.2.0)\nInstalling collected packages: tensorflow-estimator, keras, gast, tensorflow\n Attempting uninstall: tensorflow-estimator\n Found existing installation: tensorflow-estimator 2.8.0\n Uninstalling tensorflow-estimator-2.8.0:\n Successfully uninstalled tensorflow-estimator-2.8.0\n Attempting uninstall: keras\n Found existing installation: keras 2.8.0\n Uninstalling keras-2.8.0:\n Successfully uninstalled keras-2.8.0\n Attempting uninstall: gast\n Found existing installation: gast 0.5.3\n Uninstalling gast-0.5.3:\n Successfully uninstalled gast-0.5.3\n Attempting uninstall: tensorflow\n Found existing installation: tensorflow 2.8.0\n Uninstalling tensorflow-2.8.0:\n Successfully uninstalled tensorflow-2.8.0\nSuccessfully installed 
gast-0.4.0 keras-2.7.0 tensorflow-2.7.0 tensorflow-estimator-2.7.0\n\u001b[K |████████████████████████████████| 1.1 MB 5.3 MB/s \n\u001b[K |████████████████████████████████| 99 kB 7.8 MB/s \n\u001b[K |████████████████████████████████| 596 kB 41.1 MB/s \n\u001b[K |████████████████████████████████| 352 kB 49.1 MB/s \n\u001b[K |████████████████████████████████| 1.1 MB 37.0 MB/s \n\u001b[K |████████████████████████████████| 47.8 MB 57 kB/s \n\u001b[K |████████████████████████████████| 1.2 MB 43.6 MB/s \n\u001b[K |████████████████████████████████| 43 kB 1.8 MB/s \n\u001b[K |████████████████████████████████| 237 kB 45.6 MB/s \n\u001b[?25h Building wheel for py-cpuinfo (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for seqeval (setup.py) ... \u001b[?25l\u001b[?25hdone\n\u001b[K |████████████████████████████████| 2.3 MB 4.9 MB/s \n\u001b[K |████████████████████████████████| 4.9 MB 5.5 MB/s \n\u001b[?25h[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\nMounted at /content/gdrive\n2022-03-24\n"
],
[
"# Define Search Criteria ----\ndef search(query):\n Entrez.email = '[email protected]'\n handle = Entrez.esearch(db='pubmed',\n sort='most recent',\n retmax='5000',\n retmode='xml',\n datetype='pdat', # pdat is published date, edat is entrez date. \n # reldate=7, # only within n days from now\n mindate= min_date,\n maxdate= max_date, # for searching date range\n term=query)\n results = Entrez.read(handle)\n return results\n\n\n# Perform Search and Pull Paper Titles ----\ndef fetch_details(ids):\n Entrez.email = '[email protected]'\n handle = Entrez.efetch(db='pubmed',\n retmode='xml',\n id=ids)\n results = Entrez.read(handle)\n return results\n\n\n# Make the stop words for string cleaning ----\ndef html_strip(text):\n text = BeautifulSoup(text, 'lxml').text\n text = text.replace('[','').replace(']','')\n return text\n\ndef clean_str(text, stops):\n text = BeautifulSoup(text, 'lxml').text\n text = text.split()\n return ' '.join([word for word in text if word not in stops])\n\nstop = list(stopwords.words('english'))\nstop_c = [string.capwords(word) for word in stop]\nfor word in stop_c:\n stop.append(word)\n\nnew_stop = ['The', 'An', 'A', 'Do', 'Is', 'In', 'StringElement', \n 'NlmCategory', 'Label', 'attributes', 'INTRODUCTION',\n 'METHODS', 'BACKGROUND', 'RESULTS', 'CONCLUSIONS']\nfor s in new_stop:\n stop.append(s)\n\n# Search terms (can test string with Pubmed Advanced Search) ----\n# search_results = search('(Biomech*[Title/Abstract] OR locomot*[Title/Abstract])')\nmin_date = week_ago.strftime('%m/%d/%Y')\nmax_date = yesterday.strftime('%m/%d/%Y')\nsearch_results = search('(biomech*[Title/Abstract] OR locomot*[Title/Abstract] NOT opiod*[Title/Abstract] NOT pharm*[Journal] NOT mouse[Title/Abstract] NOT drosophil*[Title/Abstract] NOT mice[Title/Abstract] NOT rats*[Title/Abstract] NOT elegans[Title/Abstract])')\nid_list = search_results['IdList']\npapers = fetch_details(id_list)\nprint(len(papers['PubmedArticle']), 'Papers found')\n\ntitles, full_titles, keywords, authors, links, journals, abstracts = ([] for i in range(7))\n\nfor paper in papers['PubmedArticle']:\n # clean and store titles, abstracts, and links\n t = clean_str(paper['MedlineCitation']['Article']['ArticleTitle'], \n stop).replace('[','').replace(']','').capitalize() # rm brackets that survived beautifulsoup, sentence case\n titles.append(t)\n full_titles.append(paper['MedlineCitation']['Article']['ArticleTitle'])\n pmid = paper['MedlineCitation']['PMID']\n links.append('[URL=\"https://www.ncbi.nlm.nih.gov/pubmed/{0}\"]{1}[/URL]'.format(pmid, html_strip(paper['MedlineCitation']['Article']['ArticleTitle'])))\n try:\n abstracts.append(clean_str(paper['MedlineCitation']['Article']['Abstract']['AbstractText'][0], \n stop).replace('[','').replace(']','').capitalize()) # rm brackets that survived beautifulsoup, sentence case\n except:\n abstracts.append('')\n\n # clean and store authors\n auths = []\n try:\n for auth in paper['MedlineCitation']['Article']['AuthorList']:\n try: # see if there is a last name and initials\n auth_name = [auth['LastName'], auth['Initials'] + ',']\n auth_name = ' '.join(auth_name)\n auths.append(auth_name)\n except:\n if 'LastName' in auth.keys(): # maybe they don't have initials\n auths.append(auth['LastName'] + ',')\n else: # no last name\n auths.append('')\n print(paper['MedlineCitation']['Article']['ArticleTitle'],\n 'has an issue with an author name:')\n\n except:\n auths.append('AUTHOR NAMES ERROR')\n print(paper['MedlineCitation']['Article']['ArticleTitle'], 'has no author list?')\n # compile 
authors\n authors.append(' '.join(auths).replace('[','').replace(']','')) # rm brackets in names\n # journal names\n journals.append(paper['MedlineCitation']['Article']['Journal']['Title'].replace('[','').replace(']','')) # rm brackets\n\n # store keywords \n if paper['MedlineCitation']['KeywordList'] != []:\n kwds = []\n for kw in paper['MedlineCitation']['KeywordList'][0]:\n kwds.append(kw[:])\n keywords.append(', '.join(kwds).lower())\n else:\n keywords.append('')\n\n# Put Titles, Abstracts, Authors, Journal, and Keywords into dataframe\npapers_df = pd.DataFrame({'title': titles,\n 'keywords': keywords,\n 'abstract': abstracts,\n 'authors': authors,\n 'journal': journals,\n 'links': links,\n 'raw_title': full_titles,\n 'mindate': min_date,\n 'maxdate': max_date})\n\n\n# remove papers with no title or no authors\nfor index, row in papers_df.iterrows():\n if row['title'] == '' or row['authors'] == 'AUTHOR NAMES ERROR':\n papers_df.drop(index, inplace=True)\npapers_df.reset_index(drop=True, inplace=True)\n\n# join titles and abstract\npapers_df['BERT_input'] = pd.DataFrame(papers_df['title'] + ' ' + papers_df['abstract'])\n\n# Load Fine-Tuned BERT Network ----\nmodel = tf.saved_model.load('/content/gdrive/My Drive/BiomchBERT/Data/BiomchBERT/')\nprint('Loaded model from disk')\n\n# Load Label Encoder ----\nle = LabelEncoder()\nle.classes_ = np.load('/content/gdrive/My Drive/BiomchBERT/Data/BERT_label_encoder.npy')\nprint('Loaded Label Encoder')\n",
"84 Papers found\nLoaded model from disk\nLoaded Label Encoder\n"
],
[
"# Predict Paper Topic ----\npredicted_topic = model(papers_df['BERT_input'], training=False) # will run out of GPU memory (14GB) if predicting more than ~2000 title+abstracts at once",
"_____no_output_____"
],
[
"# Determine Publications that BiomchBERT is unsure about ----\ntopics, pred_val_str = ([] for i in range(2))\n\nfor pred_prob in predicted_topic:\n pred_val = np.max(pred_prob)\n if pred_val > 1.5 * np.sort(pred_prob)[-2]: # Is top confidence score more than 1.5x the second best confidence score?\n topics.append(le.inverse_transform([np.argmax(pred_prob)])[0])\n top1 = le.inverse_transform([np.argmax(pred_prob)])[0]\n top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0]\n # pred_val_str.append(pred_val * 100) # just report top category\n pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str(\n np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2)) # report top 2 categories\n else:\n topics.append('UNKNOWN')\n top1 = le.inverse_transform([np.argmax(pred_prob)])[0]\n top2 = le.inverse_transform([list(pred_prob).index([np.sort(pred_prob)[-2]])])[0]\n pred_val_str.append(str(np.round(pred_val * 100, 1)) + '% ' + str(top1) + '; ' + str(\n np.round(np.sort(pred_prob)[-2] * 100, 1)) + '% ' + str(top2))\n \npapers_df['topic'] = topics\npapers_df['pred_val'] = pred_val_str\n\nprint('BiomchBERT is unsure about {0} papers\\n'.format(len(papers_df[papers_df['topic'] == 'UNKNOWN'])))\n",
"BiomchBERT is unsure about 6 papers\n\n"
],
[
"\n# Prompt User to decide for BiomchBERT ----\nunknown_papers = papers_df[papers_df['topic'] == 'UNKNOWN']\nfor indx, paper in unknown_papers.iterrows():\n print(paper['raw_title'])\n print(paper['journal'])\n print(paper['pred_val'])\n print()\n splt_str = paper['pred_val'].split(';')\n options = [str for pred_cls in splt_str for str in le.classes_ if (str in pred_cls)]\n \n \n choice = input('(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? ')\n print()\n if choice == '1':\n papers_df.iloc[indx]['topic'] = str(options[0])\n elif choice == '2':\n papers_df.iloc[indx]['topic'] = str(options[1])\n elif choice == 'o':\n # print all categories so you can select\n for i in zip(range(len(le.classes_)),le.classes_):\n print(i) \n new_cat = input('Enter number of new class or type \"r\" to remove paper: ')\n print()\n if new_cat == 'r':\n papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output\n else:\n papers_df.iloc[indx]['topic'] = le.classes_[int(new_cat)] \n elif choice == 'r':\n papers_df.iloc[indx]['topic'] = '_REMOVE_' # not deleted, but withheld from text file output\n \nprint('Removing {0} papers\\n'.format(len(papers_df[papers_df['topic'] == '_REMOVE_'])))",
"Contribution of sensory feedback to Soleus muscle activity during voluntary contraction in humans.\nJournal of neurophysiology\n51.6% NEURAL; 38.0% MUSCLE\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1\n\nAnterior Cable Reconstruction: Prioritizing Rotator Cable and Tendon Cord When Considering Superior Capsular Reconstruction.\nArthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association\n47.0% ORTHOPAEDICS/SURGERY; 45.3% TENDON/LIGAMENT\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1\n\nComparison the Effect of Pain Neuroscience and Pain Biomechanics Education on Neck Pain and Fear of Movement in Patients with Chronic Nonspecific Neck Pain During the COVID-19 Pandemic.\nPain and therapy\n45.3% REHABILITATION; 34.9% ERGONOMICS\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1\n\nThe role of nanoplastics on the toxicity of the herbicide phenmedipham, using Danio rerio embryos as model organisms.\nEnvironmental pollution (Barking, Essex : 1987)\n28.0% COMPARATIVE; 23.5% CELLULAR/SUBCELLULAR\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? r\n\nThe Terrific Skink bite force suggests insularity as a likely driver to exceptional resource use.\nScientific reports\n52.1% EVOLUTION/ANTHROPOLOGY; 42.1% COMPARATIVE\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? o\n\n(0, 'BONE')\n(1, 'BOTANY')\n(2, 'CARDIOVASCULAR/CARDIOPULMONARY')\n(3, 'CELLULAR/SUBCELLULAR')\n(4, 'COMPARATIVE')\n(5, 'DENTAL/ORAL/FACIAL')\n(6, 'ERGONOMICS')\n(7, 'EVOLUTION/ANTHROPOLOGY')\n(8, 'GAIT/LOCOMOTION')\n(9, 'HAND/FINGER/FOOT/TOE')\n(10, 'JOINT/CARTILAGE')\n(11, 'METHODS')\n(12, 'MODELING')\n(13, 'MUSCLE')\n(14, 'NEURAL')\n(15, 'ORTHOPAEDICS/SPINE')\n(16, 'ORTHOPAEDICS/SURGERY')\n(17, 'POSTURE/BALANCE')\n(18, 'PROSTHETICS/ORTHOTICS')\n(19, 'REHABILITATION')\n(20, 'ROBOTICS')\n(21, 'SPORT/EXERCISE')\n(22, 'TENDON/LIGAMENT')\n(23, 'TISSUE/BIOMATERIAL')\n(24, 'TRAUMA/IMPACT')\n(25, 'VETERINARY/AGRICULTURAL')\n(26, 'VISUAL/VESTIBULAR')\nEnter number of new class or type \"r\" to remove paper: 4\n\nOverground gait kinematics and muscle activation patterns in the Yucatan mini pig.\nJournal of neural engineering\n36.0% COMPARATIVE; 28.8% GAIT/LOCOMOTION\n\n(1)st topic, (2)nd topic, (o)ther topic, or (r)emove paper? 1\n\nRemoving 1 papers\n\n"
],
[
"# Double check that none of these papers were included in past literature updates ----\n# load prior papers\n# papers_df.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False) # run ONLY if there are no prior papers\nprior_papers = pd.read_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv')\nprior_papers.dropna(subset=['title'], inplace=True)\nprior_papers.reset_index(drop=True, inplace=True)\n\n# NEED TO DO: find matching papers between current week and prior papers using Pubmed ID since titles can change from ahead of print to final version.\n# match = papers_df['links'].split(']')[0].isin(prior_papers['links'].split(']')[0])\n\nmatch = papers_df['title'].isin(prior_papers['title']) # boolean\nprint('Removing {0} papers found in prior literature updates\\n'.format(sum(match)))\n# filter and check if everything accidentally was removed\nfiltered_papers_df = papers_df.drop(papers_df[match].index)\nif filtered_papers_df.shape[0] < 1:\n raise ValueError('might have removed all the papers for some reason. ')\nelse:\n papers_df = filtered_papers_df\n papers_df.reset_index(drop=True, inplace=True)\n updated_prior_papers = pd.concat([prior_papers, papers_df], axis=0)\n updated_prior_papers.reset_index(drop=True, inplace=True)\n updated_prior_papers.to_csv('/content/gdrive/My Drive/BiomchBERT/Updates/prior_papers.csv', index=False)",
"Removing 18 papers found in prior literature updates\n\n"
],
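[
"# Sketch for the 'NEED TO DO' above: match on PubMed ID instead of title,\n# since titles can change between ahead-of-print and final versions. Assumes\n# the 'links' column keeps its [URL=...pubmed/PMID]...[/URL] format.\ndef extract_pmid(link):\n    return link.split('pubmed/')[1].split('\"')[0]\n\n# match = papers_df['links'].map(extract_pmid).isin(prior_papers['links'].map(extract_pmid))",
"_____no_output_____"
],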
[
"# Create Text File for Biomch-L ----\n# Compile papers grouped by topic\ntxtname = '/content/gdrive/My Drive/BiomchBERT/Updates/' + today.strftime(\"%Y-%m-%d\") + '-litupdate.txt'\ntxt = open(txtname, 'w', encoding='utf-8')\ntxt.write('[SIZE=16px][B]LITERATURE UPDATE[/B][/SIZE]\\n')\ntxt.write(days_ago_6.strftime(\"%b %d, %Y\") + ' - '+ yesterday.strftime(\"%b %d, %Y\")+'\\n') # a week ago from yesterday.\ntxt.write(\n \"\"\"\nLiterature search terms: biomech* & locomot*\n\nPublications are classified by [URL=\"https://www.ryan-alcantara.com/projects/p88_BiomchBERT/\"]BiomchBERT[/URL], a neural network trained on past Biomch-L Literature Updates. BiomchBERT is managed by [URL=\"https://jouterleys.github.io\"]Jereme Outerleys[/URL], a Doctoral Student at Queen's University. Each publication has a score (out of 100%) reflecting how confident BiomchBERT is that the publication belongs in a particular category (top 2 shown). If something doesn't look right, email jereme.outerleys[at]queensu.ca.\n\nTwitter: [URL=\"https://www.twitter.com/jouterleys\"]@jouterleys[/URL]. \n\n\n \"\"\"\n )\n\n# Write papers to text file grouped by topic ----\ntopic_list = np.unique(papers_df.sort_values('topic')['topic'])\n\nfor topic in topic_list:\n papers_subset = pd.DataFrame(papers_df[papers_df.topic == topic].reset_index(drop=True))\n txt.write('\\n')\n # TOPIC NAME (with some cleaning)\n if topic == '_REMOVE_':\n continue\n elif topic == 'UNKNOWN':\n txt.write('[SIZE=16px][B]*Papers BiomchBERT is unsure how to classify*[/B][/SIZE]\\n')\n elif topic == 'CARDIOVASCULAR/CARDIOPULMONARY':\n topic = 'CARDIOVASCULAR/PULMONARY'\n txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\\n' % topic)\n elif topic == 'CELLULAR/SUBCELLULAR':\n topic = 'CELLULAR'\n txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\\n' % topic)\n elif topic == 'ORTHOPAEDICS/SURGERY':\n topic = 'ORTHOPAEDICS (SURGERY)'\n txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\\n' % topic)\n elif topic == 'ORTHOPAEDICS/SPINE':\n topic = 'ORTHOPAEDICS (SPINE)'\n txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\\n' % topic)\n else:\n txt.write('[SIZE=16px][B]*%s*[/B][/SIZE]\\n' % topic)\n # HYPERLINKED PAPERS, AUTHORS, JOURNAL NAME\n for i, paper in enumerate(papers_subset['links']):\n txt.write('[B]%s[/B] ' % paper)\n txt.write('%s ' % papers_subset['authors'][i])\n txt.write('[I]%s[/I]. ' % papers_subset['journal'][i])\n # CONFIDENCE SCORE (BERT softmax categorical crossentropy)\n try:\n txt.write('(%.1f%%) \\n\\n' % papers_subset['pred_val'][i])\n except:\n txt.write('(%s)\\n\\n' % papers_subset['pred_val'][i]) \n\ntxt.write('[SIZE=16px][B]*PICK OF THE WEEK*[/B][/SIZE]\\n')\ntxt.close()\nprint('Literature Update Exported for Biomch-L')\nprint('Location:', txtname)",
"Literature Update Exported for Biomch-L\nLocation: /content/gdrive/My Drive/BiomchBERT/Updates/2022-03-24-litupdate.txt\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d1fdb9b92edbb4c0e303908ceb6133581b066 | 85,275 | ipynb | Jupyter Notebook | Wilcoxon and Chi Squared.ipynb | massie/readmission-study | 5d5ae75ca29503fdbd1a80999006f585757c0e17 | [
"BSD-2-Clause"
] | null | null | null | Wilcoxon and Chi Squared.ipynb | massie/readmission-study | 5d5ae75ca29503fdbd1a80999006f585757c0e17 | [
"BSD-2-Clause"
] | null | null | null | Wilcoxon and Chi Squared.ipynb | massie/readmission-study | 5d5ae75ca29503fdbd1a80999006f585757c0e17 | [
"BSD-2-Clause"
] | null | null | null | 382.399103 | 78,002 | 0.559332 | [
[
[
"# Wilcoxon and Chi Squared",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\ndf = pd.read_csv(\"prepared_neuror2_data.csv\")",
"_____no_output_____"
],
[
"def stats_for_neuror2_range(lo, hi):\n admissions = df[df.NR2_Score.between(lo, hi)]\n total_patients = admissions.shape[0]\n readmits = admissions[admissions.UnplannedReadmission]\n total_readmits = readmits.shape[0]\n return (total_readmits, total_patients, \"%.1f\" % (total_readmits/total_patients*100,))\n\nmayo_davis = []\nfor (expected, (lo, hi)) in [(1.4, (0, 0)),\n (4, (1, 4)),\n (5.6, (5, 8)),\n (14.2, (9, 13)),\n (33.0, (14, 19)),\n (0.0, (20, 22))]:\n (total_readmits, total_patients, readmit_percent) = stats_for_neuror2_range(lo, hi)\n mayo_davis.append([lo, hi, expected, readmit_percent, total_readmits, total_patients])\n \ntitle=\"Davis and Mayo Populations by NeuroR2 Score\"\nprint(title)\nprint(\"-\" * len(title))\nprint(pd.DataFrame(mayo_davis, columns=[\"Low\", \"High\", \"Mayo %\", \"Davis %\", \n \"Readmits\", \"Total\"]).to_string(index=False))",
"Davis and Mayo Populations by NeuroR2 Score\n-------------------------------------------\n Low High Mayo % Davis % Readmits Total\n 0 0 1.4 3.6 24 669\n 1 4 4.0 6.9 94 1362\n 5 8 5.6 6.2 64 1039\n 9 13 14.2 10.4 80 770\n 14 19 33.0 15.2 34 224\n 20 22 0.0 20.0 2 10\n"
],
[
"# Continuous variables were compared using wilcoxon\nfrom scipy.stats import ranksums as wilcoxon\n\ndef create_samples(col_name):\n unplanned = df[df.UnplannedReadmission][col_name].values\n planned = df[~df.UnplannedReadmission][col_name].values\n return (unplanned, planned)\n\ncontinous_vars = [\"AdmissionAgeYears\", \"LengthOfStay\", \"NR2_Score\"]#, \"MsDrgWeight\"]\nfor var in continous_vars:\n (unplanned, planned) = create_samples(var)\n (stat, p) = wilcoxon(unplanned, planned)\n \n print (\"%30s\" % (var,), \"p-value %f\" % (p,))",
" AdmissionAgeYears p-value 0.396951\n LengthOfStay p-value 0.004208\n NR2_Score p-value 0.000000\n"
],
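[
"# Sanity check on synthetic data: scipy's ranksums is the two-sided Wilcoxon\n# rank-sum (Mann-Whitney) test for two independent samples, so a location\n# shift should give a small p-value.\nrng = np.random.default_rng(0)\nprint(wilcoxon(rng.normal(0, 1, 100), rng.normal(0.5, 1, 100)))",
"_____no_output_____"
],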
[
"unplanned, planned = create_samples(\"LengthOfStay\")\nprint(pd.DataFrame(unplanned, columns=[\"Unplanned Readmission\"]).describe())\nprint(pd.DataFrame(planned, columns=[\" Index Only Admission\"]).describe())",
" Unplanned Readmission\ncount 298.000000\nmean 5.510067\nstd 6.270682\nmin 0.000000\n25% 2.000000\n50% 4.000000\n75% 6.000000\nmax 40.000000\n Index Only Admission\ncount 3776.000000\nmean 4.849841\nstd 5.828859\nmin 0.000000\n25% 1.000000\n50% 3.000000\n75% 6.000000\nmax 41.000000\n"
],
[
"# Categorical variables were compared using chi squared\nfrom scipy.stats import chi2, chi2_contingency\nfrom IPython.core.display import display, HTML\n\n# Collect all the categorical features\ncols = sorted([col for col in df.columns if \"_\" in col])\nfor var in continous_vars:\n try:\n cols.remove(var)\n except:\n pass\n\nindex_only = df[~df.UnplannedReadmission].shape[0]\nunplanned_readmit = df[df.UnplannedReadmission].shape[0]\n\nhtml = \"<table><tr>\"\nfor th in [\"Characteristic\", \"Index admission only</br>(n=%d)\" % (index_only,), \n \"Unplanned readmission</br>(n = %d)\" % (unplanned_readmit,),\"<i>p</i> Value\"]:\n html += \"<th>%s</th>\" % (th,)\nhtml += \"</tr>\"\n\nstart_row = \"<tr><td>%s</td>\"\nend_row = \"<td>%d (%.1f)</td><td>%d (%.1f)</td><td></td></tr>\"\n\npval_str = lambda p: \"<0.001\" if p<0.001 else \"%.3f\" % p\ncol_str = lambda col, p: \"<b><i>%s</i></b>\" % (col,) if p < 0.05 else col\n\nfor col in sorted(cols):\n table = pd.crosstab(df[col], df.UnplannedReadmission)\n stat, p, dof, expected = chi2_contingency(table)\n html += \"<tr><td>%s</td><td></td><td></td><td>%s</td></tr>\" % (col_str(col,p), pval_str(p))\n html += start_row % (\"No\",)\n html += end_row % (table.values[0][0], expected[0][0],\n table.values[0][1], expected[0][1])\n try:\n html += start_row % (\"Yes\",)\n html += end_row % (table.values[1][0], expected[1,0],\n table.values[1][1], expected[1][1])\n except IndexError:\n html += \"<td>-</td><td>-</td><td></td></tr>\"\n \nhtml += \"</table>\"\ndisplay(HTML(html))",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00d240300d6c09fb17bf66fb8a0c2ededb4101a | 576,419 | ipynb | Jupyter Notebook | experiments/exp027.ipynb | Quvotha/atmacup11 | 1a2bebcd76b3255d4fcf07aea1be5bde67c2d225 | [
"MIT"
] | 2 | 2021-07-23T02:10:51.000Z | 2021-07-23T03:13:53.000Z | experiments/exp027.ipynb | Quvotha/atmacup11 | 1a2bebcd76b3255d4fcf07aea1be5bde67c2d225 | [
"MIT"
] | null | null | null | experiments/exp027.ipynb | Quvotha/atmacup11 | 1a2bebcd76b3255d4fcf07aea1be5bde67c2d225 | [
"MIT"
] | null | null | null | 576,419 | 576,419 | 0.703889 | [
[
[
"Note: \nThis notebook was executed on google colab pro.",
"_____no_output_____"
]
],
[
[
"!pip3 install pytorch-lightning --quiet",
"\u001b[K |████████████████████████████████| 813 kB 7.2 MB/s \n\u001b[K |████████████████████████████████| 829 kB 17.6 MB/s \n\u001b[K |████████████████████████████████| 118 kB 37.6 MB/s \n\u001b[K |████████████████████████████████| 10.6 MB 30.6 MB/s \n\u001b[K |████████████████████████████████| 636 kB 70.4 MB/s \n\u001b[K |████████████████████████████████| 234 kB 76.0 MB/s \n\u001b[K |████████████████████████████████| 1.3 MB 90.1 MB/s \n\u001b[K |████████████████████████████████| 142 kB 74.3 MB/s \n\u001b[K |████████████████████████████████| 294 kB 91.0 MB/s \n\u001b[?25h Building wheel for future (setup.py) ... \u001b[?25l\u001b[?25hdone\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntensorflow 2.5.0 requires tensorboard~=2.5, but you have tensorboard 2.4.1 which is incompatible.\u001b[0m\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"import os\nos.chdir('/content/drive/MyDrive/Colab Notebooks/atmacup11/experiments')",
"_____no_output_____"
]
],
[
[
"# Settings",
"_____no_output_____"
]
],
[
[
"EXP_NO = 27\nSEED = 1\nN_SPLITS = 5\nTARGET = 'target'\nGROUP = 'art_series_id'\nREGRESSION = False",
"_____no_output_____"
],
[
"assert((TARGET, REGRESSION) in (('target', True), ('target', False), ('sorting_date', True)))",
"_____no_output_____"
],
[
"MODEL_NAME = 'resnet'\nBATCH_SIZE = 512\nNUM_EPOCHS = 500",
"_____no_output_____"
]
],
[
[
"# Library",
"_____no_output_____"
]
],
[
[
"from collections import defaultdict\nfrom functools import partial\nimport gc\nimport glob\nimport json\nfrom logging import getLogger, StreamHandler, FileHandler, DEBUG, Formatter\nimport pickle\nimport os\nimport sys\nimport time\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn.metrics import confusion_matrix, mean_squared_error, cohen_kappa_score\n# from sklearnex import patch_sklearn\nfrom pytorch_lightning import seed_everything\nimport torch\nimport torch.nn as nn\nimport torch.optim\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\n\nSCRIPTS_DIR = os.path.join('..', 'scripts')\nassert(os.path.isdir(SCRIPTS_DIR))\nif SCRIPTS_DIR not in sys.path: sys.path.append(SCRIPTS_DIR)\n\nfrom cross_validation import load_cv_object_ids\nfrom dataset import load_csvfiles, load_photofile,load_photofiles, AtmaImageDatasetV02\nfrom folder import experiment_dir_of\nfrom models import initialize_model\nfrom utils import train_model, predict_by_model",
"_____no_output_____"
],
[
"pd.options.display.float_format = '{:.5f}'.format",
"_____no_output_____"
],
[
"DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nDEVICE",
"_____no_output_____"
]
],
[
[
"# Prepare directory",
"_____no_output_____"
]
],
[
[
"output_dir = experiment_dir_of(EXP_NO)",
"_____no_output_____"
],
[
"output_dir",
"_____no_output_____"
]
],
[
[
"# Prepare logger",
"_____no_output_____"
]
],
[
[
"logger = getLogger(__name__)",
"_____no_output_____"
],
[
"'''Refference\nhttps://docs.python.org/ja/3/howto/logging-cookbook.html\n'''\nlogger.setLevel(DEBUG)\n# create file handler which logs even debug messages\nfh = FileHandler(os.path.join(output_dir, 'log.log'))\nfh.setLevel(DEBUG)\n# create console handler with a higher log level\nch = StreamHandler()\nch.setLevel(DEBUG)\n# create formatter and add it to the handlers\nformatter = Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nfh.setFormatter(formatter)\nch.setFormatter(formatter)\n# add the handlers to the logger\nlogger.addHandler(fh)\nlogger.addHandler(ch)\nlen(logger.handlers)",
"_____no_output_____"
],
[
"logger.info('Experiment no: {}'.format(EXP_NO))\nlogger.info('CV: StratifiedGroupKFold')\nlogger.info('SEED: {}'.format(SEED))\nlogger.info('REGRESSION: {}'.format(REGRESSION))",
"2021-07-21 19:41:13,190 - __main__ - INFO - Experiment no: 27\n2021-07-21 19:41:13,192 - __main__ - INFO - CV: StratifiedGroupKFold\n2021-07-21 19:41:13,194 - __main__ - INFO - SEED: 1\n2021-07-21 19:41:13,197 - __main__ - INFO - REGRESSION: False\n"
]
],
[
[
"# Load csv files",
"_____no_output_____"
]
],
[
[
"SINCE = time.time()",
"_____no_output_____"
],
[
"logger.debug('Start loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))\ntrain, test, materials, techniques, sample_submission = load_csvfiles()\nlogger.debug('Complete loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE))",
"2021-07-21 19:41:13,211 - __main__ - DEBUG - Start loading csv files (0.007 seconds passed)\n2021-07-21 19:41:14,428 - __main__ - DEBUG - Complete loading csv files (1.225 seconds passed)\n"
],
[
"train",
"_____no_output_____"
],
[
"test",
"_____no_output_____"
]
],
[
[
"# Cross validation",
"_____no_output_____"
]
],
[
[
"seed_everything(SEED)",
"Global seed set to 1\n"
],
[
"train.set_index('object_id', inplace=True)",
"_____no_output_____"
],
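[
"# Quick illustration of the one-hot trick used in the loop below: indexing\n# an identity matrix by integer labels yields one-hot rows.\nnp.identity(3)[np.array([0, 2, 1])].astype('int')",
"_____no_output_____"
],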
[
"fold_object_ids = load_cv_object_ids()\nfor i, (train_object_ids, valid_object_ids) in enumerate(zip(fold_object_ids[0], fold_object_ids[1])):\n assert(set(train_object_ids) & set(valid_object_ids) == set())\n num_fold = i + 1\n logger.debug('Start fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE))\n\n # Separate dataset into training/validation fold\n y_train = train.loc[train_object_ids, TARGET].values\n y_valid = train.loc[valid_object_ids, TARGET].values\n\n torch.cuda.empty_cache()\n \n # Training\n logger.debug('Start training model ({:.3f} seconds passed)'.format(time.time() - SINCE))\n ## Prepare model\n num_classes = len(set(list(y_train)))\n model, input_size = initialize_model(MODEL_NAME, num_classes)\n model.to(DEVICE)\n ## Prepare transformers\n train_transformer = transforms.Compose([\n transforms.RandomResizedCrop(input_size),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n ])\n val_transformer = transforms.Compose([\n transforms.Resize(input_size),\n transforms.CenterCrop(input_size),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n ])\n # Prepare dataset\n if not REGRESSION:\n # label should be one-hot style\n y_train = np.identity(num_classes)[y_train].astype('int')\n y_valid = np.identity(num_classes)[y_valid].astype('int')\n train_dataset = AtmaImageDatasetV02(train_object_ids, train_transformer, y_train)\n val_dataset = AtmaImageDatasetV02(valid_object_ids, val_transformer, y_valid)\n # Prepare dataloader\n dataloaders = {\n 'train': DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=os.cpu_count()),\n 'val': DataLoader(dataset=val_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=os.cpu_count()),\n }\n ## train estimator\n estimator, train_losses, valid_losses = train_model(\n model, dataloaders, criterion=nn.BCEWithLogitsLoss(), num_epochs=NUM_EPOCHS, device=DEVICE,\n optimizer=torch.optim.Adam(model.parameters()), log_func=logger.debug,\n is_inception=MODEL_NAME == 'inception')\n logger.debug('Complete training ({:.3f} seconds passed)'.format(time.time() - SINCE))\n ## Visualize training loss\n plt.plot(train_losses, label='train')\n plt.plot(valid_losses, label='valid')\n plt.legend(loc='upper left', bbox_to_anchor=[1., 1.])\n plt.title(f'Fold{num_fold}')\n plt.show()\n \n # Save model and prediction\n ## Prediction\n predictions = {}\n for fold_, object_ids_ in zip(['train', 'val', 'test'],\n [train_object_ids, valid_object_ids, test['object_id']]):\n # Prepare transformer\n transformer_ = transforms.Compose([\n transforms.Resize(input_size),\n transforms.CenterCrop(input_size),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n ])\n # Prepare dataset\n dataset_ = AtmaImageDatasetV02(object_ids_, transformer_)\n # Prepare dataloader\n dataloader_ = DataLoader(dataset=dataset_, batch_size=BATCH_SIZE, shuffle=False,\n num_workers=os.cpu_count())\n # Prediction\n predictions[fold_] = predict_by_model(estimator, dataloader_, DEVICE)\n logger.debug('Complete prediction for {} fold ({:.3f} seconds passed)' \\\n .format(fold_, time.time() - SINCE))\n if REGRESSION:\n pred_train = pd.DataFrame(data=predictions['train'], columns=['pred'])\n pred_valid = pd.DataFrame(data=predictions['val'], columns=['pred'])\n pred_test = pd.DataFrame(data=predictions['test'], columns=['pred'])\n else:\n columns = list(range(num_classes))\n pred_train = 
pd.DataFrame(data=predictions['train'], columns=columns)\n pred_valid = pd.DataFrame(data=predictions['val'], columns=columns)\n pred_test = pd.DataFrame(data=predictions['test'], columns=columns)\n # else: # Do not come here!\n # raise NotImplemented\n # try:\n # pred_train = pd.DataFrame(data=estimator.predict_proba(X_train),\n # columns=estimator.classes_)\n # pred_valid = pd.DataFrame(data=estimator.predict_proba(X_valid),\n # columns=estimator.classes_)\n # pred_test = pd.DataFrame(data=estimator.predict_proba(X_test),\n # columns=estimator.classes_)\n # except AttributeError:\n # pred_train = pd.DataFrame(data=estimator.decision_function(X_train),\n # columns=estimator.classes_)\n # pred_valid = pd.DataFrame(data=estimator.decision_function(X_valid),\n # columns=estimator.classes_)\n # pred_test = pd.DataFrame(data=estimator.decision_function(X_test),\n # columns=estimator.classes_)\n ## Training set\n pred_train['object_id'] = train_object_ids\n filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv')\n pred_train.to_csv(filepath_fold_train, index=False)\n logger.debug('Save training fold to {} ({:.3f} seconds passed)' \\\n .format(filepath_fold_train, time.time() - SINCE))\n ## Validation set\n pred_valid['object_id'] = valid_object_ids\n filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv')\n pred_valid.to_csv(filepath_fold_valid, index=False)\n logger.debug('Save validation fold to {} ({:.3f} seconds passed)' \\\n .format(filepath_fold_valid, time.time() - SINCE))\n ## Test set\n pred_test['object_id'] = test['object_id'].values\n filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv')\n pred_test.to_csv(filepath_fold_test, index=False)\n logger.debug('Save test result {} ({:.3f} seconds passed)' \\\n .format(filepath_fold_test, time.time() - SINCE))\n ## Model\n filepath_fold_model = os.path.join(output_dir, f'cv_fold{num_fold}_model.torch')\n torch.save(estimator.state_dict(), filepath_fold_model)\n# with open(filepath_fold_model, 'wb') as f:\n# pickle.dump(estimator, f)\n logger.debug('Save model {} ({:.3f} seconds passed)'.format(filepath_fold_model, time.time() - SINCE))\n \n # Save memory\n del (estimator, y_train, y_valid, pred_train, pred_valid, pred_test)\n gc.collect()\n\n logger.debug('Complete fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE))",
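"# Editor's sketch (illustrative, not part of the original run): the one-hot step in the\n# cell above works by indexing an identity matrix with integer class ids, e.g.\n#\n#     np.identity(3)[np.array([0, 2, 1])]\n#     -> [[1., 0., 0.],\n#         [0., 0., 1.],\n#         [0., 1., 0.]]\n#\n# nn.BCEWithLogitsLoss expects targets shaped like the model's logits, which is why the\n# integer labels are expanded before being wrapped in AtmaImageDatasetV02.",
"_____no_output_____"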
"2021-07-21 19:41:14,941 - __main__ - DEBUG - Start fold 1 (1.737 seconds passed)\n2021-07-21 19:41:14,948 - __main__ - DEBUG - Start training model (1.744 seconds passed)\n2021-07-21 19:41:21,378 - __main__ - DEBUG - Epoch 0/499\n/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)\n return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)\n2021-07-21 19:46:23,273 - __main__ - DEBUG - train Loss: 0.6660\n2021-07-21 19:48:28,084 - __main__ - DEBUG - val Loss: 29.5672\n2021-07-21 19:48:28,094 - __main__ - DEBUG - Epoch 1/499\n2021-07-21 19:48:36,328 - __main__ - DEBUG - train Loss: 0.5293\n2021-07-21 19:48:38,931 - __main__ - DEBUG - val Loss: 1.9881\n2021-07-21 19:48:38,940 - __main__ - DEBUG - Epoch 2/499\n2021-07-21 19:48:47,096 - __main__ - DEBUG - train Loss: 0.5137\n2021-07-21 19:48:49,716 - __main__ - DEBUG - val Loss: 0.7746\n2021-07-21 19:48:49,725 - __main__ - DEBUG - Epoch 3/499\n2021-07-21 19:48:58,048 - __main__ - DEBUG - train Loss: 0.5090\n2021-07-21 19:49:00,838 - __main__ - DEBUG - val Loss: 0.5369\n2021-07-21 19:49:00,847 - __main__ - DEBUG - Epoch 4/499\n2021-07-21 19:49:09,254 - __main__ - DEBUG - train Loss: 0.5060\n2021-07-21 19:49:12,055 - __main__ - DEBUG - val Loss: 0.5659\n2021-07-21 19:49:12,056 - __main__ - DEBUG - Epoch 5/499\n2021-07-21 19:49:20,634 - __main__ - DEBUG - train Loss: 0.5033\n2021-07-21 19:49:23,357 - __main__ - DEBUG - val Loss: 0.5304\n2021-07-21 19:49:23,366 - __main__ - DEBUG - Epoch 6/499\n2021-07-21 19:49:31,648 - __main__ - DEBUG - train Loss: 0.5015\n2021-07-21 19:49:34,271 - __main__ - DEBUG - val Loss: 0.5245\n2021-07-21 19:49:34,280 - __main__ - DEBUG - Epoch 7/499\n2021-07-21 19:49:42,482 - __main__ - DEBUG - train Loss: 0.4997\n2021-07-21 19:49:45,132 - __main__ - DEBUG - val Loss: 0.5221\n2021-07-21 19:49:45,141 - __main__ - DEBUG - Epoch 8/499\n2021-07-21 19:49:53,365 - __main__ - DEBUG - train Loss: 0.5030\n2021-07-21 19:49:55,987 - __main__ - DEBUG - val Loss: 0.5143\n2021-07-21 19:49:55,996 - __main__ - DEBUG - Epoch 9/499\n2021-07-21 19:50:04,363 - __main__ - DEBUG - train Loss: 0.4967\n2021-07-21 19:50:07,148 - __main__ - DEBUG - val Loss: 0.5456\n2021-07-21 19:50:07,150 - __main__ - DEBUG - Epoch 10/499\n2021-07-21 19:50:15,648 - __main__ - DEBUG - train Loss: 0.4901\n2021-07-21 19:50:18,438 - __main__ - DEBUG - val Loss: 0.6672\n2021-07-21 19:50:18,440 - __main__ - DEBUG - Epoch 11/499\n2021-07-21 19:50:26,876 - __main__ - DEBUG - train Loss: 0.4877\n2021-07-21 19:50:29,464 - __main__ - DEBUG - val Loss: 0.6374\n2021-07-21 19:50:29,465 - __main__ - DEBUG - Epoch 12/499\n2021-07-21 19:50:37,714 - __main__ - DEBUG - train Loss: 0.4942\n2021-07-21 19:50:40,335 - __main__ - DEBUG - val Loss: 0.5209\n2021-07-21 19:50:40,337 - __main__ - DEBUG - Epoch 13/499\n2021-07-21 19:50:48,549 - __main__ - DEBUG - train Loss: 0.4861\n2021-07-21 19:50:51,206 - __main__ - DEBUG - val Loss: 0.4929\n2021-07-21 19:50:51,217 - __main__ - DEBUG - Epoch 14/499\n2021-07-21 19:50:59,393 - __main__ - DEBUG - train Loss: 0.4823\n2021-07-21 19:51:02,006 - __main__ - DEBUG - val Loss: 0.4895\n2021-07-21 19:51:02,015 - __main__ - DEBUG - Epoch 15/499\n2021-07-21 19:51:10,370 - __main__ - DEBUG - train Loss: 0.4791\n2021-07-21 19:51:13,172 - __main__ - DEBUG - val Loss: 
0.5717\n2021-07-21 19:51:13,174 - __main__ - DEBUG - Epoch 16/499\n2021-07-21 19:51:21,631 - __main__ - DEBUG - train Loss: 0.4846\n2021-07-21 19:51:24,422 - __main__ - DEBUG - val Loss: 0.5962\n2021-07-21 19:51:24,424 - __main__ - DEBUG - Epoch 17/499\n2021-07-21 19:51:32,866 - __main__ - DEBUG - train Loss: 0.4832\n2021-07-21 19:51:35,495 - __main__ - DEBUG - val Loss: 0.5160\n2021-07-21 19:51:35,497 - __main__ - DEBUG - Epoch 18/499\n2021-07-21 19:51:43,801 - __main__ - DEBUG - train Loss: 0.4800\n2021-07-21 19:51:46,602 - __main__ - DEBUG - val Loss: 0.5324\n2021-07-21 19:51:46,603 - __main__ - DEBUG - Epoch 19/499\n2021-07-21 19:51:55,063 - __main__ - DEBUG - train Loss: 0.4860\n2021-07-21 19:51:57,835 - __main__ - DEBUG - val Loss: 0.5257\n2021-07-21 19:51:57,837 - __main__ - DEBUG - Epoch 20/499\n2021-07-21 19:52:06,336 - __main__ - DEBUG - train Loss: 0.4800\n2021-07-21 19:52:08,943 - __main__ - DEBUG - val Loss: 0.4967\n2021-07-21 19:52:08,945 - __main__ - DEBUG - Epoch 21/499\n2021-07-21 19:52:17,229 - __main__ - DEBUG - train Loss: 0.4754\n2021-07-21 19:52:19,831 - __main__ - DEBUG - val Loss: 0.5236\n2021-07-21 19:52:19,833 - __main__ - DEBUG - Epoch 22/499\n2021-07-21 19:52:28,046 - __main__ - DEBUG - train Loss: 0.4702\n2021-07-21 19:52:30,621 - __main__ - DEBUG - val Loss: 0.7657\n2021-07-21 19:52:30,623 - __main__ - DEBUG - Epoch 23/499\n2021-07-21 19:52:38,781 - __main__ - DEBUG - train Loss: 0.4771\n2021-07-21 19:52:41,425 - __main__ - DEBUG - val Loss: 0.5258\n2021-07-21 19:52:41,427 - __main__ - DEBUG - Epoch 24/499\n2021-07-21 19:52:49,780 - __main__ - DEBUG - train Loss: 0.4785\n2021-07-21 19:52:52,581 - __main__ - DEBUG - val Loss: 0.5838\n2021-07-21 19:52:52,582 - __main__ - DEBUG - Epoch 25/499\n2021-07-21 19:53:01,027 - __main__ - DEBUG - train Loss: 0.4716\n2021-07-21 19:53:03,804 - __main__ - DEBUG - val Loss: 0.5328\n2021-07-21 19:53:03,805 - __main__ - DEBUG - Epoch 26/499\n2021-07-21 19:53:12,204 - __main__ - DEBUG - train Loss: 0.4756\n2021-07-21 19:53:14,800 - __main__ - DEBUG - val Loss: 0.4866\n2021-07-21 19:53:14,810 - __main__ - DEBUG - Epoch 27/499\n2021-07-21 19:53:23,079 - __main__ - DEBUG - train Loss: 0.4719\n2021-07-21 19:53:25,702 - __main__ - DEBUG - val Loss: 0.4860\n2021-07-21 19:53:25,712 - __main__ - DEBUG - Epoch 28/499\n2021-07-21 19:53:33,937 - __main__ - DEBUG - train Loss: 0.4692\n2021-07-21 19:53:36,519 - __main__ - DEBUG - val Loss: 0.5204\n2021-07-21 19:53:36,521 - __main__ - DEBUG - Epoch 29/499\n2021-07-21 19:53:44,728 - __main__ - DEBUG - train Loss: 0.4676\n2021-07-21 19:53:47,384 - __main__ - DEBUG - val Loss: 0.6347\n2021-07-21 19:53:47,386 - __main__ - DEBUG - Epoch 30/499\n2021-07-21 19:53:55,706 - __main__ - DEBUG - train Loss: 0.4624\n2021-07-21 19:53:58,531 - __main__ - DEBUG - val Loss: 0.5099\n2021-07-21 19:53:58,533 - __main__ - DEBUG - Epoch 31/499\n2021-07-21 19:54:07,009 - __main__ - DEBUG - train Loss: 0.4610\n2021-07-21 19:54:09,778 - __main__ - DEBUG - val Loss: 0.5070\n2021-07-21 19:54:09,780 - __main__ - DEBUG - Epoch 32/499\n2021-07-21 19:54:18,213 - __main__ - DEBUG - train Loss: 0.4653\n2021-07-21 19:54:20,824 - __main__ - DEBUG - val Loss: 0.5578\n2021-07-21 19:54:20,826 - __main__ - DEBUG - Epoch 33/499\n2021-07-21 19:54:29,107 - __main__ - DEBUG - train Loss: 0.4667\n2021-07-21 19:54:31,772 - __main__ - DEBUG - val Loss: 0.5599\n2021-07-21 19:54:31,774 - __main__ - DEBUG - Epoch 34/499\n2021-07-21 19:54:40,000 - __main__ - DEBUG - train Loss: 0.4706\n2021-07-21 19:54:42,560 - __main__ - DEBUG - val Loss: 
0.5165\n2021-07-21 19:54:42,562 - __main__ - DEBUG - Epoch 35/499\n2021-07-21 19:54:50,828 - __main__ - DEBUG - train Loss: 0.4660\n2021-07-21 19:54:53,462 - __main__ - DEBUG - val Loss: 0.4866\n2021-07-21 19:54:53,464 - __main__ - DEBUG - Epoch 36/499\n2021-07-21 19:55:01,718 - __main__ - DEBUG - train Loss: 0.4794\n2021-07-21 19:55:04,562 - __main__ - DEBUG - val Loss: 0.5003\n2021-07-21 19:55:04,564 - __main__ - DEBUG - Epoch 37/499\n2021-07-21 19:55:13,021 - __main__ - DEBUG - train Loss: 0.4659\n2021-07-21 19:55:15,792 - __main__ - DEBUG - val Loss: 0.6377\n2021-07-21 19:55:15,794 - __main__ - DEBUG - Epoch 38/499\n2021-07-21 19:55:24,230 - __main__ - DEBUG - train Loss: 0.4629\n2021-07-21 19:55:26,857 - __main__ - DEBUG - val Loss: 0.5010\n2021-07-21 19:55:26,859 - __main__ - DEBUG - Epoch 39/499\n2021-07-21 19:55:35,030 - __main__ - DEBUG - train Loss: 0.4628\n2021-07-21 19:55:37,684 - __main__ - DEBUG - val Loss: 0.6017\n2021-07-21 19:55:37,686 - __main__ - DEBUG - Epoch 40/499\n2021-07-21 19:55:45,924 - __main__ - DEBUG - train Loss: 0.4599\n2021-07-21 19:55:48,566 - __main__ - DEBUG - val Loss: 0.4994\n2021-07-21 19:55:48,568 - __main__ - DEBUG - Epoch 41/499\n2021-07-21 19:55:56,760 - __main__ - DEBUG - train Loss: 0.4653\n2021-07-21 19:55:59,377 - __main__ - DEBUG - val Loss: 0.4944\n2021-07-21 19:55:59,378 - __main__ - DEBUG - Epoch 42/499\n2021-07-21 19:56:07,769 - __main__ - DEBUG - train Loss: 0.4592\n2021-07-21 19:56:10,548 - __main__ - DEBUG - val Loss: 0.5708\n2021-07-21 19:56:10,549 - __main__ - DEBUG - Epoch 43/499\n2021-07-21 19:56:19,025 - __main__ - DEBUG - train Loss: 0.4603\n2021-07-21 19:56:21,796 - __main__ - DEBUG - val Loss: 0.5421\n2021-07-21 19:56:21,797 - __main__ - DEBUG - Epoch 44/499\n2021-07-21 19:56:30,181 - __main__ - DEBUG - train Loss: 0.4655\n2021-07-21 19:56:32,768 - __main__ - DEBUG - val Loss: 0.5487\n2021-07-21 19:56:32,770 - __main__ - DEBUG - Epoch 45/499\n2021-07-21 19:56:41,050 - __main__ - DEBUG - train Loss: 0.4623\n2021-07-21 19:56:43,858 - __main__ - DEBUG - val Loss: 0.5050\n2021-07-21 19:56:43,860 - __main__ - DEBUG - Epoch 46/499\n2021-07-21 19:56:52,286 - __main__ - DEBUG - train Loss: 0.4548\n2021-07-21 19:56:55,063 - __main__ - DEBUG - val Loss: 0.6900\n2021-07-21 19:56:55,064 - __main__ - DEBUG - Epoch 47/499\n2021-07-21 19:57:03,475 - __main__ - DEBUG - train Loss: 0.4575\n2021-07-21 19:57:06,066 - __main__ - DEBUG - val Loss: 0.4961\n2021-07-21 19:57:06,068 - __main__ - DEBUG - Epoch 48/499\n2021-07-21 19:57:14,347 - __main__ - DEBUG - train Loss: 0.4496\n2021-07-21 19:57:16,958 - __main__ - DEBUG - val Loss: 0.4835\n2021-07-21 19:57:16,968 - __main__ - DEBUG - Epoch 49/499\n2021-07-21 19:57:25,134 - __main__ - DEBUG - train Loss: 0.4471\n2021-07-21 19:57:27,751 - __main__ - DEBUG - val Loss: 0.5383\n2021-07-21 19:57:27,752 - __main__ - DEBUG - Epoch 50/499\n2021-07-21 19:57:35,928 - __main__ - DEBUG - train Loss: 0.4487\n2021-07-21 19:57:38,555 - __main__ - DEBUG - val Loss: 0.5361\n2021-07-21 19:57:38,557 - __main__ - DEBUG - Epoch 51/499\n2021-07-21 19:57:46,837 - __main__ - DEBUG - train Loss: 0.4499\n2021-07-21 19:57:49,625 - __main__ - DEBUG - val Loss: 0.5852\n2021-07-21 19:57:49,627 - __main__ - DEBUG - Epoch 52/499\n2021-07-21 19:57:58,091 - __main__ - DEBUG - train Loss: 0.4511\n2021-07-21 19:58:00,847 - __main__ - DEBUG - val Loss: 0.5596\n2021-07-21 19:58:00,849 - __main__ - DEBUG - Epoch 53/499\n2021-07-21 19:58:09,294 - __main__ - DEBUG - train Loss: 0.4473\n2021-07-21 19:58:11,891 - __main__ - DEBUG - val Loss: 
0.5320\n2021-07-21 19:58:11,892 - __main__ - DEBUG - Epoch 54/499\n2021-07-21 19:58:20,145 - __main__ - DEBUG - train Loss: 0.4491\n2021-07-21 19:58:22,752 - __main__ - DEBUG - val Loss: 0.5050\n2021-07-21 19:58:22,754 - __main__ - DEBUG - Epoch 55/499\n2021-07-21 19:58:30,937 - __main__ - DEBUG - train Loss: 0.4516\n2021-07-21 19:58:33,520 - __main__ - DEBUG - val Loss: 0.5315\n2021-07-21 19:58:33,522 - __main__ - DEBUG - Epoch 56/499\n2021-07-21 19:58:41,708 - __main__ - DEBUG - train Loss: 0.4537\n2021-07-21 19:58:44,383 - __main__ - DEBUG - val Loss: 0.5311\n2021-07-21 19:58:44,385 - __main__ - DEBUG - Epoch 57/499\n2021-07-21 19:58:52,756 - __main__ - DEBUG - train Loss: 0.4443\n2021-07-21 19:58:55,519 - __main__ - DEBUG - val Loss: 0.4704\n2021-07-21 19:58:55,529 - __main__ - DEBUG - Epoch 58/499\n2021-07-21 19:59:03,950 - __main__ - DEBUG - train Loss: 0.4418\n2021-07-21 19:59:06,734 - __main__ - DEBUG - val Loss: 0.4765\n2021-07-21 19:59:06,736 - __main__ - DEBUG - Epoch 59/499\n2021-07-21 19:59:15,155 - __main__ - DEBUG - train Loss: 0.4430\n2021-07-21 19:59:17,766 - __main__ - DEBUG - val Loss: 0.5835\n2021-07-21 19:59:17,768 - __main__ - DEBUG - Epoch 60/499\n2021-07-21 19:59:26,001 - __main__ - DEBUG - train Loss: 0.4437\n2021-07-21 19:59:28,642 - __main__ - DEBUG - val Loss: 0.6347\n2021-07-21 19:59:28,644 - __main__ - DEBUG - Epoch 61/499\n2021-07-21 19:59:36,845 - __main__ - DEBUG - train Loss: 0.4517\n2021-07-21 19:59:39,502 - __main__ - DEBUG - val Loss: 0.4941\n2021-07-21 19:59:39,504 - __main__ - DEBUG - Epoch 62/499\n2021-07-21 19:59:47,744 - __main__ - DEBUG - train Loss: 0.4495\n2021-07-21 19:59:50,337 - __main__ - DEBUG - val Loss: 0.5197\n2021-07-21 19:59:50,339 - __main__ - DEBUG - Epoch 63/499\n2021-07-21 19:59:58,636 - __main__ - DEBUG - train Loss: 0.4450\n2021-07-21 20:00:01,402 - __main__ - DEBUG - val Loss: 0.6089\n2021-07-21 20:00:01,404 - __main__ - DEBUG - Epoch 64/499\n2021-07-21 20:00:09,916 - __main__ - DEBUG - train Loss: 0.4399\n2021-07-21 20:00:12,691 - __main__ - DEBUG - val Loss: 0.5423\n2021-07-21 20:00:12,692 - __main__ - DEBUG - Epoch 65/499\n2021-07-21 20:00:21,089 - __main__ - DEBUG - train Loss: 0.4401\n2021-07-21 20:00:23,691 - __main__ - DEBUG - val Loss: 0.4724\n2021-07-21 20:00:23,693 - __main__ - DEBUG - Epoch 66/499\n2021-07-21 20:00:31,961 - __main__ - DEBUG - train Loss: 0.4406\n2021-07-21 20:00:34,630 - __main__ - DEBUG - val Loss: 0.5328\n2021-07-21 20:00:34,632 - __main__ - DEBUG - Epoch 67/499\n2021-07-21 20:00:42,844 - __main__ - DEBUG - train Loss: 0.4427\n2021-07-21 20:00:45,485 - __main__ - DEBUG - val Loss: 0.4897\n2021-07-21 20:00:45,488 - __main__ - DEBUG - Epoch 68/499\n2021-07-21 20:00:53,717 - __main__ - DEBUG - train Loss: 0.4463\n2021-07-21 20:00:56,345 - __main__ - DEBUG - val Loss: 0.4767\n2021-07-21 20:00:56,347 - __main__ - DEBUG - Epoch 69/499\n2021-07-21 20:01:04,737 - __main__ - DEBUG - train Loss: 0.4377\n2021-07-21 20:01:07,512 - __main__ - DEBUG - val Loss: 0.6272\n2021-07-21 20:01:07,514 - __main__ - DEBUG - Epoch 70/499\n2021-07-21 20:01:15,955 - __main__ - DEBUG - train Loss: 0.4476\n2021-07-21 20:01:18,721 - __main__ - DEBUG - val Loss: 0.4830\n2021-07-21 20:01:18,724 - __main__ - DEBUG - Epoch 71/499\n2021-07-21 20:01:27,117 - __main__ - DEBUG - train Loss: 0.4481\n2021-07-21 20:01:29,716 - __main__ - DEBUG - val Loss: 0.5387\n2021-07-21 20:01:29,718 - __main__ - DEBUG - Epoch 72/499\n2021-07-21 20:01:37,909 - __main__ - DEBUG - train Loss: 0.4436\n2021-07-21 20:01:40,526 - __main__ - DEBUG - val Loss: 
0.5136\n2021-07-21 20:01:40,527 - __main__ - DEBUG - Epoch 73/499\n2021-07-21 20:01:48,737 - __main__ - DEBUG - train Loss: 0.4286\n2021-07-21 20:01:51,367 - __main__ - DEBUG - val Loss: 0.5946\n2021-07-21 20:01:51,369 - __main__ - DEBUG - Epoch 74/499\n2021-07-21 20:01:59,667 - __main__ - DEBUG - train Loss: 0.4318\n2021-07-21 20:02:02,338 - __main__ - DEBUG - val Loss: 0.5065\n2021-07-21 20:02:02,340 - __main__ - DEBUG - Epoch 75/499\n2021-07-21 20:02:10,622 - __main__ - DEBUG - train Loss: 0.4376\n2021-07-21 20:02:13,350 - __main__ - DEBUG - val Loss: 0.4697\n2021-07-21 20:02:13,362 - __main__ - DEBUG - Epoch 76/499\n2021-07-21 20:02:21,822 - __main__ - DEBUG - train Loss: 0.4400\n2021-07-21 20:02:24,562 - __main__ - DEBUG - val Loss: 0.5548\n2021-07-21 20:02:24,564 - __main__ - DEBUG - Epoch 77/499\n2021-07-21 20:02:33,007 - __main__ - DEBUG - train Loss: 0.4413\n2021-07-21 20:02:35,632 - __main__ - DEBUG - val Loss: 0.5041\n2021-07-21 20:02:35,633 - __main__ - DEBUG - Epoch 78/499\n2021-07-21 20:02:43,939 - __main__ - DEBUG - train Loss: 0.4339\n2021-07-21 20:02:46,697 - __main__ - DEBUG - val Loss: 0.5462\n2021-07-21 20:02:46,699 - __main__ - DEBUG - Epoch 79/499\n2021-07-21 20:02:55,151 - __main__ - DEBUG - train Loss: 0.4280\n2021-07-21 20:02:57,976 - __main__ - DEBUG - val Loss: 0.4559\n2021-07-21 20:02:57,990 - __main__ - DEBUG - Epoch 80/499\n2021-07-21 20:03:06,388 - __main__ - DEBUG - train Loss: 0.4232\n2021-07-21 20:03:09,030 - __main__ - DEBUG - val Loss: 0.4905\n2021-07-21 20:03:09,031 - __main__ - DEBUG - Epoch 81/499\n2021-07-21 20:03:17,238 - __main__ - DEBUG - train Loss: 0.4218\n2021-07-21 20:03:19,871 - __main__ - DEBUG - val Loss: 0.5359\n2021-07-21 20:03:19,873 - __main__ - DEBUG - Epoch 82/499\n2021-07-21 20:03:28,035 - __main__ - DEBUG - train Loss: 0.4409\n2021-07-21 20:03:30,704 - __main__ - DEBUG - val Loss: 0.5545\n2021-07-21 20:03:30,706 - __main__ - DEBUG - Epoch 83/499\n2021-07-21 20:03:38,928 - __main__ - DEBUG - train Loss: 0.4381\n2021-07-21 20:03:41,557 - __main__ - DEBUG - val Loss: 0.5519\n2021-07-21 20:03:41,558 - __main__ - DEBUG - Epoch 84/499\n2021-07-21 20:03:49,901 - __main__ - DEBUG - train Loss: 0.4238\n2021-07-21 20:03:52,658 - __main__ - DEBUG - val Loss: 0.5609\n2021-07-21 20:03:52,660 - __main__ - DEBUG - Epoch 85/499\n2021-07-21 20:04:01,131 - __main__ - DEBUG - train Loss: 0.4288\n2021-07-21 20:04:03,881 - __main__ - DEBUG - val Loss: 0.4923\n2021-07-21 20:04:03,883 - __main__ - DEBUG - Epoch 86/499\n2021-07-21 20:04:12,265 - __main__ - DEBUG - train Loss: 0.4238\n2021-07-21 20:04:14,919 - __main__ - DEBUG - val Loss: 0.4963\n2021-07-21 20:04:14,921 - __main__ - DEBUG - Epoch 87/499\n2021-07-21 20:04:23,169 - __main__ - DEBUG - train Loss: 0.4244\n2021-07-21 20:04:25,799 - __main__ - DEBUG - val Loss: 0.5751\n2021-07-21 20:04:25,801 - __main__ - DEBUG - Epoch 88/499\n2021-07-21 20:04:33,992 - __main__ - DEBUG - train Loss: 0.4200\n2021-07-21 20:04:36,578 - __main__ - DEBUG - val Loss: 0.4822\n2021-07-21 20:04:36,580 - __main__ - DEBUG - Epoch 89/499\n2021-07-21 20:04:44,767 - __main__ - DEBUG - train Loss: 0.4193\n2021-07-21 20:04:47,437 - __main__ - DEBUG - val Loss: 0.4927\n2021-07-21 20:04:47,440 - __main__ - DEBUG - Epoch 90/499\n2021-07-21 20:04:55,759 - __main__ - DEBUG - train Loss: 0.4197\n2021-07-21 20:04:58,512 - __main__ - DEBUG - val Loss: 0.4789\n2021-07-21 20:04:58,514 - __main__ - DEBUG - Epoch 91/499\n2021-07-21 20:05:06,931 - __main__ - DEBUG - train Loss: 0.4148\n2021-07-21 20:05:09,653 - __main__ - DEBUG - val Loss: 
0.5120\n2021-07-21 20:05:09,654 - __main__ - DEBUG - Epoch 92/499\n2021-07-21 20:05:18,114 - __main__ - DEBUG - train Loss: 0.4213\n2021-07-21 20:05:20,742 - __main__ - DEBUG - val Loss: 0.4933\n2021-07-21 20:05:20,745 - __main__ - DEBUG - Epoch 93/499\n2021-07-21 20:05:28,994 - __main__ - DEBUG - train Loss: 0.4180\n2021-07-21 20:05:31,636 - __main__ - DEBUG - val Loss: 0.4979\n2021-07-21 20:05:31,639 - __main__ - DEBUG - Epoch 94/499\n2021-07-21 20:05:39,874 - __main__ - DEBUG - train Loss: 0.4224\n2021-07-21 20:05:42,487 - __main__ - DEBUG - val Loss: 0.5177\n2021-07-21 20:05:42,489 - __main__ - DEBUG - Epoch 95/499\n2021-07-21 20:05:50,741 - __main__ - DEBUG - train Loss: 0.4177\n2021-07-21 20:05:53,378 - __main__ - DEBUG - val Loss: 0.5166\n2021-07-21 20:05:53,380 - __main__ - DEBUG - Epoch 96/499\n2021-07-21 20:06:01,745 - __main__ - DEBUG - train Loss: 0.4193\n2021-07-21 20:06:04,548 - __main__ - DEBUG - val Loss: 0.5801\n2021-07-21 20:06:04,550 - __main__ - DEBUG - Epoch 97/499\n2021-07-21 20:06:13,014 - __main__ - DEBUG - train Loss: 0.4167\n2021-07-21 20:06:15,741 - __main__ - DEBUG - val Loss: 0.5017\n2021-07-21 20:06:15,743 - __main__ - DEBUG - Epoch 98/499\n2021-07-21 20:06:24,186 - __main__ - DEBUG - train Loss: 0.4245\n2021-07-21 20:06:26,781 - __main__ - DEBUG - val Loss: 0.5806\n2021-07-21 20:06:26,783 - __main__ - DEBUG - Epoch 99/499\n2021-07-21 20:06:34,990 - __main__ - DEBUG - train Loss: 0.4187\n2021-07-21 20:06:37,632 - __main__ - DEBUG - val Loss: 0.4704\n2021-07-21 20:06:37,633 - __main__ - DEBUG - Epoch 100/499\n2021-07-21 20:06:45,855 - __main__ - DEBUG - train Loss: 0.4169\n2021-07-21 20:06:48,462 - __main__ - DEBUG - val Loss: 0.4901\n2021-07-21 20:06:48,464 - __main__ - DEBUG - Epoch 101/499\n2021-07-21 20:06:56,734 - __main__ - DEBUG - train Loss: 0.4158\n2021-07-21 20:06:59,309 - __main__ - DEBUG - val Loss: 0.4950\n2021-07-21 20:06:59,310 - __main__ - DEBUG - Epoch 102/499\n2021-07-21 20:07:07,600 - __main__ - DEBUG - train Loss: 0.4170\n2021-07-21 20:07:10,347 - __main__ - DEBUG - val Loss: 0.4729\n2021-07-21 20:07:10,348 - __main__ - DEBUG - Epoch 103/499\n2021-07-21 20:07:18,774 - __main__ - DEBUG - train Loss: 0.4097\n2021-07-21 20:07:21,522 - __main__ - DEBUG - val Loss: 0.5407\n2021-07-21 20:07:21,524 - __main__ - DEBUG - Epoch 104/499\n2021-07-21 20:07:29,910 - __main__ - DEBUG - train Loss: 0.4088\n2021-07-21 20:07:32,520 - __main__ - DEBUG - val Loss: 0.4948\n2021-07-21 20:07:32,521 - __main__ - DEBUG - Epoch 105/499\n2021-07-21 20:07:40,804 - __main__ - DEBUG - train Loss: 0.4009\n2021-07-21 20:07:43,615 - __main__ - DEBUG - val Loss: 0.4878\n2021-07-21 20:07:43,617 - __main__ - DEBUG - Epoch 106/499\n2021-07-21 20:07:52,022 - __main__ - DEBUG - train Loss: 0.3969\n2021-07-21 20:07:54,777 - __main__ - DEBUG - val Loss: 0.5315\n2021-07-21 20:07:54,779 - __main__ - DEBUG - Epoch 107/499\n2021-07-21 20:08:03,170 - __main__ - DEBUG - train Loss: 0.4026\n2021-07-21 20:08:05,777 - __main__ - DEBUG - val Loss: 0.4986\n2021-07-21 20:08:05,778 - __main__ - DEBUG - Epoch 108/499\n2021-07-21 20:08:14,053 - __main__ - DEBUG - train Loss: 0.4094\n2021-07-21 20:08:16,652 - __main__ - DEBUG - val Loss: 0.6988\n2021-07-21 20:08:16,654 - __main__ - DEBUG - Epoch 109/499\n2021-07-21 20:08:24,830 - __main__ - DEBUG - train Loss: 0.4105\n2021-07-21 20:08:27,395 - __main__ - DEBUG - val Loss: 0.5153\n2021-07-21 20:08:27,396 - __main__ - DEBUG - Epoch 110/499\n2021-07-21 20:08:35,530 - __main__ - DEBUG - train Loss: 0.4099\n2021-07-21 20:08:38,133 - __main__ - DEBUG - 
val Loss: 0.5246\n2021-07-21 20:08:38,134 - __main__ - DEBUG - Epoch 111/499\n2021-07-21 20:08:46,363 - __main__ - DEBUG - train Loss: 0.4116\n2021-07-21 20:08:49,156 - __main__ - DEBUG - val Loss: 0.5546\n2021-07-21 20:08:49,158 - __main__ - DEBUG - Epoch 112/499\n2021-07-21 20:08:57,612 - __main__ - DEBUG - train Loss: 0.4005\n2021-07-21 20:09:00,334 - __main__ - DEBUG - val Loss: 0.5666\n2021-07-21 20:09:00,336 - __main__ - DEBUG - Epoch 113/499\n2021-07-21 20:09:08,804 - __main__ - DEBUG - train Loss: 0.4022\n2021-07-21 20:09:11,406 - __main__ - DEBUG - val Loss: 0.4990\n2021-07-21 20:09:11,408 - __main__ - DEBUG - Epoch 114/499\n2021-07-21 20:09:19,695 - __main__ - DEBUG - train Loss: 0.4061\n2021-07-21 20:09:22,389 - __main__ - DEBUG - val Loss: 0.5179\n2021-07-21 20:09:22,390 - __main__ - DEBUG - Epoch 115/499\n2021-07-21 20:09:30,594 - __main__ - DEBUG - train Loss: 0.3996\n2021-07-21 20:09:33,208 - __main__ - DEBUG - val Loss: 0.4999\n2021-07-21 20:09:33,210 - __main__ - DEBUG - Epoch 116/499\n2021-07-21 20:09:41,400 - __main__ - DEBUG - train Loss: 0.4055\n2021-07-21 20:09:43,994 - __main__ - DEBUG - val Loss: 0.5155\n2021-07-21 20:09:43,996 - __main__ - DEBUG - Epoch 117/499\n2021-07-21 20:09:52,204 - __main__ - DEBUG - train Loss: 0.4031\n2021-07-21 20:09:54,999 - __main__ - DEBUG - val Loss: 0.5037\n2021-07-21 20:09:55,001 - __main__ - DEBUG - Epoch 118/499\n2021-07-21 20:10:03,521 - __main__ - DEBUG - train Loss: 0.4036\n2021-07-21 20:10:06,352 - __main__ - DEBUG - val Loss: 0.5519\n2021-07-21 20:10:06,354 - __main__ - DEBUG - Epoch 119/499\n2021-07-21 20:10:14,766 - __main__ - DEBUG - train Loss: 0.3960\n2021-07-21 20:10:17,323 - __main__ - DEBUG - val Loss: 0.4741\n2021-07-21 20:10:17,325 - __main__ - DEBUG - Epoch 120/499\n2021-07-21 20:10:25,518 - __main__ - DEBUG - train Loss: 0.4064\n2021-07-21 20:10:28,194 - __main__ - DEBUG - val Loss: 0.4859\n2021-07-21 20:10:28,196 - __main__ - DEBUG - Epoch 121/499\n2021-07-21 20:10:36,328 - __main__ - DEBUG - train Loss: 0.4001\n2021-07-21 20:10:38,948 - __main__ - DEBUG - val Loss: 0.5070\n2021-07-21 20:10:38,950 - __main__ - DEBUG - Epoch 122/499\n2021-07-21 20:10:47,112 - __main__ - DEBUG - train Loss: 0.3854\n2021-07-21 20:10:49,750 - __main__ - DEBUG - val Loss: 0.4955\n2021-07-21 20:10:49,752 - __main__ - DEBUG - Epoch 123/499\n2021-07-21 20:10:58,025 - __main__ - DEBUG - train Loss: 0.3852\n2021-07-21 20:11:00,773 - __main__ - DEBUG - val Loss: 0.5777\n2021-07-21 20:11:00,774 - __main__ - DEBUG - Epoch 124/499\n2021-07-21 20:11:09,214 - __main__ - DEBUG - train Loss: 0.3969\n2021-07-21 20:11:11,977 - __main__ - DEBUG - val Loss: 0.6686\n2021-07-21 20:11:11,979 - __main__ - DEBUG - Epoch 125/499\n2021-07-21 20:11:20,425 - __main__ - DEBUG - train Loss: 0.3990\n2021-07-21 20:11:23,030 - __main__ - DEBUG - val Loss: 0.5622\n2021-07-21 20:11:23,031 - __main__ - DEBUG - Epoch 126/499\n2021-07-21 20:11:31,261 - __main__ - DEBUG - train Loss: 0.3904\n2021-07-21 20:11:33,899 - __main__ - DEBUG - val Loss: 0.4785\n2021-07-21 20:11:33,900 - __main__ - DEBUG - Epoch 127/499\n2021-07-21 20:11:42,057 - __main__ - DEBUG - train Loss: 0.3853\n2021-07-21 20:11:44,649 - __main__ - DEBUG - val Loss: 0.6093\n2021-07-21 20:11:44,651 - __main__ - DEBUG - Epoch 128/499\n2021-07-21 20:11:52,879 - __main__ - DEBUG - train Loss: 0.3901\n2021-07-21 20:11:55,476 - __main__ - DEBUG - val Loss: 0.4859\n2021-07-21 20:11:55,479 - __main__ - DEBUG - Epoch 129/499\n2021-07-21 20:12:03,723 - __main__ - DEBUG - train Loss: 0.3890\n2021-07-21 20:12:06,556 - 
__main__ - DEBUG - val Loss: 0.5002\n2021-07-21 20:12:06,558 - __main__ - DEBUG - Epoch 130/499\n2021-07-21 20:12:15,028 - __main__ - DEBUG - train Loss: 0.3872\n2021-07-21 20:12:17,796 - __main__ - DEBUG - val Loss: 0.5427\n2021-07-21 20:12:17,798 - __main__ - DEBUG - Epoch 131/499\n2021-07-21 20:12:26,222 - __main__ - DEBUG - train Loss: 0.3901\n2021-07-21 20:12:28,829 - __main__ - DEBUG - val Loss: 0.5499\n2021-07-21 20:12:28,830 - __main__ - DEBUG - Epoch 132/499\n2021-07-21 20:12:36,995 - __main__ - DEBUG - train Loss: 0.3945\n2021-07-21 20:12:39,679 - __main__ - DEBUG - val Loss: 0.5409\n2021-07-21 20:12:39,680 - __main__ - DEBUG - Epoch 133/499\n2021-07-21 20:12:47,823 - __main__ - DEBUG - train Loss: 0.3851\n2021-07-21 20:12:50,449 - __main__ - DEBUG - val Loss: 0.4935\n2021-07-21 20:12:50,451 - __main__ - DEBUG - Epoch 134/499\n2021-07-21 20:12:58,650 - __main__ - DEBUG - train Loss: 0.3849\n2021-07-21 20:13:01,251 - __main__ - DEBUG - val Loss: 0.5270\n2021-07-21 20:13:01,252 - __main__ - DEBUG - Epoch 135/499\n2021-07-21 20:13:09,502 - __main__ - DEBUG - train Loss: 0.3738\n2021-07-21 20:13:12,299 - __main__ - DEBUG - val Loss: 0.6596\n2021-07-21 20:13:12,301 - __main__ - DEBUG - Epoch 136/499\n2021-07-21 20:13:20,715 - __main__ - DEBUG - train Loss: 0.3740\n2021-07-21 20:13:23,442 - __main__ - DEBUG - val Loss: 0.4937\n2021-07-21 20:13:23,444 - __main__ - DEBUG - Epoch 137/499\n2021-07-21 20:13:31,887 - __main__ - DEBUG - train Loss: 0.3675\n2021-07-21 20:13:34,474 - __main__ - DEBUG - val Loss: 0.5034\n2021-07-21 20:13:34,475 - __main__ - DEBUG - Epoch 138/499\n2021-07-21 20:13:42,691 - __main__ - DEBUG - train Loss: 0.3894\n2021-07-21 20:13:45,452 - __main__ - DEBUG - val Loss: 0.5025\n2021-07-21 20:13:45,454 - __main__ - DEBUG - Epoch 139/499\n2021-07-21 20:13:53,870 - __main__ - DEBUG - train Loss: 0.3801\n2021-07-21 20:13:56,635 - __main__ - DEBUG - val Loss: 0.4824\n2021-07-21 20:13:56,637 - __main__ - DEBUG - Epoch 140/499\n2021-07-21 20:14:05,031 - __main__ - DEBUG - train Loss: 0.3824\n2021-07-21 20:14:07,677 - __main__ - DEBUG - val Loss: 0.5749\n2021-07-21 20:14:07,678 - __main__ - DEBUG - Epoch 141/499\n2021-07-21 20:14:15,869 - __main__ - DEBUG - train Loss: 0.3765\n2021-07-21 20:14:18,475 - __main__ - DEBUG - val Loss: 0.5110\n2021-07-21 20:14:18,476 - __main__ - DEBUG - Epoch 142/499\n2021-07-21 20:14:26,621 - __main__ - DEBUG - train Loss: 0.3665\n2021-07-21 20:14:29,219 - __main__ - DEBUG - val Loss: 0.4913\n2021-07-21 20:14:29,222 - __main__ - DEBUG - Epoch 143/499\n2021-07-21 20:14:37,432 - __main__ - DEBUG - train Loss: 0.3673\n2021-07-21 20:14:40,035 - __main__ - DEBUG - val Loss: 0.4918\n2021-07-21 20:14:40,037 - __main__ - DEBUG - Epoch 144/499\n2021-07-21 20:14:48,235 - __main__ - DEBUG - train Loss: 0.3718\n2021-07-21 20:14:51,000 - __main__ - DEBUG - val Loss: 0.6820\n2021-07-21 20:14:51,001 - __main__ - DEBUG - Epoch 145/499\n2021-07-21 20:14:59,463 - __main__ - DEBUG - train Loss: 0.3729\n2021-07-21 20:15:02,200 - __main__ - DEBUG - val Loss: 0.4613\n2021-07-21 20:15:02,202 - __main__ - DEBUG - Epoch 146/499\n2021-07-21 20:15:10,619 - __main__ - DEBUG - train Loss: 0.3763\n2021-07-21 20:15:13,269 - __main__ - DEBUG - val Loss: 0.5138\n2021-07-21 20:15:13,271 - __main__ - DEBUG - Epoch 147/499\n2021-07-21 20:15:21,511 - __main__ - DEBUG - train Loss: 0.3747\n2021-07-21 20:15:24,160 - __main__ - DEBUG - val Loss: 0.5858\n2021-07-21 20:15:24,162 - __main__ - DEBUG - Epoch 148/499\n2021-07-21 20:15:32,355 - __main__ - DEBUG - train Loss: 
0.3702\n2021-07-21 20:15:34,986 - __main__ - DEBUG - val Loss: 0.6189\n2021-07-21 20:15:34,988 - __main__ - DEBUG - Epoch 149/499\n2021-07-21 20:15:43,193 - __main__ - DEBUG - train Loss: 0.3695\n2021-07-21 20:15:45,843 - __main__ - DEBUG - val Loss: 0.5755\n2021-07-21 20:15:45,845 - __main__ - DEBUG - Epoch 150/499\n2021-07-21 20:15:54,136 - __main__ - DEBUG - train Loss: 0.3721\n2021-07-21 20:15:56,924 - __main__ - DEBUG - val Loss: 0.5198\n2021-07-21 20:15:56,926 - __main__ - DEBUG - Epoch 151/499\n2021-07-21 20:16:05,389 - __main__ - DEBUG - train Loss: 0.3639\n2021-07-21 20:16:08,218 - __main__ - DEBUG - val Loss: 0.4767\n2021-07-21 20:16:08,219 - __main__ - DEBUG - Epoch 152/499\n2021-07-21 20:16:16,708 - __main__ - DEBUG - train Loss: 0.3622\n2021-07-21 20:16:19,355 - __main__ - DEBUG - val Loss: 0.5455\n2021-07-21 20:16:19,357 - __main__ - DEBUG - Epoch 153/499\n2021-07-21 20:16:27,486 - __main__ - DEBUG - train Loss: 0.3618\n2021-07-21 20:16:30,112 - __main__ - DEBUG - val Loss: 0.5550\n2021-07-21 20:16:30,114 - __main__ - DEBUG - Epoch 154/499\n2021-07-21 20:16:38,328 - __main__ - DEBUG - train Loss: 0.3618\n2021-07-21 20:16:40,936 - __main__ - DEBUG - val Loss: 0.5522\n2021-07-21 20:16:40,938 - __main__ - DEBUG - Epoch 155/499\n2021-07-21 20:16:49,166 - __main__ - DEBUG - train Loss: 0.3654\n2021-07-21 20:16:51,802 - __main__ - DEBUG - val Loss: 0.6095\n2021-07-21 20:16:51,804 - __main__ - DEBUG - Epoch 156/499\n2021-07-21 20:16:59,995 - __main__ - DEBUG - train Loss: 0.3592\n2021-07-21 20:17:02,765 - __main__ - DEBUG - val Loss: 0.5349\n2021-07-21 20:17:02,767 - __main__ - DEBUG - Epoch 157/499\n2021-07-21 20:17:11,238 - __main__ - DEBUG - train Loss: 0.3657\n2021-07-21 20:17:14,009 - __main__ - DEBUG - val Loss: 0.5453\n2021-07-21 20:17:14,010 - __main__ - DEBUG - Epoch 158/499\n2021-07-21 20:17:22,471 - __main__ - DEBUG - train Loss: 0.3652\n2021-07-21 20:17:25,133 - __main__ - DEBUG - val Loss: 0.4869\n2021-07-21 20:17:25,135 - __main__ - DEBUG - Epoch 159/499\n2021-07-21 20:17:33,348 - __main__ - DEBUG - train Loss: 0.3658\n2021-07-21 20:17:35,978 - __main__ - DEBUG - val Loss: 0.6156\n2021-07-21 20:17:35,980 - __main__ - DEBUG - Epoch 160/499\n2021-07-21 20:17:44,197 - __main__ - DEBUG - train Loss: 0.3641\n2021-07-21 20:17:46,863 - __main__ - DEBUG - val Loss: 0.4780\n2021-07-21 20:17:46,864 - __main__ - DEBUG - Epoch 161/499\n2021-07-21 20:17:55,051 - __main__ - DEBUG - train Loss: 0.3649\n2021-07-21 20:17:57,686 - __main__ - DEBUG - val Loss: 0.5176\n2021-07-21 20:17:57,688 - __main__ - DEBUG - Epoch 162/499\n2021-07-21 20:18:05,942 - __main__ - DEBUG - train Loss: 0.3628\n2021-07-21 20:18:08,713 - __main__ - DEBUG - val Loss: 0.4872\n2021-07-21 20:18:08,715 - __main__ - DEBUG - Epoch 163/499\n2021-07-21 20:18:17,103 - __main__ - DEBUG - train Loss: 0.3550\n2021-07-21 20:18:19,876 - __main__ - DEBUG - val Loss: 0.5916\n2021-07-21 20:18:19,877 - __main__ - DEBUG - Epoch 164/499\n2021-07-21 20:18:28,366 - __main__ - DEBUG - train Loss: 0.3489\n2021-07-21 20:18:30,969 - __main__ - DEBUG - val Loss: 0.5172\n2021-07-21 20:18:30,971 - __main__ - DEBUG - Epoch 165/499\n2021-07-21 20:18:39,201 - __main__ - DEBUG - train Loss: 0.3475\n2021-07-21 20:18:42,054 - __main__ - DEBUG - val Loss: 0.5257\n2021-07-21 20:18:42,056 - __main__ - DEBUG - Epoch 166/499\n2021-07-21 20:18:50,510 - __main__ - DEBUG - train Loss: 0.3334\n2021-07-21 20:18:53,302 - __main__ - DEBUG - val Loss: 0.5475\n2021-07-21 20:18:53,303 - __main__ - DEBUG - Epoch 167/499\n2021-07-21 20:19:01,782 - __main__ - 
DEBUG - train Loss: 0.3469\n2021-07-21 20:19:04,388 - __main__ - DEBUG - val Loss: 0.7318\n2021-07-21 20:19:04,390 - __main__ - DEBUG - Epoch 168/499\n2021-07-21 20:19:12,637 - __main__ - DEBUG - train Loss: 0.3541\n2021-07-21 20:19:15,265 - __main__ - DEBUG - val Loss: 0.5272\n2021-07-21 20:19:15,267 - __main__ - DEBUG - Epoch 169/499\n2021-07-21 20:19:23,475 - __main__ - DEBUG - train Loss: 0.3636\n2021-07-21 20:19:26,079 - __main__ - DEBUG - val Loss: 0.5108\n2021-07-21 20:19:26,080 - __main__ - DEBUG - Epoch 170/499\n2021-07-21 20:19:34,264 - __main__ - DEBUG - train Loss: 0.3488\n2021-07-21 20:19:36,880 - __main__ - DEBUG - val Loss: 0.5908\n2021-07-21 20:19:36,882 - __main__ - DEBUG - Epoch 171/499\n2021-07-21 20:19:45,124 - __main__ - DEBUG - train Loss: 0.3593\n2021-07-21 20:19:47,880 - __main__ - DEBUG - val Loss: 0.5713\n2021-07-21 20:19:47,881 - __main__ - DEBUG - Epoch 172/499\n2021-07-21 20:19:56,316 - __main__ - DEBUG - train Loss: 0.3390\n2021-07-21 20:19:59,112 - __main__ - DEBUG - val Loss: 0.5112\n2021-07-21 20:19:59,114 - __main__ - DEBUG - Epoch 173/499\n2021-07-21 20:20:07,561 - __main__ - DEBUG - train Loss: 0.3412\n2021-07-21 20:20:10,180 - __main__ - DEBUG - val Loss: 0.5806\n2021-07-21 20:20:10,182 - __main__ - DEBUG - Epoch 174/499\n2021-07-21 20:20:18,391 - __main__ - DEBUG - train Loss: 0.3455\n2021-07-21 20:20:21,083 - __main__ - DEBUG - val Loss: 0.5144\n2021-07-21 20:20:21,085 - __main__ - DEBUG - Epoch 175/499\n2021-07-21 20:20:29,300 - __main__ - DEBUG - train Loss: 0.3341\n2021-07-21 20:20:31,898 - __main__ - DEBUG - val Loss: 0.5369\n2021-07-21 20:20:31,900 - __main__ - DEBUG - Epoch 176/499\n2021-07-21 20:20:40,075 - __main__ - DEBUG - train Loss: 0.3499\n2021-07-21 20:20:42,689 - __main__ - DEBUG - val Loss: 0.6134\n2021-07-21 20:20:42,691 - __main__ - DEBUG - Epoch 177/499\n2021-07-21 20:20:50,910 - __main__ - DEBUG - train Loss: 0.3450\n2021-07-21 20:20:53,687 - __main__ - DEBUG - val Loss: 0.5693\n2021-07-21 20:20:53,689 - __main__ - DEBUG - Epoch 178/499\n2021-07-21 20:21:02,155 - __main__ - DEBUG - train Loss: 0.3422\n2021-07-21 20:21:04,979 - __main__ - DEBUG - val Loss: 0.6029\n2021-07-21 20:21:04,981 - __main__ - DEBUG - Epoch 179/499\n2021-07-21 20:21:13,489 - __main__ - DEBUG - train Loss: 0.3313\n2021-07-21 20:21:16,147 - __main__ - DEBUG - val Loss: 0.6410\n2021-07-21 20:21:16,149 - __main__ - DEBUG - Epoch 180/499\n2021-07-21 20:21:24,366 - __main__ - DEBUG - train Loss: 0.3356\n2021-07-21 20:21:27,012 - __main__ - DEBUG - val Loss: 0.5680\n2021-07-21 20:21:27,014 - __main__ - DEBUG - Epoch 181/499\n2021-07-21 20:21:35,181 - __main__ - DEBUG - train Loss: 0.3420\n2021-07-21 20:21:37,777 - __main__ - DEBUG - val Loss: 0.5432\n2021-07-21 20:21:37,779 - __main__ - DEBUG - Epoch 182/499\n2021-07-21 20:21:45,999 - __main__ - DEBUG - train Loss: 0.3402\n2021-07-21 20:21:48,640 - __main__ - DEBUG - val Loss: 0.6301\n2021-07-21 20:21:48,642 - __main__ - DEBUG - Epoch 183/499\n2021-07-21 20:21:56,843 - __main__ - DEBUG - train Loss: 0.3478\n2021-07-21 20:21:59,667 - __main__ - DEBUG - val Loss: 0.5517\n2021-07-21 20:21:59,669 - __main__ - DEBUG - Epoch 184/499\n2021-07-21 20:22:08,136 - __main__ - DEBUG - train Loss: 0.3459\n2021-07-21 20:22:10,939 - __main__ - DEBUG - val Loss: 0.5787\n2021-07-21 20:22:10,940 - __main__ - DEBUG - Epoch 185/499\n2021-07-21 20:22:19,331 - __main__ - DEBUG - train Loss: 0.3445\n2021-07-21 20:22:21,972 - __main__ - DEBUG - val Loss: 0.6333\n2021-07-21 20:22:21,974 - __main__ - DEBUG - Epoch 186/499\n2021-07-21 
20:22:30,169 - __main__ - DEBUG - train Loss: 0.3439\n2021-07-21 20:22:32,792 - __main__ - DEBUG - val Loss: 0.5235\n2021-07-21 20:22:32,793 - __main__ - DEBUG - Epoch 187/499\n2021-07-21 20:22:41,060 - __main__ - DEBUG - train Loss: 0.3368\n2021-07-21 20:22:43,700 - __main__ - DEBUG - val Loss: 0.5019\n2021-07-21 20:22:43,702 - __main__ - DEBUG - Epoch 188/499\n2021-07-21 20:22:51,958 - __main__ - DEBUG - train Loss: 0.3464\n2021-07-21 20:22:54,653 - __main__ - DEBUG - val Loss: 0.5108\n2021-07-21 20:22:54,655 - __main__ - DEBUG - Epoch 189/499\n2021-07-21 20:23:02,964 - __main__ - DEBUG - train Loss: 0.3494\n2021-07-21 20:23:05,773 - __main__ - DEBUG - val Loss: 0.6541\n2021-07-21 20:23:05,775 - __main__ - DEBUG - Epoch 190/499\n2021-07-21 20:23:14,268 - __main__ - DEBUG - train Loss: 0.3334\n2021-07-21 20:23:17,084 - __main__ - DEBUG - val Loss: 0.4633\n2021-07-21 20:23:17,086 - __main__ - DEBUG - Epoch 191/499\n2021-07-21 20:23:25,655 - __main__ - DEBUG - train Loss: 0.3209\n2021-07-21 20:23:28,284 - __main__ - DEBUG - val Loss: 0.5079\n2021-07-21 20:23:28,287 - __main__ - DEBUG - Epoch 192/499\n2021-07-21 20:23:36,605 - __main__ - DEBUG - train Loss: 0.3162\n2021-07-21 20:23:39,254 - __main__ - DEBUG - val Loss: 0.5408\n2021-07-21 20:23:39,256 - __main__ - DEBUG - Epoch 193/499\n2021-07-21 20:23:47,622 - __main__ - DEBUG - train Loss: 0.3432\n2021-07-21 20:23:50,254 - __main__ - DEBUG - val Loss: 0.5864\n2021-07-21 20:23:50,255 - __main__ - DEBUG - Epoch 194/499\n2021-07-21 20:23:58,520 - __main__ - DEBUG - train Loss: 0.3226\n2021-07-21 20:24:01,226 - __main__ - DEBUG - val Loss: 0.5458\n2021-07-21 20:24:01,228 - __main__ - DEBUG - Epoch 195/499\n2021-07-21 20:24:09,542 - __main__ - DEBUG - train Loss: 0.3172\n2021-07-21 20:24:12,369 - __main__ - DEBUG - val Loss: 0.4888\n2021-07-21 20:24:12,371 - __main__ - DEBUG - Epoch 196/499\n2021-07-21 20:24:20,910 - __main__ - DEBUG - train Loss: 0.3219\n2021-07-21 20:24:23,689 - __main__ - DEBUG - val Loss: 0.6979\n2021-07-21 20:24:23,691 - __main__ - DEBUG - Epoch 197/499\n2021-07-21 20:24:32,205 - __main__ - DEBUG - train Loss: 0.3216\n2021-07-21 20:24:34,863 - __main__ - DEBUG - val Loss: 0.5263\n2021-07-21 20:24:34,865 - __main__ - DEBUG - Epoch 198/499\n2021-07-21 20:24:43,168 - __main__ - DEBUG - train Loss: 0.3221\n2021-07-21 20:24:45,981 - __main__ - DEBUG - val Loss: 0.6137\n2021-07-21 20:24:45,984 - __main__ - DEBUG - Epoch 199/499\n2021-07-21 20:24:54,515 - __main__ - DEBUG - train Loss: 0.3171\n2021-07-21 20:24:57,328 - __main__ - DEBUG - val Loss: 0.5743\n2021-07-21 20:24:57,330 - __main__ - DEBUG - Epoch 200/499\n2021-07-21 20:25:05,874 - __main__ - DEBUG - train Loss: 0.3271\n2021-07-21 20:25:08,518 - __main__ - DEBUG - val Loss: 0.5958\n2021-07-21 20:25:08,520 - __main__ - DEBUG - Epoch 201/499\n2021-07-21 20:25:16,845 - __main__ - DEBUG - train Loss: 0.3309\n2021-07-21 20:25:19,490 - __main__ - DEBUG - val Loss: 0.6098\n2021-07-21 20:25:19,492 - __main__ - DEBUG - Epoch 202/499\n2021-07-21 20:25:27,757 - __main__ - DEBUG - train Loss: 0.3295\n2021-07-21 20:25:30,505 - __main__ - DEBUG - val Loss: 0.5212\n2021-07-21 20:25:30,507 - __main__ - DEBUG - Epoch 203/499\n2021-07-21 20:25:38,900 - __main__ - DEBUG - train Loss: 0.3197\n2021-07-21 20:25:41,550 - __main__ - DEBUG - val Loss: 0.5129\n2021-07-21 20:25:41,552 - __main__ - DEBUG - Epoch 204/499\n2021-07-21 20:25:49,894 - __main__ - DEBUG - train Loss: 0.3120\n2021-07-21 20:25:52,697 - __main__ - DEBUG - val Loss: 0.5657\n2021-07-21 20:25:52,699 - __main__ - DEBUG - Epoch 
205/499\n2021-07-21 20:26:01,226 - __main__ - DEBUG - train Loss: 0.3194\n2021-07-21 20:26:04,024 - __main__ - DEBUG - val Loss: 0.5907\n2021-07-21 20:26:04,026 - __main__ - DEBUG - Epoch 206/499\n2021-07-21 20:26:12,521 - __main__ - DEBUG - train Loss: 0.3091\n2021-07-21 20:26:15,235 - __main__ - DEBUG - val Loss: 0.5573\n2021-07-21 20:26:15,237 - __main__ - DEBUG - Epoch 207/499\n2021-07-21 20:26:23,617 - __main__ - DEBUG - train Loss: 0.3192\n2021-07-21 20:26:26,327 - __main__ - DEBUG - val Loss: 0.5896\n2021-07-21 20:26:26,329 - __main__ - DEBUG - Epoch 208/499\n2021-07-21 20:26:34,602 - __main__ - DEBUG - train Loss: 0.3079\n2021-07-21 20:26:37,252 - __main__ - DEBUG - val Loss: 0.5461\n2021-07-21 20:26:37,254 - __main__ - DEBUG - Epoch 209/499\n2021-07-21 20:26:45,561 - __main__ - DEBUG - train Loss: 0.3002\n2021-07-21 20:26:48,240 - __main__ - DEBUG - val Loss: 0.6271\n2021-07-21 20:26:48,242 - __main__ - DEBUG - Epoch 210/499\n2021-07-21 20:26:56,592 - __main__ - DEBUG - train Loss: 0.3016\n2021-07-21 20:26:59,412 - __main__ - DEBUG - val Loss: 0.7629\n2021-07-21 20:26:59,414 - __main__ - DEBUG - Epoch 211/499\n2021-07-21 20:27:07,935 - __main__ - DEBUG - train Loss: 0.3079\n2021-07-21 20:27:10,763 - __main__ - DEBUG - val Loss: 0.7392\n2021-07-21 20:27:10,765 - __main__ - DEBUG - Epoch 212/499\n2021-07-21 20:27:19,328 - __main__ - DEBUG - train Loss: 0.3117\n2021-07-21 20:27:21,964 - __main__ - DEBUG - val Loss: 0.6046\n2021-07-21 20:27:21,966 - __main__ - DEBUG - Epoch 213/499\n2021-07-21 20:27:30,337 - __main__ - DEBUG - train Loss: 0.3229\n2021-07-21 20:27:33,065 - __main__ - DEBUG - val Loss: 0.5931\n2021-07-21 20:27:33,067 - __main__ - DEBUG - Epoch 214/499\n2021-07-21 20:27:41,563 - __main__ - DEBUG - train Loss: 0.3113\n2021-07-21 20:27:44,315 - __main__ - DEBUG - val Loss: 0.5669\n2021-07-21 20:27:44,316 - __main__ - DEBUG - Epoch 215/499\n2021-07-21 20:27:52,591 - __main__ - DEBUG - train Loss: 0.3098\n2021-07-21 20:27:55,288 - __main__ - DEBUG - val Loss: 0.8357\n2021-07-21 20:27:55,291 - __main__ - DEBUG - Epoch 216/499\n2021-07-21 20:28:03,834 - __main__ - DEBUG - train Loss: 0.2904\n2021-07-21 20:28:06,651 - __main__ - DEBUG - val Loss: 0.5543\n2021-07-21 20:28:06,653 - __main__ - DEBUG - Epoch 217/499\n2021-07-21 20:28:15,216 - __main__ - DEBUG - train Loss: 0.2992\n2021-07-21 20:28:18,023 - __main__ - DEBUG - val Loss: 0.5817\n2021-07-21 20:28:18,025 - __main__ - DEBUG - Epoch 218/499\n2021-07-21 20:28:26,618 - __main__ - DEBUG - train Loss: 0.3089\n2021-07-21 20:28:29,355 - __main__ - DEBUG - val Loss: 0.6606\n2021-07-21 20:28:29,356 - __main__ - DEBUG - Epoch 219/499\n2021-07-21 20:28:37,611 - __main__ - DEBUG - train Loss: 0.3004\n2021-07-21 20:28:40,294 - __main__ - DEBUG - val Loss: 0.6560\n2021-07-21 20:28:40,296 - __main__ - DEBUG - Epoch 220/499\n2021-07-21 20:28:48,644 - __main__ - DEBUG - train Loss: 0.3037\n2021-07-21 20:28:51,355 - __main__ - DEBUG - val Loss: 0.6510\n2021-07-21 20:28:51,357 - __main__ - DEBUG - Epoch 221/499\n2021-07-21 20:28:59,706 - __main__ - DEBUG - train Loss: 0.3079\n2021-07-21 20:29:02,411 - __main__ - DEBUG - val Loss: 0.6042\n2021-07-21 20:29:02,413 - __main__ - DEBUG - Epoch 222/499\n2021-07-21 20:29:11,016 - __main__ - DEBUG - train Loss: 0.3104\n2021-07-21 20:29:13,859 - __main__ - DEBUG - val Loss: 0.5764\n2021-07-21 20:29:13,862 - __main__ - DEBUG - Epoch 223/499\n2021-07-21 20:29:22,416 - __main__ - DEBUG - train Loss: 0.2997\n2021-07-21 20:29:25,275 - __main__ - DEBUG - val Loss: 0.6142\n2021-07-21 20:29:25,277 - 
__main__ - DEBUG - Epoch 224/499\n2021-07-21 20:29:33,763 - __main__ - DEBUG - train Loss: 0.2985\n2021-07-21 20:29:36,418 - __main__ - DEBUG - val Loss: 0.5204\n2021-07-21 20:29:36,420 - __main__ - DEBUG - Epoch 225/499\n2021-07-21 20:29:44,947 - __main__ - DEBUG - train Loss: 0.2929\n2021-07-21 20:29:47,811 - __main__ - DEBUG - val Loss: 0.5537\n2021-07-21 20:29:47,813 - __main__ - DEBUG - Epoch 226/499\n2021-07-21 20:29:56,369 - __main__ - DEBUG - train Loss: 0.3054\n2021-07-21 20:29:59,183 - __main__ - DEBUG - val Loss: 0.6361\n2021-07-21 20:29:59,185 - __main__ - DEBUG - Epoch 227/499\n2021-07-21 20:30:07,618 - __main__ - DEBUG - train Loss: 0.2967\n2021-07-21 20:30:10,368 - __main__ - DEBUG - val Loss: 0.7319\n2021-07-21 20:30:10,370 - __main__ - DEBUG - Epoch 228/499\n2021-07-21 20:30:18,696 - __main__ - DEBUG - train Loss: 0.2915\n2021-07-21 20:30:21,298 - __main__ - DEBUG - val Loss: 0.5815\n2021-07-21 20:30:21,300 - __main__ - DEBUG - Epoch 229/499\n2021-07-21 20:30:29,560 - __main__ - DEBUG - train Loss: 0.3089\n2021-07-21 20:30:32,233 - __main__ - DEBUG - val Loss: 0.5950\n2021-07-21 20:30:32,235 - __main__ - DEBUG - Epoch 230/499\n2021-07-21 20:30:40,530 - __main__ - DEBUG - train Loss: 0.2908\n2021-07-21 20:30:43,335 - __main__ - DEBUG - val Loss: 0.6695\n2021-07-21 20:30:43,337 - __main__ - DEBUG - Epoch 231/499\n2021-07-21 20:30:51,985 - __main__ - DEBUG - train Loss: 0.2913\n2021-07-21 20:30:54,808 - __main__ - DEBUG - val Loss: 0.5716\n2021-07-21 20:30:54,810 - __main__ - DEBUG - Epoch 232/499\n2021-07-21 20:31:03,412 - __main__ - DEBUG - train Loss: 0.2942\n2021-07-21 20:31:06,252 - __main__ - DEBUG - val Loss: 0.5891\n2021-07-21 20:31:06,254 - __main__ - DEBUG - Epoch 233/499\n2021-07-21 20:31:14,637 - __main__ - DEBUG - train Loss: 0.2968\n2021-07-21 20:31:17,370 - __main__ - DEBUG - val Loss: 0.5838\n2021-07-21 20:31:17,372 - __main__ - DEBUG - Epoch 234/499\n2021-07-21 20:31:25,718 - __main__ - DEBUG - train Loss: 0.2824\n2021-07-21 20:31:28,341 - __main__ - DEBUG - val Loss: 0.5770\n2021-07-21 20:31:28,343 - __main__ - DEBUG - Epoch 235/499\n2021-07-21 20:31:36,619 - __main__ - DEBUG - train Loss: 0.2887\n2021-07-21 20:31:39,221 - __main__ - DEBUG - val Loss: 0.6019\n2021-07-21 20:31:39,223 - __main__ - DEBUG - Epoch 236/499\n2021-07-21 20:31:47,534 - __main__ - DEBUG - train Loss: 0.2870\n2021-07-21 20:31:50,346 - __main__ - DEBUG - val Loss: 0.5778\n2021-07-21 20:31:50,348 - __main__ - DEBUG - Epoch 237/499\n2021-07-21 20:31:58,862 - __main__ - DEBUG - train Loss: 0.2935\n2021-07-21 20:32:01,731 - __main__ - DEBUG - val Loss: 0.5843\n2021-07-21 20:32:01,733 - __main__ - DEBUG - Epoch 238/499\n2021-07-21 20:32:10,261 - __main__ - DEBUG - train Loss: 0.3023\n2021-07-21 20:32:13,115 - __main__ - DEBUG - val Loss: 0.5628\n2021-07-21 20:32:13,117 - __main__ - DEBUG - Epoch 239/499\n2021-07-21 20:32:21,389 - __main__ - DEBUG - train Loss: 0.3038\n2021-07-21 20:32:24,036 - __main__ - DEBUG - val Loss: 0.6136\n2021-07-21 20:32:24,038 - __main__ - DEBUG - Epoch 240/499\n2021-07-21 20:32:32,364 - __main__ - DEBUG - train Loss: 0.2996\n2021-07-21 20:32:35,063 - __main__ - DEBUG - val Loss: 0.6337\n2021-07-21 20:32:35,065 - __main__ - DEBUG - Epoch 241/499\n2021-07-21 20:32:43,330 - __main__ - DEBUG - train Loss: 0.2952\n2021-07-21 20:32:46,016 - __main__ - DEBUG - val Loss: 0.6673\n2021-07-21 20:32:46,017 - __main__ - DEBUG - Epoch 242/499\n2021-07-21 20:32:54,395 - __main__ - DEBUG - train Loss: 0.2930\n2021-07-21 20:32:57,220 - __main__ - DEBUG - val Loss: 
0.5919\n2021-07-21 20:32:57,221 - __main__ - DEBUG - Epoch 243/499\n2021-07-21 20:33:05,821 - __main__ - DEBUG - train Loss: 0.2876\n2021-07-21 20:33:08,633 - __main__ - DEBUG - val Loss: 0.6276\n2021-07-21 20:33:08,635 - __main__ - DEBUG - Epoch 244/499\n2021-07-21 20:33:17,220 - __main__ - DEBUG - train Loss: 0.2865\n2021-07-21 20:33:19,989 - __main__ - DEBUG - val Loss: 0.5775\n2021-07-21 20:33:19,991 - __main__ - DEBUG - Epoch 245/499\n2021-07-21 20:33:28,263 - __main__ - DEBUG - train Loss: 0.2851\n2021-07-21 20:33:30,979 - __main__ - DEBUG - val Loss: 0.5618\n2021-07-21 20:33:30,981 - __main__ - DEBUG - Epoch 246/499\n2021-07-21 20:33:39,358 - __main__ - DEBUG - train Loss: 0.2806\n2021-07-21 20:33:42,094 - __main__ - DEBUG - val Loss: 0.6023\n2021-07-21 20:33:42,096 - __main__ - DEBUG - Epoch 247/499\n2021-07-21 20:33:50,413 - __main__ - DEBUG - train Loss: 0.2788\n2021-07-21 20:33:53,092 - __main__ - DEBUG - val Loss: 0.5500\n2021-07-21 20:33:53,095 - __main__ - DEBUG - Epoch 248/499\n2021-07-21 20:34:01,406 - __main__ - DEBUG - train Loss: 0.2603\n2021-07-21 20:34:04,208 - __main__ - DEBUG - val Loss: 0.6558\n2021-07-21 20:34:04,210 - __main__ - DEBUG - Epoch 249/499\n2021-07-21 20:34:12,722 - __main__ - DEBUG - train Loss: 0.2673\n2021-07-21 20:34:15,520 - __main__ - DEBUG - val Loss: 0.6333\n2021-07-21 20:34:15,521 - __main__ - DEBUG - Epoch 250/499\n2021-07-21 20:34:24,048 - __main__ - DEBUG - train Loss: 0.2661\n2021-07-21 20:34:26,808 - __main__ - DEBUG - val Loss: 1.2307\n2021-07-21 20:34:26,810 - __main__ - DEBUG - Epoch 251/499\n2021-07-21 20:34:35,018 - __main__ - DEBUG - train Loss: 0.2697\n2021-07-21 20:34:37,696 - __main__ - DEBUG - val Loss: 0.6750\n2021-07-21 20:34:37,698 - __main__ - DEBUG - Epoch 252/499\n2021-07-21 20:34:46,031 - __main__ - DEBUG - train Loss: 0.2725\n2021-07-21 20:34:48,732 - __main__ - DEBUG - val Loss: 0.5750\n2021-07-21 20:34:48,734 - __main__ - DEBUG - Epoch 253/499\n2021-07-21 20:34:57,013 - __main__ - DEBUG - train Loss: 0.2591\n2021-07-21 20:34:59,711 - __main__ - DEBUG - val Loss: 0.5694\n2021-07-21 20:34:59,713 - __main__ - DEBUG - Epoch 254/499\n2021-07-21 20:35:08,032 - __main__ - DEBUG - train Loss: 0.2580\n2021-07-21 20:35:10,858 - __main__ - DEBUG - val Loss: 0.6123\n2021-07-21 20:35:10,859 - __main__ - DEBUG - Epoch 255/499\n2021-07-21 20:35:19,441 - __main__ - DEBUG - train Loss: 0.2530\n2021-07-21 20:35:22,284 - __main__ - DEBUG - val Loss: 0.6661\n2021-07-21 20:35:22,286 - __main__ - DEBUG - Epoch 256/499\n2021-07-21 20:35:30,824 - __main__ - DEBUG - train Loss: 0.2690\n2021-07-21 20:35:33,569 - __main__ - DEBUG - val Loss: 0.6892\n2021-07-21 20:35:33,571 - __main__ - DEBUG - Epoch 257/499\n2021-07-21 20:35:41,905 - __main__ - DEBUG - train Loss: 0.2646\n2021-07-21 20:35:44,769 - __main__ - DEBUG - val Loss: 0.6833\n2021-07-21 20:35:44,771 - __main__ - DEBUG - Epoch 258/499\n2021-07-21 20:35:53,399 - __main__ - DEBUG - train Loss: 0.2530\n2021-07-21 20:35:56,311 - __main__ - DEBUG - val Loss: 0.5608\n2021-07-21 20:35:56,313 - __main__ - DEBUG - Epoch 259/499\n2021-07-21 20:36:04,893 - __main__ - DEBUG - train Loss: 0.2487\n2021-07-21 20:36:07,707 - __main__ - DEBUG - val Loss: 0.5700\n2021-07-21 20:36:07,708 - __main__ - DEBUG - Epoch 260/499\n2021-07-21 20:36:16,026 - __main__ - DEBUG - train Loss: 0.2499\n2021-07-21 20:36:18,686 - __main__ - DEBUG - val Loss: 0.5872\n2021-07-21 20:36:18,687 - __main__ - DEBUG - Epoch 261/499\n2021-07-21 20:36:27,060 - __main__ - DEBUG - train Loss: 0.2544\n2021-07-21 20:36:29,705 - __main__ - 
DEBUG - val Loss: 0.6045\n2021-07-21 20:36:29,706 - __main__ - DEBUG - Epoch 262/499\n2021-07-21 20:36:38,048 - __main__ - DEBUG - train Loss: 0.2665\n2021-07-21 20:36:40,681 - __main__ - DEBUG - val Loss: 0.7628\n2021-07-21 20:36:40,683 - __main__ - DEBUG - Epoch 263/499\n2021-07-21 20:36:49,015 - __main__ - DEBUG - train Loss: 0.2717\n2021-07-21 20:36:51,827 - __main__ - DEBUG - val Loss: 0.6828\n2021-07-21 20:36:51,829 - __main__ - DEBUG - Epoch 264/499\n2021-07-21 20:37:00,278 - __main__ - DEBUG - train Loss: 0.2664\n2021-07-21 20:37:03,177 - __main__ - DEBUG - val Loss: 0.5842\n2021-07-21 20:37:03,179 - __main__ - DEBUG - Epoch 265/499\n2021-07-21 20:37:11,694 - __main__ - DEBUG - train Loss: 0.2798\n2021-07-21 20:37:14,295 - __main__ - DEBUG - val Loss: 0.6068\n2021-07-21 20:37:14,297 - __main__ - DEBUG - Epoch 266/499\n2021-07-21 20:37:22,638 - __main__ - DEBUG - train Loss: 0.2598\n2021-07-21 20:37:25,280 - __main__ - DEBUG - val Loss: 0.5900\n2021-07-21 20:37:25,282 - __main__ - DEBUG - Epoch 267/499\n2021-07-21 20:37:33,520 - __main__ - DEBUG - train Loss: 0.2545\n2021-07-21 20:37:36,145 - __main__ - DEBUG - val Loss: 0.5676\n2021-07-21 20:37:36,147 - __main__ - DEBUG - Epoch 268/499\n2021-07-21 20:37:44,511 - __main__ - DEBUG - train Loss: 0.2560\n2021-07-21 20:37:47,190 - __main__ - DEBUG - val Loss: 0.6845\n2021-07-21 20:37:47,191 - __main__ - DEBUG - Epoch 269/499\n2021-07-21 20:37:55,588 - __main__ - DEBUG - train Loss: 0.2531\n2021-07-21 20:37:58,395 - __main__ - DEBUG - val Loss: 0.8293\n2021-07-21 20:37:58,397 - __main__ - DEBUG - Epoch 270/499\n2021-07-21 20:38:06,935 - __main__ - DEBUG - train Loss: 0.2695\n2021-07-21 20:38:09,753 - __main__ - DEBUG - val Loss: 0.6507\n2021-07-21 20:38:09,754 - __main__ - DEBUG - Epoch 271/499\n2021-07-21 20:38:18,341 - __main__ - DEBUG - train Loss: 0.2556\n2021-07-21 20:38:20,973 - __main__ - DEBUG - val Loss: 0.5408\n2021-07-21 20:38:20,975 - __main__ - DEBUG - Epoch 272/499\n2021-07-21 20:38:29,277 - __main__ - DEBUG - train Loss: 0.2608\n2021-07-21 20:38:31,960 - __main__ - DEBUG - val Loss: 0.6709\n2021-07-21 20:38:31,962 - __main__ - DEBUG - Epoch 273/499\n2021-07-21 20:38:40,293 - __main__ - DEBUG - train Loss: 0.2520\n2021-07-21 20:38:42,911 - __main__ - DEBUG - val Loss: 0.5867\n2021-07-21 20:38:42,913 - __main__ - DEBUG - Epoch 274/499\n2021-07-21 20:38:51,184 - __main__ - DEBUG - train Loss: 0.2509\n2021-07-21 20:38:53,844 - __main__ - DEBUG - val Loss: 0.6472\n2021-07-21 20:38:53,846 - __main__ - DEBUG - Epoch 275/499\n2021-07-21 20:39:02,197 - __main__ - DEBUG - train Loss: 0.2501\n2021-07-21 20:39:05,074 - __main__ - DEBUG - val Loss: 0.6576\n2021-07-21 20:39:05,076 - __main__ - DEBUG - Epoch 276/499\n2021-07-21 20:39:13,597 - __main__ - DEBUG - train Loss: 0.2500\n2021-07-21 20:39:16,433 - __main__ - DEBUG - val Loss: 0.7051\n2021-07-21 20:39:16,435 - __main__ - DEBUG - Epoch 277/499\n2021-07-21 20:39:24,940 - __main__ - DEBUG - train Loss: 0.2575\n2021-07-21 20:39:27,671 - __main__ - DEBUG - val Loss: 0.6352\n2021-07-21 20:39:27,673 - __main__ - DEBUG - Epoch 278/499\n2021-07-21 20:39:36,073 - __main__ - DEBUG - train Loss: 0.2540\n2021-07-21 20:39:38,764 - __main__ - DEBUG - val Loss: 0.5794\n2021-07-21 20:39:38,765 - __main__ - DEBUG - Epoch 279/499\n2021-07-21 20:39:47,143 - __main__ - DEBUG - train Loss: 0.2484\n2021-07-21 20:39:49,821 - __main__ - DEBUG - val Loss: 0.6683\n2021-07-21 20:39:49,822 - __main__ - DEBUG - Epoch 280/499\n2021-07-21 20:39:58,118 - __main__ - DEBUG - train Loss: 0.2323\n2021-07-21 
20:40:00,824 - __main__ - DEBUG - val Loss: 0.5921\n2021-07-21 20:40:00,826 - __main__ - DEBUG - Epoch 281/499\n2021-07-21 20:40:09,236 - __main__ - DEBUG - train Loss: 0.2649\n2021-07-21 20:40:12,019 - __main__ - DEBUG - val Loss: 0.7006\n2021-07-21 20:40:12,021 - __main__ - DEBUG - Epoch 282/499\n2021-07-21 20:40:20,536 - __main__ - DEBUG - train Loss: 0.2419\n2021-07-21 20:40:23,330 - __main__ - DEBUG - val Loss: 0.7426\n2021-07-21 20:40:23,332 - __main__ - DEBUG - Epoch 283/499\n2021-07-21 20:40:31,844 - __main__ - DEBUG - train Loss: 0.2650\n2021-07-21 20:40:34,537 - __main__ - DEBUG - val Loss: 0.6561\n2021-07-21 20:40:34,538 - __main__ - DEBUG - Epoch 284/499\n2021-07-21 20:40:42,983 - __main__ - DEBUG - train Loss: 0.2471\n2021-07-21 20:40:45,845 - __main__ - DEBUG - val Loss: 0.6553\n2021-07-21 20:40:45,847 - __main__ - DEBUG - Epoch 285/499\n2021-07-21 20:40:54,384 - __main__ - DEBUG - train Loss: 0.2376\n2021-07-21 20:40:57,182 - __main__ - DEBUG - val Loss: 0.7177\n2021-07-21 20:40:57,184 - __main__ - DEBUG - Epoch 286/499\n2021-07-21 20:41:05,770 - __main__ - DEBUG - train Loss: 0.2309\n2021-07-21 20:41:08,478 - __main__ - DEBUG - val Loss: 0.6248\n2021-07-21 20:41:08,479 - __main__ - DEBUG - Epoch 287/499\n2021-07-21 20:41:16,824 - __main__ - DEBUG - train Loss: 0.2313\n2021-07-21 20:41:19,530 - __main__ - DEBUG - val Loss: 0.6535\n2021-07-21 20:41:19,532 - __main__ - DEBUG - Epoch 288/499\n2021-07-21 20:41:27,881 - __main__ - DEBUG - train Loss: 0.2398\n2021-07-21 20:41:30,588 - __main__ - DEBUG - val Loss: 0.5976\n2021-07-21 20:41:30,590 - __main__ - DEBUG - Epoch 289/499\n2021-07-21 20:41:38,868 - __main__ - DEBUG - train Loss: 0.2345\n2021-07-21 20:41:41,502 - __main__ - DEBUG - val Loss: 0.6227\n2021-07-21 20:41:41,504 - __main__ - DEBUG - Epoch 290/499\n2021-07-21 20:41:50,004 - __main__ - DEBUG - train Loss: 0.2384\n2021-07-21 20:41:52,766 - __main__ - DEBUG - val Loss: 0.5890\n2021-07-21 20:41:52,768 - __main__ - DEBUG - Epoch 291/499\n2021-07-21 20:42:01,321 - __main__ - DEBUG - train Loss: 0.2389\n2021-07-21 20:42:04,171 - __main__ - DEBUG - val Loss: 0.6106\n2021-07-21 20:42:04,173 - __main__ - DEBUG - Epoch 292/499\n2021-07-21 20:42:12,645 - __main__ - DEBUG - train Loss: 0.2373\n2021-07-21 20:42:15,333 - __main__ - DEBUG - val Loss: 0.7296\n2021-07-21 20:42:15,335 - __main__ - DEBUG - Epoch 293/499\n2021-07-21 20:42:23,729 - __main__ - DEBUG - train Loss: 0.2525\n2021-07-21 20:42:26,386 - __main__ - DEBUG - val Loss: 0.6527\n2021-07-21 20:42:26,388 - __main__ - DEBUG - Epoch 294/499\n2021-07-21 20:42:34,681 - __main__ - DEBUG - train Loss: 0.2498\n2021-07-21 20:42:37,341 - __main__ - DEBUG - val Loss: 0.6828\n2021-07-21 20:42:37,343 - __main__ - DEBUG - Epoch 295/499\n2021-07-21 20:42:45,611 - __main__ - DEBUG - train Loss: 0.2476\n2021-07-21 20:42:48,266 - __main__ - DEBUG - val Loss: 0.7436\n2021-07-21 20:42:48,268 - __main__ - DEBUG - Epoch 296/499\n2021-07-21 20:42:56,840 - __main__ - DEBUG - train Loss: 0.2456\n2021-07-21 20:42:59,701 - __main__ - DEBUG - val Loss: 0.6463\n2021-07-21 20:42:59,703 - __main__ - DEBUG - Epoch 297/499\n2021-07-21 20:43:08,251 - __main__ - DEBUG - train Loss: 0.2352\n2021-07-21 20:43:11,143 - __main__ - DEBUG - val Loss: 0.7020\n2021-07-21 20:43:11,145 - __main__ - DEBUG - Epoch 298/499\n2021-07-21 20:43:19,538 - __main__ - DEBUG - train Loss: 0.2268\n2021-07-21 20:43:22,256 - __main__ - DEBUG - val Loss: 0.7418\n2021-07-21 20:43:22,258 - __main__ - DEBUG - Epoch 299/499\n2021-07-21 20:43:30,554 - __main__ - DEBUG - train Loss: 
0.2352\n2021-07-21 20:43:33,219 - __main__ - DEBUG - val Loss: 0.6732\n2021-07-21 20:43:33,221 - __main__ - DEBUG - Epoch 300/499\n2021-07-21 20:43:41,511 - __main__ - DEBUG - train Loss: 0.2481\n2021-07-21 20:43:44,160 - __main__ - DEBUG - val Loss: 0.6101\n2021-07-21 20:43:44,162 - __main__ - DEBUG - Epoch 301/499\n2021-07-21 20:43:52,465 - __main__ - DEBUG - train Loss: 0.2323\n2021-07-21 20:43:55,095 - __main__ - DEBUG - val Loss: 0.5886\n2021-07-21 20:43:55,097 - __main__ - DEBUG - Epoch 302/499\n2021-07-21 20:44:03,714 - __main__ - DEBUG - train Loss: 0.2297\n2021-07-21 20:44:06,512 - __main__ - DEBUG - val Loss: 0.6009\n2021-07-21 20:44:06,514 - __main__ - DEBUG - Epoch 303/499\n2021-07-21 20:44:15,013 - __main__ - DEBUG - train Loss: 0.2156\n2021-07-21 20:44:17,827 - __main__ - DEBUG - val Loss: 0.6664\n2021-07-21 20:44:17,828 - __main__ - DEBUG - Epoch 304/499\n2021-07-21 20:44:26,194 - __main__ - DEBUG - train Loss: 0.2144\n2021-07-21 20:44:28,883 - __main__ - DEBUG - val Loss: 0.6875\n2021-07-21 20:44:28,884 - __main__ - DEBUG - Epoch 305/499\n2021-07-21 20:44:37,188 - __main__ - DEBUG - train Loss: 0.2155\n2021-07-21 20:44:39,845 - __main__ - DEBUG - val Loss: 0.6150\n2021-07-21 20:44:39,847 - __main__ - DEBUG - Epoch 306/499\n2021-07-21 20:44:48,131 - __main__ - DEBUG - train Loss: 0.2079\n2021-07-21 20:44:50,826 - __main__ - DEBUG - val Loss: 0.6467\n2021-07-21 20:44:50,828 - __main__ - DEBUG - Epoch 307/499\n2021-07-21 20:44:59,134 - __main__ - DEBUG - train Loss: 0.2322\n2021-07-21 20:45:01,846 - __main__ - DEBUG - val Loss: 0.6044\n2021-07-21 20:45:01,848 - __main__ - DEBUG - Epoch 308/499\n2021-07-21 20:45:10,355 - __main__ - DEBUG - train Loss: 0.2294\n2021-07-21 20:45:13,165 - __main__ - DEBUG - val Loss: 0.7746\n2021-07-21 20:45:13,168 - __main__ - DEBUG - Epoch 309/499\n2021-07-21 20:45:21,721 - __main__ - DEBUG - train Loss: 0.2246\n2021-07-21 20:45:24,556 - __main__ - DEBUG - val Loss: 0.7321\n2021-07-21 20:45:24,558 - __main__ - DEBUG - Epoch 310/499\n2021-07-21 20:45:32,851 - __main__ - DEBUG - train Loss: 0.2251\n2021-07-21 20:45:35,503 - __main__ - DEBUG - val Loss: 0.6353\n2021-07-21 20:45:35,505 - __main__ - DEBUG - Epoch 311/499\n2021-07-21 20:45:43,845 - __main__ - DEBUG - train Loss: 0.2299\n2021-07-21 20:45:46,522 - __main__ - DEBUG - val Loss: 0.7244\n2021-07-21 20:45:46,524 - __main__ - DEBUG - Epoch 312/499\n2021-07-21 20:45:54,909 - __main__ - DEBUG - train Loss: 0.2161\n2021-07-21 20:45:57,545 - __main__ - DEBUG - val Loss: 0.7419\n2021-07-21 20:45:57,547 - __main__ - DEBUG - Epoch 313/499\n2021-07-21 20:46:05,806 - __main__ - DEBUG - train Loss: 0.2075\n2021-07-21 20:46:08,623 - __main__ - DEBUG - val Loss: 0.6372\n2021-07-21 20:46:08,625 - __main__ - DEBUG - Epoch 314/499\n2021-07-21 20:46:17,259 - __main__ - DEBUG - train Loss: 0.2078\n2021-07-21 20:46:20,073 - __main__ - DEBUG - val Loss: 0.7573\n2021-07-21 20:46:20,075 - __main__ - DEBUG - Epoch 315/499\n2021-07-21 20:46:28,617 - __main__ - DEBUG - train Loss: 0.2102\n2021-07-21 20:46:31,501 - __main__ - DEBUG - val Loss: 0.6528\n2021-07-21 20:46:31,503 - __main__ - DEBUG - Epoch 316/499\n2021-07-21 20:46:40,115 - __main__ - DEBUG - train Loss: 0.2177\n2021-07-21 20:46:42,905 - __main__ - DEBUG - val Loss: 0.6523\n2021-07-21 20:46:42,907 - __main__ - DEBUG - Epoch 317/499\n2021-07-21 20:46:51,444 - __main__ - DEBUG - train Loss: 0.2115\n2021-07-21 20:46:54,262 - __main__ - DEBUG - val Loss: 0.8199\n2021-07-21 20:46:54,264 - __main__ - DEBUG - Epoch 318/499\n2021-07-21 20:47:02,778 - __main__ - 
DEBUG - train Loss: 0.2143\n2021-07-21 20:47:05,550 - __main__ - DEBUG - val Loss: 0.8887\n2021-07-21 20:47:05,552 - __main__ - DEBUG - Epoch 319/499\n2021-07-21 20:47:13,859 - __main__ - DEBUG - train Loss: 0.2318\n2021-07-21 20:47:16,487 - __main__ - DEBUG - val Loss: 0.6540\n2021-07-21 20:47:16,489 - __main__ - DEBUG - Epoch 320/499\n2021-07-21 20:47:24,788 - __main__ - DEBUG - train Loss: 0.2357\n2021-07-21 20:47:27,436 - __main__ - DEBUG - val Loss: 0.6616\n2021-07-21 20:47:27,438 - __main__ - DEBUG - Epoch 321/499\n2021-07-21 20:47:35,742 - __main__ - DEBUG - train Loss: 0.2099\n2021-07-21 20:47:38,360 - __main__ - DEBUG - val Loss: 0.6025\n2021-07-21 20:47:38,362 - __main__ - DEBUG - Epoch 322/499\n2021-07-21 20:47:46,639 - __main__ - DEBUG - train Loss: 0.2046\n2021-07-21 20:47:49,433 - __main__ - DEBUG - val Loss: 0.6163\n2021-07-21 20:47:49,435 - __main__ - DEBUG - Epoch 323/499\n2021-07-21 20:47:57,985 - __main__ - DEBUG - train Loss: 0.2039\n2021-07-21 20:48:00,773 - __main__ - DEBUG - val Loss: 0.6068\n2021-07-21 20:48:00,775 - __main__ - DEBUG - Epoch 324/499\n2021-07-21 20:48:09,307 - __main__ - DEBUG - train Loss: 0.2084\n2021-07-21 20:48:12,132 - __main__ - DEBUG - val Loss: 0.6466\n2021-07-21 20:48:12,134 - __main__ - DEBUG - Epoch 325/499\n2021-07-21 20:48:20,493 - __main__ - DEBUG - train Loss: 0.2141\n2021-07-21 20:48:23,212 - __main__ - DEBUG - val Loss: 0.6229\n2021-07-21 20:48:23,214 - __main__ - DEBUG - Epoch 326/499\n2021-07-21 20:48:31,538 - __main__ - DEBUG - train Loss: 0.2061\n2021-07-21 20:48:34,176 - __main__ - DEBUG - val Loss: 0.6555\n2021-07-21 20:48:34,178 - __main__ - DEBUG - Epoch 327/499\n2021-07-21 20:48:42,519 - __main__ - DEBUG - train Loss: 0.1976\n2021-07-21 20:48:45,173 - __main__ - DEBUG - val Loss: 0.6620\n2021-07-21 20:48:45,175 - __main__ - DEBUG - Epoch 328/499\n2021-07-21 20:48:53,503 - __main__ - DEBUG - train Loss: 0.2051\n2021-07-21 20:48:56,346 - __main__ - DEBUG - val Loss: 0.6814\n2021-07-21 20:48:56,348 - __main__ - DEBUG - Epoch 329/499\n2021-07-21 20:49:04,893 - __main__ - DEBUG - train Loss: 0.2064\n2021-07-21 20:49:07,710 - __main__ - DEBUG - val Loss: 0.6405\n2021-07-21 20:49:07,712 - __main__ - DEBUG - Epoch 330/499\n2021-07-21 20:49:16,282 - __main__ - DEBUG - train Loss: 0.2191\n2021-07-21 20:49:18,983 - __main__ - DEBUG - val Loss: 0.7247\n2021-07-21 20:49:18,986 - __main__ - DEBUG - Epoch 331/499\n2021-07-21 20:49:27,316 - __main__ - DEBUG - train Loss: 0.2160\n2021-07-21 20:49:29,964 - __main__ - DEBUG - val Loss: 0.7504\n2021-07-21 20:49:29,966 - __main__ - DEBUG - Epoch 332/499\n2021-07-21 20:49:38,319 - __main__ - DEBUG - train Loss: 0.2109\n2021-07-21 20:49:40,988 - __main__ - DEBUG - val Loss: 0.7068\n2021-07-21 20:49:40,989 - __main__ - DEBUG - Epoch 333/499\n2021-07-21 20:49:49,297 - __main__ - DEBUG - train Loss: 0.2013\n2021-07-21 20:49:51,988 - __main__ - DEBUG - val Loss: 0.8111\n2021-07-21 20:49:51,990 - __main__ - DEBUG - Epoch 334/499\n2021-07-21 20:50:00,351 - __main__ - DEBUG - train Loss: 0.2125\n2021-07-21 20:50:03,215 - __main__ - DEBUG - val Loss: 0.7056\n2021-07-21 20:50:03,217 - __main__ - DEBUG - Epoch 335/499\n2021-07-21 20:50:11,785 - __main__ - DEBUG - train Loss: 0.2115\n2021-07-21 20:50:14,607 - __main__ - DEBUG - val Loss: 0.7033\n2021-07-21 20:50:14,609 - __main__ - DEBUG - Epoch 336/499\n2021-07-21 20:50:23,153 - __main__ - DEBUG - train Loss: 0.1942\n2021-07-21 20:50:25,799 - __main__ - DEBUG - val Loss: 0.8294\n2021-07-21 20:50:25,801 - __main__ - DEBUG - Epoch 337/499\n2021-07-21 
20:50:34,059 - __main__ - DEBUG - train Loss: 0.1844\n2021-07-21 20:50:36,691 - __main__ - DEBUG - val Loss: 0.6374\n2021-07-21 20:50:36,693 - __main__ - DEBUG - Epoch 338/499\n2021-07-21 20:50:45,018 - __main__ - DEBUG - train Loss: 0.1857\n2021-07-21 20:50:47,683 - __main__ - DEBUG - val Loss: 0.7689\n2021-07-21 20:50:47,685 - __main__ - DEBUG - Epoch 339/499\n2021-07-21 20:50:55,989 - __main__ - DEBUG - train Loss: 0.1956\n2021-07-21 20:50:58,599 - __main__ - DEBUG - val Loss: 0.7678\n2021-07-21 20:50:58,601 - __main__ - DEBUG - Epoch 340/499\n2021-07-21 20:51:06,924 - __main__ - DEBUG - train Loss: 0.2012\n2021-07-21 20:51:09,735 - __main__ - DEBUG - val Loss: 0.8389\n2021-07-21 20:51:09,737 - __main__ - DEBUG - Epoch 341/499\n2021-07-21 20:51:18,344 - __main__ - DEBUG - train Loss: 0.2047\n2021-07-21 20:51:21,306 - __main__ - DEBUG - val Loss: 0.7374\n2021-07-21 20:51:21,308 - __main__ - DEBUG - Epoch 342/499\n2021-07-21 20:51:29,977 - __main__ - DEBUG - train Loss: 0.2110\n2021-07-21 20:51:32,844 - __main__ - DEBUG - val Loss: 0.8270\n2021-07-21 20:51:32,846 - __main__ - DEBUG - Epoch 343/499\n2021-07-21 20:51:41,166 - __main__ - DEBUG - train Loss: 0.2015\n2021-07-21 20:51:44,011 - __main__ - DEBUG - val Loss: 0.7179\n2021-07-21 20:51:44,013 - __main__ - DEBUG - Epoch 344/499\n2021-07-21 20:51:52,546 - __main__ - DEBUG - train Loss: 0.1960\n2021-07-21 20:51:55,350 - __main__ - DEBUG - val Loss: 0.8712\n2021-07-21 20:51:55,352 - __main__ - DEBUG - Epoch 345/499\n2021-07-21 20:52:03,923 - __main__ - DEBUG - train Loss: 0.2005\n2021-07-21 20:52:06,639 - __main__ - DEBUG - val Loss: 0.7855\n2021-07-21 20:52:06,641 - __main__ - DEBUG - Epoch 346/499\n2021-07-21 20:52:14,965 - __main__ - DEBUG - train Loss: 0.1978\n2021-07-21 20:52:17,691 - __main__ - DEBUG - val Loss: 0.8043\n2021-07-21 20:52:17,693 - __main__ - DEBUG - Epoch 347/499\n2021-07-21 20:52:25,968 - __main__ - DEBUG - train Loss: 0.1941\n2021-07-21 20:52:28,636 - __main__ - DEBUG - val Loss: 0.6839\n2021-07-21 20:52:28,638 - __main__ - DEBUG - Epoch 348/499\n2021-07-21 20:52:37,002 - __main__ - DEBUG - train Loss: 0.1914\n2021-07-21 20:52:39,694 - __main__ - DEBUG - val Loss: 0.6572\n2021-07-21 20:52:39,696 - __main__ - DEBUG - Epoch 349/499\n2021-07-21 20:52:48,056 - __main__ - DEBUG - train Loss: 0.1823\n2021-07-21 20:52:50,874 - __main__ - DEBUG - val Loss: 0.6472\n2021-07-21 20:52:50,876 - __main__ - DEBUG - Epoch 350/499\n2021-07-21 20:52:59,409 - __main__ - DEBUG - train Loss: 0.1857\n2021-07-21 20:53:02,217 - __main__ - DEBUG - val Loss: 0.6588\n2021-07-21 20:53:02,219 - __main__ - DEBUG - Epoch 351/499\n2021-07-21 20:53:10,780 - __main__ - DEBUG - train Loss: 0.1801\n2021-07-21 20:53:13,575 - __main__ - DEBUG - val Loss: 0.6908\n2021-07-21 20:53:13,576 - __main__ - DEBUG - Epoch 352/499\n2021-07-21 20:53:21,904 - __main__ - DEBUG - train Loss: 0.2082\n2021-07-21 20:53:24,593 - __main__ - DEBUG - val Loss: 0.6821\n2021-07-21 20:53:24,594 - __main__ - DEBUG - Epoch 353/499\n2021-07-21 20:53:32,911 - __main__ - DEBUG - train Loss: 0.1877\n2021-07-21 20:53:35,571 - __main__ - DEBUG - val Loss: 0.6791\n2021-07-21 20:53:35,572 - __main__ - DEBUG - Epoch 354/499\n2021-07-21 20:53:43,811 - __main__ - DEBUG - train Loss: 0.1814\n2021-07-21 20:53:46,486 - __main__ - DEBUG - val Loss: 0.8051\n2021-07-21 20:53:46,488 - __main__ - DEBUG - Epoch 355/499\n2021-07-21 20:53:54,983 - __main__ - DEBUG - train Loss: 0.1851\n2021-07-21 20:53:57,845 - __main__ - DEBUG - val Loss: 0.7503\n2021-07-21 20:53:57,847 - __main__ - DEBUG - Epoch 
356/499\n2021-07-21 20:54:06,384 - __main__ - DEBUG - train Loss: 0.1835\n2021-07-21 20:54:09,204 - __main__ - DEBUG - val Loss: 0.7995\n2021-07-21 20:54:09,207 - __main__ - DEBUG - Epoch 357/499\n2021-07-21 20:54:17,747 - __main__ - DEBUG - train Loss: 0.1895\n2021-07-21 20:54:20,389 - __main__ - DEBUG - val Loss: 0.8240\n2021-07-21 20:54:20,391 - __main__ - DEBUG - Epoch 358/499\n2021-07-21 20:54:28,756 - __main__ - DEBUG - train Loss: 0.1931\n2021-07-21 20:54:31,495 - __main__ - DEBUG - val Loss: 0.7655\n2021-07-21 20:54:31,496 - __main__ - DEBUG - Epoch 359/499\n2021-07-21 20:54:39,848 - __main__ - DEBUG - train Loss: 0.1900\n2021-07-21 20:54:42,456 - __main__ - DEBUG - val Loss: 0.6751\n2021-07-21 20:54:42,458 - __main__ - DEBUG - Epoch 360/499\n2021-07-21 20:54:50,765 - __main__ - DEBUG - train Loss: 0.1952\n2021-07-21 20:54:53,446 - __main__ - DEBUG - val Loss: 0.7949\n2021-07-21 20:54:53,449 - __main__ - DEBUG - Epoch 361/499\n2021-07-21 20:55:01,899 - __main__ - DEBUG - train Loss: 0.1992\n2021-07-21 20:55:04,740 - __main__ - DEBUG - val Loss: 0.7651\n2021-07-21 20:55:04,742 - __main__ - DEBUG - Epoch 362/499\n2021-07-21 20:55:13,319 - __main__ - DEBUG - train Loss: 0.2119\n2021-07-21 20:55:16,141 - __main__ - DEBUG - val Loss: 0.6916\n2021-07-21 20:55:16,143 - __main__ - DEBUG - Epoch 363/499\n2021-07-21 20:55:24,659 - __main__ - DEBUG - train Loss: 0.2030\n2021-07-21 20:55:27,312 - __main__ - DEBUG - val Loss: 0.7267\n2021-07-21 20:55:27,314 - __main__ - DEBUG - Epoch 364/499\n2021-07-21 20:55:35,621 - __main__ - DEBUG - train Loss: 0.1933\n2021-07-21 20:55:38,248 - __main__ - DEBUG - val Loss: 0.7149\n2021-07-21 20:55:38,250 - __main__ - DEBUG - Epoch 365/499\n2021-07-21 20:55:46,542 - __main__ - DEBUG - train Loss: 0.1913\n2021-07-21 20:55:49,191 - __main__ - DEBUG - val Loss: 0.7755\n2021-07-21 20:55:49,193 - __main__ - DEBUG - Epoch 366/499\n2021-07-21 20:55:57,589 - __main__ - DEBUG - train Loss: 0.1822\n2021-07-21 20:56:00,257 - __main__ - DEBUG - val Loss: 0.6533\n2021-07-21 20:56:00,259 - __main__ - DEBUG - Epoch 367/499\n2021-07-21 20:56:08,799 - __main__ - DEBUG - train Loss: 0.1809\n2021-07-21 20:56:11,625 - __main__ - DEBUG - val Loss: 0.7266\n2021-07-21 20:56:11,629 - __main__ - DEBUG - Epoch 368/499\n2021-07-21 20:56:20,229 - __main__ - DEBUG - train Loss: 0.1769\n2021-07-21 20:56:23,024 - __main__ - DEBUG - val Loss: 0.7307\n2021-07-21 20:56:23,026 - __main__ - DEBUG - Epoch 369/499\n2021-07-21 20:56:31,575 - __main__ - DEBUG - train Loss: 0.1614\n2021-07-21 20:56:34,314 - __main__ - DEBUG - val Loss: 0.6398\n2021-07-21 20:56:34,315 - __main__ - DEBUG - Epoch 370/499\n2021-07-21 20:56:43,030 - __main__ - DEBUG - train Loss: 0.1532\n2021-07-21 20:56:45,930 - __main__ - DEBUG - val Loss: 0.7411\n2021-07-21 20:56:45,932 - __main__ - DEBUG - Epoch 371/499\n2021-07-21 20:56:54,504 - __main__ - DEBUG - train Loss: 0.1529\n2021-07-21 20:56:57,305 - __main__ - DEBUG - val Loss: 0.7158\n2021-07-21 20:56:57,307 - __main__ - DEBUG - Epoch 372/499\n2021-07-21 20:57:05,885 - __main__ - DEBUG - train Loss: 0.1671\n2021-07-21 20:57:08,624 - __main__ - DEBUG - val Loss: 0.7599\n2021-07-21 20:57:08,626 - __main__ - DEBUG - Epoch 373/499\n2021-07-21 20:57:16,995 - __main__ - DEBUG - train Loss: 0.1810\n2021-07-21 20:57:19,667 - __main__ - DEBUG - val Loss: 0.8080\n2021-07-21 20:57:19,669 - __main__ - DEBUG - Epoch 374/499\n2021-07-21 20:57:27,929 - __main__ - DEBUG - train Loss: 0.1912\n2021-07-21 20:57:30,626 - __main__ - DEBUG - val Loss: 0.8501\n2021-07-21 20:57:30,628 - 
__main__ - DEBUG - Epoch 375/499\n2021-07-21 20:57:38,905 - __main__ - DEBUG - train Loss: 0.1791\n2021-07-21 20:57:41,624 - __main__ - DEBUG - val Loss: 0.8045\n2021-07-21 20:57:41,626 - __main__ - DEBUG - Epoch 376/499\n2021-07-21 20:57:50,229 - __main__ - DEBUG - train Loss: 0.1855\n2021-07-21 20:57:53,127 - __main__ - DEBUG - val Loss: 0.7174\n2021-07-21 20:57:53,129 - __main__ - DEBUG - Epoch 377/499\n2021-07-21 20:58:01,727 - __main__ - DEBUG - train Loss: 0.1824\n2021-07-21 20:58:04,534 - __main__ - DEBUG - val Loss: 0.7334\n2021-07-21 20:58:04,535 - __main__ - DEBUG - Epoch 378/499\n2021-07-21 20:58:12,777 - __main__ - DEBUG - train Loss: 0.1833\n2021-07-21 20:58:15,534 - __main__ - DEBUG - val Loss: 0.7145\n2021-07-21 20:58:15,536 - __main__ - DEBUG - Epoch 379/499\n2021-07-21 20:58:23,852 - __main__ - DEBUG - train Loss: 0.1793\n2021-07-21 20:58:26,573 - __main__ - DEBUG - val Loss: 0.6800\n2021-07-21 20:58:26,575 - __main__ - DEBUG - Epoch 380/499\n2021-07-21 20:58:34,899 - __main__ - DEBUG - train Loss: 0.1691\n2021-07-21 20:58:37,581 - __main__ - DEBUG - val Loss: 0.7067\n2021-07-21 20:58:37,583 - __main__ - DEBUG - Epoch 381/499\n2021-07-21 20:58:45,892 - __main__ - DEBUG - train Loss: 0.1596\n2021-07-21 20:58:48,785 - __main__ - DEBUG - val Loss: 0.7738\n2021-07-21 20:58:48,787 - __main__ - DEBUG - Epoch 382/499\n2021-07-21 20:58:57,376 - __main__ - DEBUG - train Loss: 0.1730\n2021-07-21 20:59:00,179 - __main__ - DEBUG - val Loss: 0.9183\n2021-07-21 20:59:00,182 - __main__ - DEBUG - Epoch 383/499\n2021-07-21 20:59:08,714 - __main__ - DEBUG - train Loss: 0.1831\n2021-07-21 20:59:11,597 - __main__ - DEBUG - val Loss: 0.9309\n2021-07-21 20:59:11,599 - __main__ - DEBUG - Epoch 384/499\n2021-07-21 20:59:19,934 - __main__ - DEBUG - train Loss: 0.1874\n2021-07-21 20:59:22,638 - __main__ - DEBUG - val Loss: 0.8295\n2021-07-21 20:59:22,640 - __main__ - DEBUG - Epoch 385/499\n2021-07-21 20:59:31,003 - __main__ - DEBUG - train Loss: 0.1820\n2021-07-21 20:59:33,736 - __main__ - DEBUG - val Loss: 0.8723\n2021-07-21 20:59:33,738 - __main__ - DEBUG - Epoch 386/499\n2021-07-21 20:59:42,032 - __main__ - DEBUG - train Loss: 0.1841\n2021-07-21 20:59:44,702 - __main__ - DEBUG - val Loss: 0.7032\n2021-07-21 20:59:44,704 - __main__ - DEBUG - Epoch 387/499\n2021-07-21 20:59:53,074 - __main__ - DEBUG - train Loss: 0.1707\n2021-07-21 20:59:55,917 - __main__ - DEBUG - val Loss: 0.7629\n2021-07-21 20:59:55,918 - __main__ - DEBUG - Epoch 388/499\n2021-07-21 21:00:04,492 - __main__ - DEBUG - train Loss: 0.1779\n2021-07-21 21:00:07,293 - __main__ - DEBUG - val Loss: 0.7083\n2021-07-21 21:00:07,295 - __main__ - DEBUG - Epoch 389/499\n2021-07-21 21:00:15,911 - __main__ - DEBUG - train Loss: 0.1955\n2021-07-21 21:00:18,697 - __main__ - DEBUG - val Loss: 0.7326\n2021-07-21 21:00:18,699 - __main__ - DEBUG - Epoch 390/499\n2021-07-21 21:00:26,946 - __main__ - DEBUG - train Loss: 0.1814\n2021-07-21 21:00:29,672 - __main__ - DEBUG - val Loss: 0.6677\n2021-07-21 21:00:29,674 - __main__ - DEBUG - Epoch 391/499\n2021-07-21 21:00:38,072 - __main__ - DEBUG - train Loss: 0.1895\n2021-07-21 21:00:40,736 - __main__ - DEBUG - val Loss: 0.7881\n2021-07-21 21:00:40,738 - __main__ - DEBUG - Epoch 392/499\n2021-07-21 21:00:49,045 - __main__ - DEBUG - train Loss: 0.1820\n2021-07-21 21:00:51,795 - __main__ - DEBUG - val Loss: 0.7696\n2021-07-21 21:00:51,797 - __main__ - DEBUG - Epoch 393/499\n2021-07-21 21:01:00,098 - __main__ - DEBUG - train Loss: 0.1858\n2021-07-21 21:01:02,947 - __main__ - DEBUG - val Loss: 
0.7760\n2021-07-21 21:01:02,949 - __main__ - DEBUG - Epoch 394/499\n2021-07-21 21:01:11,452 - __main__ - DEBUG - train Loss: 0.1658\n2021-07-21 21:01:14,243 - __main__ - DEBUG - val Loss: 0.7122\n2021-07-21 21:01:14,245 - __main__ - DEBUG - Epoch 395/499\n2021-07-21 21:01:22,782 - __main__ - DEBUG - train Loss: 0.1672\n2021-07-21 21:01:25,532 - __main__ - DEBUG - val Loss: 0.7156\n2021-07-21 21:01:25,534 - __main__ - DEBUG - Epoch 396/499\n2021-07-21 21:01:33,881 - __main__ - DEBUG - train Loss: 0.1619\n2021-07-21 21:01:36,542 - __main__ - DEBUG - val Loss: 0.6904\n2021-07-21 21:01:36,544 - __main__ - DEBUG - Epoch 397/499\n2021-07-21 21:01:44,891 - __main__ - DEBUG - train Loss: 0.1619\n2021-07-21 21:01:47,611 - __main__ - DEBUG - val Loss: 0.6996\n2021-07-21 21:01:47,613 - __main__ - DEBUG - Epoch 398/499\n2021-07-21 21:01:55,990 - __main__ - DEBUG - train Loss: 0.1833\n2021-07-21 21:01:58,715 - __main__ - DEBUG - val Loss: 0.7707\n2021-07-21 21:01:58,717 - __main__ - DEBUG - Epoch 399/499\n2021-07-21 21:02:07,143 - __main__ - DEBUG - train Loss: 0.1780\n2021-07-21 21:02:09,974 - __main__ - DEBUG - val Loss: 0.7628\n2021-07-21 21:02:09,976 - __main__ - DEBUG - Epoch 400/499\n2021-07-21 21:02:18,636 - __main__ - DEBUG - train Loss: 0.1705\n2021-07-21 21:02:21,461 - __main__ - DEBUG - val Loss: 0.7354\n2021-07-21 21:02:21,463 - __main__ - DEBUG - Epoch 401/499\n2021-07-21 21:02:30,008 - __main__ - DEBUG - train Loss: 0.1579\n2021-07-21 21:02:32,750 - __main__ - DEBUG - val Loss: 0.6263\n2021-07-21 21:02:32,752 - __main__ - DEBUG - Epoch 402/499\n2021-07-21 21:02:41,136 - __main__ - DEBUG - train Loss: 0.1495\n2021-07-21 21:02:44,043 - __main__ - DEBUG - val Loss: 0.6559\n2021-07-21 21:02:44,046 - __main__ - DEBUG - Epoch 403/499\n2021-07-21 21:02:52,654 - __main__ - DEBUG - train Loss: 0.1585\n2021-07-21 21:02:55,514 - __main__ - DEBUG - val Loss: 0.6773\n2021-07-21 21:02:55,516 - __main__ - DEBUG - Epoch 404/499\n2021-07-21 21:03:04,098 - __main__ - DEBUG - train Loss: 0.1774\n2021-07-21 21:03:06,795 - __main__ - DEBUG - val Loss: 0.7621\n2021-07-21 21:03:06,796 - __main__ - DEBUG - Epoch 405/499\n2021-07-21 21:03:15,133 - __main__ - DEBUG - train Loss: 0.1703\n2021-07-21 21:03:17,841 - __main__ - DEBUG - val Loss: 0.7763\n2021-07-21 21:03:17,844 - __main__ - DEBUG - Epoch 406/499\n2021-07-21 21:03:26,130 - __main__ - DEBUG - train Loss: 0.1598\n2021-07-21 21:03:28,843 - __main__ - DEBUG - val Loss: 0.7846\n2021-07-21 21:03:28,851 - __main__ - DEBUG - Complete training (4935.647 seconds passed)\n"
]
],
[
[
"# Evaluation",
"_____no_output_____"
]
],
[
[
"rmse = partial(mean_squared_error, squared=False)",
"_____no_output_____"
],
[
"# qwk = partial(cohen_kappa_score, labels=np.sort(train['target'].unique()), weights='quadratic')",
"_____no_output_____"
],
[
"@np.vectorize\ndef predict(proba_0: float, proba_1: float, proba_2: float, proba_3: float) -> int:\n return np.argmax((proba_0, proba_1, proba_2, proba_3))",
"_____no_output_____"
],
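[
"# Added sanity check (illustrative; not part of the original run): the\n# vectorized argmax picks the index of the largest class probability. With\n# plain scalars, np.vectorize returns a 0-d array, here array(2).\npredict(0.1, 0.2, 0.6, 0.1)",
"_____no_output_____"
],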
[
"metrics = defaultdict(list)",
"_____no_output_____"
]
],
[
[
"## Training set",
"_____no_output_____"
]
],
[
[
"pred_train_dfs = []\nfor i in range(N_SPLITS):\n num_fold = i + 1\n logger.debug('Evaluate cv result (training set) Fold {}'.format(num_fold))\n # Read cv result\n filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv')\n pred_train_df = pd.read_csv(filepath_fold_train)\n pred_train_df['actual'] = train.loc[pred_train_df['object_id'], TARGET].values\n if REGRESSION:\n if TARGET == 'target':\n pred_train_df['pred'].clip(lower=0, upper=3, inplace=True)\n else:\n pred_train_df['pred'] = np.vectorize(soring_date2target)(pred_train_df['pred'])\n pred_train_df['actual'] = np.vectorize(soring_date2target)(pred_train_df['actual'])\n else:\n pred_train_df['pred'] = predict(pred_train_df['0'], pred_train_df['1'],\n pred_train_df['2'], pred_train_df['3'])\n if not (REGRESSION and TARGET == 'target'):\n print(confusion_matrix(pred_train_df['actual'], pred_train_df['pred'],\n labels=np.sort(train['target'].unique())))\n loss = rmse(pred_train_df['actual'], pred_train_df['pred'])\n# score = qwk(pred_train_df['actual'], pred_train_df['pred'])\n logger.debug('Loss: {}'.format(loss))\n# logger.debug('Score: {}'.format(score))\n metrics['train_losses'].append(loss)\n# metrics['train_scores'].append(score)\n pred_train_dfs.append(pred_train_df)\n\nmetrics['train_losses_avg'] = np.mean(metrics['train_losses'])\nmetrics['train_losses_std'] = np.std(metrics['train_losses'])\n# metrics['train_scores_avg'] = np.mean(metrics['train_scores'])\n# metrics['train_scores_std'] = np.std(metrics['train_scores'])",
"2021-07-22 02:18:11,765 - __main__ - DEBUG - Evaluate cv result (training set) Fold 1\n2021-07-22 02:18:11,805 - __main__ - DEBUG - Loss: 1.002067925536256\n2021-07-22 02:18:11,806 - __main__ - DEBUG - Evaluate cv result (training set) Fold 2\n2021-07-22 02:18:11,838 - __main__ - DEBUG - Loss: 0.7936396960201898\n2021-07-22 02:18:11,839 - __main__ - DEBUG - Evaluate cv result (training set) Fold 3\n2021-07-22 02:18:11,868 - __main__ - DEBUG - Loss: 0.8190223348612593\n2021-07-22 02:18:11,869 - __main__ - DEBUG - Evaluate cv result (training set) Fold 4\n2021-07-22 02:18:11,900 - __main__ - DEBUG - Loss: 0.8886266709963703\n2021-07-22 02:18:11,901 - __main__ - DEBUG - Evaluate cv result (training set) Fold 5\n2021-07-22 02:18:11,937 - __main__ - DEBUG - Loss: 0.8696857134347314\n"
],
[
"pred_train = pd.concat(pred_train_dfs).groupby('object_id').sum()\npred_train = pred_train / N_SPLITS\nif not REGRESSION:\n pred_train['pred'] = predict(pred_train['0'], pred_train['1'], pred_train['2'], pred_train['3'])\npred_train['actual'] = train.loc[pred_train.index, TARGET].values\nif REGRESSION and TARGET == 'sorting_date':\n pred_train['actual'] = np.vectorize(soring_date2target)(pred_train['actual'])\n# for c in ('pred', 'actual'):\n# pred_train[c] = pred_train[c].astype('int')\npred_train",
"_____no_output_____"
],
[
"if not (REGRESSION and TARGET == 'target'):\n print(confusion_matrix(pred_train['actual'], pred_train['pred'], labels=np.sort(train['target'].unique())))\nloss = rmse(pred_train['actual'], pred_train['pred'])\n# score = qwk(pred_train['actual'], pred_train['pred'])\nmetrics['train_loss'] = loss\n# metrics['train_score'] = score\nlogger.info('Training loss: {}'.format(loss))\n# logger.info('Training score: {}'.format(score))",
"2021-07-22 02:18:12,035 - __main__ - INFO - Training loss: 0.7919136659096802\n"
],
[
"pred_train.to_csv(os.path.join(output_dir, 'prediction_train.csv'))\nlogger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_train.csv')))",
"2021-07-22 02:18:12,072 - __main__ - DEBUG - Write cv result to ../scripts/../experiments/exp027/prediction_train.csv\n"
]
],
[
[
"## Validation set",
"_____no_output_____"
]
],
[
[
"pred_valid_dfs = []\nfor i in range(N_SPLITS):\n num_fold = i + 1\n logger.debug('Evaluate cv result (validation set) Fold {}'.format(num_fold))\n # Read cv result\n filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv')\n pred_valid_df = pd.read_csv(filepath_fold_valid)\n pred_valid_df['actual'] = train.loc[pred_valid_df['object_id'], TARGET].values\n if REGRESSION:\n if TARGET == 'target':\n pred_valid_df['pred'].clip(lower=0, upper=3, inplace=True)\n else:\n pred_valid_df['pred'] = np.vectorize(soring_date2target)(pred_valid_df['pred'])\n pred_valid_df['actual'] = np.vectorize(soring_date2target)(pred_valid_df['actual'])\n else:\n pred_valid_df['pred'] = predict(pred_valid_df['0'], pred_valid_df['1'],\n pred_valid_df['2'], pred_valid_df['3']) \n \n if not (REGRESSION and TARGET == 'target'):\n print(confusion_matrix(pred_valid_df['actual'], pred_valid_df['pred'],\n labels=np.sort(train['target'].unique())))\n loss = rmse(pred_valid_df['actual'], pred_valid_df['pred'])\n# score = qwk(pred_valid_df['actual'], pred_valid_df['pred'])\n logger.debug('Loss: {}'.format(loss))\n# logger.debug('Score: {}'.format(score))\n metrics['valid_losses'].append(loss)\n# metrics['valid_scores'].append(score)\n pred_valid_dfs.append(pred_valid_df)\n \nmetrics['valid_losses_avg'] = np.mean(metrics['valid_losses'])\nmetrics['valid_losses_std'] = np.std(metrics['valid_losses'])\n# metrics['valid_scores_avg'] = np.mean(metrics['valid_scores'])\n# metrics['valid_scores_std'] = np.std(metrics['valid_scores'])",
"2021-07-22 02:18:12,098 - __main__ - DEBUG - Evaluate cv result (validation set) Fold 1\n2021-07-22 02:18:12,113 - __main__ - DEBUG - Loss: 1.0198777204240295\n2021-07-22 02:18:12,115 - __main__ - DEBUG - Evaluate cv result (validation set) Fold 2\n2021-07-22 02:18:12,132 - __main__ - DEBUG - Loss: 0.9824490753046746\n2021-07-22 02:18:12,134 - __main__ - DEBUG - Evaluate cv result (validation set) Fold 3\n2021-07-22 02:18:12,147 - __main__ - DEBUG - Loss: 0.9209241368444986\n2021-07-22 02:18:12,148 - __main__ - DEBUG - Evaluate cv result (validation set) Fold 4\n2021-07-22 02:18:12,160 - __main__ - DEBUG - Loss: 1.0063492701359626\n2021-07-22 02:18:12,161 - __main__ - DEBUG - Evaluate cv result (validation set) Fold 5\n2021-07-22 02:18:12,175 - __main__ - DEBUG - Loss: 1.0213190109755006\n"
],
[
"pred_valid = pd.concat(pred_valid_dfs).groupby('object_id').sum()\npred_valid = pred_valid / N_SPLITS\nif not REGRESSION:\n pred_valid['pred'] = predict(pred_valid['0'], pred_valid['1'], pred_valid['2'], pred_valid['3'])\npred_valid['actual'] = train.loc[pred_valid.index, TARGET].values\nif REGRESSION and TARGET == 'sorting_date':\n pred_valid['actual'] = np.vectorize(soring_date2target)(pred_valid['actual'])\n# for c in ('pred', 'actual'):\n# pred_valid[c] = pred_valid[c].astype('int')\npred_valid",
"_____no_output_____"
],
[
"if not REGRESSION:\n print(confusion_matrix(pred_valid['actual'], pred_valid['pred'], labels=np.sort(train['target'].unique())))\nloss = rmse(pred_valid['actual'], pred_valid['pred'])\n# score = qwk(pred_valid['actual'], pred_valid['pred'])\nmetrics['valid_loss'] = loss\n# metrics['valid_score'] = score\nlogger.info('Validatino loss: {}'.format(loss))\n# logger.info('Validatino score: {}'.format(score))",
"2021-07-22 02:18:12,250 - __main__ - INFO - Validatino loss: 0.9909419579026452\n"
],
[
"pred_valid.to_csv(os.path.join(output_dir, 'prediction_valid.csv'))\nlogger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_valid.csv')))",
"2021-07-22 02:18:12,286 - __main__ - DEBUG - Write cv result to ../scripts/../experiments/exp027/prediction_valid.csv\n"
],
[
"with open(os.path.join(output_dir, 'metrics.json'), 'w') as f:\n json.dump(dict(metrics), f)\nlogger.debug('Write metrics to {}'.format(os.path.join(output_dir, 'metrics.json')))",
"2021-07-22 02:18:12,298 - __main__ - DEBUG - Write metrics to ../scripts/../experiments/exp027/metrics.json\n"
]
],
[
[
"# Prediction",
"_____no_output_____"
]
],
[
[
"pred_test_dfs = []\nfor i in range(N_SPLITS):\n num_fold = i + 1\n # Read cv result\n filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv')\n pred_test_df = pd.read_csv(filepath_fold_test)\n pred_test_dfs.append(pred_test_df)",
"_____no_output_____"
],
[
"pred_test = pd.concat(pred_test_dfs).groupby('object_id').sum()\npred_test = pred_test / N_SPLITS\nif REGRESSION:\n if TARGET == 'target':\n pred_test['pred'].clip(lower=0, upper=3, inplace=True)\n else:\n pred_test['pred'] = np.vectorize(soring_date2target)(pred_test['pred'])\nelse:\n pred_test['pred'] = predict(pred_test['0'], pred_test['1'], pred_test['2'], pred_test['3'])\npred_test",
"_____no_output_____"
],
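[
"# Optional inspection (added): quick summary of the averaged test predictions\n# as a sanity check on the ensemble output range after clipping.\npred_test['pred'].describe()",
"_____no_output_____"
],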
[
"test['target'] = pred_test.loc[test['object_id'], 'pred'].values\ntest = test[['target']]\ntest",
"_____no_output_____"
],
[
"sample_submission",
"_____no_output_____"
],
[
"test.to_csv(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv'), index=False)\nlogger.debug('Write submission to {}'.format(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv')))",
"2021-07-22 02:18:12,474 - __main__ - DEBUG - Write submission to ../scripts/../experiments/exp027/027_submission.csv\n"
],
[
"fig = plt.figure()\nif not (REGRESSION and TARGET == 'target'):\n sns.countplot(data=test, x='target')\nelse:\n sns.histplot(data=test, x='target')\nsns.despine()\nfig.savefig(os.path.join(output_dir, 'prediction.png'))\nlogger.debug('Write figure to {}'.format(os.path.join(output_dir, 'prediction.png')))",
"2021-07-22 02:18:12,544 - __main__ - DEBUG - Write figure to ../scripts/../experiments/exp027/prediction.png\n"
],
[
"logger.debug('Complete ({:.3f} seconds passed)'.format(time.time() - SINCE))",
"2021-07-22 02:18:12,639 - __main__ - DEBUG - Complete (23819.435 seconds passed)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d33ef8414c960908b091125da6b2ae0bd8972 | 731 | ipynb | Jupyter Notebook | syntax/test.ipynb | qrug/python-exercises | 1386cc407afca0b879ebb3a655962b3ef98da0aa | [
"Apache-2.0"
] | null | null | null | syntax/test.ipynb | qrug/python-exercises | 1386cc407afca0b879ebb3a655962b3ef98da0aa | [
"Apache-2.0"
] | null | null | null | syntax/test.ipynb | qrug/python-exercises | 1386cc407afca0b879ebb3a655962b3ef98da0aa | [
"Apache-2.0"
] | null | null | null | 16.613636 | 34 | 0.507524 | [
[
[
"print(\"hello world\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d00d36871b49d3ac12044bfa9125c7f07d4cb232 | 5,538 | ipynb | Jupyter Notebook | Qonto/Qonto_Get_statement_aggregated_by_date.ipynb | Charles-de-Montigny/awesome-notebooks | 79485142ba557e9c20e6f6dca4fdc12a3443813e | [
"BSD-3-Clause"
] | 1 | 2022-01-20T22:04:48.000Z | 2022-01-20T22:04:48.000Z | Qonto/Qonto_Get_statement_aggregated_by_date.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | 18 | 2021-10-02T02:49:32.000Z | 2021-12-27T21:39:14.000Z | Qonto/Qonto_Get_statement_aggregated_by_date.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | null | null | null | 23.974026 | 1,009 | 0.579451 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Qonto - Get statement aggregated by date\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Qonto/Qonto_Get_statement_aggregated_by_date.ipynb\" target=\"_parent\"><img src=\"https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg==\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #qonto #bank #statement #naas_drivers",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import library",
"_____no_output_____"
]
],
[
[
"from naas_drivers import qonto",
"_____no_output_____"
]
],
[
[
"### Get your Qonto credentials\n<a href='https://www.notion.so/naas-official/Qonto-driver-Get-your-credentials-0cc97828b4e7467c8bfbcf704a77e5f4'>How to get your credentials ?</a>",
"_____no_output_____"
]
],
[
[
"QONTO_USER_ID = 'YOUR_USER_ID'\nQONTO_SECRET_KEY = 'YOUR_SECRET_KEY'",
"_____no_output_____"
]
],
[
[
"### Parameters",
"_____no_output_____"
]
],
[
[
"# Date to start extraction, format: \"AAAA-MM-JJ\", example: \"2021-01-01\"\ndate_from = None\n# Date to end extraction, format: \"AAAA-MM-JJ\", example: \"2021-01-01\", default = now\ndate_to = None",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Get statement aggregated by date",
"_____no_output_____"
]
],
[
[
"df_statement = qonto.connect(QONTO_USER_ID, QONTO_SECRET_KEY).statement.aggregated(date_from,\n date_to)",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Display result",
"_____no_output_____"
]
],
[
[
"df_statement",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d00d38272adc1830d22274ed1106e6be220feb74 | 2,582 | ipynb | Jupyter Notebook | notebooks/data_structure_queue.ipynb | misoncorp/cotylab | 253e8768fda30a8bc159c9b52cba5d719e457697 | [
"MIT"
] | null | null | null | notebooks/data_structure_queue.ipynb | misoncorp/cotylab | 253e8768fda30a8bc159c9b52cba5d719e457697 | [
"MIT"
] | null | null | null | notebooks/data_structure_queue.ipynb | misoncorp/cotylab | 253e8768fda30a8bc159c9b52cba5d719e457697 | [
"MIT"
] | null | null | null | 16.658065 | 77 | 0.459721 | [
[
[
"from collections import deque\nqueue = deque()\nqueue.append(1)\nqueue.append(2)\nqueue.append(3)\nqueue",
"_____no_output_____"
],
[
"queue.popleft()",
"_____no_output_____"
],
[
"queue.append(4)\nqueue.append(5)\nqueue",
"_____no_output_____"
],
[
"queue.popleft()",
"_____no_output_____"
],
[
"queue.popleft()",
"_____no_output_____"
],
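[
"# Added note: deque gives O(1) appends and pops at both ends (list.pop(0) is\n# O(n)); indexing lets you peek at the front without removing it.\nqueue[0]",
"_____no_output_____"
],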
[
"queue",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d53da818568358640f5cde9a602547b6cb029 | 115,005 | ipynb | Jupyter Notebook | Models/Model1.ipynb | AmiteshBadkul/eeg-signal-classification | 5477f53981dafef71cecbe6129485df3976f734d | [
"MIT"
] | null | null | null | Models/Model1.ipynb | AmiteshBadkul/eeg-signal-classification | 5477f53981dafef71cecbe6129485df3976f734d | [
"MIT"
] | null | null | null | Models/Model1.ipynb | AmiteshBadkul/eeg-signal-classification | 5477f53981dafef71cecbe6129485df3976f734d | [
"MIT"
] | null | null | null | 116.284125 | 58,354 | 0.774392 | [
[
[
"%reset -f ",
"_____no_output_____"
],
[
"# libraries used\n# https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns \nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, classification_report\nfrom sklearn import preprocessing\nimport tensorflow as tf\nfrom tensorflow.keras import datasets, layers, models\nimport keras \nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nimport itertools",
"_____no_output_____"
],
[
"emotions = pd.read_csv(\"drive/MyDrive/EEG/emotions.csv\")\nemotions.replace(['NEGATIVE', 'POSITIVE', 'NEUTRAL'], [2, 1, 0], inplace=True)\nemotions['label'].unique()",
"_____no_output_____"
],
[
"X = emotions.drop('label', axis=1).copy()\ny = (emotions['label'].copy())",
"_____no_output_____"
],
[
"# Splitting data into training and testing as 80-20 \nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)\n\nx = X_train #returns a numpy array\nmin_max_scaler = preprocessing.MinMaxScaler()\nx_scaled = min_max_scaler.fit_transform(x)\ndf = pd.DataFrame(x_scaled)",
"_____no_output_____"
],
[
"# resetting the data - https://www.tensorflow.org/api_docs/python/tf/keras/backend/clear_session\ntf.keras.backend.clear_session()",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Dense((2*X_train.shape[1]/3), input_dim=X_train.shape[1], activation='relu'))\nmodel.add(Dense((2*X_train.shape[1]/3), activation='relu'))\nmodel.add(Dense((1*X_train.shape[1]/3), activation='relu'))\nmodel.add(Dense((1*X_train.shape[1]/3), activation='relu'))\nmodel.add(Dense(3, activation='softmax'))\n#model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nprint(model.summary())\n\n\n\n",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 1698) 4328202 \n_________________________________________________________________\ndense_1 (Dense) (None, 1698) 2884902 \n_________________________________________________________________\ndense_2 (Dense) (None, 849) 1442451 \n_________________________________________________________________\ndense_3 (Dense) (None, 849) 721650 \n_________________________________________________________________\ndense_4 (Dense) (None, 3) 2550 \n=================================================================\nTotal params: 9,379,755\nTrainable params: 9,379,755\nNon-trainable params: 0\n_________________________________________________________________\nNone\n"
],
[
"# for categorical entropy\n# https://stackoverflow.com/questions/63211181/error-while-using-categorical-crossentropy\nfrom tensorflow.keras.utils import to_categorical\nY_one_hot=to_categorical(y_train) # convert Y into an one-hot vector",
"_____no_output_____"
],
[
"# https://stackoverflow.com/questions/59737875/keras-change-learning-rate\n#optimizer = tf.keras.optimizers.Adam(0.001)\n#optimizer.learning_rate.assign(0.01)\n\nopt = keras.optimizers.SGD(learning_rate=0.001)\n\nmodel.compile(\n optimizer=opt,\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy']\n)\n\n# to be run for categorical cross entropy\n# model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.01), metrics=['accuracy'])\n",
"_____no_output_____"
],
[
"# make sure that the input data is shuffled before hand so that the model doesn't notice patterns and generalizes well\n# change y_train to y_hot_encoded when using categorical cross entorpy\nimport time\nstart_time = time.time()\nhistory = model.fit(\n df,\n y_train,\n validation_split=0.2,\n batch_size=32,\n epochs=75)",
"Epoch 1/75\n43/43 [==============================] - 19s 98ms/step - loss: 1.0647 - accuracy: 0.4674 - val_loss: 0.9889 - val_accuracy: 0.6276\nEpoch 2/75\n43/43 [==============================] - 4s 86ms/step - loss: 0.9688 - accuracy: 0.6759 - val_loss: 0.9256 - val_accuracy: 0.6628\nEpoch 3/75\n43/43 [==============================] - 4s 82ms/step - loss: 0.9109 - accuracy: 0.7651 - val_loss: 0.8764 - val_accuracy: 0.6745\nEpoch 4/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.8584 - accuracy: 0.7435 - val_loss: 0.8362 - val_accuracy: 0.6686\nEpoch 5/75\n43/43 [==============================] - 4s 86ms/step - loss: 0.8198 - accuracy: 0.7301 - val_loss: 0.8015 - val_accuracy: 0.8592\nEpoch 6/75\n43/43 [==============================] - 3s 77ms/step - loss: 0.7809 - accuracy: 0.7941 - val_loss: 0.7720 - val_accuracy: 0.7097\nEpoch 7/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.7447 - accuracy: 0.7312 - val_loss: 0.7482 - val_accuracy: 0.8563\nEpoch 8/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.7183 - accuracy: 0.8620 - val_loss: 0.7136 - val_accuracy: 0.8387\nEpoch 9/75\n43/43 [==============================] - 4s 82ms/step - loss: 0.6896 - accuracy: 0.8170 - val_loss: 0.6884 - val_accuracy: 0.8886\nEpoch 10/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.6672 - accuracy: 0.8504 - val_loss: 0.6698 - val_accuracy: 0.7625\nEpoch 11/75\n43/43 [==============================] - 4s 87ms/step - loss: 0.6718 - accuracy: 0.8402 - val_loss: 0.6443 - val_accuracy: 0.7918\nEpoch 12/75\n43/43 [==============================] - 3s 76ms/step - loss: 0.6350 - accuracy: 0.8240 - val_loss: 0.6259 - val_accuracy: 0.9032\nEpoch 13/75\n43/43 [==============================] - 3s 79ms/step - loss: 0.6144 - accuracy: 0.8720 - val_loss: 0.6050 - val_accuracy: 0.8915\nEpoch 14/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.5928 - accuracy: 0.8611 - val_loss: 0.5861 - val_accuracy: 0.9208\nEpoch 15/75\n43/43 [==============================] - 3s 79ms/step - loss: 0.5655 - accuracy: 0.8712 - val_loss: 0.5727 - val_accuracy: 0.9091\nEpoch 16/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.5752 - accuracy: 0.8759 - val_loss: 0.5589 - val_accuracy: 0.8065\nEpoch 17/75\n43/43 [==============================] - 3s 77ms/step - loss: 0.5354 - accuracy: 0.8735 - val_loss: 0.5386 - val_accuracy: 0.9179\nEpoch 18/75\n43/43 [==============================] - 4s 82ms/step - loss: 0.5399 - accuracy: 0.8801 - val_loss: 0.5235 - val_accuracy: 0.9208\nEpoch 19/75\n43/43 [==============================] - 3s 76ms/step - loss: 0.5196 - accuracy: 0.9038 - val_loss: 0.5123 - val_accuracy: 0.9150\nEpoch 20/75\n43/43 [==============================] - 3s 78ms/step - loss: 0.5036 - accuracy: 0.9031 - val_loss: 0.5006 - val_accuracy: 0.9062\nEpoch 21/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.5171 - accuracy: 0.8802 - val_loss: 0.4842 - val_accuracy: 0.9238\nEpoch 22/75\n43/43 [==============================] - 4s 87ms/step - loss: 0.4977 - accuracy: 0.8930 - val_loss: 0.4750 - val_accuracy: 0.9208\nEpoch 23/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.4797 - accuracy: 0.8928 - val_loss: 0.4605 - val_accuracy: 0.9208\nEpoch 24/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.4469 - accuracy: 0.9129 - val_loss: 0.4589 - val_accuracy: 0.8974\nEpoch 25/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.4680 - accuracy: 0.8979 - 
val_loss: 0.4412 - val_accuracy: 0.9238\nEpoch 26/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.4502 - accuracy: 0.9040 - val_loss: 0.4333 - val_accuracy: 0.9120\nEpoch 27/75\n43/43 [==============================] - 4s 82ms/step - loss: 0.4429 - accuracy: 0.8976 - val_loss: 0.4239 - val_accuracy: 0.9150\nEpoch 28/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.4495 - accuracy: 0.8903 - val_loss: 0.4219 - val_accuracy: 0.9179\nEpoch 29/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.4139 - accuracy: 0.8979 - val_loss: 0.4072 - val_accuracy: 0.9150\nEpoch 30/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.4198 - accuracy: 0.9005 - val_loss: 0.3997 - val_accuracy: 0.9120\nEpoch 31/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.4020 - accuracy: 0.8977 - val_loss: 0.3879 - val_accuracy: 0.9238\nEpoch 32/75\n43/43 [==============================] - 3s 73ms/step - loss: 0.4077 - accuracy: 0.8924 - val_loss: 0.3824 - val_accuracy: 0.9150\nEpoch 33/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.3956 - accuracy: 0.9012 - val_loss: 0.3769 - val_accuracy: 0.9326\nEpoch 34/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.4061 - accuracy: 0.8885 - val_loss: 0.3673 - val_accuracy: 0.9208\nEpoch 35/75\n43/43 [==============================] - 3s 79ms/step - loss: 0.3906 - accuracy: 0.8880 - val_loss: 0.3627 - val_accuracy: 0.9179\nEpoch 36/75\n43/43 [==============================] - 4s 87ms/step - loss: 0.3942 - accuracy: 0.8920 - val_loss: 0.3575 - val_accuracy: 0.9267\nEpoch 37/75\n43/43 [==============================] - 3s 80ms/step - loss: 0.3683 - accuracy: 0.9012 - val_loss: 0.3714 - val_accuracy: 0.8915\nEpoch 38/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3914 - accuracy: 0.8870 - val_loss: 0.3466 - val_accuracy: 0.9267\nEpoch 39/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.3492 - accuracy: 0.9064 - val_loss: 0.3404 - val_accuracy: 0.9179\nEpoch 40/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3753 - accuracy: 0.8912 - val_loss: 0.3383 - val_accuracy: 0.9150\nEpoch 41/75\n43/43 [==============================] - 3s 76ms/step - loss: 0.3572 - accuracy: 0.8960 - val_loss: 0.3308 - val_accuracy: 0.9208\nEpoch 42/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.3457 - accuracy: 0.9018 - val_loss: 0.3304 - val_accuracy: 0.9150\nEpoch 43/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.3321 - accuracy: 0.9152 - val_loss: 0.3216 - val_accuracy: 0.9267\nEpoch 44/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3551 - accuracy: 0.8906 - val_loss: 0.3308 - val_accuracy: 0.9032\nEpoch 45/75\n43/43 [==============================] - 3s 77ms/step - loss: 0.3597 - accuracy: 0.8926 - val_loss: 0.3164 - val_accuracy: 0.9150\nEpoch 46/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.3285 - accuracy: 0.9063 - val_loss: 0.3129 - val_accuracy: 0.9150\nEpoch 47/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3373 - accuracy: 0.9072 - val_loss: 0.3120 - val_accuracy: 0.9091\nEpoch 48/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3359 - accuracy: 0.8964 - val_loss: 0.3031 - val_accuracy: 0.9238\nEpoch 49/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.3175 - accuracy: 0.9050 - val_loss: 0.3069 - val_accuracy: 0.9120\nEpoch 50/75\n43/43 [==============================] - 3s 
73ms/step - loss: 0.3252 - accuracy: 0.9026 - val_loss: 0.2972 - val_accuracy: 0.9296\nEpoch 51/75\n43/43 [==============================] - 3s 82ms/step - loss: 0.3291 - accuracy: 0.9015 - val_loss: 0.3012 - val_accuracy: 0.9091\nEpoch 52/75\n43/43 [==============================] - 3s 76ms/step - loss: 0.3410 - accuracy: 0.8839 - val_loss: 0.2961 - val_accuracy: 0.9091\nEpoch 53/75\n43/43 [==============================] - 4s 85ms/step - loss: 0.3132 - accuracy: 0.9046 - val_loss: 0.3048 - val_accuracy: 0.9003\nEpoch 54/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.3301 - accuracy: 0.8878 - val_loss: 0.2855 - val_accuracy: 0.9238\nEpoch 55/75\n43/43 [==============================] - 4s 82ms/step - loss: 0.3101 - accuracy: 0.9036 - val_loss: 0.2878 - val_accuracy: 0.9150\nEpoch 56/75\n43/43 [==============================] - 3s 77ms/step - loss: 0.3244 - accuracy: 0.8921 - val_loss: 0.2898 - val_accuracy: 0.9062\nEpoch 57/75\n43/43 [==============================] - 3s 78ms/step - loss: 0.3231 - accuracy: 0.8942 - val_loss: 0.2811 - val_accuracy: 0.9179\nEpoch 58/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.2931 - accuracy: 0.9099 - val_loss: 0.2855 - val_accuracy: 0.9238\nEpoch 59/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.3047 - accuracy: 0.8996 - val_loss: 0.2729 - val_accuracy: 0.9267\nEpoch 60/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3182 - accuracy: 0.8941 - val_loss: 0.2750 - val_accuracy: 0.9150\nEpoch 61/75\n43/43 [==============================] - 4s 83ms/step - loss: 0.3105 - accuracy: 0.8972 - val_loss: 0.2734 - val_accuracy: 0.9120\nEpoch 62/75\n43/43 [==============================] - 4s 88ms/step - loss: 0.3221 - accuracy: 0.8958 - val_loss: 0.2664 - val_accuracy: 0.9296\nEpoch 63/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.3004 - accuracy: 0.9050 - val_loss: 0.2658 - val_accuracy: 0.9238\nEpoch 64/75\n43/43 [==============================] - 3s 81ms/step - loss: 0.2846 - accuracy: 0.9119 - val_loss: 0.2728 - val_accuracy: 0.9120\nEpoch 65/75\n43/43 [==============================] - 3s 80ms/step - loss: 0.2950 - accuracy: 0.9019 - val_loss: 0.2608 - val_accuracy: 0.9238\nEpoch 66/75\n43/43 [==============================] - 4s 88ms/step - loss: 0.2795 - accuracy: 0.9178 - val_loss: 0.2641 - val_accuracy: 0.9296\nEpoch 67/75\n43/43 [==============================] - 4s 86ms/step - loss: 0.2974 - accuracy: 0.8991 - val_loss: 0.2701 - val_accuracy: 0.9062\nEpoch 68/75\n43/43 [==============================] - 3s 78ms/step - loss: 0.3046 - accuracy: 0.8902 - val_loss: 0.2551 - val_accuracy: 0.9267\nEpoch 69/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.2865 - accuracy: 0.9057 - val_loss: 0.2536 - val_accuracy: 0.9238\nEpoch 70/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.2814 - accuracy: 0.9094 - val_loss: 0.2575 - val_accuracy: 0.9120\nEpoch 71/75\n43/43 [==============================] - 4s 84ms/step - loss: 0.2981 - accuracy: 0.9017 - val_loss: 0.2513 - val_accuracy: 0.9238\nEpoch 72/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.2837 - accuracy: 0.9047 - val_loss: 0.2510 - val_accuracy: 0.9179\nEpoch 73/75\n43/43 [==============================] - 4s 89ms/step - loss: 0.3074 - accuracy: 0.8956 - val_loss: 0.2493 - val_accuracy: 0.9267\nEpoch 74/75\n43/43 [==============================] - 4s 90ms/step - loss: 0.2589 - accuracy: 0.9214 - val_loss: 0.2462 - val_accuracy: 0.9267\nEpoch 
75/75\n43/43 [==============================] - 3s 80ms/step - loss: 0.2717 - accuracy: 0.9099 - val_loss: 0.2474 - val_accuracy: 0.9208\n"
],
[
"history.history",
"_____no_output_____"
],
[
"print(\"--- %s seconds ---\" % (time.time() - start_time))\n",
"--- 284.13894724845886 seconds ---\n"
],
[
"x_test = X_test #returns a numpy array\nmin_max_scaler = preprocessing.MinMaxScaler()\nx_scaled_test = min_max_scaler.fit_transform(x_test)\ndf_test = pd.DataFrame(x_scaled_test)",
"_____no_output_____"
],
[
"predictions = model.predict(x=df_test, batch_size=32)",
"_____no_output_____"
],
[
"rounded_predictions = np.argmax(predictions, axis=-1)",
"_____no_output_____"
],
[
"cm = confusion_matrix(y_true=y_test, y_pred=rounded_predictions)",
"_____no_output_____"
],
[
"label_mapping = {'NEGATIVE': 0, 'NEUTRAL': 1, 'POSITIVE': 2}\n# for diff dataset\n# label_mapping = {'NEGATIVE': 0, 'POSITIVE': 1}\nplt.figure(figsize=(8, 8))\nsns.heatmap(cm, annot=True, vmin=0, fmt='g', cbar=False, cmap='Blues')\nclr = classification_report(y_test, rounded_predictions, target_names=label_mapping.keys())\nplt.xticks(np.arange(3) + 0.5, label_mapping.keys())\nplt.yticks(np.arange(3) + 0.5, label_mapping.keys())\nplt.xlabel(\"Predicted\")\nplt.ylabel(\"Actual\")\nplt.title(\"Confusion Matrix\")\nplt.show()\n\nprint(\"Classification Report:\\n----------------------\\n\", clr)\n\n",
"_____no_output_____"
],
[
"# https://stackoverflow.com/questions/26413185/how-to-recover-matplotlib-defaults-after-setting-stylesheet\nimport matplotlib as mpl\nmpl.rcParams.update(mpl.rcParamsDefault)",
"_____no_output_____"
],
[
"training_acc = history.history['accuracy']\nvalidation_acc = history.history['val_accuracy']\ntraining_loss = history.history['loss']\nvalidation_loss = history.history['val_loss']",
"_____no_output_____"
],
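[
"# Added companion plot (illustrative; not part of the original run): the cell\n# above computes training_loss and validation_loss but never plots them, so\n# this mirrors the accuracy plot for the loss curves.\nplt.plot(history.epoch, training_loss, color = '#17e6e6', label='Training Loss')\nplt.plot(history.epoch, validation_loss, color = '#e61771', label='Validation Loss')\nplt.title('Loss vs Epochs')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()",
"_____no_output_____"
],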
[
"epochs = history.epoch\nplt.plot(epochs, training_acc, color = '#17e6e6', label='Training Accuracy')\nplt.plot(epochs, validation_acc,color = '#e61771', label='Validation Accuracy')\nplt.title('Accuracy vs Epochs')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend()\nplt.savefig('AccuracyVsEpochs.png')\nplt.show()\nfrom google.colab import files\nfiles.download('AccuracyVsEpochs.png') \n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d5ea52fd52f90db7bf8d54063c2d38d45a79e | 18,125 | ipynb | Jupyter Notebook | jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb | maij/pyGSTi | 70e83e05fa689f53550feb3914c4fac40ca4a943 | [
"Apache-2.0"
] | 73 | 2016-01-28T05:02:05.000Z | 2022-03-30T07:46:33.000Z | jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb | maij/pyGSTi | 70e83e05fa689f53550feb3914c4fac40ca4a943 | [
"Apache-2.0"
] | 113 | 2016-02-25T15:32:18.000Z | 2022-03-31T13:18:13.000Z | jupyter_notebooks/Tutorials/01-Essential-Objects.ipynb | maij/pyGSTi | 70e83e05fa689f53550feb3914c4fac40ca4a943 | [
"Apache-2.0"
] | 41 | 2016-03-15T19:32:07.000Z | 2022-02-16T10:22:05.000Z | 43.154762 | 751 | 0.620193 | [
[
[
"# Essential Objects\nThis tutorial covers several object types that are foundational to much of what pyGSTi does: [circuits](#circuits), [processor specifications](#pspecs), [models](#models), and [data sets](#datasets). Our objective is to explain what these objects are and how they relate to one another at a high level while providing links to other notebooks that cover details we skip over here.",
"_____no_output_____"
]
],
[
[
"import pygsti\nfrom pygsti.circuits import Circuit\nfrom pygsti.models import Model\nfrom pygsti.data import DataSet",
"_____no_output_____"
]
],
[
[
"<a id=\"circuits\"></a>",
"_____no_output_____"
],
[
"## Circuits\nThe `Circuit` object encapsulates a quantum circuit as a sequence of *layers*, each of which contains zero or more non-identity *gates*. A `Circuit` has some number of labeled *lines* and each gate label is assigned to one or more lines. Line labels can be integers or strings. Gate labels have two parts: a `str`-type name and a tuple of line labels. A gate name typically begins with 'G' because this is expected when we parse circuits from text files.\n\nFor example, `('Gx',0)` is a gate label that means \"do the Gx gate on qubit 0\", and `('Gcnot',(2,3))` means \"do the Gcnot gate on qubits 2 and 3\".\n\nA `Circuit` can be created from a list of gate labels:",
"_____no_output_____"
]
],
[
[
"c = Circuit( [('Gx',0),('Gcnot',0,1),(),('Gy',3)], line_labels=[0,1,2,3])\nprint(c)",
"_____no_output_____"
]
],
[
[
"If you want multiple gates in a single layer, just put those gate labels in their own nested list:",
"_____no_output_____"
]
],
[
[
"c = Circuit( [('Gx',0),[('Gcnot',0,1),('Gy',3)],()] , line_labels=[0,1,2,3])\nprint(c)",
"_____no_output_____"
]
],
[
[
"We distinguish three basic types of circuit layers. We call layers containing quantum gates *operation layers*. All the circuits we've seen so far just have operation layers. It's also possible to have a *preparation layer* at the beginning of a circuit and a *measurement layer* at the end of a circuit. There can also be a fourth type of layer called an *instrument layer* which we dicuss in a separate [tutorial on Instruments](objects/advanced/Instruments.ipynb). Assuming that `'rho'` labels a (n-qubit) state preparation and `'Mz'` labels a (n-qubit) measurement, here's a circuit with all three types of layers:",
"_____no_output_____"
]
],
[
[
"c = Circuit( ['rho',('Gz',1),[('Gswap',0,1),('Gy',2)],'Mz'] , line_labels=[0,1,2])\nprint(c)",
"_____no_output_____"
]
],
[
[
"Finally, when dealing with small systems (e.g. 1 or 2 qubits), we typically just use a `str`-type label (without any line-labels) to denote every possible layer. In this case, all the labels operate on the entire state space so we don't need the notion of 'lines' in a `Circuit`. When there are no line-labels, a `Circuit` assumes a single default **'\\*'-label**, which you can usually just ignore:",
"_____no_output_____"
]
],
[
[
"c = Circuit( ['Gx','Gy','Gi'] )\nprint(c)",
"_____no_output_____"
]
],
[
[
"Pretty simple, right? The `Circuit` object allows you to easily manipulate its labels (similar to a NumPy array) and even perform some basic operations like depth reduction and simple compiling. For lots more details on how to create, modify, and use circuit objects see the [circuit tutorial](objects/Circuit.ipynb).\n<a id=\"models\"></a>",
"_____no_output_____"
],
[
"<a id=\"pspecs\"></a>",
"_____no_output_____"
],
[
"## Processor Specifications\nA processor specification describes the interface that a quantum processor exposes to the outside world. Actual quantum processors often have a \"native\" interface associated with them, but can also be viewed as implementing various other derived interfaces. For example, while a 1-qubit quantum processor may natively implement the $X(\\pi/2)$ and $Z(\\pi/2)$ gates, it can also implement the set of all 1-qubit Clifford gates. Both of these interfaces would correspond to a processor specification in pyGSTi.\n\nCurrently pyGSTi only supports processor specifications having an integral number of qubits. The `QubitProcessorSpec` object describes the number of qubits and what gates are available on them. For example,",
"_____no_output_____"
]
],
[
[
"pspec = pygsti.processors.QubitProcessorSpec(num_qubits=2, gate_names=['Gxpi2', 'Gypi2', 'Gcnot'],\n geometry=\"line\")\nprint(\"Qubit labels are\", pspec.qubit_labels)\nprint(\"X(pi/2) gates on qubits: \", pspec.resolved_availability('Gxpi2'))\nprint(\"CNOT gates on qubits: \", pspec.resolved_availability('Gcnot'))",
"_____no_output_____"
]
],
[
[
"creates a processor specification for a 2-qubits with $X(\\pi/2)$, $Y(\\pi/2)$, and CNOT gates. Setting the geometry to `\"line\"` causes 1-qubit gates to be available on each qubit and the CNOT between the two qubits (in either control/target direction). Processor specifications are used to build experiment designs and models, and so defining or importing an appropriate processor specification is often the first step in many analyses. To learn more about processor specification objects, see the [processor specification tutorial](objects/ProcessorSpec.ipynb).",
"_____no_output_____"
],
[
"## Models\nAn instance of the `Model` class represents something that can predict the outcome probabilities of quantum circuits. We define any such thing to be a \"QIP model\", or just a \"model\", as these probabilities define the behavior of some real or virtual QIP. Because there are so many types of models, the `Model` class in pyGSTi is just a base class and is never instaniated directly. Classes `ExplicitOpModel` and `ImplicitOpModel` (subclasses of `Model`) define two broad categories of models, both of which sequentially operate on circuit *layers* (the \"Op\" in the class names is short for \"layer operation\"). \n\n#### Explicit layer-operation models\nAn `ExplicitOpModel` is a container object. Its `.preps`, `.povms`, and `.operations` members are essentially dictionaires of state preparation, measurement, and layer-operation objects, respectively. How to create these objects and build up explicit models from scratch is a central capability of pyGSTi and a topic of the [explicit-model tutorial](objects/ExplicitModel.ipynb). Presently, we'll create a 2-qubit model using the processor specification above via the `create_explicit_model` function: ",
"_____no_output_____"
]
],
[
[
"mdl = pygsti.models.create_explicit_model(pspec)",
"_____no_output_____"
]
],
[
[
"This creates an `ExplicitOpModel` with a default preparation (prepares all qubits in the zero-state) labeled `'rho0'`, a default measurement labeled `'Mdefault'` in the Z-basis and with 5 layer-operations given by the labels in the 2nd argument (the first argument is akin to a circuit's line labels and the third argument contains special strings that the function understands): ",
"_____no_output_____"
]
],
[
[
"print(\"Preparations: \", ', '.join(map(str,mdl.preps.keys())))\nprint(\"Measurements: \", ', '.join(map(str,mdl.povms.keys())))\nprint(\"Layer Ops: \", ', '.join(map(str,mdl.operations.keys())))",
"_____no_output_____"
]
],
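[
[
"Models also expose how many free parameters they contain. The following is a minimal sketch; the `num_params` property is an assumption based on recent pyGSTi versions:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch (assumes the `num_params` property exists on Model objects)\nprint(\"Number of model parameters:\", mdl.num_params)",
"_____no_output_____"
]
],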
[
[
"We can now use this model to do what models were made to do: compute the outcome probabilities of circuits.",
"_____no_output_____"
]
],
[
[
"c = Circuit( [('Gxpi2',0),('Gcnot',0,1),('Gypi2',1)] , line_labels=[0,1])\nprint(c)\nmdl.probabilities(c) # Compute the outcome probabilities of circuit `c`",
"_____no_output_____"
]
],
[
[
"An `ExplictOpModel` only \"knows\" how to operate on circuit layers it explicitly contains in its dictionaries,\nso, for example, a circuit layer with two X gates in parallel (layer-label = `[('Gxpi2',0),('Gxpi2',1)]`) cannot be used with our model until we explicitly associate an operation with the layer-label `[('Gxpi2',0),('Gxpi2',1)]`:",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nc = Circuit( [[('Gxpi2',0),('Gxpi2',1)],('Gxpi2',1)] , line_labels=[0,1])\nprint(c)\n\ntry: \n p = mdl.probabilities(c)\nexcept KeyError as e:\n print(\"!!KeyError: \",str(e))\n \n #Create an operation for two parallel X-gates & rerun (now it works!)\n mdl.operations[ [('Gxpi2',0),('Gxpi2',1)] ] = np.dot(mdl.operations[('Gxpi2',0)].to_dense(),\n mdl.operations[('Gxpi2',1)].to_dense())\n p = mdl.probabilities(c)\n \nprint(\"Probability_of_outcome(00) = \", p['00']) # p is like a dictionary of outcomes",
"_____no_output_____"
],
[
"mdl.probabilities((('Gxpi2',0),('Gcnot',0,1)))",
"_____no_output_____"
]
],
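[
[
"Since the returned outcomes form a probability distribution, their values should sum to (approximately) one. A minimal sketch, assuming the returned outcome object supports the standard dict interface:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: outcome probabilities should sum to ~1.0\np = mdl.probabilities(c)\nprint(\"Sum of outcome probabilities:\", sum(p.values()))",
"_____no_output_____"
]
],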
[
[
"#### Implicit layer-operation models\nIn the above example, you saw how it is possible to manually add a layer-operation to an `ExplicitOpModel` based on its other, more primitive layer operations. This often works fine for a few qubits, but can quickly become tedious as the number of qubits increases (since the number of potential layers that involve a given set of gates grows exponentially with qubit number). This is where `ImplicitOpModel` objects come into play: these models contain rules for building up arbitrary layer-operations based on more primitive operations. PyGSTi offers several \"built-in\" types of implicit models and a rich set of tools for building your own custom ones. See the [tutorial on implicit models](objects/ImplicitModel.ipynb) for details. \n<a id=\"datasets\"></a>",
"_____no_output_____"
],
[
"## Data Sets\nThe `DataSet` object is a container for tabulated outcome counts. It behaves like a dictionary whose keys are `Circuit` objects and whose values are dictionaries that associate *outcome labels* with (usually) integer counts. There are two primary ways you go about getting a `DataSet`. The first is by reading in a simply formatted text file:",
"_____no_output_____"
]
],
[
[
"dataset_txt = \\\n\"\"\"## Columns = 00 count, 01 count, 10 count, 11 count\n{} 100 0 0 0\nGxpi2:0 55 5 40 0\nGxpi2:0Gypi2:1 20 27 23 30\nGxpi2:0^4 85 3 10 2\nGxpi2:0Gcnot:0:1 45 1 4 50\n[Gxpi2:0Gxpi2:1]Gypi2:0 25 32 17 26\n\"\"\"\nwith open(\"tutorial_files/Example_Short_Dataset.txt\",\"w\") as f:\n f.write(dataset_txt)\nds = pygsti.io.read_dataset(\"tutorial_files/Example_Short_Dataset.txt\")",
"_____no_output_____"
]
],
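[
[
"Before moving on, it can help to confirm what was loaded. A minimal sketch, assuming `DataSet` supports `len` for the number of circuits:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: how many circuits were read from the text file?\nprint(len(ds), \"circuits loaded\")",
"_____no_output_____"
]
],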
[
[
"The second is by simulating a `Model` and thereby generating \"fake data\". This essentially calls `mdl.probabilities(c)` for each circuit in a given list, and samples from the output probability distribution to obtain outcome counts:",
"_____no_output_____"
]
],
[
[
"circuit_list = pygsti.circuits.to_circuits([ (), \n (('Gxpi2',0),),\n (('Gxpi2',0),('Gypi2',1)),\n (('Gxpi2',0),)*4,\n (('Gxpi2',0),('Gcnot',0,1)),\n ((('Gxpi2',0),('Gxpi2',1)),('Gxpi2',0)) ], line_labels=(0,1))\nds_fake = pygsti.data.simulate_data(mdl, circuit_list, num_samples=100,\n sample_error='multinomial', seed=8675309)",
"_____no_output_____"
]
],
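[
[
"Printing a `DataSet` gives a readable listing of its circuits and counts; a minimal sketch (the default string representation behaving this way is an assumption):",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: print the simulated data set\nprint(ds_fake)",
"_____no_output_____"
]
],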
[
[
"Outcome counts are accessible by indexing a `DataSet` as if it were a dictionary with `Circuit` keys:",
"_____no_output_____"
]
],
[
[
"c = Circuit( (('Gxpi2',0),('Gypi2',1)), line_labels=(0,1) )\nprint(ds[c]) # index using a Circuit\nprint(ds[ [('Gxpi2',0),('Gypi2',1)] ]) # or with something that can be converted to a Circuit ",
"_____no_output_____"
]
],
[
[
"Because `DataSet` object can also store *timestamped* data (see the [time-dependent data tutorial](objects/advanced/TimestampedDataSets.ipynb), the values or \"rows\" of a `DataSet` aren't simple dictionary objects. When you'd like a `dict` of counts use the `.counts` member of a data set row:",
"_____no_output_____"
]
],
[
[
"row = ds[c]\nrow['00'] # this is ok\nfor outlbl, cnt in row.counts.items(): # Note: `row` doesn't have .items(), need \".counts\"\n print(outlbl, cnt)",
"_____no_output_____"
]
],
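[
[
"The per-circuit total can be recovered from the same `.counts` dictionary; a minimal sketch:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: total number of counts recorded for this circuit\nprint(\"Total counts:\", sum(row.counts.values()))",
"_____no_output_____"
]
],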
[
[
"Another thing to note is that `DataSet` objects are \"sparse\" in that 0-counts are not typically stored:",
"_____no_output_____"
]
],
[
[
"c = Circuit([('Gxpi2',0)], line_labels=(0,1))\nprint(\"No 01 or 11 outcomes here: \",ds_fake[c])\nfor outlbl, cnt in ds_fake[c].counts.items():\n print(\"Item: \",outlbl, cnt) # Note: this loop never loops over 01 or 11!",
"_____no_output_____"
]
],
[
[
"You can manipulate `DataSets` in a variety of ways, including:\n- adding and removing rows\n- \"trucating\" a `DataSet` to include only a subset of it's string\n- \"filtering\" a $n$-qubit `DataSet` to a $m < n$-qubit dataset\n\nTo find out more about these and other operations, see our [data set tutorial](objects/DataSet.ipynb).",
"_____no_output_____"
],
[
"## What's next?\nYou've learned about the three main object types in pyGSTi! The next step is to learn about how these objects are used within pyGSTi, which is the topic of the next [overview tutorial on applications](02-Applications.ipynb). Alternatively, if you're interested in learning more about the above-described or other objects, here are some links to relevant tutorials:\n- [Circuit](objects/Circuit.ipynb) - how to build circuits ([GST circuits](objects/advanced/GSTCircuitConstruction.ipynb) in particular)\n- [ExplicitModel](objects/ExplicitModel.ipynb) - constructing explicit layer-operation models\n- [ImplicitModel](objects/ImplicitModel.ipynb) - constructing implicit layer-operation models\n- [DataSet](objects/DataSet.ipynb) - constructing data sets ([timestamped data](objects/advanced/TimestampedDataSets.ipynb) in particular)\n- [Basis](objects/advanced/MatrixBases.ipynb) - defining matrix and vector bases\n- [Results](objects/advanced/Results.ipynb) - the container object for model-based results\n- [QubitProcessorSpec](objects/advanced/QubitProcessorSpec.ipynb) - represents a QIP as a collection of models and meta information. \n- [Instrument](objects/advanced/Instruments.ipynb) - allows for circuits with intermediate measurements\n- [Operation Factories](objects/advanced/OperationFactories.ipynb) - allows continuously parameterized gates",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d00d689e3720a32100c960d2230f92101f18d63a | 481,325 | ipynb | Jupyter Notebook | Cyber Hackathon.ipynb | MANUJMEHROTRA/CyberHacathon | 7e7e43a03b2c0754277466b30916f7f93a7de709 | [
"MIT"
] | null | null | null | Cyber Hackathon.ipynb | MANUJMEHROTRA/CyberHacathon | 7e7e43a03b2c0754277466b30916f7f93a7de709 | [
"MIT"
] | null | null | null | Cyber Hackathon.ipynb | MANUJMEHROTRA/CyberHacathon | 7e7e43a03b2c0754277466b30916f7f93a7de709 | [
"MIT"
] | null | null | null | 254.804129 | 371,944 | 0.892619 | [
[
[
"<img src=\"Techzooka.png\">",
"_____no_output_____"
],
[
"## Hacker Factory Cyber Hackathon Solution \n### by Team Jugaad (Abhiraj Singh Rajput, Deepanshu Gupta, Manuj Mehrotra)",
"_____no_output_____"
],
[
"We are a team of members, that are NOT moved by the buzzwords like Machine Learning, Data Science, AI etc. However we are a team of people who get adrenaline rush for seeking the solution to a problem. And the approach to solve the problem is never a constraint for us. Keeping our heads down we tried out bit to solve the problem of <i><b>“Preventive analytics with AI – How to use AI to predict probability of occurrence of a crime.”</b></i>\nFormally our team members: -\n<ul>\n <i><b>\n<li>Abhiraj Singh Rajput(BI Engineer, Emp ID -1052530)</li>\n<li>Deepanshu Gupta(Performance Test Engineer, Emp ID - 1048606)</li>\n<li>Manuj Mehrotra(Analyst-Data Science, Emp ID - 1061322)</li>\n </b></i>\n</ul>",
"_____no_output_____"
],
[
"### Preventive analytics with AI – How to use AI to predict probability of occurrence of a crime",
"_____no_output_____"
],
[
"<ul>\n<li><b>Context</b>:- We tried to create a classification ML model to analyze the data points and using that we tried to Forecast the occurrence of the Malware attack. For the study we have taken two separate datasets i.e. bifurcated datasets, on the basis of Static and Dynamic features (Sources: Ref[3]).</li><br>\n\n\n<li><b>Scope of the solution covered</b> :- Since the Model that we have created considers nearly 350 Features (i.e. 331 statics features and 13 dynamic Features) for predicting the attack, so the model is very Robust and is scalable very easily. The objective behind building this predictive model was to forecast the attack of a malicious app by capturing these features and hence preventing them from attacking the device.</li>\n</ul>\n",
"_____no_output_____"
],
[
"## Soultion Archicture ",
"_____no_output_____"
],
[
"<img src=\"Solution Architecture.png\">",
"_____no_output_____"
],
[
"# Additional Information – How can it enhance further",
"_____no_output_____"
],
[
"<ul>\n<li>The data set that we used for Static Analysis has just 398 data points that is comparatively very less to generalize a statistical model (to population).</li>\n\n<li>We haven’t tuned the all the hyper parameter of ML models, however we have considered the important hyper parameters while model building ex: - tuning K in K-NN.</li>\n\n<li>We have analyses the Static and Dynamic Analysis separately. However a more robust model will be that, analyzes both the features together, provided we have sufficient number of data points.</li>\n\n<li>Stacking or ensembling of the ML models from both the data sets could be done to make the model more Robust, provided we capture both the static and dynamic feature of the application.</li>\n\n<li>Dynamic features likes duracion , avg_local_pkt_rate and avg_remote_pkt_rate were not captured which would have degraded the model quality by some amount.</li>\n</ul>",
"_____no_output_____"
],
[
"# Proof Of Concept",
"_____no_output_____"
],
[
"### Static Analysis",
"_____no_output_____"
],
[
"Includes analysing the application that we want to analyse without executing it, like the study of resources, app permission etc.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv(\"train.csv\", sep=\";\")",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
]
],
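[
[
"Before modeling, it is worth checking the class balance. The sketch below assumes `type` is the label column, as used in the cells that follow:",
"_____no_output_____"
]
],
[
[
"# Class balance of the label column (1 = malicious, 0 = benign, per the headings below)\ndf.type.value_counts()",
"_____no_output_____"
]
],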
[
[
"### Let's get the top 10 of permissions that are used for our malware samples\n\n#### Malicious",
"_____no_output_____"
]
],
[
[
"series = pd.Series.sort_values(df[df.type==1].sum(axis=0), ascending=False)[1:11]\nseries",
"_____no_output_____"
],
[
"pd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10]",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfig, axs = plt.subplots(nrows=2, sharex=True)\n\npd.Series.sort_values(df[df.type==0].sum(axis=0), ascending=False)[:10].plot.bar(ax=axs[0], color=\"green\")\npd.Series.sort_values(df[df.type==1].sum(axis=0), ascending=False)[1:11].plot.bar(ax=axs[1], color=\"red\")",
"_____no_output_____"
]
],
[
[
"## Now will try to predict with the exsisting data set, i.e. model creation\n### Machine Learning Models",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB, BernoulliNB\nfrom sklearn.metrics import accuracy_score, classification_report,roc_auc_score\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import cohen_kappa_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn import preprocessing\n#import torch\nfrom sklearn import svm\nfrom sklearn import tree\nimport pandas as pd\nfrom sklearn.externals import joblib\nimport pickle\nimport numpy as np\nimport seaborn as sns",
"C:\\Users\\Manuj Mehrotra\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.\n from numpy.core.umath_tests import inner1d\n"
],
[
"y = df[\"type\"]\nX = df.drop(\"type\", axis=1)",
"_____no_output_____"
],
[
"X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.33,random_state=7)",
"_____no_output_____"
],
[
"# Naive Bayes algorithm\ngnb = GaussianNB()\ngnb.fit(X_train, y_train)\n\n# pred\npred = gnb.predict(X_test)\n\n# accuracy\naccuracy = accuracy_score(pred, y_test)\nprint(\"naive_bayes\")\nprint(accuracy)\nprint(classification_report(pred, y_test, labels=None))",
"naive_bayes\n0.8181818181818182\n precision recall f1-score support\n\n 0 0.95 0.75 0.84 84\n 1 0.68 0.94 0.79 48\n\navg / total 0.86 0.82 0.82 132\n\n"
],
[
"\nfor i in range(3,15,3):\n \n neigh = KNeighborsClassifier(n_neighbors=i)\n neigh.fit(X_train, y_train)\n pred = neigh.predict(X_test)\n # accuracy\n accuracy = accuracy_score(pred, y_test)\n print(\"kneighbors {}\".format(i))\n print(accuracy)\n print(classification_report(pred, y_test, labels=None))\n print(\"\")",
"kneighbors 3\n0.9015151515151515\n precision recall f1-score support\n\n 0 0.89 0.91 0.90 65\n 1 0.91 0.90 0.90 67\n\navg / total 0.90 0.90 0.90 132\n\n\nkneighbors 6\n0.8939393939393939\n precision recall f1-score support\n\n 0 0.94 0.86 0.90 72\n 1 0.85 0.93 0.89 60\n\navg / total 0.90 0.89 0.89 132\n\n\nkneighbors 9\n0.8863636363636364\n precision recall f1-score support\n\n 0 0.92 0.86 0.89 71\n 1 0.85 0.92 0.88 61\n\navg / total 0.89 0.89 0.89 132\n\n\nkneighbors 12\n0.8712121212121212\n precision recall f1-score support\n\n 0 0.92 0.84 0.88 73\n 1 0.82 0.92 0.86 59\n\navg / total 0.88 0.87 0.87 132\n\n\n"
],
[
"clf = tree.DecisionTreeClassifier()\nclf.fit(X_train, y_train)\n\n# Read the csv test file\n\npred = clf.predict(X_test)\n# accuracy\naccuracy = accuracy_score(pred, y_test)\nprint(clf)\nprint(accuracy)\nprint(classification_report(pred, y_test, labels=None))",
"DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,\n max_features=None, max_leaf_nodes=None,\n min_impurity_decrease=0.0, min_impurity_split=None,\n min_samples_leaf=1, min_samples_split=2,\n min_weight_fraction_leaf=0.0, presort=False, random_state=None,\n splitter='best')\n0.8560606060606061\n precision recall f1-score support\n\n 0 0.80 0.90 0.85 59\n 1 0.91 0.82 0.86 73\n\navg / total 0.86 0.86 0.86 132\n\n"
]
],
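[
[
"A single train/test split can be noisy on roughly 400 rows; cross-validation gives a steadier estimate. The following is a hedged sketch using scikit-learn's `cross_val_score` on the same k-NN setup used above:",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import cross_val_score\n\n# Hedged sketch: 5-fold cross-validated accuracy for 3-NN on the static features\nscores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)\nprint(\"CV accuracy: %.3f +/- %.3f\" % (scores.mean(), scores.std()))",
"_____no_output_____"
]
],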
[
[
"### Dynamic Analysis\n\nFor this approach, we used a set of pcap files from the DroidCollector project integrated by 4705 benign and 7846 malicious applications. All of the files were processed by our feature extractor script (a result from [4]), the idea of this analysis is to answer the next question, according to the static analysis previously seen a lot of applications use a network connection, in other words, they are trying to communicate or transmit information, so.. is it possible to distinguish between malware and benign application using network traffic?",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndata = pd.read_csv(\"android_traffic.csv\", sep=\";\")\ndata.head()",
"_____no_output_____"
],
[
"data.columns",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"data.type.value_counts()",
"_____no_output_____"
],
[
"data.isna().sum()",
"_____no_output_____"
],
[
"data = data.drop(['duracion','avg_local_pkt_rate','avg_remote_pkt_rate'], axis=1).copy()",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
[
"sns.pairplot(data)",
"_____no_output_____"
],
[
"data.loc[data.tcp_urg_packet > 0].shape[0]",
"_____no_output_____"
],
[
"data = data.drop(columns=[\"tcp_urg_packet\"], axis=1).copy()\ndata.shape",
"_____no_output_____"
],
[
"data=data[data.tcp_packets<20000].copy()\ndata=data[data.dist_port_tcp<1400].copy()\ndata=data[data.external_ips<35].copy()\ndata=data[data.vulume_bytes<2000000].copy()\ndata=data[data.udp_packets<40].copy()\ndata=data[data.remote_app_packets<15000].copy()",
"_____no_output_____"
],
[
"data[data.duplicated()].sum()",
"_____no_output_____"
],
[
"data=data.drop('source_app_packets.1',axis=1).copy()",
"_____no_output_____"
],
[
"scaler = preprocessing.RobustScaler()\nscaledData = scaler.fit_transform(data.iloc[:,1:11])\nscaledData = pd.DataFrame(scaledData, columns=['tcp_packets','dist_port_tcp','external_ips','vulume_bytes','udp_packets','source_app_packets','remote_app_packets',' source_app_bytes','remote_app_bytes','dns_query_times'])",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(scaledData.iloc[:,0:10], data.type.astype(\"str\"), test_size=0.25, random_state=45)",
"_____no_output_____"
],
[
"gnb = GaussianNB()\ngnb.fit(X_train, y_train)\npred = gnb.predict(X_test)\n## accuracy\naccuracy = accuracy_score(y_test,pred)\nprint(\"naive_bayes\")\nprint(accuracy)\nprint(classification_report(y_test,pred, labels=None))\nprint(\"cohen kappa score\")\nprint(cohen_kappa_score(y_test, pred))",
"naive_bayes\n0.44688457609805926\n precision recall f1-score support\n\n benign 0.81 0.12 0.20 1190\n malicious 0.41 0.96 0.58 768\n\navg / total 0.66 0.45 0.35 1958\n\ncohen kappa score\n0.06082933470572538\n"
],
[
"for i in range(3,15,3):\n \n neigh = KNeighborsClassifier(n_neighbors=i)\n neigh.fit(X_train, y_train)\n pred = neigh.predict(X_test)\n # accuracy\n accuracy = accuracy_score(pred, y_test)\n print(\"kneighbors {}\".format(i))\n print(accuracy)\n print(classification_report(pred, y_test, labels=None))\n print(\"cohen kappa score\")\n print(cohen_kappa_score(y_test, pred))\n print(\"\")",
"kneighbors 3\n0.8861082737487231\n precision recall f1-score support\n\n benign 0.90 0.91 0.91 1173\n malicious 0.87 0.85 0.86 785\n\navg / total 0.89 0.89 0.89 1958\n\ncohen kappa score\n0.7620541314671169\n\nkneighbors 6\n0.8784473953013279\n precision recall f1-score support\n\n benign 0.92 0.88 0.90 1240\n malicious 0.81 0.87 0.84 718\n\navg / total 0.88 0.88 0.88 1958\n\ncohen kappa score\n0.7420746759356631\n\nkneighbors 9\n0.8707865168539326\n precision recall f1-score support\n\n benign 0.89 0.90 0.89 1175\n malicious 0.85 0.83 0.84 783\n\navg / total 0.87 0.87 0.87 1958\n\ncohen kappa score\n0.729919255030886\n\nkneighbors 12\n0.8615934627170582\n precision recall f1-score support\n\n benign 0.88 0.89 0.89 1185\n malicious 0.83 0.82 0.82 773\n\navg / total 0.86 0.86 0.86 1958\n\ncohen kappa score\n0.7100368862537227\n\n"
],
[
"rdF=RandomForestClassifier(n_estimators=250, max_depth=50,random_state=45)\nrdF.fit(X_train,y_train)\npred=rdF.predict(X_test)\ncm=confusion_matrix(y_test, pred)\n\naccuracy = accuracy_score(y_test,pred)\nprint(rdF)\nprint(accuracy)\nprint(classification_report(y_test,pred, labels=None))\nprint(\"cohen kappa score\")\nprint(cohen_kappa_score(y_test, pred))\nprint(cm)",
"RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\n max_depth=50, max_features='auto', max_leaf_nodes=None,\n min_impurity_decrease=0.0, min_impurity_split=None,\n min_samples_leaf=1, min_samples_split=2,\n min_weight_fraction_leaf=0.0, n_estimators=250, n_jobs=1,\n oob_score=False, random_state=45, verbose=0, warm_start=False)\n0.9172625127681308\n precision recall f1-score support\n\n benign 0.93 0.94 0.93 1190\n malicious 0.90 0.88 0.89 768\n\navg / total 0.92 0.92 0.92 1958\n\ncohen kappa score\n0.8258206083396299\n[[1117 73]\n [ 89 679]]\n"
],
[
"from lightgbm import LGBMClassifier",
"_____no_output_____"
],
[
"rdF=LGBMClassifier(n_estimators=250, max_depth=50,random_state=45)\nrdF.fit(X_train,y_train)\npred=rdF.predict(X_test)\ncm=confusion_matrix(y_test, pred)\n\naccuracy = accuracy_score(y_test,pred)\nprint(rdF)\nprint(accuracy)\nprint(classification_report(y_test,pred, labels=None))\nprint(\"cohen kappa score\")\nprint(cohen_kappa_score(y_test, pred))\nprint(cm)",
"LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,\n importance_type='split', learning_rate=0.1, max_depth=50,\n min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,\n n_estimators=250, n_jobs=-1, num_leaves=31, objective=None,\n random_state=45, reg_alpha=0.0, reg_lambda=0.0, silent=True,\n subsample=1.0, subsample_for_bin=200000, subsample_freq=0)\n0.8932584269662921\n precision recall f1-score support\n\n benign 0.91 0.91 0.91 1190\n malicious 0.86 0.87 0.86 768\n\navg / total 0.89 0.89 0.89 1958\n\ncohen kappa score\n0.7763753108008415\n[[1083 107]\n [ 102 666]]\n"
],
[
"import pandas as pd\nfeature_importances = pd.DataFrame(rdF.feature_importances_,index = X_train.columns,columns=['importance']).sort_values('importance',ascending=False)",
"_____no_output_____"
],
[
"feature_importances",
"_____no_output_____"
],
[
"x= feature_importances.index",
"_____no_output_____"
],
[
"y=feature_importances[\"importance\"]",
"_____no_output_____"
],
[
"plt.figure(figsize=(6,4))\nsns.barplot(x=y,y=x)",
"_____no_output_____"
]
],
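[
[
"To reuse the trained classifier outside this notebook, it can be serialized with the `joblib` imported earlier. A hedged sketch (the file name is illustrative, and `rdF` holds the LightGBM model at this point):",
"_____no_output_____"
]
],
[
[
"# Hedged sketch: persist whichever model `rdF` currently holds, then reload and re-score it\njoblib.dump(rdF, 'android_traffic_model.pkl')\nloaded = joblib.load('android_traffic_model.pkl')\nprint(loaded.score(X_test, y_test))",
"_____no_output_____"
]
],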
[
[
"# Refrence",
"_____no_output_____"
],
[
"<ul>\n\n<li>Christian Camilo Urcuqui López(https://github.com/urcuqui/)</li>\n<li>Android Genome Project (MalGenome)</li>\n<li>Data Set Source (https://www.kaggle.com/xwolf12/datasetandroidpermissions , https://www.kaggle.com/xwolf12/network-traffic-android-malware)</li>\n<li> [1] López, U., Camilo, C., García Peña, M., Osorio Quintero, J. L., & Navarro Cadavid, A. (2018). Ciberseguridad: un enfoque desde la ciencia de datos-Primera edición.</li>\n<li>[2] Navarro Cadavid, A., Londoño, S., Urcuqui López, C. C., & Gomez, J. (2014, June). Análisis y caracterización de frameworks para detección de aplicaciones maliciosas en Android. In Conference: XIV Jornada Internacional de Seguridad Informática ACIS-2014 (Vol. 14). ResearchGate.</li>\n<li>[3] Urcuqui-López, C., & Cadavid, A. N. (2016). Framework for malware analysis in Android.</li>\n<li>[4] Urcuqui, C., Navarro, A., Osorio, J., & Garcıa, M. (2017). Machine Learning Classifiers to Detect Malicious Websites. CEUR Workshop Proceedings. Vol 1950, 14-17.</li>\n<li>[5] López, C. C. U., Villarreal, J. S. D., Belalcazar, A. F. P., Cadavid, A. N., & Cely, J. G. D. (2018, May). Features to Detect Android Malware. In 2018 IEEE Colombian Conference on Communications and Computing (COLCOM) (pp. 1-6). IEEE</li>\n\n</ul>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d00d71bcb437b9a70dea61abc361c4d12520b89c | 146,021 | ipynb | Jupyter Notebook | salary-data.ipynb | JCode1986/data_analysis | 8f298369d4f89b15d13010d701e2e9da5c1415bd | [
"MIT"
] | null | null | null | salary-data.ipynb | JCode1986/data_analysis | 8f298369d4f89b15d13010d701e2e9da5c1415bd | [
"MIT"
] | null | null | null | salary-data.ipynb | JCode1986/data_analysis | 8f298369d4f89b15d13010d701e2e9da5c1415bd | [
"MIT"
] | null | null | null | 666.762557 | 64,716 | 0.809644 | [
[
[
"# Salary Data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\nimport seaborn as sns",
"_____no_output_____"
],
[
"salary = pd.read_csv(\"Salary_Data.csv\")\nsalary.head()",
"_____no_output_____"
],
[
"salary.info()\nsalary.describe()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 30 entries, 0 to 29\nData columns (total 2 columns):\nYearsExperience 30 non-null float64\nSalary 30 non-null float64\ndtypes: float64(2)\nmemory usage: 608.0 bytes\n"
],
[
"X = salary['YearsExperience'].values\ny = salary['Salary'].values \ny",
"_____no_output_____"
],
[
"minimum = salary['Salary'].min()\nmiddle = salary['Salary'].median()\nmaximum = salary['Salary'].max()\nprint(middle)\nprint(minimum)\nprint(maximum)",
"65237.0\n37731.0\n122391.0\n"
],
[
"\nX=X.reshape(-1,1)\ny=y.reshape(-1,1)",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test = train_test_split(X,y,train_size=0.8,test_size=0.2,random_state=100)\nprint(f\"X_train shape {x_train.shape}\")\nprint(f\"y_train shape {y_train.shape}\")\nprint(f\"X_test shape {x_test.shape}\")\nprint(f\"y_test shape {y_test.shape}\")\nprint(y_test)\nprint(x_test)",
"X_train shape (24, 1)\ny_train shape (24, 1)\nX_test shape (6, 1)\ny_test shape (6, 1)\n[[ 57189.]\n [116969.]\n [122391.]\n [ 57081.]\n [ 56642.]\n [ 56957.]]\n[[ 3.7]\n [ 9.5]\n [10.3]\n [ 4.1]\n [ 2.9]\n [ 4. ]]\n"
],
[
"%matplotlib inline\n\nplt.scatter(x_train,y_train,color='red')\nplt.xlabel('Year of Experience')\nplt.ylabel('Salary')\nplt.title('Salary Data')\nplt.show()",
"_____no_output_____"
],
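[
"# Hedged sketch (an added step, names are illustrative): fit the LinearRegression\n# imported above on the training split and report the test R^2 score.\nregressor = LinearRegression()\nregressor.fit(x_train, y_train)\nprint('coefficient:', regressor.coef_, 'intercept:', regressor.intercept_)\nprint('test R^2:', regressor.score(x_test, y_test))",
"_____no_output_____"
],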
[
"sns.set()\nplt.figure(figsize=(12,6),dpi=100)\nsns.regplot(x='YearsExperience', y='Salary', data=salary, order=1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00d7b2ea851ea1d08917c83d9b2d22cba45b26b | 57,025 | ipynb | Jupyter Notebook | 06.LinearRegresssionCSV.ipynb | sarincr/Data-Analytics-with-R | cc76c9c815987c40211e2bc6f7594c8500100291 | [
"Apache-2.0"
] | null | null | null | 06.LinearRegresssionCSV.ipynb | sarincr/Data-Analytics-with-R | cc76c9c815987c40211e2bc6f7594c8500100291 | [
"Apache-2.0"
] | null | null | null | 06.LinearRegresssionCSV.ipynb | sarincr/Data-Analytics-with-R | cc76c9c815987c40211e2bc6f7594c8500100291 | [
"Apache-2.0"
] | null | null | null | 222.753906 | 49,614 | 0.874511 | [
[
[
"dataset = read.csv('Data.csv')",
"_____no_output_____"
],
[
"dataset",
"_____no_output_____"
],
[
"regressor = lm(formula = Salary ~ YearsExperience,\n data = dataset)",
"_____no_output_____"
],
[
"y_pred = predict(regressor, newdata = dataset)",
"_____no_output_____"
],
[
"library(ggplot2)\nggplot() +\n geom_point(aes(x = dataset$YearsExperience, y = dataset$Salary),\n colour = 'red') +\n geom_line(aes(x = dataset$YearsExperience, y = predict(regressor, newdata = dataset)),\n colour = 'blue') +\n ggtitle('Salary vs Experience (Training set)') +\n xlab('Years of experience') +\n ylab('Salary')\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00d7d05c33a0b75bc5ca1f82f750cc414a7ea07 | 173,591 | ipynb | Jupyter Notebook | 1_1_Image_Representation/6_2. Standardizing the Data.ipynb | georgiagn/CVND_Exercises | 4de186c80d14ed7d1e61c6bc51098ad0d9b4c54b | [
"MIT"
] | 1 | 2020-11-16T20:18:21.000Z | 2020-11-16T20:18:21.000Z | 1_1_Image_Representation/6_2. Standardizing the Data.ipynb | georgiagn/CVND_Exercises | 4de186c80d14ed7d1e61c6bc51098ad0d9b4c54b | [
"MIT"
] | null | null | null | 1_1_Image_Representation/6_2. Standardizing the Data.ipynb | georgiagn/CVND_Exercises | 4de186c80d14ed7d1e61c6bc51098ad0d9b4c54b | [
"MIT"
] | null | null | null | 516.639881 | 82,016 | 0.943684 | [
[
[
"# Day and Night Image Classifier\n---\n\nThe day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.\n\nWe'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!\n\n*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).*\n",
"_____no_output_____"
],
[
"### Import resources\n\nBefore you get started on the project code, import the libraries and resources that you'll need.",
"_____no_output_____"
]
],
[
[
"import cv2 # computer vision library\nimport helpers\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Training and Testing Data\nThe 200 day/night images are separated into training and testing datasets. \n\n* 60% of these images are training images, for you to use as you create a classifier.\n* 40% are test images, which will be used to test the accuracy of your classifier.\n\nFirst, we set some variables to keep track of some where our images are stored:\n\n image_dir_training: the directory where our training image data is stored\n image_dir_test: the directory where our test image data is stored",
"_____no_output_____"
]
],
[
[
"# Image data directories\nimage_dir_training = \"day_night_images/training/\"\nimage_dir_test = \"day_night_images/test/\"",
"_____no_output_____"
]
],
[
[
"## Load the datasets\n\nThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label (\"day\" or \"night\"). \n\nFor example, the first image-label pair in `IMAGE_LIST` can be accessed by index: \n``` IMAGE_LIST[0][:]```.\n",
"_____no_output_____"
]
],
[
[
"# Using the load_dataset function in helpers.py\n# Load training data\nIMAGE_LIST = helpers.load_dataset(image_dir_training)\n",
"_____no_output_____"
]
],
[
[
"---\n# 1. Visualize the input images\n",
"_____no_output_____"
]
],
[
[
"# Print out 1. The shape of the image and 2. The image's label\n\n# Select an image and its label by list index\nimage_index = 0\nselected_image = IMAGE_LIST[image_index][0]\nselected_label = IMAGE_LIST[image_index][1]\n\n# Display image and data about it\nplt.imshow(selected_image)\nprint(\"Shape: \"+str(selected_image.shape))\nprint(\"Label: \" + str(selected_label))\n",
"Shape: (458, 800, 3)\nLabel: day\n"
]
],
[
[
"# 2. Pre-process the Data\n\nAfter loading in each image, you have to standardize the input and output. \n\n#### Solution code\n\nYou are encouraged to try to complete this code on your own, but if you are struggling or want to make sure your code is correct, there i solution code in the `helpers.py` file in this directory. You can look at that python file to see complete `standardize_input` and `encode` function code. For this day and night challenge, you can often jump one notebook ahead to see the solution code for a previous notebook!\n",
"_____no_output_____"
],
[
"---\n### Input\n\nIt's important to make all your images the same size so that they can be sent through the same pipeline of classification steps! Every input image should be in the same format, of the same size, and so on.\n\n#### TODO: Standardize the input images\n\n* Resize each image to the desired input size: 600x1100px (hxw).",
"_____no_output_____"
]
],
[
[
"# This function should take in an RGB image and return a new, standardized version\ndef standardize_input(image):\n \n ## TODO: Resize image so that all \"standard\" images are the same size 600x1100 (hxw) \n standard_im = image[0:600, 0:1100]\n \n return standard_im\n ",
"_____no_output_____"
]
],
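[
[
"A quick way to verify the standardization is to check the output shape; a minimal sketch using the image selected earlier:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: after resizing, every image should come out as (600, 1100, 3)\nprint(standardize_input(selected_image).shape)",
"_____no_output_____"
]
],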
[
[
"### TODO: Standardize the output\n\nWith each loaded image, you also need to specify the expected output. For this, use binary numerical values 0/1 = night/day.",
"_____no_output_____"
]
],
[
[
"# Examples: \n# encode(\"day\") should return: 1\n# encode(\"night\") should return: 0\n\ndef encode(label):\n \n numerical_val = 0\n ## TODO: complete the code to produce a numerical label\n if label == \"day\":\n numerical_val = 1\n return numerical_val\n",
"_____no_output_____"
]
],
[
[
"## Construct a `STANDARDIZED_LIST` of input images and output labels.\n\nThis function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.\n\nThis uses the functions you defined above to standardize the input and output, so those functions must be complete for this standardization to work!\n",
"_____no_output_____"
]
],
[
[
"def standardize(image_list):\n \n # Empty image data array\n standard_list = []\n\n # Iterate through all the image-label pairs\n for item in image_list:\n image = item[0]\n label = item[1]\n\n # Standardize the image\n standardized_im = standardize_input(image)\n\n # Create a numerical label\n binary_label = encode(label) \n\n # Append the image, and it's one hot encoded label to the full, processed list of image data \n standard_list.append((standardized_im, binary_label))\n \n return standard_list\n\n# Standardize all training images\nSTANDARDIZED_LIST = standardize(IMAGE_LIST)",
"_____no_output_____"
]
],
[
[
"## Visualize the standardized data\n\nDisplay a standardized image from STANDARDIZED_LIST.",
"_____no_output_____"
]
],
[
[
"# Display a standardized image and its label\n\n# Select an image by index\nimage_num = 0\nselected_image = STANDARDIZED_LIST[image_num][0]\nselected_label = STANDARDIZED_LIST[image_num][1]\n\n# Display image and data about it\n## TODO: Make sure the images have numerical labels and are of the same size\nplt.imshow(selected_image)\nprint(\"Shape: \"+str(selected_image.shape))\nprint(\"Label [1 = day, 0 = night]: \" + str(selected_label))\n",
"Shape: (458, 800, 3)\nLabel [1 = day, 0 = night]: 1\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00d877c7070a3d92bae577b1118de76df516a2d | 718,398 | ipynb | Jupyter Notebook | optimize.ipynb | QSCTech-Sange/Optimization_Project | a3e1382f8dc4ff8ca6838c0be88f0d65157d5a58 | [
"Apache-2.0"
] | null | null | null | optimize.ipynb | QSCTech-Sange/Optimization_Project | a3e1382f8dc4ff8ca6838c0be88f0d65157d5a58 | [
"Apache-2.0"
] | null | null | null | optimize.ipynb | QSCTech-Sange/Optimization_Project | a3e1382f8dc4ff8ca6838c0be88f0d65157d5a58 | [
"Apache-2.0"
] | null | null | null | 49.050799 | 35,642 | 0.689788 | [
[
[
"from plot import *\nfrom gen import *\n# from load_data import * \nfrom func_tools import *\nfrom AGM import *\nfrom GM import *\nfrom BFGS import *\nfrom LBFGS import *\nfrom sklearn import metrics\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"def purity_score(y_true, y_pred):\n contingency_matrix = metrics.cluster.contingency_matrix(y_true, y_pred)\n return np.sum(np.amax(contingency_matrix, axis=0)) / np.sum(contingency_matrix)",
"_____no_output_____"
]
],
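[
[
"A quick sanity check for `purity_score`: a clustering whose labels are a consistent permutation of the truth should score 1.0. A minimal usage sketch:",
"_____no_output_____"
]
],
[
[
"# Minimal sketch: permuted-but-consistent labels give purity 1.0\nprint(purity_score(np.array([0, 0, 1, 1]), np.array([1, 1, 0, 0])))",
"_____no_output_____"
]
],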
[
[
"### 生成随机数",
"_____no_output_____"
]
],
[
[
"# 这里使用默认的参数,按照均匀分布的中心点\n# TODO: task 上说可以尝试有趣的pattern,我们可以手动给定centroid再生成周围点,详见 gen.py 的文档\ncentroids, points, N = gen_data() \n",
"_____no_output_____"
],
[
"y_true = np.repeat(np.arange(len(N)),N)\nlen(y_true)",
"_____no_output_____"
],
[
"len(points)",
"_____no_output_____"
],
[
"# 简单画个图\nplt.figure(figsize=(10,10))\nplot_generated_data(centroids, points, N)",
"_____no_output_____"
],
],
[
[
"## AGM Sample",
"_____no_output_____"
]
],
[
[
"lbd = 0.05\ndelta = 1e-3\nn = len(points)\nstep = step_size(n,lbd,delta)\ngrad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D)",
"_____no_output_____"
],
[
"ans,AGM_loss = AGM(grad,points,step,0.005)",
"Iter: 1\nnorm(gX): 25.37348552958234\nIter: 2\nnorm(gX): 25.367593164300065\nIter: 3\nnorm(gX): 25.36004233971449\nIter: 4\nnorm(gX): 25.350876433742965\nIter: 5\nnorm(gX): 25.34012269592468\nIter: 6\nnorm(gX): 25.327800358088258\nIter: 7\nnorm(gX): 25.31392405851996\nIter: 8\nnorm(gX): 25.29850556719863\nIter: 9\nnorm(gX): 25.28155475835447\nIter: 10\nnorm(gX): 25.263080203617797\nIter: 11\nnorm(gX): 25.243089555612656\nIter: 12\nnorm(gX): 25.221589807585065\nIter: 13\nnorm(gX): 25.19858747565082\nIter: 14\nnorm(gX): 25.174088730602733\nIter: 15\nnorm(gX): 25.148099495640665\nIter: 16\nnorm(gX): 25.120625520375963\nIter: 17\nnorm(gX): 25.091672437887357\nIter: 18\nnorm(gX): 25.061245809397192\nIter: 19\nnorm(gX): 25.02935115972775\nIter: 20\nnorm(gX): 24.995994005771692\nIter: 21\nnorm(gX): 24.96117987958791\nIter: 22\nnorm(gX): 24.924914347305055\nIter: 23\nnorm(gX): 24.8872030247142\nIter: 24\nnorm(gX): 24.84805159021705\nIter: 25\nnorm(gX): 24.80746579564002\nIter: 26\nnorm(gX): 24.765451475309657\nIter: 27\nnorm(gX): 24.722014553699022\nIter: 28\nnorm(gX): 24.677161051890018\nIter: 29\nnorm(gX): 24.630897093047015\nIter: 30\nnorm(gX): 24.5832289070592\nIter: 31\nnorm(gX): 24.53416283447914\nIter: 32\nnorm(gX): 24.483705329861735\nIter: 33\nnorm(gX): 24.431862964589488\nIter: 34\nnorm(gX): 24.378642429254782\nIter: 35\nnorm(gX): 24.324050535658525\nIter: 36\nnorm(gX): 24.268094218474406\nIter: 37\nnorm(gX): 24.21078053662063\nIter: 38\nnorm(gX): 24.15211667437418\nIter: 39\nnorm(gX): 24.092109942257753\nIter: 40\nnorm(gX): 24.030767777724794\nIter: 41\nnorm(gX): 23.968097745664732\nIter: 42\nnorm(gX): 23.904107538747166\nIter: 43\nnorm(gX): 23.838804977621482\nIter: 44\nnorm(gX): 23.772198010986003\nIter: 45\nnorm(gX): 23.70429471553905\nIter: 46\nnorm(gX): 23.635103295822738\nIter: 47\nnorm(gX): 23.564632083969002\nIter: 48\nnorm(gX): 23.49288953935614\nIter: 49\nnorm(gX): 23.419884248183283\nIter: 50\nnorm(gX): 23.345624922969375\nIter: 51\nnorm(gX): 23.270120401982314\nIter: 52\nnorm(gX): 23.193379648603614\nIter: 53\nnorm(gX): 23.115411750633108\nIter: 54\nnorm(gX): 23.0362259195378\nIter: 55\nnorm(gX): 22.955831489648755\nIter: 56\nnorm(gX): 22.87423791730921\nIter: 57\nnorm(gX): 22.79145477997714\nIter: 58\nnorm(gX): 22.707491775284844\nIter: 59\nnorm(gX): 22.622358720058266\nIter: 60\nnorm(gX): 22.536065549298193\nIter: 61\nnorm(gX): 22.44862231512554\nIter: 62\nnorm(gX): 22.360039185692585\nIter: 63\nnorm(gX): 22.270326444061972\nIter: 64\nnorm(gX): 22.179494487055116\nIter: 65\nnorm(gX): 22.087553824071517\nIter: 66\nnorm(gX): 21.994515075880408\nIter: 67\nnorm(gX): 21.900388973386082\nIter: 68\nnorm(gX): 21.805186356368072\nIter: 69\nnorm(gX): 21.708918172197397\nIter: 70\nnorm(gX): 21.6115954745299\nIter: 71\nnorm(gX): 21.513229421977773\nIter: 72\nnorm(gX): 21.413831276760174\nIter: 73\nnorm(gX): 21.313412403333896\nIter: 74\nnorm(gX): 21.21198426700493\nIter: 75\nnorm(gX): 21.10955843252179\nIter: 76\nnorm(gX): 21.006146562651384\nIter: 77\nnorm(gX): 20.90176041673818\nIter: 78\nnorm(gX): 20.796411849247438\nIter: 79\nnorm(gX): 20.690112808293218\nIter: 80\nnorm(gX): 20.582875334151815\nIter: 81\nnorm(gX): 20.47471155776136\nIter: 82\nnorm(gX): 20.365633699208182\nIter: 83\nnorm(gX): 20.255654066200602\nIter: 84\nnorm(gX): 20.14478505253083\nIter: 85\nnorm(gX): 20.03303913652547\nIter: 86\nnorm(gX): 19.920428879485396\nIter: 87\nnorm(gX): 19.8069669241155\nIter: 88\nnorm(gX): 19.69266599294497\nIter: 89\nnorm(gX): 19.57753888673875\nIter: 90\nnorm(gX): 
19.461598482900715\n[... iterations 91 through 439 omitted for brevity ...]\nIter: 440\nnorm(gX): 
0.7323078664639256\nIter: 441\nnorm(gX): 0.7067333369613704\nIter: 442\nnorm(gX): 0.6818886949882565\nIter: 443\nnorm(gX): 0.6578444908334873\nIter: 444\nnorm(gX): 0.6346765984946418\nIter: 445\nnorm(gX): 0.612466429235538\nIter: 446\nnorm(gX): 0.5913011143951219\nIter: 447\nnorm(gX): 0.5712735534135949\nIter: 448\nnorm(gX): 0.5524821851874618\nIter: 449\nnorm(gX): 0.5350303121847085\nIter: 450\nnorm(gX): 0.5190247962147826\nIter: 451\nnorm(gX): 0.5045739629845305\nIter: 452\nnorm(gX): 0.49178461020312625\nIter: 453\nnorm(gX): 0.4807581180800726\nIter: 454\nnorm(gX): 0.4715858095891498\nIter: 455\nnorm(gX): 0.4643438837851973\nIter: 456\nnorm(gX): 0.45908841350112706\nIter: 457\nnorm(gX): 0.45585100971227893\nIter: 458\nnorm(gX): 0.4546357599346709\nIter: 459\nnorm(gX): 0.4554179203026051\nIter: 460\nnorm(gX): 0.4581445938841021\nIter: 461\nnorm(gX): 0.4627373180297419\nIter: 462\nnorm(gX): 0.4690961926902338\nIter: 463\nnorm(gX): 0.4771049841612495\nIter: 464\nnorm(gX): 0.48663657368959123\nIter: 465\nnorm(gX): 0.4975581816004208\nIter: 466\nnorm(gX): 0.5097359445293984\nIter: 467\nnorm(gX): 0.5230386030340266\nIter: 468\nnorm(gX): 0.5373402240788347\nIter: 469\nnorm(gX): 0.5525220105069666\nIter: 470\nnorm(gX): 0.5684733290179496\nIter: 471\nnorm(gX): 0.5850921238535784\nIter: 472\nnorm(gX): 0.6022848862240645\nIter: 473\nnorm(gX): 0.6199663315388224\nIter: 474\nnorm(gX): 0.6380589080507834\nIter: 475\nnorm(gX): 0.656492229104206\nIter: 476\nnorm(gX): 0.6752024916063708\nIter: 477\nnorm(gX): 0.6941319182737052\nIter: 478\nnorm(gX): 0.7132282417464673\nIter: 479\nnorm(gX): 0.732444234930459\nIter: 480\nnorm(gX): 0.7517372834017245\nIter: 481\nnorm(gX): 0.7710689915486533\nIter: 482\nnorm(gX): 0.7904048133127467\nIter: 483\nnorm(gX): 0.8097136998792988\nIter: 484\nnorm(gX): 0.8289677594768358\nIter: 485\nnorm(gX): 0.8481419277087806\nIter: 486\nnorm(gX): 0.8672136498749997\nIter: 487\nnorm(gX): 0.886162579054804\nIter: 488\nnorm(gX): 0.9049702950342889\nIter: 489\nnorm(gX): 0.9236200493834676\nIter: 490\nnorm(gX): 0.9420965412063826\nIter: 491\nnorm(gX): 0.960385726515184\nIter: 492\nnorm(gX): 0.9784746621155108\nIter: 493\nnorm(gX): 0.9963513826672845\nIter: 494\nnorm(gX): 1.014004807519783\nIter: 495\nnorm(gX): 1.0314246722778806\nIter: 496\nnorm(gX): 1.0486014790194722\nIter: 497\nnorm(gX): 1.0655264587398317\nIter: 498\nnorm(gX): 1.0821915399356774\nIter: 499\nnorm(gX): 1.0985893181640982\nIter: 500\nnorm(gX): 1.1147130227581685\nIter: 501\nnorm(gX): 1.130556478453017\nIter: 502\nnorm(gX): 1.1461140612660472\nIter: 503\nnorm(gX): 1.1613806493927459\nIter: 504\nnorm(gX): 1.1763515709759282\nIter: 505\nnorm(gX): 1.191022551285944\nIter: 506\nnorm(gX): 1.2053896620788864\nIter: 507\nnorm(gX): 1.2194492757046267\nIter: 508\nnorm(gX): 1.2331980259921802\nIter: 509\nnorm(gX): 1.246632777158091\nIter: 510\nnorm(gX): 1.2597506010933839\nIter: 511\nnorm(gX): 1.2725487625156975\nIter: 512\nnorm(gX): 1.285024710738315\nIter: 513\nnorm(gX): 1.297176076290468\nIter: 514\nnorm(gX): 1.3090006703706758\nIter: 515\nnorm(gX): 1.3204964851342942\nIter: 516\nnorm(gX): 1.3316616930796026\nIter: 517\nnorm(gX): 1.3424946442462529\nIter: 518\nnorm(gX): 1.3529938604995682\nIter: 519\nnorm(gX): 1.3631580267615138\nIter: 520\nnorm(gX): 1.3729859795851382\nIter: 521\nnorm(gX): 1.38247669389061\nIter: 522\nnorm(gX): 1.3916292689437568\nIter: 523\nnorm(gX): 1.400442914744833\nIter: 524\nnorm(gX): 1.408916939912145\nIter: 525\nnorm(gX): 1.4170507419203868\nIter: 526\nnorm(gX): 1.424843800231063\nIter: 527\nnorm(gX): 
1.4322956724845497\nIter: 528\nnorm(gX): 1.4394059935637875\nIter: 529\nnorm(gX): 1.4461744770354854\nIter: 530\nnorm(gX): 1.4526009182613406\nIter: 531\nnorm(gX): 1.4586851983696532\nIter: 532\nnorm(gX): 1.4644272882896898\nIter: 533\nnorm(gX): 1.469827252165641\nIter: 534\nnorm(gX): 1.4748852496591842\nIter: 535\nnorm(gX): 1.4796015368866187\nIter: 536\nnorm(gX): 1.4839764659826702\nIter: 537\nnorm(gX): 1.488010483504148\nIter: 538\nnorm(gX): 1.4917041280553063\nIter: 539\nnorm(gX): 1.4950580276143908\nIter: 540\nnorm(gX): 1.498072897060585\nIter: 541\nnorm(gX): 1.500749536345722\nIter: 542\nnorm(gX): 1.503088829639525\nIter: 543\nnorm(gX): 1.5050917456207502\nIter: 544\nnorm(gX): 1.5067593389140426\nIter: 545\nnorm(gX): 1.5080927525076055\nIter: 546\nnorm(gX): 1.5090932208516084\nIter: 547\nnorm(gX): 1.5097620732473005\nIter: 548\nnorm(gX): 1.5101007371008617\nIter: 549\nnorm(gX): 1.5101107406346186\nIter: 550\nnorm(gX): 1.5097937147155114\nIter: 551\nnorm(gX): 1.5091513935638525\nIter: 552\nnorm(gX): 1.5081856142289987\nIter: 553\nnorm(gX): 1.5068983148451518\nIter: 554\nnorm(gX): 1.5052915317939624\nIter: 555\nnorm(gX): 1.5033673959881435\nIter: 556\nnorm(gX): 1.5011281285433924\nIter: 557\nnorm(gX): 1.4985760361215696\nIter: 558\nnorm(gX): 1.4957135062087985\nIter: 559\nnorm(gX): 1.4925430025443396\nIter: 560\nnorm(gX): 1.4890670608500634\nIter: 561\nnorm(gX): 1.4852882849375113\nIter: 562\nnorm(gX): 1.4812093432011044\nIter: 563\nnorm(gX): 1.4768329654517784\nIter: 564\nnorm(gX): 1.4721619400115507\nIter: 565\nnorm(gX): 1.4671991109793843\nIter: 566\nnorm(gX): 1.461947375591184\nIter: 567\nnorm(gX): 1.4564096816276075\nIter: 568\nnorm(gX): 1.450589024865914\nIter: 569\nnorm(gX): 1.4444884466181847\nIter: 570\nnorm(gX): 1.4381110314394911\nIter: 571\nnorm(gX): 1.4314599051191983\nIter: 572\nnorm(gX): 1.4245382330810787\nIter: 573\nnorm(gX): 1.4173492193111479\nIter: 574\nnorm(gX): 1.4098961059064394\nIter: 575\nnorm(gX): 1.4021821732964312\nIter: 576\nnorm(gX): 1.3942107411369806\nIter: 577\nnorm(gX): 1.3859851698208594\nIter: 578\nnorm(gX): 1.3775088624964411\nIter: 579\nnorm(gX): 1.3687852674433596\nIter: 580\nnorm(gX): 1.3598178806260353\nIter: 581\nnorm(gX): 1.3506102482361777\nIter: 582\nnorm(gX): 1.3411659690445081\nIter: 583\nnorm(gX): 1.3314886964087183\nIter: 584\nnorm(gX): 1.3215821398255287\nIter: 585\nnorm(gX): 1.3114500659647945\nIter: 586\nnorm(gX): 1.30109629917702\nIter: 587\nnorm(gX): 1.2905247215166697\nIter: 588\nnorm(gX): 1.2797392723668672\nIter: 589\nnorm(gX): 1.2687439477824842\nIter: 590\nnorm(gX): 1.2575427996859176\nIter: 591\nnorm(gX): 1.2461399350522007\nIter: 592\nnorm(gX): 1.234539515208925\nIter: 593\nnorm(gX): 1.222745755354167\nIter: 594\nnorm(gX): 1.2107629243661786\nIter: 595\nnorm(gX): 1.1985953449460103\nIter: 596\nnorm(gX): 1.1862473941029967\nIter: 597\nnorm(gX): 1.1737235039666256\nIter: 598\nnorm(gX): 1.1610281628894652\nIter: 599\nnorm(gX): 1.1481659167963052\nIter: 600\nnorm(gX): 1.1351413707342435\nIter: 601\nnorm(gX): 1.1219591905866058\nIter: 602\nnorm(gX): 1.108624104927918\nIter: 603\nnorm(gX): 1.0951409070154492\nIter: 604\nnorm(gX): 1.0815144569318198\nIter: 605\nnorm(gX): 1.0677496839106675\nIter: 606\nnorm(gX): 1.0538515888907976\nIter: 607\nnorm(gX): 1.0398252473525182\nIter: 608\nnorm(gX): 1.0256758124924525\nIter: 609\nnorm(gX): 1.0114085187904807\nIter: 610\nnorm(gX): 0.9970286860158474\nIter: 611\nnorm(gX): 0.9825417237108858\nIter: 612\nnorm(gX): 0.9679531361821849\nIter: 613\nnorm(gX): 0.9532685280226975\nIter: 614\nnorm(gX): 
0.9384936101861459\nIter: 615\nnorm(gX): 0.9236342066383251\nIter: 616\nnorm(gX): 0.9086962616194304\nIter: 617\nnorm(gX): 0.8936858475671459\nIter: 618\nnorm(gX): 0.8786091737712999\nIter: 619\nnorm(gX): 0.8634725958561407\nIter: 620\nnorm(gX): 0.8482826262140287\nIter: 621\nnorm(gX): 0.8330459455428421\nIter: 622\nnorm(gX): 0.8177694156670676\nIter: 623\nnorm(gX): 0.8024600938478205\nIter: 624\nnorm(gX): 0.7871252488092252\nIter: 625\nnorm(gX): 0.7717723787271952\nIter: 626\nnorm(gX): 0.7564092314418644\nIter: 627\nnorm(gX): 0.7410438271674676\nIter: 628\nnorm(gX): 0.7256844839841853\nIter: 629\nnorm(gX): 0.7103398464066846\nIter: 630\nnorm(gX): 0.6950189173347353\nIter: 631\nnorm(gX): 0.6797310937030667\nIter: 632\nnorm(gX): 0.6644862061611907\nIter: 633\nnorm(gX): 0.6492945631280627\nIter: 634\nnorm(gX): 0.6341669995807131\nIter: 635\nnorm(gX): 0.6191149309469263\nIter: 636\nnorm(gX): 0.604150412476767\nIter: 637\nnorm(gX): 0.5892862044602067\nIter: 638\nnorm(gX): 0.5745358436317609\nIter: 639\nnorm(gX): 0.5599137210485033\nIter: 640\nnorm(gX): 0.5454351666328442\nIter: 641\nnorm(gX): 0.531116540421561\nIter: 642\nnorm(gX): 0.5169753303380739\nIter: 643\nnorm(gX): 0.5030302559824793\nIter: 644\nnorm(gX): 0.4893013774845547\nIter: 645\nnorm(gX): 0.47581020785447153\nIter: 646\nnorm(gX): 0.46257982645535584\nIter: 647\nnorm(gX): 0.44963499016933745\nIter: 648\nnorm(gX): 0.43700223749343814\nIter: 649\nnorm(gX): 0.42470997915152986\nIter: 650\nnorm(gX): 0.4127885668314129\nIter: 651\nnorm(gX): 0.40127032937725987\nIter: 652\nnorm(gX): 0.39018956327423376\nIter: 653\nnorm(gX): 0.37958246173252114\nIter: 654\nnorm(gX): 0.3694869644164028\nIter: 655\nnorm(gX): 0.35994250832976227\nIter: 656\nnorm(gX): 0.3509896601912397\nIter: 657\nnorm(gX): 0.3426696125867922\nIter: 658\nnorm(gX): 0.33502353111494826\nIter: 659\nnorm(gX): 0.3280917483800881\nIter: 660\nnorm(gX): 0.32191281341891276\nIter: 661\nnorm(gX): 0.3165224216566564\nIter: 662\nnorm(gX): 0.3119522694751157\nIter: 663\nnorm(gX): 0.3082288964656084\nIter: 664\nnorm(gX): 0.30537259395729577\nIter: 665\nnorm(gX): 0.30339646654738694\nIter: 666\nnorm(gX): 0.3023057307660787\nIter: 667\nnorm(gX): 0.302097320048729\nIter: 668\nnorm(gX): 0.3027598388245431\nIter: 669\nnorm(gX): 0.3042738745163471\nIter: 670\nnorm(gX): 0.30661264037456576\nIter: 671\nnorm(gX): 0.3097428906828284\nIter: 672\nnorm(gX): 0.3136260281464548\nIter: 673\nnorm(gX): 0.3182193139309294\nIter: 674\nnorm(gX): 0.32347709370987593\nIter: 675\nnorm(gX): 0.3293519656879203\nIter: 676\nnorm(gX): 0.33579583504715854\nIter: 677\nnorm(gX): 0.342760819646029\nIter: 678\nnorm(gX): 0.3501999908350046\nIter: 679\nnorm(gX): 0.3580679488745106\nIter: 680\nnorm(gX): 0.3663212437596134\nIter: 681\nnorm(gX): 0.37491865931684376\nIter: 682\nnorm(gX): 0.38382138186383036\nIter: 683\nnorm(gX): 0.3929930753896558\nIter: 684\nnorm(gX): 0.4023998840173331\nIter: 685\nnorm(gX): 0.41201038022340736\nIter: 686\nnorm(gX): 0.4217954745076756\nIter: 687\nnorm(gX): 0.43172829934254064\nIter: 688\nnorm(gX): 0.44178407754304855\nIter: 689\nnorm(gX): 0.4519399828242054\nIter: 690\nnorm(gX): 0.4621749983056094\nIter: 691\nnorm(gX): 0.4724697770863129\nIter: 692\nnorm(gX): 0.48280650771584616\nIter: 693\nnorm(gX): 0.49316878638652756\nIter: 694\nnorm(gX): 0.5035414969193136\nIter: 695\nnorm(gX): 0.51391069906358\nIter: 696\nnorm(gX): 0.5242635252386604\nIter: 697\nnorm(gX): 0.5345880855752446\nIter: 698\nnorm(gX): 0.5448733809383354\nIter: 699\nnorm(gX): 0.5551092235061104\nIter: 700\nnorm(gX): 
0.5652861644215098\nIter: 701\nnorm(gX): 0.5753954280111572\nIter: 702\nnorm(gX): 0.5854288520675637\nIter: 703\nnorm(gX): 0.5953788337073803\nIter: 704\nnorm(gX): 0.6052382803442574\nIter: 705\nnorm(gX): 0.6150005653451361\nIter: 706\nnorm(gX): 0.6246594879707421\nIter: 707\nnorm(gX): 0.6342092372321783\nIter: 708\nnorm(gX): 0.6436443593250619\nIter: 709\nnorm(gX): 0.6529597283295402\nIter: 710\nnorm(gX): 0.6621505198889718\nIter: 711\nnorm(gX): 0.6712121876015674\nIter: 712\nnorm(gX): 0.6801404418787707\nIter: 713\nnorm(gX): 0.6889312310413791\nIter: 714\nnorm(gX): 0.6975807244401705\nIter: 715\nnorm(gX): 0.7060852974021177\nIter: 716\nnorm(gX): 0.7144415178169283\nIter: 717\nnorm(gX): 0.7226461341914243\nIter: 718\nnorm(gX): 0.7306960650117128\nIter: 719\nnorm(gX): 0.7385883892652151\nIter: 720\nnorm(gX): 0.7463203379864672\nIter: 721\nnorm(gX): 0.7538892867022451\nIter: 722\nnorm(gX): 0.7612927486628329\nIter: 723\nnorm(gX): 0.7685283687573693\nIter: 724\nnorm(gX): 0.7755939180217613\nIter: 725\nnorm(gX): 0.7824872886578479\nIter: 726\nnorm(gX): 0.7892064894920857\nIter: 727\nnorm(gX): 0.7957496418110245\nIter: 728\nnorm(gX): 0.8021149755192852\nIter: 729\nnorm(gX): 0.808300825573284\nIter: 730\nnorm(gX): 0.8143056286510921\nIter: 731\nnorm(gX): 0.8201279200249699\nIter: 732\nnorm(gX): 0.8257663306087458\nIter: 733\nnorm(gX): 0.8312195841571697\nIter: 734\nnorm(gX): 0.836486494598501\nIter: 735\nnorm(gX): 0.8415659634854085\nIter: 736\nnorm(gX): 0.8464569775521862\nIter: 737\nnorm(gX): 0.8511586063689707\nIter: 738\nnorm(gX): 0.855670000085636\nIter: 739\nnorm(gX): 0.8599903872597771\nIter: 740\nnorm(gX): 0.8641190727644419\nIter: 741\nnorm(gX): 0.8680554357722826\nIter: 742\nnorm(gX): 0.8717989278135655\nIter: 743\nnorm(gX): 0.875349070905872\nIter: 744\nnorm(gX): 0.8787054557538412\nIter: 745\nnorm(gX): 0.8818677400173748\nIter: 746\nnorm(gX): 0.8848356466469752\nIter: 747\nnorm(gX): 0.8876089622848641\nIter: 748\nnorm(gX): 0.8901875357305781\nIter: 749\nnorm(gX): 0.8925712764697027\nIter: 750\nnorm(gX): 0.8947601532643925\nIter: 751\nnorm(gX): 0.8967541928041569\nIter: 752\nnorm(gX): 0.8985534784154698\nIter: 753\nnorm(gX): 0.9001581488285194\nIter: 754\nnorm(gX): 0.9015683969994485\nIter: 755\nnorm(gX): 0.9027844689862621\nIter: 756\nnorm(gX): 0.9038066628766057\nIter: 757\nnorm(gX): 0.9046353277654751\nIter: 758\nnorm(gX): 0.9052708627810112\nIter: 759\nnorm(gX): 0.9057137161564857\nIter: 760\nnorm(gX): 0.9059643843467072\nIter: 761\nnorm(gX): 0.9060234111872524\nIter: 762\nnorm(gX): 0.9058913870950385\nIter: 763\nnorm(gX): 0.9055689483091106\nIter: 764\nnorm(gX): 0.9050567761707871\nIter: 765\nnorm(gX): 0.9043555964426434\nIter: 766\nnorm(gX): 0.9034661786662059\nIter: 767\nnorm(gX): 0.9023893355586401\nIter: 768\nnorm(gX): 0.9011259224490928\nIter: 769\nnorm(gX): 0.8996768367556404\nIter: 770\nnorm(gX): 0.8980430175042311\nIter: 771\nnorm(gX): 0.8962254448911532\nIter: 772\nnorm(gX): 0.8942251398908058\nIter: 773\nnorm(gX): 0.8920431639106351\nIter: 774\nnorm(gX): 0.8896806184951231\nIter: 775\nnorm(gX): 0.8871386450806685\nIter: 776\nnorm(gX): 0.884418424802986\nIter: 777\nnorm(gX): 0.8815211783585437\nIter: 778\nnorm(gX): 0.8784481659211313\nIter: 779\nnorm(gX): 0.875200687114398\nIter: 780\nnorm(gX): 0.8717800810407295\nIter: 781\nnorm(gX): 0.8681877263664576\nIter: 782\nnorm(gX): 0.8644250414628198\nIter: 783\nnorm(gX): 0.8604934846017063\nIter: 784\nnorm(gX): 0.8563945542046505\nIter: 785\nnorm(gX): 0.8521297891430257\nIter: 786\nnorm(gX): 0.8477007690869152\nIter: 
787\nnorm(gX): 0.84310911489965\nIter: 788\nnorm(gX): 0.8383564890744671\nIter: 789\nnorm(gX): 0.8334445962093951\nIter: 790\nnorm(gX): 0.828375183516003\nIter: 791\nnorm(gX): 0.8231500413572937\nIter: 792\nnorm(gX): 0.8177710038098187\nIter: 793\nnorm(gX): 0.8122399492447562\nIter: 794\nnorm(gX): 0.8065588009226524\nIter: 795\nnorm(gX): 0.8007295275964826\nIter: 796\nnorm(gX): 0.7947541441177253\nIter: 797\nnorm(gX): 0.7886347120404408\nIter: 798\nnorm(gX): 0.7823733402185529\nIter: 799\nnorm(gX): 0.7759721853921582\nIter: 800\nnorm(gX): 0.7694334527591512\nIter: 801\nnorm(gX): 0.7627593965293825\nIter: 802\nnorm(gX): 0.7559523204593309\nIter: 803\nnorm(gX): 0.7490145783664622\nIter: 804\nnorm(gX): 0.741948574623591\nIter: 805\nnorm(gX): 0.7347567646349858\nIter: 806\nnorm(gX): 0.7274416552974133\nIter: 807\nnorm(gX): 0.72000580545105\nIter: 808\nnorm(gX): 0.7124518263269574\nIter: 809\nnorm(gX): 0.7047823819997934\nIter: 810\nnorm(gX): 0.6970001898565227\nIter: 811\nnorm(gX): 0.6891080210942045\nIter: 812\nnorm(gX): 0.6811087012622958\nIter: 813\nnorm(gX): 0.6730051108675312\nIter: 814\nnorm(gX): 0.6648001860622144\nIter: 815\nnorm(gX): 0.6564969194396316\nIter: 816\nnorm(gX): 0.6480983609634934\nIter: 817\nnorm(gX): 0.6396076190615517\nIter: 818\nnorm(gX): 0.6310278619170994\nIter: 819\nnorm(gX): 0.6223623189957848\nIter: 820\nnorm(gX): 0.6136142828490746\nIter: 821\nnorm(gX): 0.604787111239925\nIter: 822\nnorm(gX): 0.595884229640603\nIter: 823\nnorm(gX): 0.5869091341573386\nIter: 824\nnorm(gX): 0.5778653949414629\nIter: 825\nnorm(gX): 0.5687566601520029\nIter: 826\nnorm(gX): 0.5595866605404937\nIter: 827\nnorm(gX): 0.5503592147348695\nIter: 828\nnorm(gX): 0.5410782353061006\nIter: 829\nnorm(gX): 0.5317477357085112\nIter: 830\nnorm(gX): 0.5223718381929356\nIter: 831\nnorm(gX): 0.5129547828007752\nIter: 832\nnorm(gX): 0.5035009375571932\nIter: 833\nnorm(gX): 0.49401480999290687\nIter: 834\nnorm(gX): 0.4845010601368415\nIter: 835\nnorm(gX): 0.4749645151361697\nIter: 836\nnorm(gX): 0.46541018567639075\nIter: 837\nnorm(gX): 0.4558432843921158\nIter: 838\nnorm(gX): 0.4462692464792509\nIter: 839\nnorm(gX): 0.43669375274129213\nIter: 840\nnorm(gX): 0.42712275532630745\nIter: 841\nnorm(gX): 0.4175625064366248\nIter: 842\nnorm(gX): 0.4080195903194422\nIter: 843\nnorm(gX): 0.39850095887282533\nIter: 844\nnorm(gX): 0.38901397122580333\nIter: 845\nnorm(gX): 0.37956643767164755\nIter: 846\nnorm(gX): 0.3701666683461909\nIter: 847\nnorm(gX): 0.3608235270436591\nIter: 848\nnorm(gX): 0.3515464905442599\nIter: 849\nnorm(gX): 0.34234571378199485\nIter: 850\nnorm(gX): 0.3332321010957299\nIter: 851\nnorm(gX): 0.32421738366591607\nIter: 852\nnorm(gX): 0.3153142030232413\nIter: 853\nnorm(gX): 0.3065362001980243\nIter: 854\nnorm(gX): 0.2978981096288132\nIter: 855\nnorm(gX): 0.289415856327187\nIter: 856\nnorm(gX): 0.28110665396009094\nIter: 857\nnorm(gX): 0.2729891004148912\nIter: 858\nnorm(gX): 0.2650832660126701\nIter: 859\nnorm(gX): 0.2574107678015589\nIter: 860\nnorm(gX): 0.24999482129165435\nIter: 861\nnorm(gX): 0.2428602586372185\nIter: 862\nnorm(gX): 0.23603349976431925\nIter: 863\nnorm(gX): 0.22954246054146887\nIter: 864\nnorm(gX): 0.22341638021875102\nIter: 865\nnorm(gX): 0.2176855496308754\nIter: 866\nnorm(gX): 0.21238092286975863\nIter: 867\nnorm(gX): 0.20753359919241765\nIter: 868\nnorm(gX): 0.20317416968912155\nIter: 869\nnorm(gX): 0.1993319351927048\nIter: 870\nnorm(gX): 0.19603401782430255\nIter: 871\nnorm(gX): 0.19330440708690402\nIter: 872\nnorm(gX): 0.19116299985449453\nIter: 873\nnorm(gX): 
0.18962470814602223\nIter: 874\nnorm(gX): 0.1886987150080423\nIter: 875\nnorm(gX): 0.18838795377840672\nIter: 876\nnorm(gX): 0.18868886827869183\nIter: 877\nnorm(gX): 0.18959148299646955\nIter: 878\nnorm(gX): 0.19107977798437037\nIter: 879\nnorm(gX): 0.19313232969735109\nIter: 880\nnorm(gX): 0.19572315277433452\nIter: 881\nnorm(gX): 0.1988226631969409\nIter: 882\nnorm(gX): 0.20239868152851334\nIter: 883\nnorm(gX): 0.20641740423368327\nIter: 884\nnorm(gX): 0.21084428762821975\nIter: 885\nnorm(gX): 0.21564480848354955\nIter: 886\nnorm(gX): 0.22078508400578775\nIter: 887\nnorm(gX): 0.22623234938191142\nIter: 888\nnorm(gX): 0.23195530224561856\nIter: 889\nnorm(gX): 0.2379243302528863\nIter: 890\nnorm(gX): 0.24411164113057776\nIter: 891\nnorm(gX): 0.250491315010806\nIter: 892\nnorm(gX): 0.25703929753479354\nIter: 893\nnorm(gX): 0.26373334989977404\nIter: 894\nnorm(gX): 0.2705529693238564\nIter: 895\nnorm(gX): 0.2774792907097541\nIter: 896\nnorm(gX): 0.2844949778286166\nIter: 897\nnorm(gX): 0.29158411022922986\nIter: 898\nnorm(gX): 0.2987320703362603\nIter: 899\nnorm(gX): 0.3059254338167444\nIter: 900\nnorm(gX): 0.31315186522601046\nIter: 901\nnorm(gX): 0.32040002014205365\nIter: 902\nnorm(gX): 0.32765945441081057\nIter: 903\nnorm(gX): 0.33492054070766475\nIter: 904\nnorm(gX): 0.34217439233357194\nIter: 905\nnorm(gX): 0.3494127939750762\nIter: 906\nnorm(gX): 0.35662813904016394\nIter: 907\nnorm(gX): 0.36381337311663964\nIter: 908\nnorm(gX): 0.3709619430708126\nIter: 909\nnorm(gX): 0.3780677513005647\nIter: 910\nnorm(gX): 0.3851251146696353\nIter: 911\nnorm(gX): 0.39212872767320456\nIter: 912\nnorm(gX): 0.39907362941412583\nIter: 913\nnorm(gX): 0.4059551740013501\nIter: 914\nnorm(gX): 0.412769004015268\nIter: 915\nnorm(gX): 0.4195110267171291\nIter: 916\nnorm(gX): 0.4261773927109631\nIter: 917\nnorm(gX): 0.4327644767955547\nIter: 918\nnorm(gX): 0.43926886077111554\nIter: 919\nnorm(gX): 0.44568731798994987\nIter: 920\nnorm(gX): 0.45201679946284706\nIter: 921\nnorm(gX): 0.45825442135323674\nIter: 922\nnorm(gX): 0.4643974537092229\nIter: 923\nnorm(gX): 0.47044331029998937\nIter: 924\nnorm(gX): 0.4763895394375635\nIter: 925\nnorm(gX): 0.4822338156778837\nIter: 926\nnorm(gX): 0.48797393230679004\nIter: 927\nnorm(gX): 0.49360779452669445\nIter: 928\nnorm(gX): 0.4991334132690828\nIter: 929\nnorm(gX): 0.5045488995660128\nIter: 930\nnorm(gX): 0.5098524594212221\nIter: 931\nnorm(gX): 0.5150423891279208\nIter: 932\nnorm(gX): 0.5201170709862117\nIter: 933\nnorm(gX): 0.5250749693783076\nIter: 934\nnorm(gX): 0.5299146271643966\nIter: 935\nnorm(gX): 0.5346346623661942\nIter: 936\nnorm(gX): 0.5392337651090184\nIter: 937\nnorm(gX): 0.5437106947965289\nIter: 938\nnorm(gX): 0.5480642774953576\nIter: 939\nnorm(gX): 0.552293403509517\nIter: 940\nnorm(gX): 0.5563970251269116\nIter: 941\nnorm(gX): 0.5603741545224413\nIter: 942\nnorm(gX): 0.5642238618040252\nIter: 943\nnorm(gX): 0.5679452731897092\nIter: 944\nnorm(gX): 0.5715375693053513\nIter: 945\nnorm(gX): 0.5749999835938446\nIter: 946\nnorm(gX): 0.5783318008278698\nIter: 947\nnorm(gX): 0.581532355719226\nIter: 948\nnorm(gX): 0.5846010316186085\nIter: 949\nnorm(gX): 0.587537259300446\nIter: 950\nnorm(gX): 0.590340515828028\nIter: 951\nnorm(gX): 0.593010323494626\nIter: 952\nnorm(gX): 0.5955462488368086\nIter: 953\nnorm(gX): 0.597947901716386\nIter: 954\nnorm(gX): 0.6002149344678482\nIter: 955\nnorm(gX): 0.6023470411081988\nIter: 956\nnorm(gX): 0.6043439566064344\nIter: 957\nnorm(gX): 0.6062054562099598\nIter: 958\nnorm(gX): 0.6079313548253388\nIter: 959\nnorm(gX): 
0.6095215064509337\nIter: 960\nnorm(gX): 0.6109758036589824\nIter: 961\nnorm(gX): 0.6122941771248291\nIter: 962\nnorm(gX): 0.613476595200965\nIter: 963\nnorm(gX): 0.6145230635337392\nIter: 964\nnorm(gX): 0.6154336247205481\nIter: 965\nnorm(gX): 0.6162083580054855\nIter: 966\nnorm(gX): 0.6168473790114393\nIter: 967\nnorm(gX): 0.61735083950674\nIter: 968\nnorm(gX): 0.6177189272045549\nIter: 969\nnorm(gX): 0.6179518655932712\nIter: 970\nnorm(gX): 0.618049913796207\nIter: 971\nnorm(gX): 0.6180133664590512\nIter: 972\nnorm(gX): 0.6178425536634873\nIter: 973\nnorm(gX): 0.6175378408654693\nIter: 974\nnorm(gX): 0.6170996288566427\nIter: 975\nnorm(gX): 0.6165283537473855\nIter: 976\nnorm(gX): 0.6158244869698998\nIter: 977\nnorm(gX): 0.6149885352997068\nIter: 978\nnorm(gX): 0.6140210408938063\nIter: 979\nnorm(gX): 0.612922581343628\nIter: 980\nnorm(gX): 0.6116937697407697\nIter: 981\nnorm(gX): 0.6103352547533238\nIter: 982\nnorm(gX): 0.6088477207104868\nIter: 983\nnorm(gX): 0.6072318876928836\nIter: 984\nnorm(gX): 0.6054885116259875\nIter: 985\nnorm(gX): 0.6036183843737506\nIter: 986\nnorm(gX): 0.6016223338296289\nIter: 987\nnorm(gX): 0.5995012240019024\nIter: 988\nnorm(gX): 0.5972559550903902\nIter: 989\nnorm(gX): 0.594887463551587\nIter: 990\nnorm(gX): 0.5923967221494084\nIter: 991\nnorm(gX): 0.5897847399889113\nIter: 992\nnorm(gX): 0.5870525625307019\nIter: 993\nnorm(gX): 0.584201271583982\nIter: 994\nnorm(gX): 0.5812319852767186\nIter: 995\nnorm(gX): 0.5781458580018758\nIter: 996\nnorm(gX): 0.5749440803392251\nIter: 997\nnorm(gX): 0.5716278789528639\nIter: 998\nnorm(gX): 0.5681985164652505\nIter: 999\nnorm(gX): 0.5646572913092658\nIter: 1000\nnorm(gX): 0.5610055375605011\nIter: 1001\nnorm(gX): 0.55724462475272\nIter: 1002\nnorm(gX): 0.5533759576801187\nIter: 1003\nnorm(gX): 0.5494009761907725\nIter: 1004\nnorm(gX): 0.5453211549762144\nIter: 1005\nnorm(gX): 0.5411380033628418\nIter: 1006\nnorm(gX): 0.5368530651113088\nIter: 1007\nnorm(gX): 0.5324679182306972\nIter: 1008\nnorm(gX): 0.5279841748146997\nIter: 1009\nnorm(gX): 0.5234034809075093\nIter: 1010\nnorm(gX): 0.5187275164075108\nIter: 1011\nnorm(gX): 0.5139579950172326\nIter: 1012\nnorm(gX): 0.5090966642484493\nIter: 1013\nnorm(gX): 0.5041453054915063\nIter: 1014\nnorm(gX): 0.4991057341584508\nIter: 1015\nnorm(gX): 0.49397979990983315\nIter: 1016\nnorm(gX): 0.48876938697539735\nIter: 1017\nnorm(gX): 0.4834764145794276\nIter: 1018\nnorm(gX): 0.4781028374819113\nIter: 1019\nnorm(gX): 0.4726506466473616\nIter: 1020\nnorm(gX): 0.46712187005382133\nIter: 1021\nnorm(gX): 0.46151857365533105\nIter: 1022\nnorm(gX): 0.4558428625122187\nIter: 1023\nnorm(gX): 0.4500968821046371\nIter: 1024\nnorm(gX): 0.4442828198460847\nIter: 1025\nnorm(gX): 0.4384029068152812\nIter: 1026\nnorm(gX): 0.43245941972643687\nIter: 1027\nnorm(gX): 0.42645468316012347\nIter: 1028\nnorm(gX): 0.4203910720792291\nIter: 1029\nnorm(gX): 0.41427101465725824\nIter: 1030\nnorm(gX): 0.4080969954492814\nIter: 1031\nnorm(gX): 0.4018715589393743\nIter: 1032\nnorm(gX): 0.3955973135023671\nIter: 1033\nnorm(gX): 0.3892769358222835\nIter: 1034\nnorm(gX): 0.3829131758148946\nIter: 1035\nnorm(gX): 0.37650886210767326\nIter: 1036\nnorm(gX): 0.37006690813685406\nIter: 1037\nnorm(gX): 0.36359031892865157\nIter: 1038\nnorm(gX): 0.357082198639831\nIter: 1039\nnorm(gX): 0.3505457589420136\nIter: 1040\nnorm(gX): 0.34398432834416204\nIter: 1041\nnorm(gX): 0.33740136255903863\nIter: 1042\nnorm(gX): 0.3308004560317697\nIter: 1043\nnorm(gX): 0.32418535476219174\nIter: 1044\nnorm(gX): 
0.31755997056734453\nIter: 1045\nnorm(gX): 0.3109283969462195\nIter: 1046\nnorm(gX): 0.3042949267253521\nIter: 1047\nnorm(gX): 0.29766407168102316\nIter: 1048\nnorm(gX): 0.2910405843507786\nIter: 1049\nnorm(gX): 0.28442948226328474\nIter: 1050\nnorm(gX): 0.2778360748297197\nIter: 1051\nnorm(gX): 0.2712659931504377\nIter: 1052\nnorm(gX): 0.26472522299508455\nIter: 1053\nnorm(gX): 0.25822014120956266\nIter: 1054\nnorm(gX): 0.2517575557844598\nIter: 1055\nnorm(gX): 0.245344749781281\nIter: 1056\nnorm(gX): 0.238989529246712\nIter: 1057\nnorm(gX): 0.2327002751411698\nIter: 1058\nnorm(gX): 0.22648599915296969\nIter: 1059\nnorm(gX): 0.22035640304686813\nIter: 1060\nnorm(gX): 0.2143219408858058\nIter: 1061\nnorm(gX): 0.20839388304362647\nIter: 1062\nnorm(gX): 0.20258438036821758\nIter: 1063\nnorm(gX): 0.19690652613160758\nIter: 1064\nnorm(gX): 0.1913744124918435\nIter: 1065\nnorm(gX): 0.18600317707585487\nIter: 1066\nnorm(gX): 0.1808090339768841\nIter: 1067\nnorm(gX): 0.17580928198238446\nIter: 1068\nnorm(gX): 0.17102228129885721\nIter: 1069\nnorm(gX): 0.16646738858495802\nIter: 1070\nnorm(gX): 0.16216483900686415\nIter: 1071\nnorm(gX): 0.15813556366622347\nIter: 1072\nnorm(gX): 0.15440093160110713\nIter: 1073\nnorm(gX): 0.15098240816430236\nIter: 1074\nnorm(gX): 0.14790112643724815\nIter: 1075\nnorm(gX): 0.1451773757275948\nIter: 1076\nnorm(gX): 0.1428300209786879\nIter: 1077\nnorm(gX): 0.14087587829772022\nIter: 1078\nnorm(gX): 0.13932908322634763\nIter: 1079\nnorm(gX): 0.13820049760528955\nIter: 1080\nnorm(gX): 0.13749720542744412\nIter: 1081\nnorm(gX): 0.13722214585224662\nIter: 1082\nnorm(gX): 0.13737392168130486\nIter: 1083\nnorm(gX): 0.13794680494168518\nIter: 1084\nnorm(gX): 0.13893094044957655\nIter: 1085\nnorm(gX): 0.14031272717360718\nIter: 1086\nnorm(gX): 0.14207533982226037\nIter: 1087\nnorm(gX): 0.14419934225857678\nIter: 1088\nnorm(gX): 0.14666334126487912\nIter: 1089\nnorm(gX): 0.14944463318359988\nIter: 1090\nnorm(gX): 0.15251980505900337\nIter: 1091\nnorm(gX): 0.15586526356025107\nIter: 1092\nnorm(gX): 0.15945767681985581\nIter: 1093\nnorm(gX): 0.1632743246601397\nIter: 1094\nnorm(gX): 0.16729336058402083\nIter: 1095\nnorm(gX): 0.1714939941534514\nIter: 1096\nnorm(gX): 0.17585660522779037\nIter: 1097\nnorm(gX): 0.18036280249227027\nIter: 1098\nnorm(gX): 0.18499543834295332\nIter: 1099\nnorm(gX): 0.18973859102619337\nIter: 1100\nnorm(gX): 0.19457752336559156\nIter: 1101\nnorm(gX): 0.19949862573684152\nIter: 1102\nnorm(gX): 0.20448934935375224\nIter: 1103\nnorm(gX): 0.20953813450715625\nIter: 1104\nnorm(gX): 0.21463433719450836\nIter: 1105\nnorm(gX): 0.21976815659656057\nIter: 1106\nnorm(gX): 0.2249305650819688\nIter: 1107\nnorm(gX): 0.2301132418243653\nIter: 1108\nnorm(gX): 0.23530851066932434\nIter: 1109\nnorm(gX): 0.24050928256200954\nIter: 1110\nnorm(gX): 0.24570900261398068\nIter: 1111\nnorm(gX): 0.2509016017280354\nIter: 1112\nnorm(gX): 0.25608145259482407\nIter: 1113\nnorm(gX): 0.2612433298102484\nIter: 1114\nnorm(gX): 0.2663823738269803\nIter: 1115\nnorm(gX): 0.271494058438402\nIter: 1116\nnorm(gX): 0.27657416149248093\nIter: 1117\nnorm(gX): 0.28161873854171765\nIter: 1118\nnorm(gX): 0.28662409914987574\nIter: 1119\nnorm(gX): 0.2915867855943284\nIter: 1120\nnorm(gX): 0.29650355372264875\nIter: 1121\nnorm(gX): 0.30137135574236357\nIter: 1122\nnorm(gX): 0.3061873247428565\nIter: 1123\nnorm(gX): 0.31094876076751343\nIter: 1124\nnorm(gX): 0.31565311827232295\nIter: 1125\nnorm(gX): 0.32029799482380833\nIter: 1126\nnorm(gX): 0.32488112090454896\nIter: 1127\nnorm(gX): 
0.3294003507085279\nIter: 1128\nnorm(gX): 0.33385365382117344\nIter: 1129\nnorm(gX): 0.3382391076903181\nIter: 1130\nnorm(gX): 0.34255489080460105\nIter: 1131\nnorm(gX): 0.346799276504841\nIter: 1132\nnorm(gX): 0.3509706273622222\nIter: 1133\nnorm(gX): 0.3550673900642904\nIter: 1134\nnorm(gX): 0.35908809075628834\nIter: 1135\nnorm(gX): 0.3630313307911754\nIter: 1136\nnorm(gX): 0.3668957828467422\nIter: 1137\nnorm(gX): 0.37068018737287156\nIter: 1138\nnorm(gX): 0.3743833493360345\nIter: 1139\nnorm(gX): 0.37800413523179466\nIter: 1140\nnorm(gX): 0.38154147033924146\nIter: 1141\nnorm(gX): 0.38499433619427775\nIter: 1142\nnorm(gX): 0.3883617682611324\nIter: 1143\nnorm(gX): 0.3916428537838079\nIter: 1144\nnorm(gX): 0.3948367298012423\nIter: 1145\nnorm(gX): 0.3979425813116725\nIter: 1146\nnorm(gX): 0.4009596395733428\nIter: 1147\nnorm(gX): 0.40388718053010064\nIter: 1148\nnorm(gX): 0.4067245233516751\nIter: 1149\nnorm(gX): 0.4094710290794511\nIter: 1150\nnorm(gX): 0.412126099369588\nIter: 1151\nnorm(gX): 0.4146891753260932\nIter: 1152\nnorm(gX): 0.41715973641720044\nIter: 1153\nnorm(gX): 0.4195372994690096\nIter: 1154\nnorm(gX): 0.4218214177308122\nIter: 1155\nnorm(gX): 0.4240116800070917\nIter: 1156\nnorm(gX): 0.426107709851406\nIter: 1157\nnorm(gX): 0.4281091648178176\nIter: 1158\nnorm(gX): 0.43001573576567814\nIter: 1159\nnorm(gX): 0.4318271462139389\nIter: 1160\nnorm(gX): 0.4335431517412005\nIter: 1161\nnorm(gX): 0.43516353942802116\nIter: 1162\nnorm(gX): 0.4366881273380691\nIter: 1163\nnorm(gX): 0.4381167640348859\nIter: 1164\nnorm(gX): 0.4394493281311243\nIter: 1165\nnorm(gX): 0.44068572786732685\nIter: 1166\nnorm(gX): 0.44182590071734484\nIter: 1167\nnorm(gX): 0.44286981301769085\nIter: 1168\nnorm(gX): 0.4438174596182888\nIter: 1169\nnorm(gX): 0.44466886355214763\nIter: 1170\nnorm(gX): 0.4454240757216914\nIter: 1171\nnorm(gX): 0.4460831745996684\nIter: 1172\nnorm(gX): 0.4466462659426388\nIter: 1173\nnorm(gX): 0.44711348251533767\nIter: 1174\nnorm(gX): 0.4474849838242631\nIter: 1175\nnorm(gX): 0.4477609558591231\nIter: 1176\nnorm(gX): 0.4479416108409175\nIter: 1177\nnorm(gX): 0.4480271869756195\nIter: 1178\nnorm(gX): 0.44801794821260055\nIter: 1179\nnorm(gX): 0.44791418400716\nIter: 1180\nnorm(gX): 0.44771620908666204\nIter: 1181\nnorm(gX): 0.44742436321997814\nIter: 1182\nnorm(gX): 0.44703901099009125\nIter: 1183\nnorm(gX): 0.4465605415698738\nIter: 1184\nnorm(gX): 0.44598936850121457\nIter: 1185\nnorm(gX): 0.4453259294778218\nIter: 1186\nnorm(gX): 0.44457068613208045\nIter: 1187\nnorm(gX): 0.44372412382660403\nIter: 1188\nnorm(gX): 0.44278675145107127\nIter: 1189\nnorm(gX): 0.44175910122516526\nIter: 1190\nnorm(gX): 0.4406417285084202\nIter: 1191\nnorm(gX): 0.4394352116179377\nIter: 1192\nnorm(gX): 0.4381401516549251\nIter: 1193\nnorm(gX): 0.43675717234113115\nIter: 1194\nnorm(gX): 0.4352869198662388\nIter: 1195\nnorm(gX): 0.4337300627473561\nIter: 1196\nnorm(gX): 0.4320872917017655\nIter: 1197\nnorm(gX): 0.4303593195340732\nIter: 1198\nnorm(gX): 0.4285468810389993\nIter: 1199\nnorm(gX): 0.42665073292098754\nIter: 1200\nnorm(gX): 0.42467165373183774\nIter: 1201\nnorm(gX): 0.422610443827639\nIter: 1202\nnorm(gX): 0.42046792534613503\nIter: 1203\nnorm(gX): 0.41824494220587877\nIter: 1204\nnorm(gX): 0.41594236012832336\nIter: 1205\nnorm(gX): 0.4135610666841714\nIter: 1206\nnorm(gX): 0.41110197136523896\nIter: 1207\nnorm(gX): 0.40856600568315005\nIter: 1208\nnorm(gX): 0.405954123296226\nIter: 1209\nnorm(gX): 0.40326730016593876\nIter: 1210\nnorm(gX): 0.40050653474442244\nIter: 
1211\nnorm(gX): 0.39767284819454574\nIter: 1212\nnorm(gX): 0.3947672846442155\nIter: 1213\nnorm(gX): 0.3917909114766476\nIter: 1214\nnorm(gX): 0.388744819658473\nIter: 1215\nnorm(gX): 0.3856301241077847\nIter: 1216\nnorm(gX): 0.38244796410433185\nIter: 1217\nnorm(gX): 0.3791995037443397\nIter: 1218\nnorm(gX): 0.37588593244269397\nIter: 1219\nnorm(gX): 0.3725084654855107\nIter: 1220\nnorm(gX): 0.3690683446364288\nIter: 1221\nnorm(gX): 0.36556683880040913\nIter: 1222\nnorm(gX): 0.3620052447491496\nIter: 1223\nnorm(gX): 0.3583848879128238\nIter: 1224\nnorm(gX): 0.3547071232432917\nIter: 1225\nnorm(gX): 0.3509733361546018\nIter: 1226\nnorm(gX): 0.3471849435472478\nIter: 1227\nnorm(gX): 0.3433433949233941\nIter: 1228\nnorm(gX): 0.3394501736011088\nIter: 1229\nnorm(gX): 0.33550679803659367\nIter: 1230\nnorm(gX): 0.33151482326437576\nIter: 1231\nnorm(gX): 0.32747584246654005\nIter: 1232\nnorm(gX): 0.32339148868339346\nIter: 1233\nnorm(gX): 0.31926343667920226\nIter: 1234\nnorm(gX): 0.31509340497821825\nIter: 1235\nnorm(gX): 0.31088315808781747\nIter: 1236\nnorm(gX): 0.3066345089273477\nIter: 1237\nnorm(gX): 0.3023493214832849\nIter: 1238\nnorm(gX): 0.29802951371339437\nIter: 1239\nnorm(gX): 0.293677060724925\nIter: 1240\nnorm(gX): 0.28929399825439506\nIter: 1241\nnorm(gX): 0.284882426479243\nIter: 1242\nnorm(gX): 0.28044451419450567\nIter: 1243\nnorm(gX): 0.27598250339087943\nIter: 1244\nnorm(gX): 0.271498714273745\nIter: 1245\nnorm(gX): 0.2669955507663108\nIter: 1246\nnorm(gX): 0.26247550654356383\nIter: 1247\nnorm(gX): 0.2579411716474802\nIter: 1248\nnorm(gX): 0.253395239737622\nIter: 1249\nnorm(gX): 0.248840516034816\nIter: 1250\nnorm(gX): 0.24427992601897974\nIter: 1251\nnorm(gX): 0.23971652494497356\nIter: 1252\nnorm(gX): 0.2351535082424509\nIter: 1253\nnorm(gX): 0.23059422286657694\nIter: 1254\nnorm(gX): 0.22604217966584825\nIter: 1255\nnorm(gX): 0.22150106683025778\nIter: 1256\nnorm(gX): 0.21697476447690645\nIter: 1257\nnorm(gX): 0.21246736042004172\nIter: 1258\nnorm(gX): 0.207983167156723\nIter: 1259\nnorm(gX): 0.2035267400766034\nIter: 1260\nnorm(gX): 0.1991028968723388\nIter: 1261\nnorm(gX): 0.19471673808372428\nIter: 1262\nnorm(gX): 0.19037366865073388\nIter: 1263\nnorm(gX): 0.18607942027490448\nIter: 1264\nnorm(gX): 0.18184007429119214\nIter: 1265\nnorm(gX): 0.17766208462931374\nIter: 1266\nnorm(gX): 0.1735523002906207\nIter: 1267\nnorm(gX): 0.16951798657970063\nIter: 1268\nnorm(gX): 0.16556684410627923\nIter: 1269\nnorm(gX): 0.16170702431173883\nIter: 1270\nnorm(gX): 0.1579471399776538\nIter: 1271\nnorm(gX): 0.15429626884881378\nIter: 1272\nnorm(gX): 0.15076394816451696\nIter: 1273\nnorm(gX): 0.14736015756416454\nIter: 1274\nnorm(gX): 0.14409528755284115\nIter: 1275\nnorm(gX): 0.14098009053086394\nIter: 1276\nnorm(gX): 0.13802561137414476\nIter: 1277\nnorm(gX): 0.13524309477828822\nIter: 1278\nnorm(gX): 0.13264386713354445\nIter: 1279\nnorm(gX): 0.1302391916595744\nIter: 1280\nnorm(gX): 0.12804009695323817\nIter: 1281\nnorm(gX): 0.12605718099735963\nIter: 1282\nnorm(gX): 0.12430039497685134\nIter: 1283\nnorm(gX): 0.12277881378971209\nIter: 1284\nnorm(gX): 0.1215004026596382\nIter: 1285\nnorm(gX): 0.1204717914016841\nIter: 1286\nnorm(gX): 0.11969806926771398\nIter: 1287\nnorm(gX): 0.1191826135389833\nIter: 1288\nnorm(gX): 0.11892696389481187\nIter: 1289\nnorm(gX): 0.11893075202526272\nIter: 1290\nnorm(gX): 0.11919169217682592\nIter: 1291\nnorm(gX): 0.11970563376350175\nIter: 1292\nnorm(gX): 0.12046667244281846\nIter: 1293\nnorm(gX): 0.1214673117880709\nIter: 1294\nnorm(gX): 
0.12269866443082308\nIter: 1295\nnorm(gX): 0.12415067964790032\nIter: 1296\nnorm(gX): 0.1258123839192177\nIter: 1297\nnorm(gX): 0.12767212184472962\nIter: 1298\nnorm(gX): 0.12971778666534614\nIter: 1299\nnorm(gX): 0.13193703208602822\nIter: 1300\nnorm(gX): 0.13431745975582948\nIter: 1301\nnorm(gX): 0.13684677929355918\nIter: 1302\nnorm(gX): 0.13951293993435232\nIter: 1303\nnorm(gX): 0.14230423459270666\nIter: 1304\nnorm(gX): 0.1452093783619505\nIter: 1305\nnorm(gX): 0.1482175642340921\nIter: 1306\nnorm(gX): 0.15131849920064538\nIter: 1307\nnorm(gX): 0.1545024239728838\nIter: 1308\nnorm(gX): 0.157760119426669\nIter: 1309\nnorm(gX): 0.16108290260958866\nIter: 1310\nnorm(gX): 0.16446261480849378\nIter: 1311\nnorm(gX): 0.16789160380982954\nIter: 1312\nnorm(gX): 0.1713627021251152\nIter: 1313\nnorm(gX): 0.17486920261941769\nIter: 1314\nnorm(gX): 0.17840483268276602\nIter: 1315\nnorm(gX): 0.1819637278272366\nIter: 1316\nnorm(gX): 0.18554040537632896\nIter: 1317\nnorm(gX): 0.18912973873548172\nIter: 1318\nnorm(gX): 0.19272693258911305\nIter: 1319\nnorm(gX): 0.19632749925613727\nIter: 1320\nnorm(gX): 0.1999272363474153\nIter: 1321\nnorm(gX): 0.2035222058011827\nIter: 1322\nnorm(gX): 0.20710871432182493\nIter: 1323\nnorm(gX): 0.21068329521049878\nIter: 1324\nnorm(gX): 0.2142426915496701\nIter: 1325\nnorm(gX): 0.21778384068579454\nIter: 1326\nnorm(gX): 0.22130385994277083\nIter: 1327\nnorm(gX): 0.22480003349210873\nIter: 1328\nnorm(gX): 0.22826980030264116\nIter: 1329\nnorm(gX): 0.23171074309222336\nIter: 1330\nnorm(gX): 0.23512057820516782\nIter: 1331\nnorm(gX): 0.2384971463420151\nIter: 1332\nnorm(gX): 0.24183840407172466\nIter: 1333\nnorm(gX): 0.24514241606049095\nIter: 1334\nnorm(gX): 0.2484073479557687\nIter: 1335\nnorm(gX): 0.2516314598685187\nIter: 1336\nnorm(gX): 0.25481310040110705\nIter: 1337\nnorm(gX): 0.25795070117258645\nIter: 1338\nnorm(gX): 0.2610427717971572\nIter: 1339\nnorm(gX): 0.26408789527550586\nIter: 1340\nnorm(gX): 0.26708472376226566\nIter: 1341\nnorm(gX): 0.27003197467627305\nIter: 1342\nnorm(gX): 0.2729284271233311\nIter: 1343\nnorm(gX): 0.2757729186040488\nIter: 1344\nnorm(gX): 0.27856434198192787\nIter: 1345\nnorm(gX): 0.28130164268917934\nIter: 1346\nnorm(gX): 0.2839838161499705\nIter: 1347\nnorm(gX): 0.2866099054026709\nIter: 1348\nnorm(gX): 0.2891789989044319\nIter: 1349\nnorm(gX): 0.29169022850303555\nIter: 1350\nnorm(gX): 0.29414276756233526\nIter: 1351\nnorm(gX): 0.2965358292288932\nIter: 1352\nnorm(gX): 0.29886866482855856\nIter: 1353\nnorm(gX): 0.3011405623827553\nIter: 1354\nnorm(gX): 0.303350845235169\nIter: 1355\nnorm(gX): 0.30549887078032606\nIter: 1356\nnorm(gX): 0.3075840292863028\nIter: 1357\nnorm(gX): 0.30960574280446584\nIter: 1358\nnorm(gX): 0.31156346415973973\nIter: 1359\nnorm(gX): 0.3134566760153859\nIter: 1360\nnorm(gX): 0.31528489000681276\nIter: 1361\nnorm(gX): 0.3170476459393181\nIter: 1362\nnorm(gX): 0.31874451104510687\nIter: 1363\nnorm(gX): 0.3203750792951917\nIter: 1364\nnorm(gX): 0.32193897076221295\nIter: 1365\nnorm(gX): 0.32343583103042073\nIter: 1366\nnorm(gX): 0.3248653306493768\nIter: 1367\nnorm(gX): 0.3262271646281654\nIter: 1368\nnorm(gX): 0.32752105196715114\nIter: 1369\nnorm(gX): 0.3287467352244895\nIter: 1370\nnorm(gX): 0.3299039801148626\nIter: 1371\nnorm(gX): 0.33099257513806285\nIter: 1372\nnorm(gX): 0.3320123312351952\nIter: 1373\nnorm(gX): 0.3329630814705138\nIter: 1374\nnorm(gX): 0.3338446807369979\nIter: 1375\nnorm(gX): 0.33465700548394955\nIter: 1376\nnorm(gX): 0.33539995346505175\nIter: 1377\nnorm(gX): 
0.33607344350542506\nIter: 1378\nnorm(gX): 0.33667741528640077\nIter: 1379\nnorm(gX): 0.3372118291467963\nIter: 1380\nnorm(gX): 0.3376766658996364\nIter: 1381\nnorm(gX): 0.3380719266633665\nIter: 1382\nnorm(gX): 0.3383976327066658\nIter: 1383\nnorm(gX): 0.3386538253061519\nIter: 1384\nnorm(gX): 0.33884056561625586\nIter: 1385\nnorm(gX): 0.3389579345507165\nIter: 1386\nnorm(gX): 0.3390060326751565\nIter: 1387\nnorm(gX): 0.338984980110336\nIter: 1388\nnorm(gX): 0.33889491644567665\nIter: 1389\nnorm(gX): 0.3387360006627471\nIter: 1390\nnorm(gX): 0.3385084110684775\nIter: 1391\nnorm(gX): 0.33821234523783017\nIter: 1392\nnorm(gX): 0.3378480199657996\nIter: 1393\nnorm(gX): 0.3374156712285577\nIter: 1394\nnorm(gX): 0.33691555415366325\nIter: 1395\nnorm(gX): 0.336347942999214\nIter: 1396\nnorm(gX): 0.33571313114188683\nIter: 1397\nnorm(gX): 0.33501143107381837\nIter: 1398\nnorm(gX): 0.3342431744082745\nIter: 1399\nnorm(gX): 0.3334087118940741\nIter: 1400\nnorm(gX): 0.3325084134387786\nIter: 1401\nnorm(gX): 0.331542668140591\nIter: 1402\nnorm(gX): 0.3305118843290267\nIter: 1403\nnorm(gX): 0.32941648961430336\nIter: 1404\nnorm(gX): 0.3282569309455121\nIter: 1405\nnorm(gX): 0.3270336746776097\nIter: 1406\nnorm(gX): 0.3257472066472665\nIter: 1407\nnorm(gX): 0.32439803225767544\nIter: 1408\nnorm(gX): 0.3229866765724341\nIter: 1409\nnorm(gX): 0.3215136844186189\nIter: 1410\nnorm(gX): 0.3199796204993026\nIter: 1411\nnorm(gX): 0.3183850695157052\nIter: 1412\nnorm(gX): 0.31673063629931913\nIter: 1413\nnorm(gX): 0.3150169459543926\nIter: 1414\nnorm(gX): 0.313244644011229\nIter: 1415\nnorm(gX): 0.3114143965908547\nIter: 1416\nnorm(gX): 0.30952689058175453\nIter: 1417\nnorm(gX): 0.30758283382945606\nIter: 1418\nnorm(gX): 0.30558295533987584\nIter: 1419\nnorm(gX): 0.30352800549753867\nIter: 1420\nnorm(gX): 0.30141875629989384\nIter: 1421\nnorm(gX): 0.2992560016091828\nIter: 1422\nnorm(gX): 0.29704055742347646\nIter: 1423\nnorm(gX): 0.29477326216873695\nIter: 1424\nnorm(gX): 0.2924549770139922\nIter: 1425\nnorm(gX): 0.29008658621197886\nIter: 1426\nnorm(gX): 0.2876689974678693\nIter: 1427\nnorm(gX): 0.2852031423390323\nIter: 1428\nnorm(gX): 0.2826899766690285\nIter: 1429\nnorm(gX): 0.28013048105949473\nIter: 1430\nnorm(gX): 0.2775256613838736\nIter: 1431\nnorm(gX): 0.27487654934733535\nIter: 1432\nnorm(gX): 0.27218420309776886\nIter: 1433\nnorm(gX): 0.2694497078930329\nIter: 1434\nnorm(gX): 0.26667417683032435\nIter: 1435\nnorm(gX): 0.2638587516439287\nIter: 1436\nnorm(gX): 0.26100460357822897\nIter: 1437\nnorm(gX): 0.25811293434352994\nIter: 1438\nnorm(gX): 0.2551849771628218\nIter: 1439\nnorm(gX): 0.25222199791835137\nIter: 1440\nnorm(gX): 0.24922529640768404\nIter: 1441\nnorm(gX): 0.2461962077196594\nIter: 1442\nnorm(gX): 0.24313610374158626\nIter: 1443\nnorm(gX): 0.2400463948099113\nIter: 1444\nnorm(gX): 0.2369285315175852\nIter: 1445\nnorm(gX): 0.23378400669239358\nIter: 1446\nnorm(gX): 0.2306143575616035\nIter: 1447\nnorm(gX): 0.2274211681194495\nIter: 1448\nnorm(gX): 0.22420607171512671\nIter: 1449\nnorm(gX): 0.22097075388020734\nIter: 1450\nnorm(gX): 0.21771695541561117\nIter: 1451\nnorm(gX): 0.21444647575953973\nIter: 1452\nnorm(gX): 0.21116117665882575\nIter: 1453\nnorm(gX): 0.20786298616743196\nIter: 1454\nnorm(gX): 0.20455390299663273\nIter: 1455\nnorm(gX): 0.20123600124223012\nIter: 1456\nnorm(gX): 0.197911435514529\nIter: 1457\nnorm(gX): 0.1945824464968465\nIter: 1458\nnorm(gX): 0.19125136695762388\nIter: 1459\nnorm(gX): 0.18792062823993105\nIter: 1460\nnorm(gX): 0.1845927672496163\nIter: 
1461\nnorm(gX): 0.1812704339596386\nIter: 1462\nnorm(gX): 0.1779563994426452\nIter: 1463\nnorm(gX): 0.17465356443620153\nIter: 1464\nnorm(gX): 0.17136496843473167\nIter: 1465\nnorm(gX): 0.16809379928855214\nIter: 1466\nnorm(gX): 0.16484340327255334\nIter: 1467\nnorm(gX): 0.16161729556429807\nIter: 1468\nnorm(gX): 0.15841917104250103\nIter: 1469\nnorm(gX): 0.1552529152810344\nIter: 1470\nnorm(gX): 0.1521226155694511\nIter: 1471\nnorm(gX): 0.1490325717375727\nIter: 1472\nnorm(gX): 0.14598730649760275\nIter: 1473\nnorm(gX): 0.14299157494193596\nIter: 1474\nnorm(gX): 0.1400503727475836\nIter: 1475\nnorm(gX): 0.13716894253953807\nIter: 1476\nnorm(gX): 0.13435277775643772\nIter: 1477\nnorm(gX): 0.13160762324596323\nIter: 1478\nnorm(gX): 0.12893947169875083\nIter: 1479\nnorm(gX): 0.12635455491675127\nIter: 1480\nnorm(gX): 0.12385932881474883\nIter: 1481\nnorm(gX): 0.12146045098763629\nIter: 1482\nnorm(gX): 0.1191647496583333\nIter: 1483\nnorm(gX): 0.11697918287431978\nIter: 1484\nnorm(gX): 0.11491078696767526\nIter: 1485\nnorm(gX): 0.11296661355834425\nIter: 1486\nnorm(gX): 0.11115365478233052\nIter: 1487\nnorm(gX): 0.10947875697739116\nIter: 1488\nnorm(gX): 0.10794852375505756\nIter: 1489\nnorm(gX): 0.10656921020663525\nIter: 1490\nnorm(gX): 0.10534661088415806\nIter: 1491\nnorm(gX): 0.10428594509138712\nIter: 1492\nnorm(gX): 0.10339174381760002\nIter: 1493\nnorm(gX): 0.10266774323749396\nIter: 1494\nnorm(gX): 0.10211678997488535\nIter: 1495\nnorm(gX): 0.10174076319810632\nIter: 1496\nnorm(gX): 0.10154051803484684\nIter: 1497\nnorm(gX): 0.10151585377493207\nIter: 1498\nnorm(gX): 0.10166550894640415\nIter: 1499\nnorm(gX): 0.10198718373636365\nIter: 1500\nnorm(gX): 0.10247758855522089\nIter: 1501\nnorm(gX): 0.10313251599371999\nIter: 1502\nnorm(gX): 0.10394693215876767\nIter: 1503\nnorm(gX): 0.10491508251282645\nIter: 1504\nnorm(gX): 0.10603060693664022\nIter: 1505\nnorm(gX): 0.10728665877695293\nIter: 1506\nnorm(gX): 0.10867602306580836\nIter: 1507\nnorm(gX): 0.11019122980651296\nIter: 1508\nnorm(gX): 0.11182465909874381\nIter: 1509\nnorm(gX): 0.11356863581145761\nIter: 1510\nnorm(gX): 0.11541551241463179\nIter: 1511\nnorm(gX): 0.11735773938285293\nIter: 1512\nnorm(gX): 0.11938792324550693\nIter: 1513\nnorm(gX): 0.12149887286420254\nIter: 1514\nnorm(gX): 0.12368363487152911\nIter: 1515\nnorm(gX): 0.125935519422071\nIter: 1516\nnorm(gX): 0.12824811750986914\nIter: 1517\nnorm(gX): 0.1306153111214953\nIter: 1518\nnorm(gX): 0.13303127744498572\nIter: 1519\nnorm(gX): 0.13549048826356058\nIter: 1520\nnorm(gX): 0.13798770554704615\nIter: 1521\nnorm(gX): 0.14051797412697692\nIter: 1522\nnorm(gX): 0.14307661221328563\nIter: 1523\nnorm(gX): 0.14565920038820893\nIter: 1524\nnorm(gX): 0.14826156960056755\nIter: 1525\nnorm(gX): 0.15087978858327603\nIter: 1526\nnorm(gX): 0.1535101510295858\nIter: 1527\nnorm(gX): 0.1561491627888456\nIter: 1528\nnorm(gX): 0.158793529279986\nIter: 1529\nnorm(gX): 0.16144014326911263\nIter: 1530\nnorm(gX): 0.16408607311545156\nIter: 1531\nnorm(gX): 0.16672855155598465\nIter: 1532\nnorm(gX): 0.1693649650722789\nIter: 1533\nnorm(gX): 0.1719928438621009\nIter: 1534\nnorm(gX): 0.17460985242230515\nIter: 1535\nnorm(gX): 0.17721378073743768\nIter: 1536\nnorm(gX): 0.17980253605949248\nIter: 1537\nnorm(gX): 0.18237413525811955\nIter: 1538\nnorm(gX): 0.18492669771605477\nIter: 1539\nnorm(gX): 0.18745843874200543\nIter: 1540\nnorm(gX): 0.18996766347156802\nIter: 1541\nnorm(gX): 0.19245276122629787\nIter: 1542\nnorm(gX): 0.19491220030103396\nIter: 1543\nnorm(gX): 0.1973445231502882\nIter: 
1544\nnorm(gX): 0.1997483419454571\nIter: 1545\nnorm(gX): 0.20212233447577552\nIter: 1546\nnorm(gX): 0.20446524036741034\nIter: 1547\nnorm(gX): 0.20677585759643644\nIter: 1548\nnorm(gX): 0.20905303927303812\nIter: 1549\nnorm(gX): 0.2112956906756906\nIter: 1550\nnorm(gX): 0.2135027665156346\nIter: 1551\nnorm(gX): 0.21567326841327886\nIter: 1552\nnorm(gX): 0.21780624256962142\nIter: 1553\nnorm(gX): 0.21990077761701401\nIter: 1554\nnorm(gX): 0.22195600263483886\nIter: 1555\nnorm(gX): 0.2239710853167878\nIter: 1556\nnorm(gX): 0.22594523027750432\nIter: 1557\nnorm(gX): 0.22787767748739662\nIter: 1558\nnorm(gX): 0.2297677008252304\nIter: 1559\nnorm(gX): 0.23161460673908443\nIter: 1560\nnorm(gX): 0.23341773300695712\nIter: 1561\nnorm(gX): 0.2351764475890518\nIter: 1562\nnorm(gX): 0.2368901475644743\nIter: 1563\nnorm(gX): 0.2385582581456104\nIter: 1564\nnorm(gX): 0.24018023176408423\nIter: 1565\nnorm(gX): 0.24175554722268341\nIter: 1566\nnorm(gX): 0.24328370890808457\nIter: 1567\nnorm(gX): 0.24476424605970404\nIter: 1568\nnorm(gX): 0.24619671209033844\nIter: 1569\nnorm(gX): 0.2475806839546387\nIter: 1570\nnorm(gX): 0.24891576156182457\nIter: 1571\nnorm(gX): 0.25020156722930254\nIter: 1572\nnorm(gX): 0.25143774517414336\nIter: 1573\nnorm(gX): 0.25262396103965595\nIter: 1574\nnorm(gX): 0.2537599014545024\nIter: 1575\nnorm(gX): 0.2548452736219849\nIter: 1576\nnorm(gX): 0.2558798049373942\nIter: 1577\nnorm(gX): 0.25686324263142324\nIter: 1578\nnorm(gX): 0.2577953534378435\nIter: 1579\nnorm(gX): 0.2586759232837839\nIter: 1580\nnorm(gX): 0.2595047570010768\nIter: 1581\nnorm(gX): 0.26028167805726965\nIter: 1582\nnorm(gX): 0.2610065283049741\nIter: 1583\nnorm(gX): 0.26167916774840744\nIter: 1584\nnorm(gX): 0.2622994743259808\nIter: 1585\nnorm(gX): 0.26286734370790404\nIter: 1586\nnorm(gX): 0.26338268910788254\nIter: 1587\nnorm(gX): 0.26384544110801494\nIter: 1588\nnorm(gX): 0.26425554749606894\nIter: 1589\nnorm(gX): 0.26461297311436055\nIter: 1590\nnorm(gX): 0.26491769971954565\nIter: 1591\nnorm(gX): 0.2651697258526285\nIter: 1592\nnorm(gX): 0.26536906671859395\nIter: 1593\nnorm(gX): 0.2655157540750278\nIter: 1594\nnorm(gX): 0.26560983612922423\nIter: 1595\nnorm(gX): 0.2656513774432099\nIter: 1596\nnorm(gX): 0.26564045884624127\nIter: 1597\nnorm(gX): 0.2655771773542702\nIter: 1598\nnorm(gX): 0.2654616460959671\nIter: 1599\nnorm(gX): 0.2652939942448892\nIter: 1600\nnorm(gX): 0.26507436695739917\nIter: 1601\nnorm(gX): 0.2648029253159979\nIter: 1602\nnorm(gX): 0.26447984627770105\nIter: 1603\nnorm(gX): 0.26410532262720215\nIter: 1604\nnorm(gX): 0.2636795629345088\nIter: 1605\nnorm(gX): 0.2632027915167946\nIter: 1606\nnorm(gX): 0.2626752484042844\nIter: 1607\nnorm(gX): 0.2620971893099381\nIter: 1608\nnorm(gX): 0.2614688856028162\nIter: 1609\nnorm(gX): 0.26079062428497546\nIter: 1610\nnorm(gX): 0.2600627079718277\nIter: 1611\nnorm(gX): 0.25928545487592414\nIter: 1612\nnorm(gX): 0.25845919879416945\nIter: 1613\nnorm(gX): 0.25758428909850156\nIter: 1614\nnorm(gX): 0.2566610907301644\nIter: 1615\nnorm(gX): 0.255689984197713\nIter: 1616\nnorm(gX): 0.2546713655789785\nIter: 1617\nnorm(gX): 0.25360564652726714\nIter: 1618\nnorm(gX): 0.25249325428214453\nIter: 1619\nnorm(gX): 0.2513346316852167\nIter: 1620\nnorm(gX): 0.2501302372013948\nIter: 1621\nnorm(gX): 0.24888054494624992\nIter: 1622\nnorm(gX): 0.24758604472005205\nIter: 1623\nnorm(gX): 0.24624724204932325\nIter: 1624\nnorm(gX): 0.24486465823666304\nIter: 1625\nnorm(gX): 0.24343883041985567\nIter: 1626\nnorm(gX): 0.2419703116412756\nIter: 1627\nnorm(gX): 
0.2404596709287462\nIter: 1628\nnorm(gX): 0.23890749338916464\nIter: 1629\nnorm(gX): 0.2373143803162445\nIter: 1630\nnorm(gX): 0.23568094931395617\nIter: 1631\nnorm(gX): 0.23400783443733125\nIter: 1632\nnorm(gX): 0.23229568635239578\nIter: 1633\nnorm(gX): 0.23054517251729778\nIter: 1634\nnorm(gX): 0.22875697738667927\nIter: 1635\nnorm(gX): 0.22693180264166277\nIter: 1636\nnorm(gX): 0.22507036744792458\nIter: 1637\nnorm(gX): 0.22317340874454203\nIter: 1638\nnorm(gX): 0.22124168156652166\nIter: 1639\nnorm(gX): 0.21927595940412778\nIter: 1640\nnorm(gX): 0.21727703460239112\nIter: 1641\nnorm(gX): 0.21524571880433124\nIter: 1642\nnorm(gX): 0.21318284344188485\nIter: 1643\nnorm(gX): 0.21108926027860905\nIter: 1644\nnorm(gX): 0.20896584200867427\nIter: 1645\nnorm(gX): 0.20681348291693946\nIter: 1646\nnorm(gX): 0.20463309960526907\nIter: 1647\nnorm(gX): 0.2024256317905953\nIter: 1648\nnorm(gX): 0.20019204318067935\nIter: 1649\nnorm(gX): 0.1979333224338737\nIter: 1650\nnorm(gX): 0.19565048420971823\nIter: 1651\nnorm(gX): 0.1933445703176227\nIter: 1652\nnorm(gX): 0.191016650971377\nIter: 1653\nnorm(gX): 0.18866782615782982\nIter: 1654\nnorm(gX): 0.1862992271284697\nIter: 1655\nnorm(gX): 0.18391201802338034\nIter: 1656\nnorm(gX): 0.18150739763743223\nIter: 1657\nnorm(gX): 0.17908660133924206\nIter: 1658\nnorm(gX): 0.1766509031539445\nIter: 1659\nnorm(gX): 0.17420161802129006\nIter: 1660\nnorm(gX): 0.17174010424115477\nIter: 1661\nnorm(gX): 0.16926776611880487\nIter: 1662\nnorm(gX): 0.16678605682263528\nIter: 1663\nnorm(gX): 0.1642964814671366\nIter: 1664\nnorm(gX): 0.1618006004338162\nIter: 1665\nnorm(gX): 0.1593000329423496\nIter: 1666\nnorm(gX): 0.15679646088355417\nIter: 1667\nnorm(gX): 0.15429163292450662\nIter: 1668\nnorm(gX): 0.15178736889438502\nIter: 1669\nnorm(gX): 0.149285564457\nIter: 1670\nnorm(gX): 0.14678819607255567\nIter: 1671\nnorm(gX): 0.14429732624650626\nIter: 1672\nnorm(gX): 0.1418151090574173\nIter: 1673\nnorm(gX): 0.13934379594800436\nIter: 1674\nnorm(gX): 0.13688574175376975\nIter: 1675\nnorm(gX): 0.13444341093156106\nIter: 1676\nnorm(gX): 0.1320193839353763\nIter: 1677\nnorm(gX): 0.1296163636684386\nIter: 1678\nnorm(gX): 0.12723718191863165\nIter: 1679\nnorm(gX): 0.1248848056580623\nIter: 1680\nnorm(gX): 0.1225623430566727\nIter: 1681\nnorm(gX): 0.12027304902400518\nIter: 1682\nnorm(gX): 0.11802033005224859\nIter: 1683\nnorm(gX): 0.11580774808780545\nIter: 1684\nnorm(gX): 0.11363902310831851\nIter: 1685\nnorm(gX): 0.1115180340283818\nIter: 1686\nnorm(gX): 0.10944881750204093\nIter: 1687\nnorm(gX): 0.10743556413632241\nIter: 1688\nnorm(gX): 0.10548261158139187\nIter: 1689\nnorm(gX): 0.10359443392477455\nIter: 1690\nnorm(gX): 0.10177562679601103\nIter: 1691\nnorm(gX): 0.10003088759210534\nIter: 1692\nnorm(gX): 0.09836499027233869\nIter: 1693\nnorm(gX): 0.09678275425301122\nIter: 1694\nnorm(gX): 0.09528900706743744\nIter: 1695\nnorm(gX): 0.09388854065208097\nIter: 1696\nnorm(gX): 0.09258606137906068\nIter: 1697\nnorm(gX): 0.09138613427792082\nIter: 1698\nnorm(gX): 0.09029312226555437\nIter: 1699\nnorm(gX): 0.08931112161492083\nIter: 1700\nnorm(gX): 0.08844389531259769\nIter: 1701\nnorm(gX): 0.08769480634527385\nIter: 1702\nnorm(gX): 0.08706675327295949\nIter: 1703\nnorm(gX): 0.08656211064704433\nIter: 1704\nnorm(gX): 0.08618267687408498\nIter: 1705\nnorm(gX): 0.0859296319835183\nIter: 1706\nnorm(gX): 0.0858035074189349\nIter: 1707\nnorm(gX): 0.08580416945163237\nIter: 1708\nnorm(gX): 0.08593081714808762\nIter: 1709\nnorm(gX): 0.08618199506692319\nIter: 1710\nnorm(gX): 
0.08655562008576612\nIter: 1711\nnorm(gX): 0.08704902103692241\nIter: 1712\nnorm(gX): 0.08765898922855896\n[... verbose solver output truncated: iterations 1713-3441 omitted. norm(gX) oscillates with a period of roughly 200 iterations; successive peaks decay from about 0.216 to 0.078 and troughs from about 0.087 to 0.015, i.e. slow, under-damped convergence rather than a monotone decrease ...]\nIter: 3442\nnorm(gX): 0.0650337548080595\nIter: 3443\nnorm(gX): 
0.06538861479529776\nIter: 3444\nnorm(gX): 0.06572866155987228\nIter: 3445\nnorm(gX): 0.06605382006865966\nIter: 3446\nnorm(gX): 0.06636401968314988\nIter: 3447\nnorm(gX): 0.06665919413120022\nIter: 3448\nnorm(gX): 0.06693928148084423\nIter: 3449\nnorm(gX): 0.06720422411592103\nIter: 3450\nnorm(gX): 0.06745396871336905\nIter: 3451\nnorm(gX): 0.06768846622198843\nIter: 3452\nnorm(gX): 0.06790767184252813\nIter: 3453\nnorm(gX): 0.06811154500894545\nIter: 3454\nnorm(gX): 0.06830004937073228\nIter: 3455\nnorm(gX): 0.06847315277617623\nIter: 3456\nnorm(gX): 0.06863082725645378\nIter: 3457\nnorm(gX): 0.06877304901047952\nIter: 3458\nnorm(gX): 0.06889979839041056\nIter: 3459\nnorm(gX): 0.06901105988774472\nIter: 3460\nnorm(gX): 0.06910682211992802\nIter: 3461\nnorm(gX): 0.06918707781744585\nIter: 3462\nnorm(gX): 0.06925182381130517\nIter: 3463\nnorm(gX): 0.06930106102090078\nIter: 3464\nnorm(gX): 0.06933479444220537\nIter: 3465\nnorm(gX): 0.06935303313624681\nIter: 3466\nnorm(gX): 0.06935579021788411\nIter: 3467\nnorm(gX): 0.06934308284478828\nIter: 3468\nnorm(gX): 0.06931493220669342\nIter: 3469\nnorm(gX): 0.069271363514849\nIter: 3470\nnorm(gX): 0.06921240599169924\nIter: 3471\nnorm(gX): 0.06913809286076489\nIter: 3472\nnorm(gX): 0.06904846133674176\nIter: 3473\nnorm(gX): 0.06894355261584055\nIter: 3474\nnorm(gX): 0.06882341186635067\nIter: 3475\nnorm(gX): 0.06868808821946683\nIter: 3476\nnorm(gX): 0.0685376347603989\nIter: 3477\nnorm(gX): 0.06837210851979407\nIter: 3478\nnorm(gX): 0.06819157046550169\nIter: 3479\nnorm(gX): 0.06799608549472289\nIter: 3480\nnorm(gX): 0.06778572242660778\nIter: 3481\nnorm(gX): 0.0675605539953178\nIter: 3482\nnorm(gX): 0.0673206568436522\nIter: 3483\nnorm(gX): 0.0670661115172797\nIter: 3484\nnorm(gX): 0.0667970024596718\nIter: 3485\nnorm(gX): 0.06651341800780548\nIter: 3486\nnorm(gX): 0.06621545038872804\nIter: 3487\nnorm(gX): 0.06590319571711059\nIter: 3488\nnorm(gX): 0.0655767539938784\nIter: 3489\nnorm(gX): 0.06523622910605749\nIter: 3490\nnorm(gX): 0.06488172882797572\nIter: 3491\nnorm(gX): 0.06451336482398519\nIter: 3492\nnorm(gX): 0.06413125265284106\nIter: 3493\nnorm(gX): 0.06373551177396752\nIter: 3494\nnorm(gX): 0.06332626555578105\nIter: 3495\nnorm(gX): 0.06290364128633213\nIter: 3496\nnorm(gX): 0.06246777018647611\nIter: 3497\nnorm(gX): 0.06201878742588486\nIter: 3498\nnorm(gX): 0.06155683214219001\nIter: 3499\nnorm(gX): 0.06108204746358515\nIter: 3500\nnorm(gX): 0.06059458053528295\nIter: 3501\nnorm(gX): 0.06009458255021015\nIter: 3502\nnorm(gX): 0.05958220878440984\nIter: 3503\nnorm(gX): 0.059057618637653894\nIter: 3504\nnorm(gX): 0.05852097567983254\nIter: 3505\nnorm(gX): 0.057972447703701156\nIter: 3506\nnorm(gX): 0.05741220678472486\nIter: 3507\nnorm(gX): 0.05684042934875417\nIter: 3508\nnorm(gX): 0.05625729624836236\nIter: 3509\nnorm(gX): 0.05566299284885589\nIter: 3510\nnorm(gX): 0.05505770912493043\nIter: 3511\nnorm(gX): 0.05444163976923972\nIter: 3512\nnorm(gX): 0.05381498431411056\nIter: 3513\nnorm(gX): 0.053177947267960546\nIter: 3514\nnorm(gX): 0.05253073826802396\nIter: 3515\nnorm(gX): 0.05187357225126972\nIter: 3516\nnorm(gX): 0.05120666964559786\nIter: 3517\nnorm(gX): 0.050530256583656986\nIter: 3518\nnorm(gX): 0.04984456514198875\nIter: 3519\nnorm(gX): 0.04914983360842027\nIter: 3520\nnorm(gX): 0.04844630678117442\nIter: 3521\nnorm(gX): 0.04773423630348328\nIter: 3522\nnorm(gX): 0.0470138810380916\nIter: 3523\nnorm(gX): 0.046285507486575964\nIter: 3524\nnorm(gX): 0.04554939025912106\nIter: 3525\nnorm(gX): 0.04480581260115407\nIter: 
3526\nnorm(gX): 0.04405506698413583\nIter: 3527\nnorm(gX): 0.043297455768876864\nIter: 3528\nnorm(gX): 0.04253329195089045\nIter: 3529\nnorm(gX): 0.04176289999874794\nIter: 3530\nnorm(gX): 0.040986616797937706\nIter: 3531\nnorm(gX): 0.040204792714659474\nIter: 3532\nnorm(gX): 0.03941779279601988\nIter: 3533\nnorm(gX): 0.03862599812571183\nIter: 3534\nnorm(gX): 0.03782980735702727\nIter: 3535\nnorm(gX): 0.03702963844841607\nIter: 3536\nnorm(gX): 0.03622593063066328\nIter: 3537\nnorm(gX): 0.035419146639102235\nIter: 3538\nnorm(gX): 0.03460977524942527\nIter: 3539\nnorm(gX): 0.03379833416144974\nIter: 3540\nnorm(gX): 0.032985373281745986\nIter: 3541\nnorm(gX): 0.032171478463577784\nIter: 3542\nnorm(gX): 0.03135727577087779\nIter: 3543\nnorm(gX): 0.030543436342215048\nIter: 3544\nnorm(gX): 0.029730681940577347\nIter: 3545\nnorm(gX): 0.02891979128523849\nIter: 3546\nnorm(gX): 0.028111607272310806\nIter: 3547\nnorm(gX): 0.027307045200227448\nIter: 3548\nnorm(gX): 0.026507102123908586\nIter: 3549\nnorm(gX): 0.025712867464889214\nIter: 3550\nnorm(gX): 0.024925535001134088\nIter: 3551\nnorm(gX): 0.024146416345654335\nIter: 3552\nnorm(gX): 0.023376955990528747\nIter: 3553\nnorm(gX): 0.022618747934679096\nIter: 3554\nnorm(gX): 0.0218735538171471\nIter: 3555\nnorm(gX): 0.021143322327680698\nIter: 3556\nnorm(gX): 0.02043020944317123\nIter: 3557\nnorm(gX): 0.01973659871922941\nIter: 3558\nnorm(gX): 0.01906512042624079\nIter: 3559\nnorm(gX): 0.018418667738591646\nIter: 3560\nnorm(gX): 0.017800407457711897\nIter: 3561\nnorm(gX): 0.017213781898178408\nIter: 3562\nnorm(gX): 0.016662497670083105\nIter: 3563\nnorm(gX): 0.016150496313490005\nIter: 3564\nnorm(gX): 0.015681901357741775\nIter: 3565\nnorm(gX): 0.015260936782864724\nIter: 3566\nnorm(gX): 0.014891813517140801\nIter: 3567\nnorm(gX): 0.014578583915236837\nIter: 3568\nnorm(gX): 0.014324969228262294\nIter: 3569\nnorm(gX): 0.014134171415774855\nIter: 3570\nnorm(gX): 0.014008686991593636\nIter: 3571\nnorm(gX): 0.01395014500071436\nIter: 3572\nnorm(gX): 0.01395919166009562\nIter: 3573\nnorm(gX): 0.014035439469276528\nIter: 3574\nnorm(gX): 0.014177489191822542\nIter: 3575\nnorm(gX): 0.014383021329293493\nIter: 3576\nnorm(gX): 0.014648942864023688\nIter: 3577\nnorm(gX): 0.014971568050348591\nIter: 3578\nnorm(gX): 0.015346810239372245\nIter: 3579\nnorm(gX): 0.01577036461979215\nIter: 3580\nnorm(gX): 0.01623786755910715\nIter: 3581\nnorm(gX): 0.01674502483878744\nIter: 3582\nnorm(gX): 0.017287706869018298\nIter: 3583\nnorm(gX): 0.017862013109909942\nIter: 3584\nnorm(gX): 0.01846431029085811\nIter: 3585\nnorm(gX): 0.01909124991553091\nIter: 3586\nnorm(gX): 0.019739770433240003\nIter: 3587\nnorm(gX): 0.020407088790719192\nIter: 3588\nnorm(gX): 0.021090685185606806\nIter: 3589\nnorm(gX): 0.02178828393810384\nIter: 3590\nnorm(gX): 0.0224978325935468\nIter: 3591\nnorm(gX): 0.023217480711006308\nIter: 3592\nnorm(gX): 0.023945559285311734\nIter: 3593\nnorm(gX): 0.024680561375543093\nIter: 3594\nnorm(gX): 0.02542112424790719\nIter: 3595\nnorm(gX): 0.02616601316009625\nIter: 3596\nnorm(gX): 0.026914106796269703\nIter: 3597\nnorm(gX): 0.02766438428863397\nIter: 3598\nnorm(gX): 0.028415913719819663\nIter: 3599\nnorm(gX): 0.029167841979601906\nIter: 3600\nnorm(gX): 0.02991938584251927\nIter: 3601\nnorm(gX): 0.030669824134849488\nIter: 3602\nnorm(gX): 0.03141849086612552\nIter: 3603\nnorm(gX): 0.0321647692101205\nIter: 3604\nnorm(gX): 0.032908086230902224\nIter: 3605\nnorm(gX): 0.033647908260710055\nIter: 3606\nnorm(gX): 0.034383736846941945\nIter: 3607\nnorm(gX): 
0.03511510519548935\nIter: 3608\nnorm(gX): 0.03584157504675707\nIter: 3609\nnorm(gX): 0.03656273392876754\nIter: 3610\nnorm(gX): 0.037278192739029666\nIter: 3611\nnorm(gX): 0.03798758361311368\nIter: 3612\nnorm(gX): 0.038690558043535765\nIter: 3613\nnorm(gX): 0.03938678521727527\nIter: 3614\nnorm(gX): 0.0400759505445385\nIter: 3615\nnorm(gX): 0.04075775435496311\nIter: 3616\nnorm(gX): 0.041431910740598585\nIter: 3617\nnorm(gX): 0.042098146527755606\nIter: 3618\nnorm(gX): 0.04275620036209947\nIter: 3619\nnorm(gX): 0.04340582189345083\nIter: 3620\nnorm(gX): 0.044046771048455856\nIter: 3621\nnorm(gX): 0.044678817380804335\nIter: 3622\nnorm(gX): 0.045301739490021364\nIter: 3623\nnorm(gX): 0.04591532450090347\nIter: 3624\nnorm(gX): 0.046519367596787736\nIter: 3625\nnorm(gX): 0.04711367160050022\nIter: 3626\nnorm(gX): 0.04769804659775086\nIter: 3627\nnorm(gX): 0.04827230959826896\nIter: 3628\nnorm(gX): 0.04883628423056644\nIter: 3629\nnorm(gX): 0.04938980046668596\nIter: 3630\nnorm(gX): 0.0499326943737769\nIter: 3631\nnorm(gX): 0.050464807889580104\nIter: 3632\nnorm(gX): 0.05098598861938578\nIter: 3633\nnorm(gX): 0.05149608965219571\nIter: 3634\nnorm(gX): 0.051994969394097666\nIter: 3635\nnorm(gX): 0.05248249141714366\nIter: 3636\nnorm(gX): 0.05295852432209206\nIter: 3637\nnorm(gX): 0.053422941613659986\nIter: 3638\nnorm(gX): 0.053875621587038\nIter: 3639\nnorm(gX): 0.054316447224519636\nIter: 3640\nnorm(gX): 0.05474530610126265\nIter: 3641\nnorm(gX): 0.05516209029928275\nIter: 3642\nnorm(gX): 0.055566696328882476\nIter: 3643\nnorm(gX): 0.05595902505677025\nIter: 3644\nnorm(gX): 0.056338981640244784\nIter: 3645\nnorm(gX): 0.05670647546683175\nIter: 3646\nnorm(gX): 0.05706142009886995\nIter: 3647\nnorm(gX): 0.05740373322257203\nIter: 3648\nnorm(gX): 0.05773333660108603\nIter: 3649\nnorm(gX): 0.058050156031233445\nIter: 3650\nnorm(gX): 0.05835412130352107\nIter: 3651\nnorm(gX): 0.058645166165142905\nIter: 3652\nnorm(gX): 0.058923228285667005\nIter: 3653\nnorm(gX): 0.059188249225141674\nIter: 3654\nnorm(gX): 0.0594401744044081\nIter: 3655\nnorm(gX): 0.05967895307739216\nIter: 3656\nnorm(gX): 0.059904538305166365\nIter: 3657\nnorm(gX): 0.06011688693165506\nIter: 3658\nnorm(gX): 0.060315959560751975\nIter: 3659\nnorm(gX): 0.060501720534789694\nIter: 3660\nnorm(gX): 0.06067413791415657\nIter: 3661\nnorm(gX): 0.06083318345799995\nIter: 3662\nnorm(gX): 0.060978832605873\nIter: 3663\nnorm(gX): 0.061111064460256215\nIter: 3664\nnorm(gX): 0.06122986176985455\nIter: 3665\nnorm(gX): 0.06133521091360022\nIter: 3666\nnorm(gX): 0.06142710188529471\nIter: 3667\nnorm(gX): 0.06150552827882072\nIter: 3668\nnorm(gX): 0.06157048727389062\nIter: 3669\nnorm(gX): 0.06162197962227174\nIter: 3670\nnorm(gX): 0.06166000963445678\nIter: 3671\nnorm(gX): 0.06168458516674353\nIter: 3672\nnorm(gX): 0.061695717608703814\nIter: 3673\nnorm(gX): 0.061693421871006085\nIter: 3674\nnorm(gX): 0.06167771637360606\nIter: 3675\nnorm(gX): 0.06164862303425657\nIter: 3676\nnorm(gX): 0.06160616725737204\nIter: 3677\nnorm(gX): 0.061550377923210754\nIter: 3678\nnorm(gX): 0.06148128737741225\nIter: 3679\nnorm(gX): 0.06139893142087652\nIter: 3680\nnorm(gX): 0.06130334930001589\nIter: 3681\nnorm(gX): 0.06119458369739976\nIter: 3682\nnorm(gX): 0.06107268072279674\nIter: 3683\nnorm(gX): 0.06093768990467698\nIter: 3684\nnorm(gX): 0.06078966418219674\nIter: 3685\nnorm(gX): 0.0606286598976957\nIter: 3686\nnorm(gX): 0.060454736789772964\nIter: 3687\nnorm(gX): 0.06026795798699661\nIter: 3688\nnorm(gX): 0.0600683900022952\nIter: 3689\nnorm(gX): 
0.059856102728102024\nIter: 3690\nnorm(gX): 0.05963116943234514\nIter: 3691\nnorm(gX): 0.05939366675534287\nIter: 3692\nnorm(gX): 0.059143674707716705\nIter: 3693\nnorm(gX): 0.05888127666941008\nIter: 3694\nnorm(gX): 0.058606559389919884\nIter: 3695\nnorm(gX): 0.05831961298990568\nIter: 3696\nnorm(gX): 0.058020530964239224\nIter: 3697\nnorm(gX): 0.05770941018670736\nIter: 3698\nnorm(gX): 0.057386350916496444\nIter: 3699\nnorm(gX): 0.05705145680663261\nIter: 3700\nnorm(gX): 0.05670483491459846\nIter: 3701\nnorm(gX): 0.056346595715305724\nIter: 3702\nnorm(gX): 0.05597685311668099\nIter: 3703\nnorm(gX): 0.05559572447808964\nIter: 3704\nnorm(gX): 0.0552033306319305\nIter: 3705\nnorm(gX): 0.05479979590865169\nIter: 3706\nnorm(gX): 0.05438524816555256\nIter: 3707\nnorm(gX): 0.05395981881973447\nIter: 3708\nnorm(gX): 0.05352364288562717\nIter: 3709\nnorm(gX): 0.05307685901751345\nIter: 3710\nnorm(gX): 0.05261960955756439\nIter: 3711\nnorm(gX): 0.0521520405899203\nIter: 3712\nnorm(gX): 0.0516743020014431\nIter: 3713\nnorm(gX): 0.05118654754977488\nIter: 3714\nnorm(gX): 0.05068893493948346\nIter: 3715\nnorm(gX): 0.050181625907052406\nIter: 3716\nnorm(gX): 0.04966478631571636\nIter: 3717\nnorm(gX): 0.04913858626103602\nIter: 3718\nnorm(gX): 0.04860320018844692\nIter: 3719\nnorm(gX): 0.0480588070239311\nIter: 3720\nnorm(gX): 0.047505590319282906\nIter: 3721\nnorm(gX): 0.04694373841346063\nIter: 3722\nnorm(gX): 0.046373444611781944\nIter: 3723\nnorm(gX): 0.04579490738490666\nIter: 3724\nnorm(gX): 0.04520833058974317\nIter: 3725\nnorm(gX): 0.04461392371473478\nIter: 3726\nnorm(gX): 0.04401190215223263\nIter: 3727\nnorm(gX): 0.04340248750100807\nIter: 3728\nnorm(gX): 0.04278590790237919\nIter: 3729\nnorm(gX): 0.04216239841372675\nIter: 3730\nnorm(gX): 0.04153220142389863\nIter: 3731\nnorm(gX): 0.04089556711525789\nIter: 3732\nnorm(gX): 0.04025275397803937\nIter: 3733\nnorm(gX): 0.039604029383219976\nIter: 3734\nnorm(gX): 0.03894967022098181\nIter: 3735\nnorm(gX): 0.03828996361280312\nIter: 3736\nnorm(gX): 0.03762520770621741\nIter: 3737\nnorm(gX): 0.036955712562513704\nIter: 3738\nnorm(gX): 0.03628180114901602\nIter: 3739\nnorm(gX): 0.03560381044908115\nIter: 3740\nnorm(gX): 0.03492209270475483\nIter: 3741\nnorm(gX): 0.03423701680897274\nIter: 3742\nnorm(gX): 0.03354896986636734\nIter: 3743\nnorm(gX): 0.0328583589442778\nIter: 3744\nnorm(gX): 0.032165613038262465\nIter: 3745\nnorm(gX): 0.0314711852794343\nIter: 3746\nnorm(gX): 0.030775555414285952\nIter: 3747\nnorm(gX): 0.030079232591132217\nIter: 3748\nnorm(gX): 0.02938275849107819\nIter: 3749\nnorm(gX): 0.028686710845151203\nIter: 3750\nnorm(gX): 0.027991707382935584\nIter: 3751\nnorm(gX): 0.027298410261326286\nIter: 3752\nnorm(gX): 0.026607531024494434\nIter: 3753\nnorm(gX): 0.02591983614726667\nIter: 3754\nnorm(gX): 0.025236153212999508\nIter: 3755\nnorm(gX): 0.024557377772382392\nIter: 3756\nnorm(gX): 0.023884480919985417\nIter: 3757\nnorm(gX): 0.023218517608451467\nIter: 3758\nnorm(gX): 0.022560635692854686\nIter: 3759\nnorm(gX): 0.02191208565668137\nIter: 3760\nnorm(gX): 0.02127423091058245\nIter: 3761\nnorm(gX): 0.02064855847037059\nIter: 3762\nnorm(gX): 0.020036689704637133\nIter: 3763\nnorm(gX): 0.01944039068809436\nIter: 3764\nnorm(gX): 0.018861581498117994\nIter: 3765\nnorm(gX): 0.018302343545684177\nIter: 3766\nnorm(gX): 0.017764923740574427\nIter: 3767\nnorm(gX): 0.01725173396774372\nIter: 3768\nnorm(gX): 0.016765344027717555\nIter: 3769\nnorm(gX): 0.01630846592252621\nIter: 3770\nnorm(gX): 0.015883927234123713\nIter: 3771\nnorm(gX): 
0.01549463145766778\nIter: 3772\nnorm(gX): 0.015143503651062333\nIter: 3773\nnorm(gX): 0.014833420770016049\nIter: 3774\nnorm(gX): 0.014567127644069183\nIter: 3775\nnorm(gX): 0.014347141662057014\nIter: 3776\nnorm(gX): 0.014175651642106211\nIter: 3777\nnorm(gX): 0.014054418623026333\nIter: 3778\nnorm(gX): 0.013984687842612736\nIter: 3779\nnorm(gX): 0.013967121378303173\nIter: 3780\nnorm(gX): 0.014001759453810101\nIter: 3781\nnorm(gX): 0.014088015320646966\nIter: 3782\nnorm(gX): 0.014224704453105488\nIter: 3783\nnorm(gX): 0.014410104451476844\nIter: 3784\nnorm(gX): 0.014642038494158393\nIter: 3785\nnorm(gX): 0.014917973114312835\nIter: 3786\nnorm(gX): 0.015235120734980604\nIter: 3787\nnorm(gX): 0.015590538544756548\nIter: 3788\nnorm(gX): 0.015981217397267548\nIter: 3789\nnorm(gX): 0.0164041568578387\nIter: 3790\nnorm(gX): 0.016856424793445676\nIter: 3791\nnorm(gX): 0.0173352016992911\nIter: 3792\nnorm(gX): 0.017837811167044266\nIter: 3793\nnorm(gX): 0.018361738562461066\nIter: 3794\nnorm(gX): 0.018904640207659848\nIter: 3795\nnorm(gX): 0.01946434529316438\nIter: 3796\nnorm(gX): 0.020038852500738658\nIter: 3797\nnorm(gX): 0.02062632299606072\nIter: 3798\nnorm(gX): 0.021225071114860466\nIter: 3799\nnorm(gX): 0.021833553755577318\nIter: 3800\nnorm(gX): 0.022450359224378377\nIter: 3801\nnorm(gX): 0.023074196060086163\nIter: 3802\nnorm(gX): 0.023703882195395314\nIter: 3803\nnorm(gX): 0.0243383346808773\nIter: 3804\nnorm(gX): 0.024976560102753027\nIter: 3805\nnorm(gX): 0.025617645757194\nIter: 3806\nnorm(gX): 0.026260751596696243\nIter: 3807\nnorm(gX): 0.02690510293270118\nIter: 3808\nnorm(gX): 0.02754998385880761\nIter: 3809\nnorm(gX): 0.02819473134735722\nIter: 3810\nnorm(gX): 0.02883872996643997\nIter: 3811\nnorm(gX): 0.02948140716252512\nIter: 3812\nnorm(gX): 0.03012222905471622\nIter: 3813\nnorm(gX): 0.030760696689030243\nIter: 3814\nnorm(gX): 0.031396342704396715\nIter: 3815\nnorm(gX): 0.03202872836597641\nIter: 3816\nnorm(gX): 0.032657440925279745\nIter: 3817\nnorm(gX): 0.03328209127058618\nIter: 3818\nnorm(gX): 0.03390231183487804\nIter: 3819\nnorm(gX): 0.03451775473204269\nIter: 3820\nnorm(gX): 0.035128090095344386\nIter: 3821\nnorm(gX): 0.03573300459507656\nIter: 3822\nnorm(gX): 0.03633220011499039\nIter: 3823\nnorm(gX): 0.03692539256946258\nIter: 3824\nnorm(gX): 0.03751231084544236\nIter: 3825\nnorm(gX): 0.03809269585514322\nIter: 3826\nnorm(gX): 0.03866629968705484\nIter: 3827\nnorm(gX): 0.0392328848443557\nIter: 3828\nnorm(gX): 0.039792223561006444\nIter: 3829\nnorm(gX): 0.0403440971870884\nIter: 3830\nnorm(gX): 0.04088829563575649\nIter: 3831\nnorm(gX): 0.04142461688522206\nIter: 3832\nnorm(gX): 0.04195286652983619\nIter: 3833\nnorm(gX): 0.04247285737506398\nIter: 3834\nnorm(gX): 0.04298440907173872\nIter: 3835\nnorm(gX): 0.04348734778549665\nIter: 3836\nnorm(gX): 0.04398150589772764\nIter: 3837\nnorm(gX): 0.04446672173484144\nIter: 3838\nnorm(gX): 0.04494283932294685\nIter: 3839\nnorm(gX): 0.04540970816539314\nIter: 3840\nnorm(gX): 0.04586718304086795\nIter: 3841\nnorm(gX): 0.04631512382003153\nIter: 3842\nnorm(gX): 0.04675339529885959\nIter: 3843\nnorm(gX): 0.04718186704703288\nIter: 3844\nnorm(gX): 0.04760041326997057\nIter: 3845\nnorm(gX): 0.048008912683129464\nIter: 3846\nnorm(gX): 0.048407248397440115\nIter: 3847\nnorm(gX): 0.04879530781480346\nIter: 3848\nnorm(gX): 0.049172982532687655\nIter: 3849\nnorm(gX): 0.04954016825699643\nIter: 3850\nnorm(gX): 0.049896764722406506\nIter: 3851\nnorm(gX): 0.05024267561949771\nIter: 3852\nnorm(gX): 0.05057780852803339\nIter: 
3853\nnorm(gX): 0.05090207485584303\nIter: 3854\nnorm(gX): 0.05121538978275362\nIter: 3855\nnorm(gX): 0.051517672209141954\nIter: 3856\nnorm(gX): 0.05180884470868602\nIter: 3857\nnorm(gX): 0.05208883348488027\nIter: 3858\nnorm(gX): 0.0523575683310498\nIter: 3859\nnorm(gX): 0.052614982593460966\nIter: 3860\nnorm(gX): 0.052861013137328036\nIter: 3861\nnorm(gX): 0.053095600315366656\nIter: 3862\nnorm(gX): 0.053318687938732845\nIter: 3863\nnorm(gX): 0.05353022325009812\nIter: 3864\nnorm(gX): 0.0537301568986548\nIter: 3865\nnorm(gX): 0.05391844291689263\nIter: 3866\nnorm(gX): 0.054095038698996414\nIter: 3867\nnorm(gX): 0.05425990498066018\nIter: 3868\nnorm(gX): 0.054413005820273024\nIter: 3869\nnorm(gX): 0.054554308581264005\nIter: 3870\nnorm(gX): 0.05468378391556623\nIter: 3871\nnorm(gX): 0.05480140574805739\nIter: 3872\nnorm(gX): 0.05490715126190642\nIter: 3873\nnorm(gX): 0.055001000884739415\nIter: 3874\nnorm(gX): 0.05508293827556048\nIter: 3875\nnorm(gX): 0.05515295031234774\nIter: 3876\nnorm(gX): 0.05521102708027237\nIter: 3877\nnorm(gX): 0.05525716186050374\nIter: 3878\nnorm(gX): 0.05529135111952593\nIter: 3879\nnorm(gX): 0.05531359449896574\nIter: 3880\nnorm(gX): 0.05532389480585075\nIter: 3881\nnorm(gX): 0.05532225800332132\nIter: 3882\nnorm(gX): 0.05530869320174092\nIter: 3883\nnorm(gX): 0.05528321265018764\nIter: 3884\nnorm(gX): 0.05524583172833251\nIter: 3885\nnorm(gX): 0.055196568938680426\nIter: 3886\nnorm(gX): 0.05513544589918346\nIter: 3887\nnorm(gX): 0.05506248733620965\nIter: 3888\nnorm(gX): 0.0549777210779007\nIter: 3889\nnorm(gX): 0.05488117804788549\nIter: 3890\nnorm(gX): 0.054772892259442325\nIter: 3891\nnorm(gX): 0.05465290081001806\nIter: 3892\nnorm(gX): 0.05452124387624491\nIter: 3893\nnorm(gX): 0.05437796470940869\nIter: 3894\nnorm(gX): 0.05422310963142808\nIter: 3895\nnorm(gX): 0.05405672803138712\nIter: 3896\nnorm(gX): 0.053878872362685105\nIter: 3897\nnorm(gX): 0.05368959814080669\nIter: 3898\nnorm(gX): 0.0534889639418438\nIter: 3899\nnorm(gX): 0.05327703140176451\nIter: 3900\nnorm(gX): 0.05305386521655556\nIter: 3901\nnorm(gX): 0.05281953314329404\nIter: 3902\nnorm(gX): 0.05257410600226795\nIter: 3903\nnorm(gX): 0.052317657680185925\nIter: 3904\nnorm(gX): 0.05205026513468308\nIter: 3905\nnorm(gX): 0.0517720084001483\nIter: 3906\nnorm(gX): 0.051482970595073625\nIter: 3907\nnorm(gX): 0.05118323793104985\nIter: 3908\nnorm(gX): 0.05087289972356394\nIter: 3909\nnorm(gX): 0.05055204840481267\nIter: 3910\nnorm(gX): 0.050220779538668193\nIter: 3911\nnorm(gX): 0.04987919183808964\nIter: 3912\nnorm(gX): 0.04952738718515968\nIter: 3913\nnorm(gX): 0.04916547065403927\nIter: 3914\nnorm(gX): 0.048793550537118056\nIter: 3915\nnorm(gX): 0.0484117383746802\nIter: 3916\nnorm(gX): 0.048020148988468396\nIter: 3917\nnorm(gX): 0.04761890051947799\nIter: 3918\nnorm(gX): 0.047208114470463074\nIter: 3919\nnorm(gX): 0.046787915753615114\nIter: 3920\nnorm(gX): 0.04635843274394145\nIter: 3921\nnorm(gX): 0.04591979733889878\nIter: 3922\nnorm(gX): 0.045472145025005774\nIter: 3923\nnorm(gX): 0.04501561495206111\nIter: 3924\nnorm(gX): 0.04455035001584618\nIter: 3925\nnorm(gX): 0.04407649695014568\nIter: 3926\nnorm(gX): 0.043594206429140935\nIter: 3927\nnorm(gX): 0.04310363318120321\nIter: 3928\nnorm(gX): 0.04260493611540655\nIter: 3929\nnorm(gX): 0.04209827846209041\nIter: 3930\nnorm(gX): 0.041583827929039925\nIter: 3931\nnorm(gX): 0.04106175687503004\nIter: 3932\nnorm(gX): 0.04053224250266243\nIter: 3933\nnorm(gX): 0.03999546707269534\nIter: 3934\nnorm(gX): 0.03945161814234743\nIter: 
3935\nnorm(gX): 0.03890088883032849\nIter: 3936\nnorm(gX): 0.03834347811174194\nIter: 3937\nnorm(gX): 0.03777959114638816\nIter: 3938\nnorm(gX): 0.037209439644496745\nIter: 3939\nnorm(gX): 0.03663324227437094\nIter: 3940\nnorm(gX): 0.03605122511711438\nIter: 3941\nnorm(gX): 0.035463622174241004\nIter: 3942\nnorm(gX): 0.034870675934789624\nIter: 3943\nnorm(gX): 0.034272638009449095\nIter: 3944\nnorm(gX): 0.033669769840269065\nIter: 3945\nnorm(gX): 0.03306234349566201\nIter: 3946\nnorm(gX): 0.03245064256182273\nIter: 3947\nnorm(gX): 0.03183496314321344\nIter: 3948\nnorm(gX): 0.031215614986516822\nIter: 3949\nnorm(gX): 0.030592922744533207\nIter: 3950\nnorm(gX): 0.02996722739879007\nIter: 3951\nnorm(gX): 0.029338887862217018\nIter: 3952\nnorm(gX): 0.02870828278621807\nIter: 3953\nnorm(gX): 0.02807581259985551\nIter: 3954\nnorm(gX): 0.027441901812430603\nIter: 3955\nnorm(gX): 0.02680700161498727\nIter: 3956\nnorm(gX): 0.026171592820629655\nIter: 3957\nnorm(gX): 0.025536189188340976\nIter: 3958\nnorm(gX): 0.024901341179915916\nIter: 3959\nnorm(gX): 0.024267640204617892\nIter: 3960\nnorm(gX): 0.02363572341068169\nIter: 3961\nnorm(gX): 0.023006279086584435\nIter: 3962\nnorm(gX): 0.022380052737143587\nIter: 3963\nnorm(gX): 0.021757853898815557\nIter: 3964\nnorm(gX): 0.021140563753758158\nIter: 3965\nnorm(gX): 0.020529143590600818\nIter: 3966\nnorm(gX): 0.019924644138723042\nIter: 3967\nnorm(gX): 0.01932821576775438\nIter: 3968\nnorm(gX): 0.018741119489534698\nIter: 3969\nnorm(gX): 0.01816473861898735\nIter: 3970\nnorm(gX): 0.017600590834623616\nIter: 3971\nnorm(gX): 0.017050340219061867\nIter: 3972\nnorm(gX): 0.016515808645001125\nIter: 3973\nnorm(gX): 0.015998985594388455\nIter: 3974\nnorm(gX): 0.015502035155319963\nIter: 3975\nnorm(gX): 0.015027298540911295\nIter: 3976\nnorm(gX): 0.014577290045594339\nIter: 3977\nnorm(gX): 0.014154683955368046\nIter: 3978\nnorm(gX): 0.01376228965936481\nIter: 3979\nnorm(gX): 0.013403012216433684\nIter: 3980\nnorm(gX): 0.013079796096782361\nIter: 3981\nnorm(gX): 0.012795550935333563\nIter: 3982\nnorm(gX): 0.012553060029113251\nIter: 3983\nnorm(gX): 0.012354874957113645\nIter: 3984\nnorm(gX): 0.012203202808819693\nIter: 3985\nnorm(gX): 0.012099795478982121\nIter: 3986\nnorm(gX): 0.012045852486441496\nIter: 3987\nnorm(gX): 0.01204194896516746\nIter: 3988\nnorm(gX): 0.01208799835898778\nIter: 3989\nnorm(gX): 0.012183255079758332\nIter: 3990\nnorm(gX): 0.012326356836474869\nIter: 3991\nnorm(gX): 0.012515400854417894\nIter: 3992\nnorm(gX): 0.012748044100723098\nIter: 3993\nnorm(gX): 0.013021615735279709\nIter: 3994\nnorm(gX): 0.013333230401893654\nIter: 3995\nnorm(gX): 0.013679893121326465\nIter: 3996\nnorm(gX): 0.014058589588896135\nIter: 3997\nnorm(gX): 0.01446635878911233\nIter: 3998\nnorm(gX): 0.014900347442530658\nIter: 3999\nnorm(gX): 0.01535784763234811\nIter: 4000\nnorm(gX): 0.015836320010076288\nIter: 4001\nnorm(gX): 0.01633340539138436\nIter: 4002\nnorm(gX): 0.016846927519180762\nIter: 4003\nnorm(gX): 0.01737488947522572\nIter: 4004\nnorm(gX): 0.017915465807349918\nIter: 4005\nnorm(gX): 0.018466992003468373\nIter: 4006\nnorm(gX): 0.019027952541365883\nIter: 4007\nnorm(gX): 0.01959696840129403\nIter: 4008\nnorm(gX): 0.020172784653413902\nIter: 4009\nnorm(gX): 0.020754258520475563\nIter: 4010\nnorm(gX): 0.021340348159188602\nIter: 4011\nnorm(gX): 0.021930102290914192\nIter: 4012\nnorm(gX): 0.022522650734155766\nIter: 4013\nnorm(gX): 0.023117195839033232\nIter: 4014\nnorm(gX): 0.023713004790580798\nIter: 4015\nnorm(gX): 0.02430940272790654\nIter: 
4016\nnorm(gX): 0.024905766615916626\nIter: 4017\nnorm(gX): 0.025501519802127847\nIter: 4018\nnorm(gX): 0.026096127191241263\nIter: 4019\nnorm(gX): 0.026689090972789348\nIter: 4020\nnorm(gX): 0.027279946841304303\nIter: 4021\nnorm(gX): 0.0278682606533496\nIter: 4022\nnorm(gX): 0.02845362547095914\nIter: 4023\nnorm(gX): 0.029035658946156027\nIter: 4024\nnorm(gX): 0.02961400100610796\nIter: 4025\nnorm(gX): 0.030188311803075987\nIter: 4026\nnorm(gX): 0.030758269897485233\nIter: 4027\nnorm(gX): 0.03132357064618865\nIter: 4028\nnorm(gX): 0.031883924771433404\nIter: 4029\nnorm(gX): 0.032439057088955975\nIter: 4030\nnorm(gX): 0.03298870537634951\nIter: 4031\nnorm(gX): 0.033532619365177554\nIter: 4032\nnorm(gX): 0.034070559842274396\nIter: 4033\nnorm(gX): 0.034602297847569366\nIter: 4034\nnorm(gX): 0.035127613957252814\nIter: 4035\nnorm(gX): 0.035646297642554406\nIter: 4036\nnorm(gX): 0.03615814669547534\nIter: 4037\nnorm(gX): 0.03666296671401333\nIter: 4038\nnorm(gX): 0.03716057064018401\nIter: 4039\nnorm(gX): 0.03765077834503552\nIter: 4040\nnorm(gX): 0.03813341625550279\nIter: 4041\nnorm(gX): 0.03860831701854656\nIter: 4042\nnorm(gX): 0.039075319198604574\nIter: 4043\nnorm(gX): 0.039534267004766345\nIter: 4044\nnorm(gX): 0.039985010044557165\nIter: 4045\nnorm(gX): 0.04042740310156332\nIter: 4046\nnorm(gX): 0.04086130593438586\nIter: 4047\nnorm(gX): 0.04128658309477696\nIter: 4048\nnorm(gX): 0.04170310376297143\nIter: 4049\nnorm(gX): 0.04211074159849342\nIter: 4050\nnorm(gX): 0.042509374604879385\nIter: 4051\nnorm(gX): 0.04289888500692728\nIter: 4052\nnorm(gX): 0.04327915913926376\nIter: 4053\nnorm(gX): 0.04365008734506016\nIter: 4054\nnorm(gX): 0.04401156388398716\nIter: 4055\nnorm(gX): 0.04436348684843336\nIter: 4056\nnorm(gX): 0.04470575808724202\nIter: 4057\nnorm(gX): 0.04503828313622915\nIter: 4058\nnorm(gX): 0.045360971154831814\nIter: 4059\nnorm(gX): 0.045673734868318154\nIter: 4060\nnorm(gX): 0.045976490514997706\nIter: 4061\nnorm(gX): 0.04626915779800393\nIter: 4062\nnorm(gX): 0.04655165984118381\nIter: 4063\nnorm(gX): 0.04682392314871867\nIter: 4064\nnorm(gX): 0.047085877568129264\nIter: 4065\nnorm(gX): 0.047337456256326425\nIter: 4066\nnorm(gX): 0.04757859564845618\nIter: 4067\nnorm(gX): 0.04780923542923629\nIter: 4068\nnorm(gX): 0.04802931850656305\nIter: 4069\nnorm(gX): 0.048238790987184935\nIter: 4070\nnorm(gX): 0.04843760215423478\nIter: 4071\nnorm(gX): 0.048625704446433896\nIter: 4072\nnorm(gX): 0.048803053438821\nIter: 4073\nnorm(gX): 0.04896960782486313\nIter: 4074\nnorm(gX): 0.04912532939977532\nIter: 4075\nnorm(gX): 0.04927018304500554\nIter: 4076\nnorm(gX): 0.04940413671367758\nIter: 4077\nnorm(gX): 0.04952716141696483\nIter: 4078\nnorm(gX): 0.04963923121128737\nIter: 4079\nnorm(gX): 0.049740323186227745\nIter: 4080\nnorm(gX): 0.049830417453110475\nIter: 4081\nnorm(gX): 0.049909497134185625\nIter: 4082\nnorm(gX): 0.04997754835233357\nIter: 4083\nnorm(gX): 0.05003456022126141\nIter: 4084\nnorm(gX): 0.05008052483612289\nIter: 4085\nnorm(gX): 0.05011543726454448\nIter: 4086\nnorm(gX): 0.050139295537993685\nIter: 4087\nnorm(gX): 0.05015210064348182\nIter: 4088\nnorm(gX): 0.05015385651554281\nIter: 4089\nnorm(gX): 0.05014457002852419\nIter: 4090\nnorm(gX): 0.050124250989103086\nIter: 4091\nnorm(gX): 0.05009291212905412\nIter: 4092\nnorm(gX): 0.05005056909826133\nIter: 4093\nnorm(gX): 0.04999724045795051\nIter: 4094\nnorm(gX): 0.04993294767414801\nIter: 4095\nnorm(gX): 0.04985771511137897\nIter: 4096\nnorm(gX): 0.049771570026580336\nIter: 4097\nnorm(gX): 
0.049674542563299416\nIter: 4098\nnorm(gX): 0.04956666574610752\nIter: 4099\nnorm(gX): 0.049447975475341514\nIter: 4100\nnorm(gX): 0.04931851052212122\nIter: 4101\nnorm(gX): 0.04917831252370686\nIter: 4102\nnorm(gX): 0.04902742597922279\nIter: 4103\nnorm(gX): 0.048865898245794494\nIter: 4104\nnorm(gX): 0.04869377953512329\nIter: 4105\nnorm(gX): 0.04851112291055612\nIter: 4106\nnorm(gX): 0.04831798428472195\nIter: 4107\nnorm(gX): 0.04811442241777321\nIter: 4108\nnorm(gX): 0.04790049891629406\nIter: 4109\nnorm(gX): 0.04767627823300233\nIter: 4110\nnorm(gX): 0.047441827667265755\nIter: 4111\nnorm(gX): 0.04719721736656656\nIter: 4112\nnorm(gX): 0.046942520329006004\nIter: 4113\nnorm(gX): 0.0466778124069694\nIter: 4114\nnorm(gX): 0.04640317231205162\nIter: 4115\nnorm(gX): 0.046118681621426974\nIter: 4116\nnorm(gX): 0.045824424785774684\nIter: 4117\nnorm(gX): 0.04552048913894829\nIter: 4118\nnorm(gX): 0.04520696490957219\nIter: 4119\nnorm(gX): 0.04488394523477552\nIter: 4120\nnorm(gX): 0.04455152617626906\nIter: 4121\nnorm(gX): 0.04420980673903846\nIter: 4122\nnorm(gX): 0.04385888889292509\nIter: 4123\nnorm(gX): 0.04349887759737325\nIter: 4124\nnorm(gX): 0.04312988082973676\nIter: 4125\nnorm(gX): 0.04275200961744149\nIter: 4126\nnorm(gX): 0.04236537807450135\nIter: 4127\nnorm(gX): 0.041970103442781966\nIter: 4128\nnorm(gX): 0.04156630613855483\nIter: 4129\nnorm(gX): 0.04115410980489191\nIter: 4130\nnorm(gX): 0.04073364137057413\nIter: 4131\nnorm(gX): 0.040305031116150726\nIter: 4132\nnorm(gX): 0.03986841274800308\nIter: 4133\nnorm(gX): 0.03942392348123421\nIter: 4134\nnorm(gX): 0.03897170413243342\nIter: 4135\nnorm(gX): 0.03851189922333697\nIter: 4136\nnorm(gX): 0.038044657096706506\nIter: 4137\nnorm(gX): 0.037570130045739523\nIter: 4138\nnorm(gX): 0.037088474458626536\nIter: 4139\nnorm(gX): 0.036599850979989496\nIter: 4140\nnorm(gX): 0.036104424691189636\nIter: 4141\nnorm(gX): 0.03560236531175148\nIter: 4142\nnorm(gX): 0.035093847424445976\nIter: 4143\nnorm(gX): 0.03457905072689082\nIter: 4144\nnorm(gX): 0.03405816031296365\nIter: 4145\nnorm(gX): 0.03353136698767726\nIter: 4146\nnorm(gX): 0.03299886761981081\nIter: 4147\nnorm(gX): 0.03246086553703418\nIter: 4148\nnorm(gX): 0.03191757096906491\nIter: 4149\nnorm(gX): 0.03136920154509553\nIter: 4150\nnorm(gX): 0.030815982852681164\nIter: 4151\nnorm(gX): 0.030258149066313494\nIter: 4152\nnorm(gX): 0.029695943655097173\nIter: 4153\nnorm(gX): 0.029129620180402306\nIter: 4154\nnorm(gX): 0.02855944319590899\nIter: 4155\nnorm(gX): 0.027985689264428668\nIter: 4156\nnorm(gX): 0.027408648107976063\nIter: 4157\nnorm(gX): 0.026828623910197892\nIter: 4158\nnorm(gX): 0.026245936793027464\nIter: 4159\nnorm(gX): 0.025660924492934552\nIter: 4160\nnorm(gX): 0.0250739442658974\nIter: 4161\nnorm(gX): 0.02448537505465478\nIter: 4162\nnorm(gX): 0.023895619956865465\nIter: 4163\nnorm(gX): 0.02330510903842752\nIter: 4164\nnorm(gX): 0.02271430254252144\nIter: 4165\nnorm(gX): 0.02212369455200712\nIter: 4166\nnorm(gX): 0.021533817170245374\nIter: 4167\nnorm(gX): 0.02094524529328277\nIter: 4168\nnorm(gX): 0.02035860205425287\nIter: 4169\nnorm(gX): 0.019774565027849654\nIter: 4170\nnorm(gX): 0.01919387328841503\nIter: 4171\nnorm(gX): 0.01861733541732879\nIter: 4172\nnorm(gX): 0.018045838552185296\nIter: 4173\nnorm(gX): 0.017480358558040757\nIter: 4174\nnorm(gX): 0.01692197137500093\nIter: 4175\nnorm(gX): 0.016371865549672873\nIter: 4176\nnorm(gX): 0.015831355881493345\nIter: 4177\nnorm(gX): 0.015301897996197916\nIter: 4178\nnorm(gX): 0.01478510348272379\nIter: 
4179\nnorm(gX): 0.014282754978812602\nIter: 4180\nnorm(gX): 0.013796820246201527\nIter: 4181\nnorm(gX): 0.013329463823841684\nIter: 4182\nnorm(gX): 0.012883054284919435\nIter: 4183\nnorm(gX): 0.012460164472599598\nIter: 4184\nnorm(gX): 0.012063561418849358\nIter: 4185\nnorm(gX): 0.01169618209517423\nIter: 4186\nnorm(gX): 0.011361090927043063\nIter: 4187\nnorm(gX): 0.011061415435210697\nIter: 4188\nnorm(gX): 0.01080025780032528\nIter: 4189\nnorm(gX): 0.010580582862317832\nIter: 4190\nnorm(gX): 0.010405087080698464\nIter: 4191\nnorm(gX): 0.010276057831641205\nIter: 4192\nnorm(gX): 0.010195237037000062\nIter: 4193\nnorm(gX): 0.010163705986051158\nIter: 4194\nnorm(gX): 0.010181807845835902\nIter: 4195\nnorm(gX): 0.010249120061070111\nIter: 4196\nnorm(gX): 0.010364481264884522\nIter: 4197\nnorm(gX): 0.010526068417078482\nIter: 4198\nnorm(gX): 0.01073151218584631\nIter: 4199\nnorm(gX): 0.010978034108994254\nIter: 4200\nnorm(gX): 0.011262588535442371\nIter: 4201\nnorm(gX): 0.011581995116016626\nIter: 4202\nnorm(gX): 0.011933052221403624\nIter: 4203\nnorm(gX): 0.01231262657026409\nIter: 4204\nnorm(gX): 0.012717718448291634\nIter: 4205\nnorm(gX): 0.013145504694022285\nIter: 4206\nnorm(gX): 0.0135933631182065\nIter: 4207\nnorm(gX): 0.014058882487366572\nIter: 4208\nnorm(gX): 0.014539861995466061\nIter: 4209\nnorm(gX): 0.01503430358838794\nIter: 4210\nnorm(gX): 0.0155403998238318\nIter: 4211\nnorm(gX): 0.016056519284842887\nIter: 4212\nnorm(gX): 0.016581190989287814\nIter: 4213\nnorm(gX): 0.017113088774233495\nIter: 4214\nnorm(gX): 0.017651016281331443\nIter: 4215\nnorm(gX): 0.018193892912175276\nIter: 4216\nnorm(gX): 0.01874074094251253\nIter: 4217\nnorm(gX): 0.019290673862852204\nIter: 4218\nnorm(gX): 0.019842885934941816\nIter: 4219\nnorm(gX): 0.020396642906138716\nIter: 4220\nnorm(gX): 0.02095127379741102\nIter: 4221\nnorm(gX): 0.02150616366857895\nIter: 4222\nnorm(gX): 0.022060747261370663\nIter: 4223\nnorm(gX): 0.02261450342351863\nIter: 4224\nnorm(gX): 0.023166950222914984\nIter: 4225\nnorm(gX): 0.023717640668476318\nIter: 4226\nnorm(gX): 0.024266158962442833\nIter: 4227\nnorm(gX): 0.024812117217092602\nIter: 4228\nnorm(gX): 0.02535515257660136\nIter: 4229\nnorm(gX): 0.025894924692024073\nIter: 4230\nnorm(gX): 0.02643111350392726\nIter: 4231\nnorm(gX): 0.026963417293066033\nIter: 4232\nnorm(gX): 0.027491550964636475\nIter: 4233\nnorm(gX): 0.028015244536221118\nIter: 4234\nnorm(gX): 0.028534241803527434\nIter: 4235\nnorm(gX): 0.029048299161401617\nIter: 4236\nnorm(gX): 0.029557184560655658\nIter: 4237\nnorm(gX): 0.030060676583801214\nIter: 4238\nnorm(gX): 0.030558563625042407\nIter: 4239\nnorm(gX): 0.031050643161773944\nIter: 4240\nnorm(gX): 0.031536721106513066\nIter: 4241\nnorm(gX): 0.03201661122966142\nIter: 4242\nnorm(gX): 0.03249013464468755\nIter: 4243\nnorm(gX): 0.032957119348400545\nIter: 4244\nnorm(gX): 0.03341739980993399\nIter: 4245\nnorm(gX): 0.03387081660287108\nIter: 4246\nnorm(gX): 0.03431721607556169\nIter: 4247\nnorm(gX): 0.034756450055394635\nIter: 4248\nnorm(gX): 0.035188375583211896\nIter: 4249\nnorm(gX): 0.035612854674555654\nIter: 4250\nnorm(gX): 0.03602975410483953\nIter: 4251\nnorm(gX): 0.03643894521585305\nIter: 4252\nnorm(gX): 0.03684030374131521\nIter: 4253\nnorm(gX): 0.03723370964948175\nIter: 4254\nnorm(gX): 0.03761904700099036\nIter: 4255\nnorm(gX): 0.03799620382039719\nIter: 4256\nnorm(gX): 0.03836507197995582\nIter: 4257\nnorm(gX): 0.03872554709443207\nIter: 4258\nnorm(gX): 0.03907752842578296\nIter: 4259\nnorm(gX): 0.03942091879675784\nIter: 4260\nnorm(gX): 
0.03975562451249908\nIter: 4261\nnorm(gX): 0.0400815552893659\nIter: 4262\nnorm(gX): 0.04039862419024168\nIter: 4263\nnorm(gX): 0.04070674756573904\nIter: 4264\nnorm(gX): 0.04100584500065982\nIter: 4265\nnorm(gX): 0.04129583926525846\nIter: 4266\nnorm(gX): 0.04157665627080208\nIter: 4267\nnorm(gX): 0.041848225029042806\nIter: 4268\nnorm(gX): 0.04211047761519116\nIter: 4269\nnorm(gX): 0.0423633491340883\nIter: 4270\nnorm(gX): 0.042606777689239414\nIter: 4271\nnorm(gX): 0.042840704354456006\nIter: 4272\nnorm(gX): 0.043065073147835815\nIter: 4273\nnorm(gX): 0.04327983100787576\nIter: 4274\nnorm(gX): 0.0434849277714902\nIter: 4275\nnorm(gX): 0.04368031615376017\nIter: 4276\nnorm(gX): 0.04386595172924368\nIter: 4277\nnorm(gX): 0.04404179291470773\nIter: 4278\nnorm(gX): 0.04420780095310733\nIter: 4279\nnorm(gX): 0.04436393989871562\nIter: 4280\nnorm(gX): 0.044510176603299895\nIter: 4281\nnorm(gX): 0.044646480703207675\nIter: 4282\nnorm(gX): 0.044772824607281145\nIter: 4283\nnorm(gX): 0.044889183485544465\nIter: 4284\nnorm(gX): 0.04499553525852467\nIter: 4285\nnorm(gX): 0.04509186058722564\nIter: 4286\nnorm(gX): 0.045178142863587095\nIter: 4287\nnorm(gX): 0.04525436820147811\nIter: 4288\nnorm(gX): 0.04532052542808954\nIter: 4289\nnorm(gX): 0.0453766060757451\nIter: 4290\nnorm(gX): 0.04542260437404524\nIter: 4291\nnorm(gX): 0.045458517242347014\nIter: 4292\nnorm(gX): 0.04548434428250204\nIter: 4293\nnorm(gX): 0.04550008777191001\nIter: 4294\nnorm(gX): 0.04550575265674954\nIter: 4295\nnorm(gX): 0.04550134654550313\nIter: 4296\nnorm(gX): 0.04548687970262567\nIter: 4297\nnorm(gX): 0.04546236504247352\nIter: 4298\nnorm(gX): 0.0454278181233998\nIter: 4299\nnorm(gX): 0.04538325714204431\nIter: 4300\nnorm(gX): 0.0453287029278383\nIter: 4301\nnorm(gX): 0.04526417893768711\nIter: 4302\nnorm(gX): 0.0451897112508769\nIter: 4303\nnorm(gX): 0.04510532856420273\nIter: 4304\nnorm(gX): 0.04501106218732138\nIter: 4305\nnorm(gX): 0.0449069460383832\nIter: 4306\nnorm(gX): 0.04479301663992297\nIter: 4307\nnorm(gX): 0.04466931311507229\nIter: 4308\nnorm(gX): 0.04453587718412093\nIter: 4309\nnorm(gX): 0.04439275316143306\nIter: 4310\nnorm(gX): 0.04423998795281113\nIter: 4311\nnorm(gX): 0.044077631053313616\nIter: 4312\nnorm(gX): 0.04390573454559021\nIter: 4313\nnorm(gX): 0.04372435309881901\nIter: 4314\nnorm(gX): 0.04353354396825768\nIter: 4315\nnorm(gX): 0.043333366995539364\nIter: 4316\nnorm(gX): 0.04312388460975344\nIter: 4317\nnorm(gX): 0.04290516182941892\nIter: 4318\nnorm(gX): 0.04267726626543576\nIter: 4319\nnorm(gX): 0.042440268125130035\nIter: 4320\nnorm(gX): 0.042194240217513446\nIter: 4321\nnorm(gX): 0.04193925795987817\nIter: 4322\nnorm(gX): 0.04167539938587221\nIter: 4323\nnorm(gX): 0.04140274515521947\nIter: 4324\nnorm(gX): 0.04112137856525324\nIter: 4325\nnorm(gX): 0.040831385564464216\nIter: 4326\nnorm(gX): 0.04053285476823735\nIter: 4327\nnorm(gX): 0.04022587747707109\nIter: 4328\nnorm(gX): 0.03991054769747323\nIter: 4329\nnorm(gX): 0.039586962165871424\nIter: 4330\nnorm(gX): 0.0392552203758001\nIter: 4331\nnorm(gX): 0.038915424608748604\nIter: 4332\nnorm(gX): 0.03856767996903644\nIter: 4333\nnorm(gX): 0.038212094423142974\nIter: 4334\nnorm(gX): 0.03784877884396133\nIter: 4335\nnorm(gX): 0.0374778470605093\nIter: 4336\nnorm(gX): 0.03709941591365688\nIter: 4337\nnorm(gX): 0.03671360531855307\nIter: 4338\nnorm(gX): 0.03632053833443505\nIter: 4339\nnorm(gX): 0.03592034124265244\nIter: 4340\nnorm(gX): 0.0355131436337935\nIter: 4341\nnorm(gX): 0.035099078504932754\nIter: 4342\nnorm(gX): 
0.034678282368093345\nIter: 4343\nnorm(gX): 0.03425089537124675\nIter: 4344\nnorm(gX): 0.03381706143320049\nIter: 4345\nnorm(gX): 0.033376928394024244\nIter: 4346\nnorm(gX): 0.03293064818279193\nIter: 4347\nnorm(gX): 0.03247837700464582\nIter: 4348\nnorm(gX): 0.03202027554950138\nIter: 4349\nnorm(gX): 0.0315565092249621\nIter: 4350\nnorm(gX): 0.03108724841637424\nIter: 4351\nnorm(gX): 0.030612668777332756\nIter: 4352\nnorm(gX): 0.030132951554411657\nIter: 4353\nnorm(gX): 0.029648283950397047\nIter: 4354\nnorm(gX): 0.029158859530892\nIter: 4355\nnorm(gX): 0.028664878679842888\nIter: 4356\nnorm(gX): 0.028166549110313154\nIter: 4357\nnorm(gX): 0.027664086437750365\nIter: 4358\nnorm(gX): 0.027157714823987412\nIter: 4359\nnorm(gX): 0.02664766770146247\nIter: 4360\nnorm(gX): 0.026134188588477134\nIter: 4361\nnorm(gX): 0.025617532007889837\nIter: 4362\nnorm(gX): 0.025097964523529512\nIter: 4363\nnorm(gX): 0.02457576591059304\nIter: 4364\nnorm(gX): 0.024051230478782215\nIter: 4365\nnorm(gX): 0.023524668569627517\nIter: 4366\nnorm(gX): 0.022996408252554938\nIter: 4367\nnorm(gX): 0.022466797247794944\nIter: 4368\nnorm(gX): 0.021936205108225717\nIter: 4369\nnorm(gX): 0.02140502569654188\nIter: 4370\nnorm(gX): 0.020873679999137657\nIter: 4371\nnorm(gX): 0.020342619323161847\nIter: 4372\nnorm(gX): 0.019812328928701828\nIter: 4373\nnorm(gX): 0.01928333215358301\nIter: 4374\nnorm(gX): 0.018756195093284356\nIter: 4375\nnorm(gX): 0.018231531902698297\nIter: 4376\nnorm(gX): 0.017710010788746833\nIter: 4377\nnorm(gX): 0.017192360761926626\nIter: 4378\nnorm(gX): 0.016679379208775748\nIter: 4379\nnorm(gX): 0.016171940333190735\nIter: 4380\nnorm(gX): 0.0156710044884412\nIter: 4381\nnorm(gX): 0.015177628378714877\nIter: 4382\nnorm(gX): 0.014692976041489033\nIter: 4383\nnorm(gX): 0.014218330421664977\nIter: 4384\nnorm(gX): 0.013755105204024627\nIter: 4385\nnorm(gX): 0.013304856370476532\nIter: 4386\nnorm(gX): 0.012869292681405532\nIter: 4387\nnorm(gX): 0.012450283938514637\nIter: 4388\nnorm(gX): 0.012049865472571078\nIter: 4389\nnorm(gX): 0.011670236834899633\nIter: 4390\nnorm(gX): 0.011313752209779837\nIter: 4391\nnorm(gX): 0.01098289970554785\nIter: 4392\nnorm(gX): 0.01068026658073681\nIter: 4393\nnorm(gX): 0.010408487826976655\nIter: 4394\nnorm(gX): 0.010170176592583183\nIter: 4395\nnorm(gX): 0.009967836866073864\nIter: 4396\nnorm(gX): 0.009803761652227877\nIter: 4397\nnorm(gX): 0.009679923263541527\nIter: 4398\nnorm(gX): 0.00959786563691908\nIter: 4399\nnorm(gX): 0.009558610785555548\nIter: 4400\nnorm(gX): 0.009562591615939605\nIter: 4401\nnorm(gX): 0.009609620815882174\nIter: 4402\nnorm(gX): 0.009698900601842905\nIter: 4403\nnorm(gX): 0.009829071928237593\nIter: 4404\nnorm(gX): 0.00999829593829147\nIter: 4405\nnorm(gX): 0.010204356461563108\nIter: 4406\nnorm(gX): 0.010444771011682374\nIter: 4407\nnorm(gX): 0.010716898879488134\nIter: 4408\nnorm(gX): 0.011018037743424188\nIter: 4409\nnorm(gX): 0.011345503687022552\nIter: 4410\nnorm(gX): 0.011696692740004715\nIter: 4411\nnorm(gX): 0.012069124525713269\nIter: 4412\nnorm(gX): 0.012460470147839827\nIter: 4413\nnorm(gX): 0.012868567180857021\nIter: 4414\nnorm(gX): 0.01329142475611272\nIter: 4415\nnorm(gX): 0.01372722148916827\nIter: 4416\nnorm(gX): 0.014174298563961825\nIter: 4417\nnorm(gX): 0.014631149807784193\nIter: 4418\nnorm(gX): 0.015096410136214187\nIter: 4419\nnorm(gX): 0.015568843357456813\nIter: 4420\nnorm(gX): 0.016047330012369302\nIter: 4421\nnorm(gX): 0.01653085568702904\nIter: 4422\nnorm(gX): 0.017018500058835474\nIter: 4423\nnorm(gX): 
0.017509426812416035\nIter: 4424\nnorm(gX): 0.018002874476594467\nIter: 4425\nnorm(gX): 0.018498148177857325\nIter: 4426\nnorm(gX): 0.018994612271078036\nIter: 4427\nnorm(gX): 0.019491683788476766\nIter: 4428\nnorm(gX): 0.019988826637977963\nIter: 4429\nnorm(gX): 0.020485546478991215\nIter: 4430\nnorm(gX): 0.020981386204747345\nIter: 4431\nnorm(gX): 0.021475921963929306\nIter: 4432\nnorm(gX): 0.021968759659343848\nIter: 4433\nnorm(gX): 0.0224595318669432\nIter: 4434\nnorm(gX): 0.02294789512432577\nIter: 4435\nnorm(gX): 0.023433527543344667\nIter: 4436\nnorm(gX): 0.02391612670675844\nIter: 4437\nnorm(gX): 0.02439540781361529\nIter: 4438\nnorm(gX): 0.02487110204243526\nIter: 4439\nnorm(gX): 0.025342955105080337\nIter: 4440\nnorm(gX): 0.025810725967726034\nIter: 4441\nnorm(gX): 0.02627418571828292\nIter: 4442\nnorm(gX): 0.02673311656227471\nIter: 4443\nnorm(gX): 0.027187310931566316\nIter: 4444\nnorm(gX): 0.027636570692242044\nIter: 4445\nnorm(gX): 0.028080706439749192\nIter: 4446\nnorm(gX): 0.028519536870927877\nIter: 4447\nnorm(gX): 0.028952888223849737\nIter: 4448\nnorm(gX): 0.029380593777568298\nIter: 4449\nnorm(gX): 0.029802493404811517\nIter: 4450\nnorm(gX): 0.030218433171600768\nIter: 4451\nnorm(gX): 0.030628264978437986\nIter: 4452\nnorm(gX): 0.031031846238410432\nIter: 4453\nnorm(gX): 0.03142903958809561\nIter: 4454\nnorm(gX): 0.0318197126276875\nIter: 4455\nnorm(gX): 0.032203737687138495\nIter: 4456\nnorm(gX): 0.03258099161549114\nIter: 4457\nnorm(gX): 0.03295135559097369\nIter: 4458\nnorm(gX): 0.03331471494962313\nIter: 4459\nnorm(gX): 0.03367095903049554\nIter: 4460\nnorm(gX): 0.03401998103576196\nIter: 4461\nnorm(gX): 0.03436167790414024\nIter: 4462\nnorm(gX): 0.03469595019630001\nIter: 4463\nnorm(gX): 0.03502270199104616\nIter: 4464\nnorm(gX): 0.035341840791166285\nIter: 4465\nnorm(gX): 0.03565327743801694\nIter: 4466\nnorm(gX): 0.03595692603395125\nIter: 4467\nnorm(gX): 0.036252703871838904\nIter: 4468\nnorm(gX): 0.03654053137095689\nIter: 4469\nnorm(gX): 0.03682033201867411\nIter: 4470\nnorm(gX): 0.03709203231732659\nIter: 4471\nnorm(gX): 0.03735556173580825\nIter: 4472\nnorm(gX): 0.03761085266541972\nIter: 4473\nnorm(gX): 0.037857840379562066\nIter: 4474\nnorm(gX): 0.0380964629969296\nIter: 4475\nnorm(gX): 0.038326661447832804\nIter: 4476\nnorm(gX): 0.03854837944339061\nIter: 4477\nnorm(gX): 0.03876156344730045\nIter: 4478\nnorm(gX): 0.03896616264995035\nIter: 4479\nnorm(gX): 0.039162128944641855\nIter: 4480\nnorm(gX): 0.0393494169057318\nIter: 4481\nnorm(gX): 0.03952798376851498\nIter: 4482\nnorm(gX): 0.03969778941067222\nIter: 4483\nnorm(gX): 0.039858796335137454\nIter: 4484\nnorm(gX): 0.04001096965424998\nIter: 4485\nnorm(gX): 0.04015427707507868\nIter: 4486\nnorm(gX): 0.04028868888578959\nIter: 4487\nnorm(gX): 0.040414177942970704\nIter: 4488\nnorm(gX): 0.04053071965980469\nIter: 4489\nnorm(gX): 0.04063829199504045\nIter: 4490\nnorm(gX): 0.04073687544265396\nIter: 4491\nnorm(gX): 0.04082645302214581\nIter: 4492\nnorm(gX): 0.04090701026942938\nIter: 4493\nnorm(gX): 0.040978535228235184\nIter: 4494\nnorm(gX): 0.0410410184419853\nIter: 4495\nnorm(gX): 0.041094452946118736\nIter: 4496\nnorm(gX): 0.04113883426080262\nIter: 4497\nnorm(gX): 0.04117416038401488\nIter: 4498\nnorm(gX): 0.041200431784957456\nIter: 4499\nnorm(gX): 0.04121765139780404\nIter: 4500\nnorm(gX): 0.04122582461572697\nIter: 4501\nnorm(gX): 0.04122495928520671\nIter: 4502\nnorm(gX): 0.041215065700630084\nIter: 4503\nnorm(gX): 0.041196156599131686\nIter: 4504\nnorm(gX): 0.0411682471557093\nIter: 
4505\nnorm(gX): 0.04113135497859796\n[... verbose solver log truncated: iterations 4506-6127 omitted; norm(gX) oscillates between roughly 0.006 and 0.04 over this range without converging ...]\nIter: 6128\nnorm(gX): 
0.018494780593045598\nIter: 6129\nnorm(gX): 0.018639709521010833\nIter: 6130\nnorm(gX): 0.0187807819945358\nIter: 6131\nnorm(gX): 0.018917955576251238\nIter: 6132\nnorm(gX): 0.019051189412154588\nIter: 6133\nnorm(gX): 0.019180444202102225\nIter: 6134\nnorm(gX): 0.01930568217276852\nIter: 6135\nnorm(gX): 0.01942686705285696\nIter: 6136\nnorm(gX): 0.0195439640503498\nIter: 6137\nnorm(gX): 0.019656939831629052\nIter: 6138\nnorm(gX): 0.019765762502315088\nIter: 6139\nnorm(gX): 0.019870401589624317\nIter: 6140\nnorm(gX): 0.019970828026171127\nIter: 6141\nnorm(gX): 0.020067014135048265\nIter: 6142\nnorm(gX): 0.020158933616089683\nIter: 6143\nnorm(gX): 0.02024656153319377\nIter: 6144\nnorm(gX): 0.020329874302652016\nIter: 6145\nnorm(gX): 0.02040884968235691\nIter: 6146\nnorm(gX): 0.020483466761818735\nIter: 6147\nnorm(gX): 0.020553705952947008\nIter: 6148\nnorm(gX): 0.020619548981498554\nIter: 6149\nnorm(gX): 0.02068097887914104\nIter: 6150\nnorm(gX): 0.020737979976107865\nIter: 6151\nnorm(gX): 0.020790537894334885\nIter: 6152\nnorm(gX): 0.020838639541099425\nIter: 6153\nnorm(gX): 0.0208822731030785\nIter: 6154\nnorm(gX): 0.02092142804080947\nIter: 6155\nnorm(gX): 0.02095609508350139\nIter: 6156\nnorm(gX): 0.02098626622418279\nIter: 6157\nnorm(gX): 0.021011934715177958\nIter: 6158\nnorm(gX): 0.02103309506382701\nIter: 6159\nnorm(gX): 0.021049743028495525\nIter: 6160\nnorm(gX): 0.02106187561482165\nIter: 6161\nnorm(gX): 0.021069491072181448\nIter: 6162\nnorm(gX): 0.02107258889038177\nIter: 6163\nnorm(gX): 0.02107116979653339\nIter: 6164\nnorm(gX): 0.021065235752142832\nIter: 6165\nnorm(gX): 0.021054789950368236\nIter: 6166\nnorm(gX): 0.02103983681347127\nIter: 6167\nnorm(gX): 0.021020381990437792\nIter: 6168\nnorm(gX): 0.020996432354764904\nIter: 6169\nnorm(gX): 0.020967996002452638\nIter: 6170\nnorm(gX): 0.020935082250150623\nIter: 6171\nnorm(gX): 0.02089770163350207\nIter: 6172\nnorm(gX): 0.020855865905674134\nIter: 6173\nnorm(gX): 0.02080958803609125\nIter: 6174\nnorm(gX): 0.020758882209384495\nIter: 6175\nnorm(gX): 0.02070376382456243\nIter: 6176\nnorm(gX): 0.02064424949442303\nIter: 6177\nnorm(gX): 0.020580357045232783\nIter: 6178\nnorm(gX): 0.020512105516669876\nIter: 6179\nnorm(gX): 0.020439515162102892\nIter: 6180\nnorm(gX): 0.02036260744916345\nIter: 6181\nnorm(gX): 0.020281405060714217\nIter: 6182\nnorm(gX): 0.020195931896185824\nIter: 6183\nnorm(gX): 0.02010621307336162\nIter: 6184\nnorm(gX): 0.020012274930625466\nIter: 6185\nnorm(gX): 0.019914145029730403\nIter: 6186\nnorm(gX): 0.019811852159135054\nIter: 6187\nnorm(gX): 0.01970542633795773\nIter: 6188\nnorm(gX): 0.019594898820621414\nIter: 6189\nnorm(gX): 0.019480302102232627\nIter: 6190\nnorm(gX): 0.019361669924792036\nIter: 6191\nnorm(gX): 0.0192390372843044\nIter: 6192\nnorm(gX): 0.019112440438889124\nIter: 6193\nnorm(gX): 0.018981916917944177\nIter: 6194\nnorm(gX): 0.018847505532545713\nIter: 6195\nnorm(gX): 0.018709246387127985\nIter: 6196\nnorm(gX): 0.018567180892624918\nIter: 6197\nnorm(gX): 0.018421351781175824\nIter: 6198\nnorm(gX): 0.018271803122597133\nIter: 6199\nnorm(gX): 0.01811858034274695\nIter: 6200\nnorm(gX): 0.017961730243998913\nIter: 6201\nnorm(gX): 0.017801301028059033\nIter: 6202\nnorm(gX): 0.017637342321322764\nIter: 6203\nnorm(gX): 0.017469905203032206\nIter: 6204\nnorm(gX): 0.017299042236568724\nIter: 6205\nnorm(gX): 0.017124807504151587\nIter: 6206\nnorm(gX): 0.016947256645319708\nIter: 6207\nnorm(gX): 0.01676644689960201\nIter: 6208\nnorm(gX): 0.016582437153796273\nIter: 6209\nnorm(gX): 
0.01639528799436856\nIter: 6210\nnorm(gX): 0.016205061765500354\nIter: 6211\nnorm(gX): 0.016011822633409573\nIter: 6212\nnorm(gX): 0.01581563665765018\nIter: 6213\nnorm(gX): 0.01561657187011754\nIter: 6214\nnorm(gX): 0.015414698362633502\nIter: 6215\nnorm(gX): 0.015210088384104627\nIter: 6216\nnorm(gX): 0.015002816448300236\nIter: 6217\nnorm(gX): 0.014792959453485783\nIter: 6218\nnorm(gX): 0.01458059681527946\nIter: 6219\nnorm(gX): 0.014365810614288592\nIter: 6220\nnorm(gX): 0.014148685760257717\nIter: 6221\nnorm(gX): 0.013929310174693524\nIter: 6222\nnorm(gX): 0.01370777499420253\nIter: 6223\nnorm(gX): 0.013484174797049935\nIter: 6224\nnorm(gX): 0.013258607855791563\nIter: 6225\nnorm(gX): 0.01303117641917725\nIter: 6226\nnorm(gX): 0.012801987027016995\nIter: 6227\nnorm(gX): 0.012571150862097617\nIter: 6228\nnorm(gX): 0.01233878414386396\nIter: 6229\nnorm(gX): 0.01210500856916271\nIter: 6230\nnorm(gX): 0.011869951806062437\nIter: 6231\nnorm(gX): 0.011633748047505919\nIter: 6232\nnorm(gX): 0.011396538632559185\nIter: 6233\nnorm(gX): 0.011158472743801993\nIter: 6234\nnorm(gX): 0.010919708190709964\nIter: 6235\nnorm(gX): 0.01068041228983332\nIter: 6236\nnorm(gX): 0.010440762854041637\nIter: 6237\nnorm(gX): 0.010200949304272233\nIter: 6238\nnorm(gX): 0.009961173918646034\nIter: 6239\nnorm(gX): 0.009721653235128176\nIter: 6240\nnorm(gX): 0.00948261962502223\nIter: 6241\nnorm(gX): 0.00924432305554605\nIter: 6242\nnorm(gX): 0.009007033060054008\nIter: 6243\nnorm(gX): 0.008771040934087183\nIter: 6244\nnorm(gX): 0.00853666217373265\nIter: 6245\nnorm(gX): 0.00830423916935139\nIter: 6246\nnorm(gX): 0.008074144161520542\nIter: 6247\nnorm(gX): 0.00784678245628732\nIter: 6248\nnorm(gX): 0.007622595881994629\nIter: 6249\nnorm(gX): 0.007402066448203556\nIter: 6250\nnorm(gX): 0.007185720136824702\nIter: 6251\nnorm(gX): 0.006974130713793324\nIter: 6252\nnorm(gX): 0.0067679233942403985\nIter: 6253\nnorm(gX): 0.00656777812279277\nIter: 6254\nnorm(gX): 0.006374432142492823\nIter: 6255\nnorm(gX): 0.006188681421661485\nIter: 6256\nnorm(gX): 0.006011380393232293\nIter: 6257\nnorm(gX): 0.005843439346423651\nIter: 6258\nnorm(gX): 0.005685818716350031\nIter: 6259\nnorm(gX): 0.005539519473229968\nIter: 6260\nnorm(gX): 0.0054055688603873425\nIter: 6261\nnorm(gX): 0.00528500091661994\nIter: 6262\nnorm(gX): 0.005178831587746605\nIter: 6263\nnorm(gX): 0.005088028806379965\nIter: 6264\nnorm(gX): 0.005013478676784678\nIter: 6265\nnorm(gX): 0.004955949755109648\n"
],
[
"groups = get_group(ans, tol=1.5)",
"_____no_output_____"
],
[
"groups",
"_____no_output_____"
],
[
"purity_score(y_true,groups)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans,groups)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans,groups,way='ans')",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans,groups,way='points')",
"_____no_output_____"
],
[
"plt.plot(np.log(AGM_loss))",
"_____no_output_____"
]
],
[
[
"## GM Sample",
"_____no_output_____"
]
],
[
[
"lbd = 0.05\ndelta = 1e-3\nfunc = lambda X,B: loss_func(X,points,lbd,delta,B)\ngrad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D)\nans2,GM_loss = GM(points,func,grad,1e-2)",
"Iter: 1\nnorm(gX): 787.3309095574176\nstep size: 0.5\nIter: 2\nnorm(gX): 717.3207051746559\nstep size: 0.25\nIter: 3\nnorm(gX): 869.1680177536285\nstep size: 0.25\nIter: 4\nnorm(gX): 579.8730414339898\nstep size: 0.125\nIter: 5\nnorm(gX): 728.0916829373107\nstep size: 0.125\nIter: 6\nnorm(gX): 554.2331149835528\nstep size: 0.0625\nIter: 7\nnorm(gX): 644.0911262687881\nstep size: 0.0625\nIter: 8\nnorm(gX): 490.7951170084344\nstep size: 0.03125\nIter: 9\nnorm(gX): 614.2029671615845\nstep size: 0.03125\nIter: 10\nnorm(gX): 480.3019755973577\nstep size: 0.015625\nIter: 11\nnorm(gX): 600.0163216324485\nstep size: 0.015625\nIter: 12\nnorm(gX): 453.4114706121287\nstep size: 0.0078125\nIter: 13\nnorm(gX): 590.9166428070558\nstep size: 0.0078125\nIter: 14\nnorm(gX): 447.1984898984225\nstep size: 0.00390625\nIter: 15\nnorm(gX): 581.2975891147249\nstep size: 0.00390625\nIter: 16\nnorm(gX): 435.8330126203057\nstep size: 0.001953125\nIter: 17\nnorm(gX): 571.908383524523\nstep size: 0.001953125\nIter: 18\nnorm(gX): 429.7044438212493\nstep size: 0.0009765625\nIter: 19\nnorm(gX): 560.8841624135523\nstep size: 0.0009765625\nIter: 20\nnorm(gX): 421.005422757075\nstep size: 0.00048828125\nIter: 21\nnorm(gX): 548.4417384562693\nstep size: 0.00048828125\nIter: 22\nnorm(gX): 411.9085373734193\nstep size: 0.000244140625\nIter: 23\nnorm(gX): 531.5304782273596\nstep size: 0.000244140625\nIter: 24\nnorm(gX): 395.44663700667616\nstep size: 0.0001220703125\nIter: 25\nnorm(gX): 494.17945741619\nstep size: 0.0001220703125\nIter: 26\nnorm(gX): 342.2465058378749\nstep size: 6.103515625e-05\nIter: 27\nnorm(gX): 319.0181133368159\nstep size: 6.103515625e-05\nIter: 28\nnorm(gX): 355.27509678315346\nstep size: 6.103515625e-05\nIter: 29\nnorm(gX): 407.5858409466523\nstep size: 6.103515625e-05\nIter: 30\nnorm(gX): 426.41395850342457\nstep size: 6.103515625e-05\nIter: 31\nnorm(gX): 442.36594719738235\nstep size: 6.103515625e-05\nIter: 32\nnorm(gX): 459.9227981094003\nstep size: 6.103515625e-05\nIter: 33\nnorm(gX): 486.167775200806\nstep size: 6.103515625e-05\nIter: 34\nnorm(gX): 165.8747044656943\nstep size: 3.0517578125e-05\nIter: 35\nnorm(gX): 420.73096451193794\nstep size: 0.0001220703125\nIter: 36\nnorm(gX): 163.27043164641958\nstep size: 3.0517578125e-05\nIter: 37\nnorm(gX): 215.7840047653005\nstep size: 6.103515625e-05\nIter: 38\nnorm(gX): 306.68879197341676\nstep size: 6.103515625e-05\nIter: 39\nnorm(gX): 142.2033929786925\nstep size: 3.0517578125e-05\nIter: 40\nnorm(gX): 338.4521949419133\nstep size: 0.0001220703125\nIter: 41\nnorm(gX): 143.82229657708416\nstep size: 3.0517578125e-05\nIter: 42\nnorm(gX): 350.6609796639935\nstep size: 0.0001220703125\nIter: 43\nnorm(gX): 146.3187083281161\nstep size: 3.0517578125e-05\nIter: 44\nnorm(gX): 366.7335371465842\nstep size: 0.0001220703125\nIter: 45\nnorm(gX): 148.66723718232126\nstep size: 3.0517578125e-05\nIter: 46\nnorm(gX): 381.7090389331514\nstep size: 0.0001220703125\nIter: 47\nnorm(gX): 151.02935509324428\nstep size: 3.0517578125e-05\nIter: 48\nnorm(gX): 193.62179070984004\nstep size: 6.103515625e-05\nIter: 49\nnorm(gX): 276.5455454932033\nstep size: 6.103515625e-05\nIter: 50\nnorm(gX): 142.10269038431406\nstep size: 3.0517578125e-05\nIter: 51\nnorm(gX): 356.06003137795113\nstep size: 0.0001220703125\nIter: 52\nnorm(gX): 158.72277964099894\nstep size: 3.0517578125e-05\nIter: 53\nnorm(gX): 209.64623209895095\nstep size: 6.103515625e-05\nIter: 54\nnorm(gX): 306.4779456159885\nstep size: 6.103515625e-05\nIter: 55\nnorm(gX): 148.2560614908001\nstep size: 
3.0517578125e-05\nIter: 56\nnorm(gX): 383.3807666026402\nstep size: 0.0001220703125\nIter: 57\nnorm(gX): 153.96045426887534\nstep size: 3.0517578125e-05\nIter: 58\nnorm(gX): 199.970825225825\nstep size: 6.103515625e-05\nIter: 59\nnorm(gX): 288.72998094068294\nstep size: 6.103515625e-05\nIter: 60\nnorm(gX): 144.60695493771115\nstep size: 3.0517578125e-05\nIter: 61\nnorm(gX): 374.04918804625186\nstep size: 0.0001220703125\nIter: 62\nnorm(gX): 162.54160116460784\nstep size: 3.0517578125e-05\nIter: 63\nnorm(gX): 217.47132754768415\nstep size: 6.103515625e-05\nIter: 64\nnorm(gX): 320.900586308008\nstep size: 6.103515625e-05\nIter: 65\nnorm(gX): 151.31531280260015\nstep size: 3.0517578125e-05\nIter: 66\nnorm(gX): 194.64672095040478\nstep size: 6.103515625e-05\nIter: 67\nnorm(gX): 279.2437225677172\nstep size: 6.103515625e-05\nIter: 68\nnorm(gX): 142.86909157863983\nstep size: 3.0517578125e-05\nIter: 69\nnorm(gX): 362.7153458124851\nstep size: 0.0001220703125\nIter: 70\nnorm(gX): 160.39676998329531\nstep size: 3.0517578125e-05\n"
],
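[
"# The halving step sizes in the GM log above (0.5, 0.25, 0.125, ...) suggest a backtracking\n# (Armijo) line search. Below is a minimal, self-contained sketch of that scheme on a generic\n# smooth function -- illustrative only, not necessarily the GM implementation imported above.\nimport numpy as np\n\ndef backtracking_gd(f, g, x0, alpha0=1.0, beta=0.5, c=1e-4, tol=1e-6, max_iter=1000):\n    x = x0.astype(float)\n    for _ in range(max_iter):\n        gx = g(x)\n        if np.linalg.norm(gx) < tol:\n            break\n        alpha = alpha0\n        # shrink the step until the Armijo sufficient-decrease condition holds\n        while f(x - alpha * gx) > f(x) - c * alpha * gx.dot(gx):\n            alpha *= beta\n        x = x - alpha * gx\n    return x\n\n# quick check on a simple quadratic (minimum at the origin)\nprint(backtracking_gd(lambda x: x.dot(x), lambda x: 2 * x, np.array([3.0, -4.0])))",
"_____no_output_____"
],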
[
"len(GM_loss)",
"_____no_output_____"
],
[
"groups = get_group(ans2, tol=2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans2,groups,way='points')",
"_____no_output_____"
],
[
"plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'})",
"_____no_output_____"
],
[
"plt.plot(np.log(GM_loss - GM_loss[len(GM_loss)-1]))\nplt.ylabel(\"Loss: |f(xk) - f(x*)|\")\nplt.xlabel(\"Iters\")",
"_____no_output_____"
]
],
[
[
"## GM_BB Sample",
"_____no_output_____"
]
],
[
[
"lbd = 0.05\ndelta = 1e-3\nfunc = lambda X,B: loss_func(X,points,lbd,delta,B)\ngrad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D)\nans_BB,GM_BB_loss = GM_BB(points,func,grad,1e-5)",
"_____no_output_____"
],
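[
"# GM_BB presumably pairs gradient descent with the Barzilai-Borwein step size. This is a\n# minimal sketch of the BB1 rule alpha_k = s^T s / s^T y on a generic smooth function --\n# illustrative only, not the GM_BB implementation imported above.\nimport numpy as np\n\ndef bb_gradient_descent(g, x0, alpha0=1e-3, tol=1e-8, max_iter=500):\n    x_prev = x0.astype(float)\n    g_prev = g(x_prev)\n    x = x_prev - alpha0 * g_prev  # one plain gradient step to initialize (s, y)\n    for _ in range(max_iter):\n        gx = g(x)\n        if np.linalg.norm(gx) < tol:\n            break\n        s = x - x_prev\n        y = gx - g_prev\n        alpha = s.dot(s) / s.dot(y)  # BB1 step size\n        x_prev, g_prev = x, gx\n        x = x - alpha * gx\n    return x\n\n# quick check on f(x) = ||x||^2, whose gradient is 2x (minimum at the origin)\nprint(bb_gradient_descent(lambda x: 2 * x, np.array([3.0, -4.0])))",
"_____no_output_____"
],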
[
"len(GM_BB_loss)",
"_____no_output_____"
],
[
"groups = get_group(ans_BB, tol=2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans_BB,groups,way='points')",
"_____no_output_____"
],
[
"plt.rc_context({'axes.edgecolor':'black', 'xtick.color':'black', 'ytick.color':'black', 'figure.facecolor':'white'})",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,8))\nplt.ylabel(\"Loss: log(|f(xk) - f(x*)|)\")\nplt.plot(np.log(GM_BB_loss - GM_BB_loss[len(GM_BB_loss)-1]),label=\"GM_BB\")\nplt.plot(np.log(GM_loss - GM_loss[len(GM_loss)-1]),color=\"green\",label=\"GM\")\nplt.legend()\nplt.savefig(\"D:\\Study\\MDS\\Term 1\\Optimization\\Final\\Figure\\BB_GM_Loss\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## BFGS\ntol=0.03 is quite almost minimum, if smaller s.y is too small 1/s.y=nan",
"_____no_output_____"
]
],
[
[
"lbd = 0.05\ndelta = 1e-3\nfunc = lambda X,B: loss_func(X,points,lbd,delta,B)\ngrad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D)\nans_BFGS,BFGS_loss = BFGS(points,func,grad,0.003)",
"nomr_2: 9.668224538359638\ns.T.dot(y): 701.5651720961256\nalpha: 0.5\nalpha: 0.25\nnomr_2: 3.247738361943829\ns.T.dot(y): 24.678406178873377\nalpha: 0.5\nnomr_2: 1.7096200287469712\ns.T.dot(y): 2.866165799370821\nalpha: 0.5\nnomr_2: 1.451120877859875\ns.T.dot(y): 0.8518010608893598\nalpha: 0.5\nnomr_2: 1.331676711032868\ns.T.dot(y): 0.6431933240655382\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.9838595410518371\ns.T.dot(y): 0.24966947673657425\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.8295665012864109\ns.T.dot(y): 0.16784448492146117\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.7619401713145493\ns.T.dot(y): 0.14290905788337285\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.8659924331764195\ns.T.dot(y): 0.13342807008668855\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.6028649498770787\ns.T.dot(y): 0.08628937686481358\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.6851670646316601\ns.T.dot(y): 0.06843983811339856\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.6120160927517656\ns.T.dot(y): 0.06795025871397452\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.4952353563061624\ns.T.dot(y): 0.04510948691830842\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.4949837889670403\ns.T.dot(y): 0.040957375990223546\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.49915218096786745\ns.T.dot(y): 0.03041203185397912\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.36911658162806577\ns.T.dot(y): 0.014237038164743344\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.27085154984887444\ns.T.dot(y): 0.011515398079876872\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.26103013583270485\ns.T.dot(y): 0.005468620402676798\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.20873707045088466\ns.T.dot(y): 0.005532150869488454\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.21576093288338857\ns.T.dot(y): 0.002931747090889283\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.23640487608501978\ns.T.dot(y): 0.0038202956008701245\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.1503827734516125\ns.T.dot(y): 0.0009806910395336128\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.1798230711412162\ns.T.dot(y): 0.002007846950689463\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.189954036456829\ns.T.dot(y): 0.0006309806149973769\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.19414938509920746\ns.T.dot(y): 0.0011389236541280313\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.17089535420188678\ns.T.dot(y): 0.0008622366045833219\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.1440767502207838\ns.T.dot(y): 0.00035393028232577834\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.10771308744034072\ns.T.dot(y): 0.00039100507007392283\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.12628732554684297\ns.T.dot(y): 0.0004256811555184352\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.14833548538783328\ns.T.dot(y): 0.0002526854829517081\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nalpha: 0.0625\nnomr_2: 0.07284624622955406\ns.T.dot(y): 0.00031390737424269964\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.09264078939479146\ns.T.dot(y): 0.0003103423336840922\nalpha: 0.5\nalpha: 0.25\nalpha: 0.125\nnomr_2: 0.07097884429003483\ns.T.dot(y): 0.0002359766522106659\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.06509914318609143\ns.T.dot(y): 0.00029393001066347165\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.07150699073108663\ns.T.dot(y): 0.00025342785537280863\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.03283326604845343\ns.T.dot(y): 0.0001231723982187282\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.05058732663871981\ns.T.dot(y): 
0.00017431307561467015\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.03195587339412929\ns.T.dot(y): 0.00010142552448493964\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.02970220268305768\ns.T.dot(y): 4.447448686143305e-05\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.016599801322637678\ns.T.dot(y): 4.855977304513286e-05\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.01642036969518927\ns.T.dot(y): 3.347399203217679e-05\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.013290603540120306\ns.T.dot(y): 1.6950089714771923e-05\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.012449298718357621\ns.T.dot(y): 1.0105078026091953e-05\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.007723987685284644\ns.T.dot(y): 9.833858753055153e-06\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.007388915612443359\ns.T.dot(y): 5.356882888459674e-06\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.004981409297359743\ns.T.dot(y): 1.4808378468305975e-06\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.004220851832501114\ns.T.dot(y): 1.5044364795982285e-06\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.0031543465833969225\ns.T.dot(y): 9.068998476327401e-07\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.0016830123435260238\ns.T.dot(y): 4.5762674751671785e-07\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.0013866682385022455\ns.T.dot(y): 1.5158074742724683e-07\nalpha: 0.5\nnomr_2: 0.0010810812759605092\ns.T.dot(y): 2.542250284136836e-07\nalpha: 0.5\nnomr_2: 0.0008366622002465411\ns.T.dot(y): 1.0122401006834466e-07\nalpha: 0.5\nnomr_2: 0.0007060081558932937\ns.T.dot(y): 9.209445594082792e-08\nalpha: 0.5\nnomr_2: 0.00044090416830751836\ns.T.dot(y): 3.647829536126494e-08\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.0002694027203774851\ns.T.dot(y): 8.107864531929442e-09\nalpha: 0.5\nalpha: 0.25\nnomr_2: 0.0001270251151546654\ns.T.dot(y): 4.59375126412316e-09\nalpha: 0.5\nalpha: 0.25\nnomr_2: 8.27120113900032e-05\ns.T.dot(y): 1.4044715784406804e-09\nalpha: 0.5\nalpha: 0.25\nnomr_2: 4.1765654194568245e-05\ns.T.dot(y): 3.678553771164134e-10\nalpha: 0.5\nalpha: 0.25\nnomr_2: 2.2612698611330335e-05\ns.T.dot(y): 1.0143341635437645e-10\nalpha: 0.5\nalpha: 0.25\nnomr_2: 1.4519342466484985e-05\ns.T.dot(y): 3.1036566696031584e-11\nalpha: 0.5\nnomr_2: 9.221857014929621e-06\ns.T.dot(y): 2.1744788293643707e-11\nalpha: 0.5\nalpha: 0.25\nnomr_2: 5.865819514525992e-06\ns.T.dot(y): 4.5815402857169294e-12\n"
],
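[
"# The note above observes that once s.T.dot(y) gets too small, 1/(s.T.dot(y)) turns into\n# NaN. A common safeguard is to skip the inverse-Hessian update whenever the curvature\n# s^T y is not safely positive. Hypothetical sketch -- the BFGS routine used above is\n# defined elsewhere and may handle this differently.\nimport numpy as np\n\ndef bfgs_inverse_update(H, s, y, eps=1e-10):\n    sy = s.dot(y)\n    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):\n        return H  # curvature condition fails: keep the previous approximation\n    rho = 1.0 / sy\n    I = np.eye(len(s))\n    V = I - rho * np.outer(s, y)\n    # standard BFGS update of the inverse Hessian approximation\n    return V.dot(H).dot(V.T) + rho * np.outer(s, s)",
"_____no_output_____"
],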
[
"groups = get_group(ans_BFGS, tol=2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans_BFGS,groups)",
"_____no_output_____"
],
[
"plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'})",
"_____no_output_____"
],
[
"plt.figure(figsize=(5,5))\nplt.ylabel(\"Loss\")\nplt.plot(np.log(BFGS_loss - BFGS_loss[len(BFGS_loss)-1]))\nplt.show()",
"_____no_output_____"
]
],
[
[
"## LBFGS\ntol=0.03 is quite almost minimum, if smaller s.y is too small 1/s.y=nan",
"_____no_output_____"
]
],
[
[
"lbd = 0.05\ndelta = 1e-3\nfunc = lambda X,B: loss_func(X,points,lbd,delta,B)\ngrad = lambda X,B,D: grad_hub_matrix(X,delta,points,lbd,B,D)\nans_LBFGS,LBFGS_loss = LBFGS(points,func,grad,0.003,1,5)",
"step_size: 1\ns.T.dot(y): 701.5651720961255\nIter: 0\nnorm_2: 9.668224538359638\nstep_size: 0.25\ns.T.dot(y): 21.13910379007497\nIter: 1\nnorm_2: 2.6420205372295418\nstep_size: 1\ns.T.dot(y): 0.9512111077628872\nIter: 2\nnorm_2: 1.7725003900152791\nstep_size: 1\ns.T.dot(y): 0.29449477430825\nIter: 3\nnorm_2: 1.5708180499735216\nstep_size: 1\ns.T.dot(y): 0.1073091717777479\nIter: 4\nnorm_2: 1.5237389240759862\nstep_size: 1\ns.T.dot(y): 0.0035204853683236136\nIter: 5\nnorm_2: 1.5179144358211984\nstep_size: 1\ns.T.dot(y): 0.13222025552239955\nIter: 6\nnorm_2: 1.0306917144499643\nstep_size: 1\ns.T.dot(y): 0.22353121139016216\nIter: 7\nnorm_2: 1.0906907486611013\nstep_size: 1\ns.T.dot(y): 0.04653016089473028\nIter: 8\nnorm_2: 1.0257535734004362\nstep_size: 1\ns.T.dot(y): 0.03135836533694491\nIter: 9\nnorm_2: 0.9652394217607453\nstep_size: 1\ns.T.dot(y): 0.008231233120747222\nIter: 10\nnorm_2: 0.9654259306796656\nstep_size: 1\ns.T.dot(y): 0.5462815982150803\nIter: 11\nnorm_2: 1.5170228582589298\nstep_size: 1\ns.T.dot(y): 0.08097320698170678\nIter: 12\nnorm_2: 1.1787482616467706\nstep_size: 0.5\ns.T.dot(y): 0.01327612710428373\nIter: 13\nnorm_2: 0.9696527550192998\nstep_size: 0.5\ns.T.dot(y): 0.00703306476772973\nIter: 14\nnorm_2: 0.8302512640784643\nstep_size: 1\ns.T.dot(y): 0.0023440011101121434\nIter: 15\nnorm_2: 0.7679273570630598\nstep_size: 1\ns.T.dot(y): 0.08731652629808406\nIter: 16\nnorm_2: 0.4556966243212208\nstep_size: 1\ns.T.dot(y): 0.016416443351648335\nIter: 17\nnorm_2: 0.29720892064239557\nstep_size: 1\ns.T.dot(y): 0.003982201116019696\nIter: 18\nnorm_2: 0.2815080451738345\nstep_size: 1\ns.T.dot(y): 0.0010365837642469303\nIter: 19\nnorm_2: 0.24162840448037956\nstep_size: 1\ns.T.dot(y): 0.0003359370235258635\nIter: 20\nnorm_2: 0.22735493422277947\nstep_size: 1\ns.T.dot(y): 0.006808226646739311\nIter: 21\nnorm_2: 0.32985548535800585\nstep_size: 1\ns.T.dot(y): 0.005995116779482379\nIter: 22\nnorm_2: 0.28915625695299824\nstep_size: 1\ns.T.dot(y): 0.0004955427353425457\nIter: 23\nnorm_2: 0.26133974979662644\nstep_size: 1\ns.T.dot(y): 0.0007933794549170064\nIter: 24\nnorm_2: 0.24651539063331168\nstep_size: 1\ns.T.dot(y): 7.840399327093494e-05\nIter: 25\nnorm_2: 0.24069327297822224\nstep_size: 0.5\ns.T.dot(y): 0.0006472862399371085\nIter: 26\nnorm_2: 0.1899999886088421\nstep_size: 1\ns.T.dot(y): 0.0008506702698527249\nIter: 27\nnorm_2: 0.1509948645916843\nstep_size: 1\ns.T.dot(y): 0.0008079081228096255\nIter: 28\nnorm_2: 0.16121957169004614\nstep_size: 1\ns.T.dot(y): 0.00012864106235299395\nIter: 29\nnorm_2: 0.17016162696100084\nstep_size: 0.5\ns.T.dot(y): 0.00040199336907648683\nIter: 30\nnorm_2: 0.11030496581015564\nstep_size: 1\ns.T.dot(y): 3.799420071561239e-05\nIter: 31\nnorm_2: 0.0903523872715089\nstep_size: 1\ns.T.dot(y): 2.2188896764762334e-05\nIter: 32\nnorm_2: 0.08392383514148088\nstep_size: 1\ns.T.dot(y): 0.00015998342769055738\nIter: 33\nnorm_2: 0.09837125681026831\nstep_size: 1\ns.T.dot(y): 1.958611210958376e-05\nIter: 34\nnorm_2: 0.08996510709770186\nstep_size: 1\ns.T.dot(y): 4.633690356947413e-05\nIter: 35\nnorm_2: 0.06961237728611804\nstep_size: 1\ns.T.dot(y): 4.067585465308865e-06\nIter: 36\nnorm_2: 0.06516429753384181\nstep_size: 1\ns.T.dot(y): 1.2380377250410811e-05\nIter: 37\nnorm_2: 0.0613929160339922\nstep_size: 1\ns.T.dot(y): 2.4861967347868232e-05\nIter: 38\nnorm_2: 0.024792266290120935\nstep_size: 1\ns.T.dot(y): 4.077076927121715e-06\nIter: 39\nnorm_2: 0.022584650652087333\nstep_size: 1\ns.T.dot(y): 4.147774848657685e-06\nIter: 40\nnorm_2: 
0.024564862314632334\nstep_size: 1\ns.T.dot(y): 3.1917515595121515e-05\nIter: 41\nnorm_2: 0.046112386881512125\nstep_size: 1\ns.T.dot(y): 5.005617323752784e-06\nIter: 42\nnorm_2: 0.04497872276511204\nstep_size: 1\ns.T.dot(y): 4.709594231928857e-06\nIter: 43\nnorm_2: 0.041858783219238235\nstep_size: 1\ns.T.dot(y): 1.0832592761920622e-05\nIter: 44\nnorm_2: 0.040972878486227056\nstep_size: 1\ns.T.dot(y): 1.7443587683959397e-05\nIter: 45\nnorm_2: 0.023429617832407528\nstep_size: 1\ns.T.dot(y): 3.1498696181272674e-06\nIter: 46\nnorm_2: 0.01459153334572272\nstep_size: 1\ns.T.dot(y): 2.5238688668398633e-06\nIter: 47\nnorm_2: 0.010328292857022631\nstep_size: 1\ns.T.dot(y): 7.905061797641216e-07\nIter: 48\nnorm_2: 0.00905369512192333\nstep_size: 1\ns.T.dot(y): 4.2277098685296524e-07\nIter: 49\nnorm_2: 0.007394633082195927\nstep_size: 1\ns.T.dot(y): 2.9816045135317924e-06\nIter: 50\nnorm_2: 0.013515870008973883\nstep_size: 1\ns.T.dot(y): 5.598954747107913e-07\nIter: 51\nnorm_2: 0.012064180402235546\nstep_size: 1\ns.T.dot(y): 4.1734679370408214e-07\nIter: 52\nnorm_2: 0.011867885886377725\nstep_size: 1\ns.T.dot(y): 7.034015177623933e-07\nIter: 53\nnorm_2: 0.013846739648846102\nstep_size: 1\ns.T.dot(y): 3.962915235856222e-06\nIter: 54\nnorm_2: 0.007193358992230576\nstep_size: 1\ns.T.dot(y): 3.5711781255362117e-07\nIter: 55\nnorm_2: 0.0047097959595383985\nstep_size: 1\ns.T.dot(y): 4.6629329244697834e-07\nIter: 56\nnorm_2: 0.004848683909658404\nstep_size: 1\ns.T.dot(y): 1.799850823046117e-07\nIter: 57\nnorm_2: 0.0035501444234347598\nstep_size: 1\ns.T.dot(y): 3.450590190686939e-08\nIter: 58\nnorm_2: 0.003502818938400975\nstep_size: 1\ns.T.dot(y): 2.4147441921845944e-08\nIter: 59\nnorm_2: 0.0035770890122079984\nstep_size: 1\ns.T.dot(y): 4.6487313812411024e-07\nIter: 60\nnorm_2: 0.004484583127065195\nstep_size: 1\ns.T.dot(y): 1.5460181036130107e-08\nIter: 61\nnorm_2: 0.004454833995003681\nstep_size: 1\ns.T.dot(y): 4.922327466149896e-08\nIter: 62\nnorm_2: 0.004485169902025197\nstep_size: 1\ns.T.dot(y): 7.784309350343689e-08\nIter: 63\nnorm_2: 0.004615907208196994\nstep_size: 1\ns.T.dot(y): 1.9145364097573516e-07\nIter: 64\nnorm_2: 0.002827241714752838\nstep_size: 1\ns.T.dot(y): 3.5968108154719054e-08\nIter: 65\nnorm_2: 0.0016959868908025908\nstep_size: 1\ns.T.dot(y): 4.1209822326419154e-08\nIter: 66\nnorm_2: 0.0011846019203922704\nstep_size: 1\ns.T.dot(y): 1.977826235188131e-08\nIter: 67\nnorm_2: 0.0007410017689049496\nstep_size: 1\ns.T.dot(y): 3.279664131769158e-09\nIter: 68\nnorm_2: 0.0006698281709531976\nstep_size: 1\ns.T.dot(y): 1.1930498741733972e-08\nIter: 69\nnorm_2: 0.0008868041433598371\nstep_size: 1\ns.T.dot(y): 6.576749895246503e-09\nIter: 70\nnorm_2: 0.0006757493505021068\nstep_size: 1\ns.T.dot(y): 2.0955027111861914e-09\nIter: 71\nnorm_2: 0.0007355573946914424\nstep_size: 1\ns.T.dot(y): 4.149675239934809e-09\nIter: 72\nnorm_2: 0.0008593224404490359\nstep_size: 1\ns.T.dot(y): 1.5545012558998227e-08\nIter: 73\nnorm_2: 0.0006235992201107099\nstep_size: 1\ns.T.dot(y): 2.1048628853607223e-09\nIter: 74\nnorm_2: 0.00042877813380379356\nstep_size: 1\ns.T.dot(y): 2.6970315152936003e-09\nIter: 75\nnorm_2: 0.0005059944282560958\nstep_size: 1\ns.T.dot(y): 1.9444589481800796e-09\nIter: 76\nnorm_2: 0.00038886337239439454\nstep_size: 1\ns.T.dot(y): 1.4337392411403508e-09\nIter: 77\nnorm_2: 0.0003432270229217621\nstep_size: 1\ns.T.dot(y): 3.505519140007169e-10\nIter: 78\nnorm_2: 0.0003606087783759464\nstep_size: 1\ns.T.dot(y): 2.627814530889453e-09\nIter: 79\nnorm_2: 0.00024149983281794538\nstep_size: 
1\ns.T.dot(y): 3.941116842521392e-10\nIter: 80\nnorm_2: 0.0002466130972016348\nstep_size: 1\ns.T.dot(y): 6.258647643463417e-10\nIter: 81\nnorm_2: 0.0002610419543714754\nstep_size: 1\ns.T.dot(y): 5.628328786331141e-10\nIter: 82\nnorm_2: 0.0002673976126299132\nstep_size: 1\ns.T.dot(y): 7.982254326846646e-10\nIter: 83\nnorm_2: 0.00011576241224395399\nstep_size: 1\ns.T.dot(y): 3.946690363211018e-11\nIter: 84\nnorm_2: 0.00011037960884750927\nstep_size: 1\ns.T.dot(y): 9.075435984503712e-10\nIter: 85\nnorm_2: 0.00023174539220651707\nstep_size: 1\ns.T.dot(y): 2.054101888998513e-10\nIter: 86\nnorm_2: 0.00023055030676857505\nstep_size: 1\ns.T.dot(y): 3.077837942310464e-10\nIter: 87\nnorm_2: 0.0002283785826254441\nstep_size: 1\ns.T.dot(y): 3.742123208736149e-10\nIter: 88\nnorm_2: 0.00024127649978276803\nstep_size: 1\ns.T.dot(y): 8.614661255777254e-10\nIter: 89\nnorm_2: 0.00010645042873545919\nstep_size: 1\ns.T.dot(y): 7.763566077972938e-11\nIter: 90\nnorm_2: 7.15726557586558e-05\nstep_size: 1\ns.T.dot(y): 6.337642996373021e-11\nIter: 91\nnorm_2: 6.542086497659276e-05\nstep_size: 1\ns.T.dot(y): 3.623936174227203e-11\nIter: 92\nnorm_2: 5.2094118268439726e-05\nstep_size: 1\ns.T.dot(y): 2.5833375103471017e-11\nIter: 93\nnorm_2: 5.5133937258775266e-05\nstep_size: 1\ns.T.dot(y): 2.717743765175441e-11\nIter: 94\nnorm_2: 6.59740236903577e-05\nstep_size: 1\ns.T.dot(y): 9.361090814982377e-11\nIter: 95\nnorm_2: 5.2274359322571366e-05\nstep_size: 1\ns.T.dot(y): 2.458621853189793e-11\nIter: 96\nnorm_2: 6.227122189507031e-05\nstep_size: 1\ns.T.dot(y): 5.279802658627709e-11\nIter: 97\nnorm_2: 5.971714962188798e-05\nstep_size: 1\ns.T.dot(y): 2.7644625534773454e-11\nIter: 98\nnorm_2: 4.063006034871834e-05\nstep_size: 1\ns.T.dot(y): 1.963962741533506e-11\nIter: 99\nnorm_2: 3.282755164417799e-05\nstep_size: 1\ns.T.dot(y): 7.386343086804211e-12\nIter: 100\nnorm_2: 2.4566983773487384e-05\nstep_size: 1\ns.T.dot(y): 5.616081250874886e-12\nIter: 101\nnorm_2: 2.880041445277356e-05\nstep_size: 1\ns.T.dot(y): 1.6520271049762006e-11\nIter: 102\nnorm_2: 3.56372959248e-05\nstep_size: 1\ns.T.dot(y): 2.568492492463995e-11\nIter: 103\nnorm_2: 2.7554302073579164e-05\nstep_size: 1\ns.T.dot(y): 4.08342533176453e-12\nIter: 104\nnorm_2: 3.485029833454627e-05\nstep_size: 1\ns.T.dot(y): 5.843950637983115e-12\nIter: 105\nnorm_2: 3.1808294224536104e-05\nstep_size: 1\ns.T.dot(y): 1.51695998397126e-11\nIter: 106\nnorm_2: 2.950050283998313e-05\nstep_size: 1\ns.T.dot(y): 7.331391938343275e-12\nIter: 107\nnorm_2: 1.2086410760913831e-05\nstep_size: 1\ns.T.dot(y): 7.488489847544705e-13\nIter: 108\nnorm_2: 8.214912236634415e-06\n"
],
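[
"# For reference, the standard L-BFGS two-loop recursion that turns the m most recent\n# (s, y) pairs into a quasi-Newton search direction. Generic sketch assuming s_list and\n# y_list are ordered oldest-to-newest -- not necessarily how the LBFGS above stores them.\nimport numpy as np\n\ndef two_loop_direction(grad, s_list, y_list):\n    q = grad.copy()\n    alphas = []\n    for s, y in zip(reversed(s_list), reversed(y_list)):  # newest to oldest\n        alpha = s.dot(q) / s.dot(y)\n        alphas.append(alpha)\n        q = q - alpha * y\n    if s_list:  # scale by gamma = s^T y / y^T y from the newest pair\n        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])\n        q = gamma * q\n    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):  # oldest to newest\n        beta = y.dot(q) / s.dot(y)\n        q = q + (alpha - beta) * s\n    return -q  # descent direction",
"_____no_output_____"
],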
[
"groups = get_group(ans_LBFGS, tol=2)",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplot_res_data(points,ans_LBFGS,groups)",
"_____no_output_____"
],
[
"plt.rc_context({'axes.edgecolor':'orange', 'xtick.color':'green', 'ytick.color':'green', 'figure.facecolor':'white'})",
"_____no_output_____"
],
[
"plt.figure(figsize=(5,5))\nplt.ylabel(\"Loss\")\nplt.plot(np.log(LBFGS_loss - LBFGS_loss[len(LBFGS_loss)-1]))\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 计算Hessian",
"_____no_output_____"
]
],
[
[
"from itertools import combinations\ndef huber(x, delta):\n '''\n Args:\n x: input that has been norm2ed (n*(n-1)/2,)\n delta: threshold\n Output:\n (n*(n-1)/2,)\n '''\n return np.where(x > delta ** 2, np.sqrt(x) - delta / 2, x / (2 * delta))\n\n\ndef pair_col_diff_norm2(x, idx):\n '''\n compute norm2 of pairwise column difference\n Args:\n x: (d, n)\n idx: (n*(n - 1)/2, 2), used to indexing pairwise column combinations\n Output:\n (n*(n-1)/2,)\n '''\n x = x[:, idx] # (d, n*(n - 1)/2, 2)\n x = np.diff(x, axis=-1).squeeze() # (d, n*(n-1)/2)\n x = np.sum(x ** 2, axis=0) # (n*(n-1)/2,)\n return x\n\n\ndef pair_col_diff_sum(x, t, idx):\n '''\n compute sum of pairwise column difference\n Args:\n x: (d, n)\n t: (d, n)\n idx: (n*(n - 1)/2, 2), used to indexing pairwise column combinations\n Output:\n (n*(n-1)/2,)\n '''\n x = np.diff(x[:, idx], axis=-1).squeeze() # (d, n*(n-1)/2)\n t = np.diff(t[:, idx], axis=-1).squeeze() # (d, n*(n-1)/2)\n return np.sum(x * t, axis=0) # (n*(n-1)/2,)\n\n\nclass OBJ:\n def __init__(self, d, n, delta):\n '''\n a: training data samples of shape (d, n)\n '''\n self.d = d\n self.n = n\n self.delta = delta\n self.idx = np.array(list(combinations(list(range(n)), 2)))\n self.triu_idx = np.triu_indices(self.n, 1)\n\n def __call__(self, x, a, lamb):\n '''\n Args:\n x: (d, n)\n a: (d, n)\n lamb: control effect of regularization\n Output:\n scalar\n '''\n v = np.sum((x - a) ** 2) / 2\n v += lamb * np.sum(huber(pair_col_diff_norm2(x, self.idx), self.delta))\n return v\n\n def grad(self, x, a, lamb):\n '''\n gradient\n Output:\n (d, n)\n '''\n g = x - a\n diff_norm2 = pair_col_diff_norm2(x, self.idx) # (n*(n-1)/2,)\n tmp = np.zeros((self.n, self.n))\n tmp[self.triu_idx] = diff_norm2\n tmp += tmp.T # (n, n)\n mask = (tmp > self.delta ** 2)\n tmp = np.where(mask,\n np.divide(1, np.sqrt(tmp), where=mask),\n 0)\n x = x.T\n g = g + lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T\n tmp = 1 - mask\n g = g + lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T / self.delta\n return g.flatten()\n\n def hessiant(self, x, t, lamb):\n '''\n returns the result of hessian matrix dot product a vector t\n Args:\n t: (d, n)\n Output:\n (d, n)\n '''\n ht = 0\n ht += t\n diff_norm2 = pair_col_diff_norm2(x, self.idx) # (n*(n-1)/2,)\n diff_sum = pair_col_diff_sum(x, t, self.idx)\n tmp = np.zeros((self.n, self.n))\n tmp[self.triu_idx] = diff_norm2\n tmp += tmp.T\n\n mask = (tmp > self.delta ** 2)\n tmp = np.where(mask,\n np.divide(1, np.sqrt(tmp), where=mask),\n 0)\n t = t.T\n x = x.T\n ht += (lamb * (tmp.sum(axis=1, keepdims=True) * t - tmp @ t).T)\n # tmp1 = np.where(tmp1 > 0, tmp1 ** 3, 0)\n tmp = tmp ** 3\n tmp[self.triu_idx] *= diff_sum\n tmp[(self.triu_idx[1], self.triu_idx[0])] *= diff_sum\n ht -= lamb * (tmp.sum(axis=1, keepdims=True) * x - tmp @ x).T\n\n tmp = 1 - mask\n ht += (lamb * (tmp.sum(axis=1, keepdims=True) * t - tmp @ t).T / self.delta)\n return ht.flatten()",
"_____no_output_____"
],
[
"import numpy as np\n# from numpy.lib.function_base import _delete_dispatcher\n\ndef Hessian_hub(X, p, delta, B):\n n = X.shape[0]; d = X.shape[1]\n res = np.zeros(n*d).reshape((n*d,1))\n for i in range(n):\n H_tmp = Hessian_rows(i,n,d,delta,B,X)\n res[i*d:(i+1)*d] = H_tmp.dot(p).reshape((d,1))\n return np.array(res)\n\ndef Hessian_rows(i,n,d,delta,B,X):#i从0开始\n I = np.identity(d)\n DF = np.tile((-1/delta) * np.identity(d), n)\n choose_BX = (B.T[i] != 0) #choose material Xi-Xk, n-1 in total\n DBX = B.dot(X)[choose_BX] \n DBX[:i,:] = -DBX[:i,:]\n mask = np.linalg.norm(DBX, axis=1) > delta #find ||Xi-Xk|| which is greater than delta\n mask2 = np.tile(mask,(d,1)).T.reshape(1,-1)[0] #will be further use\n norm = np.linalg.norm(DBX[mask], axis=1) #Calculate the norm which is greater than delta (prepare for the left part of tmp)\n #prepare for the right part of tmp\n row = np.repeat(np.arange(n-1),d)\n col = np.arange((n-1)*d)\n DBX_trans = np.array(sps.csr_matrix((DBX.flatten(),(row,col)),shape=((n-1),(n-1)*d)).todense())\n\n tmp = -(np.tile(I,(1,len(norm)))/np.repeat(norm,d)) + \\\n (DBX[mask].T.dot(DBX_trans[:,mask2]))/np.repeat(norm**3,d)\n #change the values of items whose norm are greater than delta\n DF[:,:i*d][:, mask2[:i*d]] = tmp[:,:i*d]\n DF[:,(i+1)*d:][:,mask2[i*d:]] = tmp[:,i*d:]\n z = np.zeros((d,d))\n DF[:,i*d:(i+1)*d] = z\n I_tmp = np.tile(I,(n,1))\n iblock_tmp = -DF.dot(I_tmp)\n DF[:,i*d:(i+1)*d] = iblock_tmp\n \n\n return np.array(DF)\n",
"_____no_output_____"
],
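[
"# A generic way to sanity-check any Hessian-vector product: central finite differences\n# of the gradient, H v ~ (g(x + h v) - g(x - h v)) / (2 h). Self-contained toy\n# illustration; the analytic routines above can be checked against it the same way.\nimport numpy as np\n\ndef fd_hessian_vector(g, x, v, h=1e-6):\n    return (g(x + h * v) - g(x - h * v)) / (2 * h)\n\n# toy check: f(x) = 0.5 * ||x||^2 has gradient x and Hessian I, so H v should equal v\nx = np.random.randn(6)\nv = np.random.randn(6)\nprint(np.allclose(fd_hessian_vector(lambda z: z, x, v), v))",
"_____no_output_____"
],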
[
"i=1\nn, d = 5,3\ndelta = 0.1\nB = gen_B(n, sparse=False)\nX = np.arange(n*d).reshape(n,d)\nDBX = B.T.dot(B.dot(X))\np = np.arange(n*d).reshape((n*d,1))\n# Hessian_hub(X, p, 0.1, B)\nHdn = Hessian_rows(i,n,d,delta,B,X)\nHdn[0].dot(p)\n",
"_____no_output_____"
]
],
[
[
"## 测试Hessian",
"_____no_output_____"
]
],
[
[
"n, d = 4,2\ntest = OBJ(d,n,0.1)\nX = np.arange(n*d).reshape(n,d)\nt = np.arange(n*d).reshape((d,n)).astype(float)\ntest.hessiant(X.T, t, 0.1)",
"_____no_output_____"
],
[
"test.hessiant(X.T, t, 0.1)",
"_____no_output_____"
],
[
"n, d = 4,2\ndelta = 0.1\nX = np.arange(n*d).reshape(n,d)\nB = gen_B(n, sparse=False)\np = np.arange(n*d).reshape((n*d,1))\nHessian_hub(X, p, delta, B)",
"_____no_output_____"
],
[
"i=1\nB = gen_B(n, sparse=False)\nX = np.random.randn(n,d)\nDBX = B.T.dot(B.dot(X))",
"_____no_output_____"
],
[
"i = 0\nn, d = 5,3\ndelta = 0.1\nB = gen_B(n, sparse=False)\nX = np.random.randn(n,d)\nDBX = B.T.dot(B.dot(X))",
"_____no_output_____"
],
[
" I = np.identity(d)\n DF = np.tile((-1/delta) * np.identity(d), n)\n choose_BX = (B.T[i] != 0) #choose material Xi-Xk, n-1 in total\n DBX = B.dot(X)[choose_BX] \n mask = np.linalg.norm(DBX, axis=1) > delta #find ||Xi-Xk|| which is greater than delta\n mask2 = np.tile(mask,(d,1)).T.reshape(1,-1)[0] #will be further use\n norm = np.linalg.norm(DBX[mask], axis=1) #Calculate the norm which is greater than delta (prepare for the left part of tmp)\n #prepare for the right part of tmp\n row = np.repeat(np.arange(n-1),d)\n col = np.arange((n-1)*d)\n DBX_trans = np.array(sps.csr_matrix((DBX.flatten(),(row,col)),shape=((n-1),(n-1)*d)).todense())\n\n tmp = -(np.tile(I,(1,len(norm)))/np.repeat(norm,d)) + \\\n (DBX[mask].T.dot(DBX_trans[:,mask2]))/np.repeat(norm**3,d)\n #change the values of items whose norm are greater than delta\n DF[:,:i*d][:, mask2[:i*d]] = tmp[:,:i*d]\n DF[:,(i+1)*d:][:,mask2[i*d:]] = tmp[:,i*d:]\n z = np.zeros((d,d))\n DF[:,i*d:(i+1)*d] = z\n I_tmp = np.tile(-I,(n,1))\n iblock_tmp = DF.dot(I_tmp)\n DF[:,i*d:(i+1)*d] = iblock_tmp\n DF\n\n",
"_____no_output_____"
]
],
[
[
"## 使用原数据集转sparse后操作",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d00d95b8db0e80a680a216df1d65a565e8cdf5aa | 26,669 | ipynb | Jupyter Notebook | adanet/examples/tutorials/customizing_adanet.ipynb | xhlulu/adanet | f91eef02ce8d64f3f38e639a57c3534bdc533b3f | [
"Apache-2.0"
] | 1 | 2018-11-05T01:30:16.000Z | 2018-11-05T01:30:16.000Z | adanet/examples/tutorials/customizing_adanet.ipynb | aliushn/adanet | dfa2f0acc253d1de193aaa795b5559bc471f9ed8 | [
"Apache-2.0"
] | null | null | null | adanet/examples/tutorials/customizing_adanet.ipynb | aliushn/adanet | dfa2f0acc253d1de193aaa795b5559bc471f9ed8 | [
"Apache-2.0"
] | 1 | 2021-12-14T08:18:17.000Z | 2021-12-14T08:18:17.000Z | 35.090789 | 130 | 0.497356 | [
[
[
"##### Copyright 2018 The AdaNet Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Customizing AdaNet\n\nOften times, as a researcher or machine learning practitioner, you will have\nsome prior knowledge about a dataset. Ideally you should be able to encode that\nknowledge into your machine learning algorithm. With `adanet`, you can do so by\ndefining the *neural architecture search space* that the AdaNet algorithm should\nexplore.\n\nIn this tutorial, we will explore the flexibility of the `adanet` framework, and\ncreate a custom search space for an image-classificatio dataset using high-level\nTensorFlow libraries like `tf.layers`.\n\n",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport functools\n\nimport adanet\nfrom adanet.examples import simple_dnn\nimport tensorflow as tf\n\n# The random seed to use.\nRANDOM_SEED = 42",
"_____no_output_____"
]
],
[
[
"## Fashion MNIST dataset\n\nIn this example, we will use the Fashion MNIST dataset\n[[Xiao et al., 2017](https://arxiv.org/abs/1708.07747)] for classifying fashion\napparel images into one of ten categories:\n\n1. T-shirt/top\n2. Trouser\n3. Pullover\n4. Dress\n5. Coat\n6. Sandal\n7. Shirt\n8. Sneaker\n9. Bag\n10. Ankle boot\n\n![Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist/blob/master/doc/img/fashion-mnist-sprite.png?raw=true)",
"_____no_output_____"
],
[
"## Download the data\n\nConveniently, the data is available via Keras:",
"_____no_output_____"
]
],
[
[
"(x_train, y_train), (x_test, y_test) = (\n tf.keras.datasets.fashion_mnist.load_data())",
"_____no_output_____"
]
],
[
[
"## Supply the data in TensorFlow\n\nOur first task is to supply the data in TensorFlow. Using the\ntf.estimator.Estimator covention, we will define a function that returns an\n`input_fn` which returns feature and label `Tensors`.\n\nWe will also use the `tf.data.Dataset` API to feed the data into our models.",
"_____no_output_____"
]
],
[
[
"FEATURES_KEY = \"images\"\n\n\ndef generator(images, labels):\n \"\"\"Returns a generator that returns image-label pairs.\"\"\"\n\n def _gen():\n for image, label in zip(images, labels):\n yield image, label\n\n return _gen\n\n\ndef preprocess_image(image, label):\n \"\"\"Preprocesses an image for an `Estimator`.\"\"\"\n # First let's scale the pixel values to be between 0 and 1.\n image = image / 255.\n # Next we reshape the image so that we can apply a 2D convolution to it.\n image = tf.reshape(image, [28, 28, 1])\n # Finally the features need to be supplied as a dictionary.\n features = {FEATURES_KEY: image}\n return features, label\n\n\ndef input_fn(partition, training, batch_size):\n \"\"\"Generate an input_fn for the Estimator.\"\"\"\n\n def _input_fn():\n if partition == \"train\":\n dataset = tf.data.Dataset.from_generator(\n generator(x_train, y_train), (tf.float32, tf.int32), ((28, 28), ()))\n else:\n dataset = tf.data.Dataset.from_generator(\n generator(x_test, y_test), (tf.float32, tf.int32), ((28, 28), ()))\n\n # We call repeat after shuffling, rather than before, to prevent separate\n # epochs from blending together.\n if training:\n dataset = dataset.shuffle(10 * batch_size, seed=RANDOM_SEED).repeat()\n\n dataset = dataset.map(preprocess_image).batch(batch_size)\n iterator = dataset.make_one_shot_iterator()\n features, labels = iterator.get_next()\n return features, labels\n\n return _input_fn",
"_____no_output_____"
]
],
[
[
"## Establish baselines\n\nThe next task should be to get somes baselines to see how our model performs on\nthis dataset.\n\nLet's define some information to share with all our `tf.estimator.Estimators`:",
"_____no_output_____"
]
],
[
[
"# The number of classes.\nNUM_CLASSES = 10\n\n# We will average the losses in each mini-batch when computing gradients.\nloss_reduction = tf.losses.Reduction.SUM_OVER_BATCH_SIZE\n\n# A `Head` instance defines the loss function and metrics for `Estimators`.\nhead = tf.contrib.estimator.multi_class_head(\n NUM_CLASSES, loss_reduction=loss_reduction)\n\n# Some `Estimators` use feature columns for understanding their input features.\nfeature_columns = [\n tf.feature_column.numeric_column(FEATURES_KEY, shape=[28, 28, 1])\n]\n\n# Estimator configuration.\nconfig = tf.estimator.RunConfig(\n save_checkpoints_steps=50000,\n save_summary_steps=50000,\n tf_random_seed=RANDOM_SEED)",
"_____no_output_____"
]
],
[
[
"Let's start simple, and train a linear model:",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\n#@title Parameters\nLEARNING_RATE = 0.001 #@param {type:\"number\"}\nTRAIN_STEPS = 5000 #@param {type:\"integer\"}\nBATCH_SIZE = 64 #@param {type:\"integer\"}\n\nestimator = tf.estimator.LinearClassifier(\n feature_columns=feature_columns,\n n_classes=NUM_CLASSES,\n optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE),\n loss_reduction=loss_reduction,\n config=config)\n\nresults, _ = tf.estimator.train_and_evaluate(\n estimator,\n train_spec=tf.estimator.TrainSpec(\n input_fn=input_fn(\"train\", training=True, batch_size=BATCH_SIZE),\n max_steps=TRAIN_STEPS),\n eval_spec=tf.estimator.EvalSpec(\n input_fn=input_fn(\"test\", training=False, batch_size=BATCH_SIZE),\n steps=None))\nprint(\"Accuracy:\", results[\"accuracy\"])\nprint(\"Loss:\", results[\"average_loss\"])",
"Accuracy: 0.8413\nLoss: 0.464809\n"
]
],
[
[
"The linear model with default parameters achieves about **84.13% accuracy**.\n\nLet's see if we can do better with the `simple_dnn` AdaNet:",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\n#@title Parameters\nLEARNING_RATE = 0.003 #@param {type:\"number\"}\nTRAIN_STEPS = 5000 #@param {type:\"integer\"}\nBATCH_SIZE = 64 #@param {type:\"integer\"}\nADANET_ITERATIONS = 2 #@param {type:\"integer\"}\n\nestimator = adanet.Estimator(\n head=head,\n subnetwork_generator=simple_dnn.Generator(\n feature_columns=feature_columns,\n optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE),\n seed=RANDOM_SEED),\n max_iteration_steps=TRAIN_STEPS // ADANET_ITERATIONS,\n evaluator=adanet.Evaluator(\n input_fn=input_fn(\"train\", training=False, batch_size=BATCH_SIZE),\n steps=None),\n config=config)\n\nresults, _ = tf.estimator.train_and_evaluate(\n estimator,\n train_spec=tf.estimator.TrainSpec(\n input_fn=input_fn(\"train\", training=True, batch_size=BATCH_SIZE),\n max_steps=TRAIN_STEPS),\n eval_spec=tf.estimator.EvalSpec(\n input_fn=input_fn(\"test\", training=False, batch_size=BATCH_SIZE),\n steps=None))\nprint(\"Accuracy:\", results[\"accuracy\"])\nprint(\"Loss:\", results[\"average_loss\"])",
"Accuracy: 0.8566\nLoss: 0.408646\n"
]
],
[
[
"The `simple_dnn` AdaNet model with default parameters achieves about **85.66%\naccuracy**.\n\nThis improvement can be attributed to `simple_dnn` searching over\nfully-connected neural networks which have more expressive power than the linear\nmodel due to their non-linear activations.\n\nFully-connected layers are permutation invariant to their inputs, meaning that\nif we consistently swapped two pixels before training, the final model would\nperform identically. However, there is spatial and locality information in\nimages that we should try to capture. Applying a few convolutions to our inputs\nwill allow us to do so, and that will require defining a custom\n`adanet.subnetwork.Builder` and `adanet.subnetwork.Generator`.",
"_____no_output_____"
],
[
"## Define a convolutional AdaNet model\n\nCreating a new search space for AdaNet to explore is straightforward. There are\ntwo abstract classes you need to extend:\n\n1. `adanet.subnetwork.Builder`\n2. `adanet.subnetwork.Generator`\n\nSimilar to the tf.estimator.Estimator `model_fn`, `adanet.subnetwork.Builder`\nallows you to define your own TensorFlow graph for creating a neural network,\nand specify the training operations.\n\nBelow we define one that applies a 2D convolution, max-pooling, and then a\nfully-connected layer to the images:",
"_____no_output_____"
]
],
[
[
"class SimpleCNNBuilder(adanet.subnetwork.Builder):\n \"\"\"Builds a CNN subnetwork for AdaNet.\"\"\"\n\n def __init__(self, learning_rate, max_iteration_steps, seed):\n \"\"\"Initializes a `SimpleCNNBuilder`.\n\n Args:\n learning_rate: The float learning rate to use.\n max_iteration_steps: The number of steps per iteration.\n seed: The random seed.\n\n Returns:\n An instance of `SimpleCNNBuilder`.\n \"\"\"\n self._learning_rate = learning_rate\n self._max_iteration_steps = max_iteration_steps\n self._seed = seed\n\n def build_subnetwork(self,\n features,\n logits_dimension,\n training,\n iteration_step,\n summary,\n previous_ensemble=None):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n images = features.values()[0]\n kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed)\n x = tf.layers.conv2d(\n images,\n filters=16,\n kernel_size=3,\n padding=\"same\",\n activation=\"relu\",\n kernel_initializer=kernel_initializer)\n x = tf.layers.max_pooling2d(x, pool_size=2, strides=2)\n x = tf.layers.flatten(x)\n x = tf.layers.dense(\n x, units=64, activation=\"relu\", kernel_initializer=kernel_initializer)\n\n # The `Head` passed to adanet.Estimator will apply the softmax activation.\n logits = tf.layers.dense(\n x, units=10, activation=None, kernel_initializer=kernel_initializer)\n\n # Use a constant complexity measure, since all subnetworks have the same\n # architecture and hyperparameters.\n complexity = tf.constant(1)\n\n return adanet.Subnetwork(\n last_layer=x,\n logits=logits,\n complexity=complexity,\n persisted_tensors={})\n\n def build_subnetwork_train_op(self, \n subnetwork, \n loss, \n var_list, \n labels, \n iteration_step,\n summary, \n previous_ensemble=None):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n\n # Momentum optimizer with cosine learning rate decay works well with CNNs.\n learning_rate = tf.train.cosine_decay(\n learning_rate=self._learning_rate,\n global_step=iteration_step,\n decay_steps=self._max_iteration_steps)\n optimizer = tf.train.MomentumOptimizer(learning_rate, .9)\n # NOTE: The `adanet.Estimator` increments the global step.\n return optimizer.minimize(loss=loss, var_list=var_list)\n\n def build_mixture_weights_train_op(self, loss, var_list, logits, labels,\n iteration_step, summary):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n return tf.no_op(\"mixture_weights_train_op\")\n\n @property\n def name(self):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n return \"simple_cnn\"",
"_____no_output_____"
]
],
[
[
"Next, we extend a `adanet.subnetwork.Generator`, which defines the search\nspace of candidate `SimpleCNNBuilders` to consider including the final network.\nIt can create one or more at each iteration with different parameters, and the\nAdaNet algorithm will select the candidate that best improves the overall neural\nnetwork's `adanet_loss` on the training set.\n\nThe one below is very simple: it always creates the same architecture, but gives\nit a different random seed at each iteration:",
"_____no_output_____"
]
],
[
[
"class SimpleCNNGenerator(adanet.subnetwork.Generator):\n \"\"\"Generates a `SimpleCNN` at each iteration.\n \"\"\"\n\n def __init__(self, learning_rate, max_iteration_steps, seed=None):\n \"\"\"Initializes a `Generator` that builds `SimpleCNNs`.\n\n Args:\n learning_rate: The float learning rate to use.\n max_iteration_steps: The number of steps per iteration.\n seed: The random seed.\n\n Returns:\n An instance of `Generator`.\n \"\"\"\n self._seed = seed\n self._dnn_builder_fn = functools.partial(\n SimpleCNNBuilder,\n learning_rate=learning_rate,\n max_iteration_steps=max_iteration_steps)\n\n def generate_candidates(self, previous_ensemble, iteration_number,\n previous_ensemble_reports, all_reports):\n \"\"\"See `adanet.subnetwork.Generator`.\"\"\"\n seed = self._seed\n # Change the seed according to the iteration so that each subnetwork\n # learns something different.\n if seed is not None:\n seed += iteration_number\n return [self._dnn_builder_fn(seed=seed)]",
"_____no_output_____"
]
],
[
[
"With these defined, we pass them into a new `adanet.Estimator`:",
"_____no_output_____"
]
],
[
[
"#@title Parameters\nLEARNING_RATE = 0.05 #@param {type:\"number\"}\nTRAIN_STEPS = 5000 #@param {type:\"integer\"}\nBATCH_SIZE = 64 #@param {type:\"integer\"}\nADANET_ITERATIONS = 2 #@param {type:\"integer\"}\n\nmax_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS\nestimator = adanet.Estimator(\n head=head,\n subnetwork_generator=SimpleCNNGenerator(\n learning_rate=LEARNING_RATE,\n max_iteration_steps=max_iteration_steps,\n seed=RANDOM_SEED),\n max_iteration_steps=max_iteration_steps,\n evaluator=adanet.Evaluator(\n input_fn=input_fn(\"train\", training=False, batch_size=BATCH_SIZE),\n steps=None),\n adanet_loss_decay=.99,\n config=config)\n\nresults, _ = tf.estimator.train_and_evaluate(\n estimator,\n train_spec=tf.estimator.TrainSpec(\n input_fn=input_fn(\"train\", training=True, batch_size=BATCH_SIZE),\n max_steps=TRAIN_STEPS),\n eval_spec=tf.estimator.EvalSpec(\n input_fn=input_fn(\"test\", training=False, batch_size=BATCH_SIZE),\n steps=None))\nprint(\"Accuracy:\", results[\"accuracy\"])\nprint(\"Loss:\", results[\"average_loss\"])",
"Accuracy: 0.9041\nLoss: 0.26544\n"
]
],
[
[
"Our `SimpleCNNGenerator` code achieves **90.41% accuracy**.",
"_____no_output_____"
],
[
"## Conclusion and next steps\n\nIn this tutorial, you learned how to customize `adanet` to encode your\nunderstanding of a particular dataset, and explore novel search spaces with\nAdaNet.\n\nOne use-case that has worked for us at Google, has been to take a production\nmodel's TensorFlow code, convert it to into an `adanet.subnetwork.Builder`, and\nadaptively grow it into an ensemble. In many cases, this has given significant\nperformance improvements.\n\nAs an exercise, you can swap out the FASHION-MNIST with the MNIST handwritten\ndigits dataset in this notebook using `tf.keras.datasets.mnist.load_data()`, and\nsee how `SimpleCNN` performs.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d00d9ec195864e44a3c699c8e3a8a9d6a4279076 | 3,846 | ipynb | Jupyter Notebook | Machine Learning/Problem3/4_KL_Divergence.ipynb | bayeslabs/AiGym | 30c126fc2e140f9f164ff3f20638242b230e7e52 | [
"MIT"
] | 22 | 2019-07-15T08:26:31.000Z | 2022-01-17T06:29:17.000Z | Machine Learning/Problem3/4_KL_Divergence.ipynb | bayeslabs/AiGym | 30c126fc2e140f9f164ff3f20638242b230e7e52 | [
"MIT"
] | 26 | 2020-03-24T17:18:21.000Z | 2022-03-11T23:54:37.000Z | Machine Learning/Problem3/4_KL_Divergence.ipynb | bayeslabs/AiGym | 30c126fc2e140f9f164ff3f20638242b230e7e52 | [
"MIT"
] | 8 | 2019-07-17T09:13:11.000Z | 2021-04-16T11:20:51.000Z | 32.871795 | 179 | 0.460998 | [
[
[
"# CS229: Problem Set 3\n## Problem 4: KL Divergence and Maximum Likelihood\n\n\n**C. Combier**\n\nThis iPython Notebook provides solutions to Stanford's CS229 (Machine Learning, Fall 2017) graduate course problem set 3, taught by Andrew Ng.\n\nThe problem set can be found here: [./ps3.pdf](ps3.pdf)\n\nI chose to write the solutions to the coding questions in Python, whereas the Stanford class is taught with Matlab/Octave.\n\n## Notation\n\n- $x^i$ is the $i^{th}$ feature vector\n- $y^i$ is the expected outcome for the $i^{th}$ training example\n- $m$ is the number of training examples\n- $n$ is the number of features",
"_____no_output_____"
],
[
"### Question 4.a)\n\nThe goal is to prove that $KL(P||Q) \\geq 0$.\n\n$$\n\\begin{align*}\nKL(P||Q) &= \\sum_x P(x) \\log \\frac{P(x)}{Q(x)} \\\\\n&= -\\sum_x P(x) \\log \\frac{Q(x)}{P(x)} \\\\\n& \\geq -\\log \\sum_x P(x) \\frac{Q(x)}{P(x)} \\\\\n& \\geq -\\log \\sum_x Q(x) \\\\\n& \\geq - \\log 1 \\\\\n& \\geq 0\n\\end{align*}\n$$\n\nNow we prove $KL(P||Q) = 0 \\iff P=Q$.\n\n1. If $P = Q$, then it is immediate that $\\log \\frac{P(x)}{Q(x)} = 0$ and hence $KL(P||Q) = 0$\n\n2. If $KL(P||Q) = 0$, then $\\forall x$, $\\frac{P(x)}{Q(x)} = 1$, therefore $P = Q$\n\n",
"_____no_output_____"
],
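[
"**A quick numerical sanity check for 4.a)** (values chosen here purely for illustration): take $P=(\\tfrac{1}{2},\\tfrac{1}{2})$ and $Q=(\\tfrac{1}{4},\\tfrac{3}{4})$. Then\n\n$$KL(P||Q) = \\tfrac{1}{2}\\log 2 + \\tfrac{1}{2}\\log \\tfrac{2}{3} = \\tfrac{1}{2}\\log \\tfrac{4}{3} \\approx 0.144 > 0,$$\n\nwhile $KL(P||P) = 0$, matching the two claims above.",
"_____no_output_____"
],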
[
"### Question 4.b)\n\n\\begin{align*}\nKL(P(X) \\parallel Q(X)) + KL(P(Y|X) \\parallel Q(Y|X)) \n&= \\sum_x P(x) (\\log \\frac{P(x)}{Q(x)} + \\sum_y P(y|x) \\log \\frac{P(y|x)}{Q(y|x)}) \\\\\n&= \\sum_x P(x) \\sum_y P(y|x) ( \\log \\frac{P(x)}{Q(x)} + \\log \\frac{P(y|x)}{Q(y|x)} ) \\\\\n\\end{align*}\n\nWe can include the term $\\log \\frac{P(x)}{Q(x)}$ in the sum over $y$, because $\\sum_y P(y|x) = 1$ since $P$ is a probability distribution. We continue the calculation:\n\n\\begin{align*}\nKL(P(X) \\parallel Q(X)) + KL(P(Y|X) \\parallel Q(Y|X)) \n&= \\sum_x P(x) \\sum_y P(y|x) \\log \\frac{P(x) P(y|x)}{Q(x) Q(y|x)} \\\\\n&= \\sum_x P(x) \\sum_y P(y|x) \\log \\frac{P(x, y)}{Q(x, y)} \\\\\n&= \\sum_x P(x, y) \\log \\frac{P(x, y)}{Q(x, y)} \\\\\n&= KL(P(X, Y) || Q(X, Y)) \\\\\n\\end{align*}",
"_____no_output_____"
],
[
"### Question 4.c)\n\n\\begin{align*}\nKL(\\hat P || P_{\\theta}) \n&= \\sum_x \\hat P(x) \\log \\frac{\\hat P(x)}{P_{\\theta}(x)} \\\\\n&= - \\sum_x \\hat P(x) \\log \\frac{P_{\\theta}(x)}{\\hat P(x)} \\\\\n&= - \\sum_x (\\frac{1}{m} \\sum_{i=1}^{m} 1 \\{x^{(i)} = x\\}). \\log \\frac{P_{\\theta}(x)}{\\frac{1}{m} \\sum_{i=1}^{m} 1 \\{x^{(i)} = x\\}} \\\\\n&= - \\frac{1}{m} \\sum_{i=1}^{m} \\log P_{\\theta}(x^{(i)}) \\\\\n\\end{align*}\n\nThus, minimizing $KL(\\hat P || P_{\\theta})$ is equivalent to maximizing $\\sum_{i=1}^{m} \\log P_{\\theta}(x^{(i)}) = \\ell(\\theta)$",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d00da45766ef8a93c09f028957736681ec20390a | 65,289 | ipynb | Jupyter Notebook | Run Example.ipynb | tj-kim/pytorch-cw2 | a793c7c2f8343440b5caa1f41999945e8557a8dd | [
"MIT"
] | null | null | null | Run Example.ipynb | tj-kim/pytorch-cw2 | a793c7c2f8343440b5caa1f41999945e8557a8dd | [
"MIT"
] | null | null | null | Run Example.ipynb | tj-kim/pytorch-cw2 | a793c7c2f8343440b5caa1f41999945e8557a8dd | [
"MIT"
] | null | null | null | 151.834884 | 53,220 | 0.8824 | [
[
[
"# CW Attack Example\n\nTJ Kim <br />\n1.28.21\n\n### Summary: \nImplement CW attack on toy network example given in the readme of the github. <br />\nhttps://github.com/tj-kim/pytorch-cw2?organization=tj-kim&organization=tj-kim\n\nA dummy network is made using CIFAR example. <br />\nhttps://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html",
"_____no_output_____"
],
[
"### Build Dummy Pytorch Network",
"_____no_output_____"
]
],
[
[
"import torch\nimport torchvision\nimport torchvision.transforms as transforms",
"_____no_output_____"
]
],
[
[
"Download a few classes from the dataset.",
"_____no_output_____"
]
],
[
[
"batch_size = 10\n\ntransform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,\n shuffle=True, num_workers=2)\n\ntestset = torchvision.datasets.CIFAR10(root='./data', train=False,\n download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,\n shuffle=False, num_workers=2)\n\nclasses = ('plane', 'car', 'bird', 'cat',\n 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')",
"Files already downloaded and verified\nFiles already downloaded and verified\n"
]
],
[
[
"Show a few images from the dataset.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\n# functions to show an image\n\n\ndef imshow(img):\n img = img / 2 + 0.5 # unnormalize\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n plt.show()\n\n\n# get some random training images\ndataiter = iter(trainloader)\nimages, labels = dataiter.next()\n\n# show images\nimshow(torchvision.utils.make_grid(images))\n# print labels\nprint(' '.join('%5s' % classes[labels[j]] for j in range(4)))",
"_____no_output_____"
]
],
[
[
"Define a NN.",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16 * 5 * 5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x",
"_____no_output_____"
]
],
[
[
"Define loss and optimizer",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\n\nnet = Net()\n\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)",
"_____no_output_____"
]
],
[
[
"Train the Network",
"_____no_output_____"
]
],
[
[
"\ntrain_flag = False\nPATH = './cifar_net.pth'\n\nif train_flag:\n for epoch in range(2): # loop over the dataset multiple times\n\n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n # get the inputs; data is a list of [inputs, labels]\n inputs, labels = data\n\n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n\n # print statistics\n running_loss += loss.item()\n if i % 2000 == 1999: # print every 2000 mini-batches\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 2000))\n running_loss = 0.0\n\n print('Finished Training')\nelse:\n net.load_state_dict(torch.load(PATH))",
"_____no_output_____"
]
],
[
[
"Save Existing Network.",
"_____no_output_____"
]
],
[
[
"if train_flag:\n torch.save(net.state_dict(), PATH)",
"_____no_output_____"
]
],
[
[
"Test Acc.",
"_____no_output_____"
]
],
[
[
"correct = 0\ntotal = 0\nwith torch.no_grad():\n for data in testloader:\n images, labels = data\n outputs = net(images)\n _, predicted = torch.max(outputs.data, 1)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\nprint('Accuracy of the network on the 10000 test images: %d %%' % (\n 100 * correct / total))",
"Accuracy of the network on the 10000 test images: 52 %\n"
]
],
[
[
"### C&W Attack\n\nPerform attack on toy network.\n\nBefore running the example code, we have to set the following parameters:\n\n- dataloader\n- mean\n- std \n\nThe mean and std are one value per each channel of input",
"_____no_output_____"
]
],
[
[
"dataloader = trainloader\nmean = (0.5,0.5,0.5)\nstd = (0.5,0.5,0.5)",
"_____no_output_____"
],
[
"import torch\nimport cw\n\ninputs_box = (min((0 - m) / s for m, s in zip(mean, std)),\n max((1 - m) / s for m, s in zip(mean, std)))\n\n\"\"\"\n# an untargeted adversary\nadversary = cw.L2Adversary(targeted=False,\n confidence=0.0,\n search_steps=10,\n box=inputs_box,\n optimizer_lr=5e-4)\n\ninputs, targets = next(iter(dataloader))\nadversarial_examples = adversary(net, inputs, targets, to_numpy=False)\nassert isinstance(adversarial_examples, torch.FloatTensor)\nassert adversarial_examples.size() == inputs.size()\n\"\"\"\n\n# a targeted adversary\nadversary = cw.L2Adversary(targeted=True,\n confidence=0.0,\n search_steps=10,\n box=inputs_box,\n optimizer_lr=5e-4)\n\ninputs, orig_label = next(iter(dataloader))\n# a batch of any attack targets\nattack_targets = torch.ones(inputs.size(0), dtype = torch.long) * 3\nadversarial_examples = adversary(net, inputs, attack_targets, to_numpy=False)\nassert isinstance(adversarial_examples, torch.FloatTensor)\nassert adversarial_examples.size() == inputs.size()",
"Using scale consts: [0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001]\nUsing scale consts: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.0005, 0.01, 0.0005]\nUsing scale consts: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.00025, 0.1, 0.00025]\nUsing scale consts: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.000125, 1.0, 0.000125]\nUsing scale consts: [0.55, 0.55, 0.55, 0.55, 0.55, 10.0, 0.55, 6.25e-05, 0.55, 6.25e-05]\nUsing scale consts: [0.325, 0.325, 0.325, 0.775, 0.325, 5.5, 0.325, 3.125e-05, 0.775, 3.125e-05]\nUsing scale consts: [0.4375, 0.4375, 0.21250000000000002, 0.6625000000000001, 0.21250000000000002, 3.25, 0.21250000000000002, 1.5625e-05, 0.8875, 1.5625e-05]\nUsing scale consts: [0.49375, 0.49375, 0.15625, 0.6062500000000001, 0.15625, 2.125, 0.15625, 7.8125e-06, 0.94375, 7.8125e-06]\nUsing scale consts: [0.5218750000000001, 0.5218750000000001, 0.184375, 0.6343750000000001, 0.128125, 1.5625, 0.184375, 3.90625e-06, 0.9156249999999999, 3.90625e-06]\nUsing scale consts: [0.55, 0.55, 0.21250000000000002, 0.6625000000000001, 0.15625, 2.125, 0.184375, 3.90625e-06, 0.9156249999999999, 3.90625e-06]\n"
],
[
"# Obtain the outputs of the adversarial perturbations vs. original\nprint(\"attacked:\", torch.argmax(net(adversarial_examples),dim=1))\nprint(\"original:\", orig_label)",
"attacked: tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3])\noriginal: tensor([6, 9, 6, 2, 6, 9, 2, 3, 8, 5])\n"
]
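,
[
"# Hedged follow-up (assumption: run after the cells above, reusing `inputs`,\n# `adversarial_examples`, `attack_targets`, and `net`): measure the L2 size of\n# the perturbation and the targeted success rate.\nperturbation = adversarial_examples - inputs\nl2_norms = perturbation.view(perturbation.size(0), -1).norm(p=2, dim=1)\nsuccess = torch.argmax(net(adversarial_examples), dim=1) == attack_targets\nprint('mean L2 perturbation:', l2_norms.mean().item())\nprint('targeted success rate:', success.float().mean().item())",
"_____no_output_____"
]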
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d00da60e80772a97b9186dada4a23529d8bd8639 | 20,141 | ipynb | Jupyter Notebook | Topic_modelling_with_svd_and_nmf.ipynb | AdityaVarmaUddaraju/Topic_Modelling | d80406d3fb000cb6fe8766bfe262b6ce800ee535 | [
"MIT"
] | null | null | null | Topic_modelling_with_svd_and_nmf.ipynb | AdityaVarmaUddaraju/Topic_Modelling | d80406d3fb000cb6fe8766bfe262b6ce800ee535 | [
"MIT"
] | null | null | null | Topic_modelling_with_svd_and_nmf.ipynb | AdityaVarmaUddaraju/Topic_Modelling | d80406d3fb000cb6fe8766bfe262b6ce800ee535 | [
"MIT"
] | null | null | null | 28.051532 | 886 | 0.469639 | [
[
[
"<a href=\"https://colab.research.google.com/github/AdityaVarmaUddaraju/Topic_Modelling/blob/main/Topic_modelling_with_svd_and_nmf.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Singular Value Decomposition",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn.datasets import fetch_20newsgroups\nfrom sklearn import decomposition\nfrom scipy import linalg",
"_____no_output_____"
],
[
"categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']\nremove = ('headers', 'footers', 'quotes')\nnewsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)\nnewsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)",
"Downloading 20news dataset. This may take a few minutes.\nDownloading dataset from https://ndownloader.figshare.com/files/5975967 (14 MB)\n"
],
[
"first_3_text = newsgroups_train.data[:3]\nfirst_3_label = newsgroups_train.target[:3]\nfor text,label in zip(first_3_text, first_3_label):\n print(f'{text}')\n print(f'topic: {label}')",
"Hi,\n\nI've noticed that if you only save a model (with all your mapping planes\npositioned carefully) to a .3DS file that when you reload it after restarting\n3DS, they are given a default position and orientation. But if you save\nto a .PRJ file their positions/orientation are preserved. Does anyone\nknow why this information is not stored in the .3DS file? Nothing is\nexplicitly said in the manual about saving texture rules in the .PRJ file. \nI'd like to be able to read the texture rule information, does anyone have \nthe format for the .PRJ file?\n\nIs the .CEL file format available from somewhere?\n\nRych\ntopic: 1\n\n\nSeems to be, barring evidence to the contrary, that Koresh was simply\nanother deranged fanatic who thought it neccessary to take a whole bunch of\nfolks with him, children and all, to satisfy his delusional mania. Jim\nJones, circa 1993.\n\n\nNope - fruitcakes like Koresh have been demonstrating such evil corruption\nfor centuries.\ntopic: 3\n\n >In article <[email protected]>, [email protected] (Mark Brader) \n\nMB> So the\nMB> 1970 figure seems unlikely to actually be anything but a perijove.\n\nJG>Sorry, _perijoves_...I'm not used to talking this language.\n\nCouldn't we just say periapsis or apoapsis?\n\n \ntopic: 2\n"
],
[
"newsgroups_train.target_names",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer",
"_____no_output_____"
],
[
"vectorizer = CountVectorizer(stop_words='english')",
"_____no_output_____"
],
[
"vectors = vectorizer.fit_transform(newsgroups_train.data).todense()\nvectors.shape",
"_____no_output_____"
],
[
"len(newsgroups_train.data)",
"_____no_output_____"
],
[
"vocab = np.array(vectorizer.get_feature_names())",
"_____no_output_____"
],
[
"vocab.shape",
"_____no_output_____"
],
[
"# Usinf svd to decompose term document matrix \nU, s, Vh = linalg.svd(vectors, full_matrices=False)",
"_____no_output_____"
],
[
"U.shape, s.shape, Vh.shape",
"_____no_output_____"
],
[
"num_top_words=8\n\ndef show_topics(a):\n top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]\n topic_words = ([top_words(t) for t in a])\n return [' '.join(t) for t in topic_words]",
"_____no_output_____"
],
[
"show_topics(Vh[383:384])",
"_____no_output_____"
],
[
"np.argmax(U[0])",
"_____no_output_____"
],
[
"newsgroups_train.data[7]",
"_____no_output_____"
],
[
"np.argmax(U[7])",
"_____no_output_____"
],
[
"show_topics(Vh[431:432])",
"_____no_output_____"
]
],
[
[
"# Non-negative Matrix Factorization",
"_____no_output_____"
]
],
[
[
"clf = decomposition.NMF(n_components=5, random_state=1)",
"_____no_output_____"
],
[
"W1 = clf.fit_transform(vectors)\nH1 = clf.components_",
"_____no_output_____"
],
[
"show_topics(H1)",
"_____no_output_____"
],
[
"W1[0]",
"_____no_output_____"
]
],
[
[
"# Truncated SVD",
"_____no_output_____"
]
],
[
[
"!pip install fbpca\nimport fbpca",
"Collecting fbpca\n Downloading fbpca-1.0.tar.gz (11 kB)\nBuilding wheels for collected packages: fbpca\n Building wheel for fbpca (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for fbpca: filename=fbpca-1.0-py3-none-any.whl size=11376 sha256=0ba0e3732d8f48850f61a19b21358c0c90a5d80b92b86ed5f03f4d6009798e58\n Stored in directory: /root/.cache/pip/wheels/93/08/0c/1b9866c35c8d3f136d100dfe88036a32e0795437daca089f70\nSuccessfully built fbpca\nInstalling collected packages: fbpca\nSuccessfully installed fbpca-1.0\n"
],
[
"%time u, s, v = np.linalg.svd(vectors, full_matrices=False)",
"CPU times: user 1min 21s, sys: 4.4 s, total: 1min 26s\nWall time: 44.4 s\n"
],
[
"%time u, s, v = decomposition.randomized_svd(vectors, 10)",
"CPU times: user 13.1 s, sys: 1.82 s, total: 15 s\nWall time: 10.2 s\n"
],
[
"%time u, s, v = fbpca.pca(vectors, 10)",
"CPU times: user 2.95 s, sys: 734 ms, total: 3.68 s\nWall time: 1.99 s\n"
],
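[
"# Hedged check (assumption: reuses u, s, v from the fbpca.pca call above):\n# how well do the rank-10 factors reconstruct the term-document matrix?\nreconstruction = u @ np.diag(s) @ v\nrel_err = np.linalg.norm(vectors - reconstruction) / np.linalg.norm(vectors)\nprint('relative Frobenius reconstruction error:', rel_err)",
"_____no_output_____"
],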
[
"show_topics(v)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00da70f2fa0ffa50c59e9b630413a64cf5b5f52 | 55,215 | ipynb | Jupyter Notebook | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch | 566dd73c3e289ef16bc8a30f814284b8e243f731 | [
"MIT"
] | null | null | null | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch | 566dd73c3e289ef16bc8a30f814284b8e243f731 | [
"MIT"
] | null | null | null | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch | 566dd73c3e289ef16bc8a30f814284b8e243f731 | [
"MIT"
] | null | null | null | 121.085526 | 9,088 | 0.863063 | [
[
[
"# Simple RNN\n\nIn this notebook, we're going to train a simple RNN to do **time-series prediction**. Given some set of input data, it should be able to generate a prediction for the next time step!\n<img src='assets/time_prediction.png' width=40% />\n\n> * First, we'll create our data\n* Then, define an RNN in PyTorch\n* Finally, we'll train our network and see how it performs",
"_____no_output_____"
],
[
"### Import resources and create data ",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torch import nn\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,5))\n\n# how many time steps/data pts are in one batch of data\nseq_length = 20\n\n# generate evenly spaced data pts\ntime_steps = np.linspace(0, np.pi, seq_length + 1)\ndata = np.sin(time_steps)\ndata.resize((seq_length + 1, 1)) # size becomes (seq_length+1, 1), adds an input_size dimension\n\nx = data[:-1] # all but the last piece of data\ny = data[1:] # all but the first\n\n# display the data\nplt.plot(time_steps[1:], x, 'r.', label='input, x') # x\nplt.plot(time_steps[1:], y, 'b.', label='target, y') # y\n\nplt.legend(loc='best')\nplt.show()",
"_____no_output_____"
]
],
[
[
"---\n## Define the RNN\n\nNext, we define an RNN in PyTorch. We'll use `nn.RNN` to create an RNN layer, then we'll add a last, fully-connected layer to get the output size that we want. An RNN takes in a number of parameters:\n* **input_size** - the size of the input\n* **hidden_dim** - the number of features in the RNN output and in the hidden state\n* **n_layers** - the number of layers that make up the RNN, typically 1-3; greater than 1 means that you'll create a stacked RNN\n* **batch_first** - whether or not the input/output of the RNN will have the batch_size as the first dimension (batch_size, seq_length, hidden_dim)\n\nTake a look at the [RNN documentation](https://pytorch.org/docs/stable/nn.html#rnn) to read more about recurrent layers.",
"_____no_output_____"
]
],
[
[
"class RNN(nn.Module):\n def __init__(self, input_size, output_size, hidden_dim, n_layers):\n super(RNN, self).__init__()\n \n self.hidden_dim=hidden_dim\n\n # define an RNN with specified parameters\n # batch_first means that the first dim of the input and output will be the batch_size\n self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)\n \n # last, fully-connected layer\n self.fc = nn.Linear(hidden_dim, output_size)\n\n def forward(self, x, hidden):\n # x (batch_size, seq_length, input_size)\n # hidden (n_layers, batch_size, hidden_dim)\n # r_out (batch_size, seq_length, hidden_dim)\n batch_size = x.size(0)\n \n # get RNN outputs\n r_out, hidden = self.rnn(x, hidden)\n # shape output to be (batch_size*seq_length, hidden_dim)\n r_out = r_out.view(-1, self.hidden_dim) \n \n # get final output \n output = self.fc(r_out)\n \n return output, hidden\n",
"_____no_output_____"
]
],
[
[
"### Check the input and output dimensions\n\nAs a check that your model is working as expected, test out how it responds to input data.",
"_____no_output_____"
]
],
[
[
"# test that dimensions are as expected\ntest_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)\n\n# generate evenly spaced, test data pts\ntime_steps = np.linspace(0, np.pi, seq_length)\ndata = np.sin(time_steps)\ndata.resize((seq_length, 1))\n\ntest_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension\nprint('Input size: ', test_input.size())\n\n# test out rnn sizes\ntest_out, test_h = test_rnn(test_input, None)\nprint('Output size: ', test_out.size())\nprint('Hidden state size: ', test_h.size())",
"Input size: torch.Size([1, 20, 1])\nOutput size: torch.Size([20, 1])\nHidden state size: torch.Size([2, 1, 10])\n"
]
],
[
[
"---\n## Training the RNN\n\nNext, we'll instantiate an RNN with some specified hyperparameters. Then train it over a series of steps, and see how it performs.",
"_____no_output_____"
]
],
[
[
"# decide on hyperparameters\ninput_size=1 \noutput_size=1\nhidden_dim=32\nn_layers=1\n\n# instantiate an RNN\nrnn = RNN(input_size, output_size, hidden_dim, n_layers)\nprint(rnn)",
"RNN(\n (rnn): RNN(1, 32, batch_first=True)\n (fc): Linear(in_features=32, out_features=1, bias=True)\n)\n"
]
],
[
[
"### Loss and Optimization\n\nThis is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point?\n\n>* The data points are coordinate values, so to compare a predicted and ground_truth point, we'll use a regression loss: the mean squared error.\n* It's typical to use an Adam optimizer for recurrent models.",
"_____no_output_____"
]
],
[
[
"# MSE loss and Adam optimizer with a learning rate of 0.01\ncriterion = nn.MSELoss()\noptimizer = torch.optim.Adam(rnn.parameters(), lr=0.01) ",
"_____no_output_____"
]
],
[
[
"### Defining the training function\n\nThis function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions, every so often.\n\n#### Hidden State\n\nPay close attention to the hidden state, here:\n* Before looping over a batch of training data, the hidden state is initialized\n* After a new hidden state is generated by the rnn, we get the latest hidden state, and use that as input to the rnn for the following steps",
"_____no_output_____"
]
],
[
[
"# train the RNN\ndef train(rnn, n_steps, print_every):\n \n # initialize the hidden state\n hidden = None \n \n for batch_i, step in enumerate(range(n_steps)):\n # defining the training data \n time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)\n data = np.sin(time_steps)\n data.resize((seq_length + 1, 1)) # input_size=1\n\n x = data[:-1]\n y = data[1:]\n \n # convert data into Tensors\n x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension\n y_tensor = torch.Tensor(y)\n\n # outputs from the rnn\n prediction, hidden = rnn(x_tensor, hidden)\n\n ## Representing Memory ##\n # make a new variable for hidden and detach the hidden state from its history\n # this way, we don't backpropagate through the entire history\n hidden = hidden.data\n\n # calculate the loss\n loss = criterion(prediction, y_tensor)\n # zero gradients\n optimizer.zero_grad()\n # perform backprop and update weights\n loss.backward()\n optimizer.step()\n\n # display loss and predictions\n if batch_i%print_every == 0: \n print('Loss: ', loss.item())\n plt.plot(time_steps[1:], x, 'r.') # input\n plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions\n plt.show()\n \n return rnn\n",
"_____no_output_____"
],
[
"# train the rnn and monitor results\nn_steps = 75\nprint_every = 15\n\ntrained_rnn = train(rnn, n_steps, print_every)",
"C:\\Users\\johnj\\miniconda3\\lib\\site-packages\\torch\\autograd\\__init__.py:145: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ..\\c10\\cuda\\CUDAFunctions.cpp:109.)\n Variable._execution_engine.run_backward(\n"
]
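,
[
"# Hedged sketch (assumptions: `trained_rnn` and `seq_length` from the cells\n# above): free-running generation -- seed with one sine period, then feed the\n# model its own predictions to forecast further ahead.\nseed = np.sin(np.linspace(0, np.pi, seq_length)).reshape(1, seq_length, 1)\nx_gen = torch.Tensor(seed)\nhidden = None\npredictions = []\nfor _ in range(20):\n    out, hidden = trained_rnn(x_gen, hidden)\n    next_val = out[-1].view(1, 1, 1)  # prediction for the next time step\n    predictions.append(next_val.item())\n    x_gen = next_val  # feed the prediction back in as the next input\nprint(predictions)",
"_____no_output_____"
]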
],
[
[
"### Time-Series Prediction\n\nTime-series prediction can be applied to many tasks. Think about weather forecasting or predicting the ebb and flow of stock market prices. You can even try to generate predictions much further in the future than just one time step!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d00db2c271592c6817b0eb43962b7fee1fe803f3 | 45,203 | ipynb | Jupyter Notebook | Untitled.ipynb | Mixpap/astrostatistics | a8bc2da44a1b93669b020e4916226385ddf05b3c | [
"MIT"
] | null | null | null | Untitled.ipynb | Mixpap/astrostatistics | a8bc2da44a1b93669b020e4916226385ddf05b3c | [
"MIT"
] | 2 | 2018-04-08T00:38:19.000Z | 2018-04-08T00:42:10.000Z | Untitled.ipynb | UOAPythonWorkshop/Astrostatistics-Workshop | 8fb06ea0e0f846e94af0542c0d90ad46bc2a8f46 | [
"MIT"
] | null | null | null | 124.184066 | 17,816 | 0.830586 | [
[
[
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc,rcParams\nrc('text', usetex=True)\nrcParams['figure.figsize'] = (8, 6.5)\nrcParams['ytick.labelsize'],rcParams['xtick.labelsize'] = 17.,17.\nrcParams['axes.labelsize']=19.\nrcParams['legend.fontsize']=17.\nrcParams['axes.titlesize']=20.\nrcParams['text.latex.preamble'] = ['\\\\usepackage{siunitx}']\nimport seaborn\nseaborn.despine()\nseaborn.set_style('white', {'xes.linewidth': 0.5, 'axes.edgecolor':'black'})\nseaborn.despine(left=True)\n%load_ext autoreload\nfrom astroML.linear_model import TLS_logL\nfrom astroML.plotting.mcmc import convert_to_stdev\nimport astroML.datasets as adata\nfrom astropy.table import Table\nfrom scipy import polyfit,linalg\nfrom scipy.optimize import curve_fit",
"_____no_output_____"
]
],
[
[
"Εστω οτι παρατηρούμε εναν αστέρα στον ουρανό και μετράμε τη ροή φωτονίων. Θεωρώντας ότι η ροή είναι σταθερή με το χρόνο ίση με $F_{\\mathtt{true}}$. \n\nΠαίρνουμε $N$ παρατηρήσεις, μετρώντας τη ροή $F_i$ και το σφάλμα $e_i$. \n\nΗ ανίχνευση ενός φωτονίου είναι ενα ανεξάρτητο γεγονός που ακολουθεί μια τυχαία κατανομή Poisson. Από τη διακύμανση της κατανομής Poisson υπολογίζουμε το σφάλμα $e_i=\\sqrt{F_i}$",
"_____no_output_____"
]
],
[
[
"N=100\nF_true=1000.\nF=np.random.poisson(F_true*np.ones(N))\ne=np.sqrt(F)\n\nplt.errorbar(np.arange(N),F,yerr=e, fmt='ok', ecolor='gray', alpha=0.5)\nplt.hlines(np.mean(F),0,N,linestyles='--')\nplt.hlines(F_true,0,N)\nprint np.mean(F),np.mean(F)-F_true,np.std(F)",
"_____no_output_____"
],
[
"ax=seaborn.distplot(F,bins=N/3)\nxx=np.linspace(F.min(),F.max())\ngaus=np.exp(-0.5*((xx-F_true)/np.std(F))**2)/np.sqrt(2.*np.pi*np.std(F)**2)\nax.plot(xx,gaus)",
"_____no_output_____"
]
],
[
[
"Η αρχική προσέγγιση μας είναι μέσω της μεγιστοποιήσης της πιθανοφάνειας. Με βάση τα δεδομένωα $D_i=(F_i,e_i)$ μπορούμε να υπολογίσουμε τη πιθανότητα να τα έχουμε παρατηρήσει δεδομένου της αληθινής τιμής $F_{\\mathtt{true}}$ υποθέτωντας ότι τα σφάλματα είναι gaussian\n$$\nP(D_i|F_{\\mathtt{true}})=\\frac{1}{\\sqrt{2\\pi e_i^2}}e^{-\\frac{(F_i-F_{\\mathtt{true}})^2}{2e_i^2}}\n$$\n\nΟρίζουμε τη συνάρτηση πιθανοφάνειας σαν το σύνολο των πιθανοτήτων για κάθε σημείο\n$$\nL(D|F_{\\mathtt{true}})=\\prod _{i=1}^N P(D_i|F_{\\mathtt{true}})\n$$\n\nΕπειδή η τιμή της συνάρτηση πιθανοφάνειας μπορεί να γίνει πολύ μικρή, είναι πιο έυκολο να χρησιμοποιήσουμε το λογάριθμο της\n$$\n\\log L = -\\frac{1}{2} \\sum _{i=0}^N \\big[ \\log(2\\pi e_i^2) + \\frac{(F_i-F_\\mathtt{true})^2}{e_i^2} \\big]\n$$",
"_____no_output_____"
]
],
[
[
"#xx=np.linspace(0,10,5000)\nxx=np.ones(1000)\n#seaborn.distplot(np.random.poisson(xx),kde=False)\nplt.hist(np.random.poisson(xx))",
"_____no_output_____"
],
[
"w = 1. / e ** 2\nprint(\"\"\"\n F_true = {0}\n F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)\n \"\"\".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))",
"\n F_true = 1000.0\n F_est = 997 +/- 3 (based on 100 measurements)\n \n"
],
[
"np.sum(((F-F.mean())/F.std())**2)/(N-1)",
"_____no_output_____"
],
[
"def log_prior(theta):\n return 1 # flat prior\n\ndef log_likelihood(theta, F, e):\n return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)\n + (F - theta[0]) ** 2 / e ** 2)\n\ndef log_posterior(theta, F, e):\n return log_prior(theta) + log_likelihood(theta, F, e)",
"_____no_output_____"
],
[
"ndim = 1 # number of parameters in the model\nnwalkers = 100 # number of MCMC walkers\nnburn = 1000 # \"burn-in\" period to let chains stabilize\nnsteps = 5000 # number of MCMC steps to take\n\n# we'll start at random locations between 0 and 2000\nstarting_guesses = 20 * np.random.rand(nwalkers, ndim)\n\nimport emcee\nsampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])\nsampler.run_mcmc(starting_guesses, nsteps)\n\nsample = sampler.chain # shape = (nwalkers, nsteps, ndim)\nsample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points",
"_____no_output_____"
],
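[
"# Hedged MCMC sanity check (assumption: `sampler` comes from the cell above):\n# the mean acceptance fraction should typically land roughly in the 0.2-0.5\n# range for a well-tuned ensemble sampler.\nprint('mean acceptance fraction:', np.mean(sampler.acceptance_fraction))",
"_____no_output_____"
],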
[
"sampler.chain[0]",
"_____no_output_____"
],
[
"# plot a histogram of the sample\nplt.hist(sampler.chain, bins=50, histtype=\"stepfilled\", alpha=0.3, normed=True)\n\n# plot a best-fit Gaussian\nF_fit = np.linspace(F.min(),F.max(),500)\npdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)\n\n#plt.plot(F_fit, pdf, '-k')\nplt.xlabel(\"F\"); plt.ylabel(\"P(F)\")",
"_____no_output_____"
],
[
"print(\"\"\"\n F_true = {0}\n F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)\n \"\"\".format(F_true, np.mean(sample), np.std(sample), N))",
"\n F_true = 1000.0\n F_est = 997 +/- 3 (based on 100 measurements)\n \n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00dc5ed524e0cc7a6ed36808533c827b39bcb5b | 519,913 | ipynb | Jupyter Notebook | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml | f7e5dc771b192ed461294be4f05858bda7e63e27 | [
"Apache-2.0"
] | null | null | null | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml | f7e5dc771b192ed461294be4f05858bda7e63e27 | [
"Apache-2.0"
] | 7 | 2020-03-24T16:43:00.000Z | 2022-03-11T23:41:28.000Z | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml | f7e5dc771b192ed461294be4f05858bda7e63e27 | [
"Apache-2.0"
] | null | null | null | 697.869799 | 103,222 | 0.939005 | [
[
[
"# Load MNIST Data ",
"_____no_output_____"
]
],
[
[
"# MNIST dataset downloaded from Kaggle : \n#https://www.kaggle.com/c/digit-recognizer/data\n\n# Functions to read and show images.\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n\n \nd0 = pd.read_csv('./mnist_train.csv')\n\nprint(d0.head(5)) # print first five rows of d0.\n\n# save the labels into a variable l.\nl = d0['label']\n\n# Drop the label feature and store the pixel data in d.\nd = d0.drop(\"label\",axis=1)\n\n \n \n",
" label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 \\\n0 1 0 0 0 0 0 0 0 0 \n1 0 0 0 0 0 0 0 0 0 \n2 1 0 0 0 0 0 0 0 0 \n3 4 0 0 0 0 0 0 0 0 \n4 0 0 0 0 0 0 0 0 0 \n\n pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 \\\n0 0 ... 0 0 0 0 0 \n1 0 ... 0 0 0 0 0 \n2 0 ... 0 0 0 0 0 \n3 0 ... 0 0 0 0 0 \n4 0 ... 0 0 0 0 0 \n\n pixel779 pixel780 pixel781 pixel782 pixel783 \n0 0 0 0 0 0 \n1 0 0 0 0 0 \n2 0 0 0 0 0 \n3 0 0 0 0 0 \n4 0 0 0 0 0 \n\n[5 rows x 785 columns]\n"
],
[
"print(d.shape)\nprint(l.shape)",
"(42000, 784)\n(42000,)\n"
],
[
"# display or plot a number.\nplt.figure(figsize=(7,7))\nidx = 1\n\ngrid_data = d.iloc[idx].as_matrix().reshape(28,28) # reshape from 1d to 2d pixel array\nplt.imshow(grid_data, interpolation = \"none\", cmap = \"gray\")\nplt.show()\n\nprint(l[idx])",
"_____no_output_____"
]
],
[
[
"# 2D Visualization using PCA ",
"_____no_output_____"
]
],
[
[
"# Pick first 15K data-points to work on for time-effeciency.\n#Excercise: Perform the same analysis on all of 42K data-points.\n\nlabels = l.head(15000)\ndata = d.head(15000)\n\nprint(\"the shape of sample data = \", data.shape)\n",
"the shape of sample data = (15000, 784)\n"
],
[
"# Data-preprocessing: Standardizing the data\n\nfrom sklearn.preprocessing import StandardScaler\nstandardized_data = StandardScaler().fit_transform(data)\nprint(standardized_data.shape)\n",
"(15000, 784)\n"
],
[
"#find the co-variance matrix which is : A^T * A\nsample_data = standardized_data\n\n# matrix multiplication using numpy\ncovar_matrix = np.matmul(sample_data.T , sample_data)\n\nprint ( \"The shape of variance matrix = \", covar_matrix.shape)\n",
"The shape of variance matrix = (784, 784)\n"
],
[
"# finding the top two eigen-values and corresponding eigen-vectors \n# for projecting onto a 2-Dim space.\n\nfrom scipy.linalg import eigh \n\n# the parameter 'eigvals' is defined (low value to heigh value) \n# eigh function will return the eigen values in asending order\n# this code generates only the top 2 (782 and 783) eigenvalues.\nvalues, vectors = eigh(covar_matrix, eigvals=(782,783))\n\nprint(\"Shape of eigen vectors = \",vectors.shape)\n# converting the eigen vectors into (2,d) shape for easyness of further computations\nvectors = vectors.T\n\nprint(\"Updated shape of eigen vectors = \",vectors.shape)\n# here the vectors[1] represent the eigen vector corresponding 1st principal eigen vector\n# here the vectors[0] represent the eigen vector corresponding 2nd principal eigen vector",
"Shape of eigen vectors = (784, 2)\nUpdated shape of eigen vectors = (2, 784)\n"
],
[
"# projecting the original data sample on the plane \n#formed by two principal eigen vectors by vector-vector multiplication.\n\nimport matplotlib.pyplot as plt\nnew_coordinates = np.matmul(vectors, sample_data.T)\n\nprint (\" resultanat new data points' shape \", vectors.shape, \"X\", sample_data.T.shape,\" = \", new_coordinates.shape)",
" resultanat new data points' shape (2, 784) X (784, 15000) = (2, 15000)\n"
],
[
"import pandas as pd\n\n# appending label to the 2d projected data\nnew_coordinates = np.vstack((new_coordinates, labels)).T\n\n# creating a new data frame for ploting the labeled points.\ndataframe = pd.DataFrame(data=new_coordinates, columns=(\"1st_principal\", \"2nd_principal\", \"label\"))\nprint(dataframe.head())",
" 1st_principal 2nd_principal label\n0 -5.558661 -5.043558 1.0\n1 6.193635 19.305278 0.0\n2 -1.909878 -7.678775 1.0\n3 5.525748 -0.464845 4.0\n4 6.366527 26.644289 0.0\n"
],
[
"# ploting the 2d data points with seaborn\nimport seaborn as sn\nsn.FacetGrid(dataframe, hue=\"label\", size=6).map(plt.scatter, '1st_principal', '2nd_principal').add_legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# PCA using Scikit-Learn",
"_____no_output_____"
]
],
[
[
"# initializing the pca\nfrom sklearn import decomposition\npca = decomposition.PCA()\n",
"_____no_output_____"
],
[
"# configuring the parameteres\n# the number of components = 2\npca.n_components = 2\npca_data = pca.fit_transform(sample_data)\n\n# pca_reduced will contain the 2-d projects of simple data\nprint(\"shape of pca_reduced.shape = \", pca_data.shape)\n\n",
"shape of pca_reduced.shape = (15000, 2)\n"
],
[
"# attaching the label for each 2-d data point \npca_data = np.vstack((pca_data.T, labels)).T\n\n# creating a new data fram which help us in ploting the result data\npca_df = pd.DataFrame(data=pca_data, columns=(\"1st_principal\", \"2nd_principal\", \"label\"))\nsn.FacetGrid(pca_df, hue=\"label\", size=6).map(plt.scatter, '1st_principal', '2nd_principal').add_legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"# PCA for dimensionality redcution (not for visualization)",
"_____no_output_____"
]
],
[
[
"# PCA for dimensionality redcution (non-visualization)\n\npca.n_components = 784\npca_data = pca.fit_transform(sample_data)\n\npercentage_var_explained = pca.explained_variance_ / np.sum(pca.explained_variance_);\n\ncum_var_explained = np.cumsum(percentage_var_explained)\n\n# Plot the PCA spectrum\nplt.figure(1, figsize=(6, 4))\n\nplt.clf()\nplt.plot(cum_var_explained, linewidth=2)\nplt.axis('tight')\nplt.grid()\nplt.xlabel('n_components')\nplt.ylabel('Cumulative_explained_variance')\nplt.show()\n\n\n# If we take 200-dimensions, approx. 90% of variance is expalined.",
"_____no_output_____"
]
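,
[
"# Hedged follow-up (assumption: `cum_var_explained` from the cell above):\n# find how many components are needed to reach a given variance threshold.\nn_90 = np.argmax(cum_var_explained >= 0.90) + 1\nn_95 = np.argmax(cum_var_explained >= 0.95) + 1\nprint('components for 90% variance:', n_90)\nprint('components for 95% variance:', n_95)",
"_____no_output_____"
]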
],
[
[
"# t-SNE using Scikit-Learn",
"_____no_output_____"
]
],
[
[
"# TSNE\n\nfrom sklearn.manifold import TSNE\n\n# Picking the top 1000 points as TSNE takes a lot of time for 15K points\ndata_1000 = standardized_data[0:1000,:]\nlabels_1000 = labels[0:1000]\n\nmodel = TSNE(n_components=2, random_state=0)\n# configuring the parameteres\n# the number of components = 2\n# default perplexity = 30\n# default learning rate = 200\n# default Maximum number of iterations for the optimization = 1000\n\ntsne_data = model.fit_transform(data_1000)\n\n\n# creating a new data frame which help us in ploting the result data\ntsne_data = np.vstack((tsne_data.T, labels_1000)).T\ntsne_df = pd.DataFrame(data=tsne_data, columns=(\"Dim_1\", \"Dim_2\", \"label\"))\n\n# Ploting the result of tsne\nsn.FacetGrid(tsne_df, hue=\"label\", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()\nplt.show()",
"_____no_output_____"
],
[
"model = TSNE(n_components=2, random_state=0, perplexity=50)\ntsne_data = model.fit_transform(data_1000) \n\n# creating a new data fram which help us in ploting the result data\ntsne_data = np.vstack((tsne_data.T, labels_1000)).T\ntsne_df = pd.DataFrame(data=tsne_data, columns=(\"Dim_1\", \"Dim_2\", \"label\"))\n\n# Ploting the result of tsne\nsn.FacetGrid(tsne_df, hue=\"label\", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()\nplt.title('With perplexity = 50')\nplt.show()",
"_____no_output_____"
],
[
"model = TSNE(n_components=2, random_state=0, perplexity=50, n_iter=5000)\ntsne_data = model.fit_transform(data_1000) \n\n# creating a new data fram which help us in ploting the result data\ntsne_data = np.vstack((tsne_data.T, labels_1000)).T\ntsne_df = pd.DataFrame(data=tsne_data, columns=(\"Dim_1\", \"Dim_2\", \"label\"))\n\n# Ploting the result of tsne\nsn.FacetGrid(tsne_df, hue=\"label\", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()\nplt.title('With perplexity = 50, n_iter=5000')\nplt.show()",
"_____no_output_____"
],
[
"model = TSNE(n_components=2, random_state=0, perplexity=2)\ntsne_data = model.fit_transform(data_1000) \n\n# creating a new data fram which help us in ploting the result data\ntsne_data = np.vstack((tsne_data.T, labels_1000)).T\ntsne_df = pd.DataFrame(data=tsne_data, columns=(\"Dim_1\", \"Dim_2\", \"label\"))\n\n# Ploting the result of tsne\nsn.FacetGrid(tsne_df, hue=\"label\", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()\nplt.title('With perplexity = 2')\nplt.show()",
"_____no_output_____"
],
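[
"# Hedged speed-up sketch (assumptions: `standardized_data`, `labels`, and the\n# imports above): a common trick for the 42K-point exercise is to reduce to\n# ~50 PCA dimensions first and run t-SNE on that, which is much faster.\npca50 = decomposition.PCA(n_components=50)\ndata_pca50 = pca50.fit_transform(standardized_data)\nmodel = TSNE(n_components=2, random_state=0)\ntsne_data = model.fit_transform(data_pca50[:1000])",
"_____no_output_____"
],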
[
"#Excercise: Run the same analysis using 42K points with various \n#values of perplexity and iterations.\n\n# If you use all of the points, you can expect plots like this blog below:\n# http://colah.github.io/posts/2014-10-Visualizing-MNIST/",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
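"code",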
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d00dcfe5fb0d4e841b6582be74d6456238579f21 | 16,614 | ipynb | Jupyter Notebook | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data | f2692a05772c679c2ea67c4a7d68ad90823e585a | [
"CC0-1.0"
] | null | null | null | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data | f2692a05772c679c2ea67c4a7d68ad90823e585a | [
"CC0-1.0"
] | null | null | null | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data | f2692a05772c679c2ea67c4a7d68ad90823e585a | [
"CC0-1.0"
] | null | null | null | 18.398671 | 225 | 0.503611 | [
[
[
"!pip install vcrpy",
"_____no_output_____"
],
[
"import vcr\n\noffline = vcr.VCR(\n record_mode='new_episodes',\n)",
"_____no_output_____"
]
],
[
[
"# APIs and data",
"_____no_output_____"
],
[
"Catherine Devlin (@catherinedevlin)\n\nInnovation Specialist, 18F\n\nOakwood High School, Feb 16 2017",
"_____no_output_____"
],
[
"# Who am I?\n\n(hint: not Jean Valjean)\n\n![International Falls, MN winter street scene](http://kcc-tv.org/wp-content/uploads/2017/01/Winter-downtown.jpg)",
"_____no_output_____"
],
[
"# Cool things I've done\n\n- Chemical engineer in college\n- Oops, became a programmer\n- Created IPython `%sql` magic",
"_____no_output_____"
],
[
"# [Dayton Dynamic Languages](http://d8ndl.org)",
"_____no_output_____"
],
[
"# PyOhio\n\n![Volunteer with PyOhio shirt](https://lh3.googleusercontent.com/-5t5lFev02sI/UfQvMgfI_cI/AAAAAAAAHy8/bEC5sh9Fgc4/w800-h800/pyohio.jpg)",
"_____no_output_____"
],
[
"# [18F](18f.gsa.gov)",
"_____no_output_____"
],
[
"<a href=\"https://18f.gsa.gov\"><img src=\"https://18f.gsa.gov/assets/img/logos/18f-logo.svg\" \nalt=\"18F logo\" width=\"30%\" /></a>\n\n\"Digital startup\" within the Federal government\n\nIt's like college!\n\nMuch of what I'm teaching you I've learned the last couple years... some of it last week",
"_____no_output_____"
],
[
"# Federal Election Commission",
"_____no_output_____"
],
[
"[Old site](http://www.fec.gov/)\n\n[New site](https://beta.fec.gov/)\n\nUser research & best practices\n\nLet's look up our Representative",
"_____no_output_____"
],
[
"# API",
"_____no_output_____"
],
[
"![grocery truck](http://1.bp.blogspot.com/-O02HCO9IhSI/Tqk3autEPMI/AAAAAAAEj2s/idIM9s7hvgo/s1600/ATLAS+LOGISTICS+ATLANTA+GEORGIA+FREIGHTLINER+Day+Cab+Truck+Tractor+%252CKROGER+Trailer+Grocery+Store+Food+Supermarket.JPG)",
"_____no_output_____"
],
[
"# Webpage vs. API",
"_____no_output_____"
],
[
"# FEC API",
"_____no_output_____"
],
[
"https://api.open.fec.gov/developers/\n\nEvery API works differently.\n\nLet's find the committee ID for our Congressional representative.\n\nC00373001",
"_____no_output_____"
],
[
"https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20",
"_____no_output_____"
],
[
"- Knife\n- Cheese grater\n- Vegetable peeler\n- Apple corer\n- Food processor",
"_____no_output_____"
],
[
"![smore-making gadget](http://www.rd.com/wp-content/uploads/sites/2/2016/07/weird-kitchen-gadgets-smore-maker.jpg)",
"_____no_output_____"
],
[
"# `requests` library\n\nFirst, we install. That's like buying it.",
"_____no_output_____"
]
],
[
[
"!pip install requests",
"_____no_output_____"
]
],
[
[
"Then, we import. That's like getting it out of the cupboard.",
"_____no_output_____"
]
],
[
[
"import requests",
"_____no_output_____"
]
],
[
[
"# Oakwood High School",
"_____no_output_____"
]
],
[
[
"with offline.use_cassette('offline.vcr'):\n response = requests.get('http://ohs.oakwoodschools.org/pages/Oakwood_High_School')",
"_____no_output_____"
],
[
"response.ok",
"_____no_output_____"
],
[
"response.status_code",
"_____no_output_____"
],
[
"print(response.text)",
"_____no_output_____"
]
],
[
[
"We have backed our semi up to the front door.\n\nOK, back to checking out politicians.",
"_____no_output_____"
]
],
[
[
"url = 'https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20'\n\nwith offline.use_cassette('offline.vcr'):\n response = requests.get(url)",
"_____no_output_____"
],
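[
"# Hedged aside (same request as above): `requests` can build the query string\n# for us from a params dict, which is easier to read than one long URL.\nwith offline.use_cassette('offline.vcr'):\n    response = requests.get(\n        'https://api.open.fec.gov/v1/committee/C00373001/totals/',\n        params={'api_key': 'DEMO_KEY', 'sort': '-cycle',\n                'page': 1, 'per_page': 20})",
"_____no_output_____"
],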
[
"response.ok",
"_____no_output_____"
],
[
"response.status_code",
"_____no_output_____"
],
[
"response.json()",
"_____no_output_____"
],
[
"response.json()['results']",
"_____no_output_____"
],
[
"results = response.json()['results']",
"_____no_output_____"
],
[
"results[0]['cycle']",
"_____no_output_____"
],
[
"results[0]['disbursements']",
"_____no_output_____"
],
[
"for result in results:\n print(result['cycle'])",
"_____no_output_____"
],
[
"for result in results:\n year = result['cycle']\n spent = result['disbursements']\n print('year: {}\\t spent:{}'.format(year, spent))\n ",
"_____no_output_____"
]
],
[
[
"# [Pandas](http://pandas.pydata.org/)",
"_____no_output_____"
]
],
[
[
"!pip install pandas",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"data = pd.DataFrame(response.json()['results'])",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data = data.set_index('cycle')",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data['disbursements']",
"_____no_output_____"
],
[
"data[data['disbursements'] < 1000000 ]",
"_____no_output_____"
]
],
[
[
"# [Bokeh](http://bokeh.pydata.org/en/latest/)",
"_____no_output_____"
]
],
[
[
"!pip install bokeh",
"_____no_output_____"
],
[
"from bokeh.charts import Bar, show, output_notebook",
"_____no_output_____"
],
[
"by_year = Bar(data, values='disbursements')\n",
"_____no_output_____"
],
[
"output_notebook()\nshow(by_year)",
"_____no_output_____"
]
],
[
[
"# Playtime\n\n[so many options](http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html)\n\n- Which column to map?\n- Colors or styles?\n- Scatter\n- Better y-axis label?\n- Some other candidate committee?\n - Portman C00458463, Brown C00264697\n- Filter it",
"_____no_output_____"
],
[
"# Where's it coming from?\n\nhttps://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016\n",
"_____no_output_____"
]
],
[
[
"url = 'https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016'\nwith offline.use_cassette('offline.vcr'):\n response = requests.get(url)\nresults = response.json()['results']\ndata = pd.DataFrame(results)\ndata",
"_____no_output_____"
],
[
"data = data.set_index('state')",
"_____no_output_____"
],
[
"by_state = Bar(data, values='total')\nshow(by_state)",
"_____no_output_____"
]
],
[
[
"# More data\n\n[data.gov](https://www.data.gov/)\n\nWebsearch on anything + \"API\"\n\n",
"_____no_output_____"
],
[
"# What can you *really* do?\n\n[Moneyfollower](http://moneyfollower.us/)",
"_____no_output_____"
],
[
"# Learning more\n\n- [PyVideo](http://pyvideo.org/)\n- [Beginner's Guide](https://wiki.python.org/moin/BeginnersGuide)\n- [Hitchhiker's Guide](http://docs.python-guide.org/en/latest/)\n- [DDL](http://d8ndl.org/), [PyOhio](http://pyohio.org/)\n- @catherinedevlin",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d00dd009ecf59d74d955760fdb0e0fdab523a648 | 6,202 | ipynb | Jupyter Notebook | ejercicios/reg-toy-diabetes.ipynb | joseluisGA/videojuegos | a8795447fd40cd8fe032cadb4f2a1bd309a6e0de | [
"MIT"
] | null | null | null | ejercicios/reg-toy-diabetes.ipynb | joseluisGA/videojuegos | a8795447fd40cd8fe032cadb4f2a1bd309a6e0de | [
"MIT"
] | null | null | null | ejercicios/reg-toy-diabetes.ipynb | joseluisGA/videojuegos | a8795447fd40cd8fe032cadb4f2a1bd309a6e0de | [
"MIT"
] | 2 | 2021-06-15T08:44:05.000Z | 2021-07-17T09:57:04.000Z | 6,202 | 6,202 | 0.563689 | [
[
[
"[Diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)\n----------------\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom sklearn import datasets\n\ndiabetes = datasets.load_diabetes()\nprint(diabetes['DESCR'])",
".. _diabetes_dataset:\n\nDiabetes dataset\n----------------\n\nTen baseline variables, age, sex, body mass index, average blood\npressure, and six blood serum measurements were obtained for each of n =\n442 diabetes patients, as well as the response of interest, a\nquantitative measure of disease progression one year after baseline.\n\n**Data Set Characteristics:**\n\n :Number of Instances: 442\n\n :Number of Attributes: First 10 columns are numeric predictive values\n\n :Target: Column 11 is a quantitative measure of disease progression one year after baseline\n\n :Attribute Information:\n - Age\n - Sex\n - Body mass index\n - Average blood pressure\n - S1\n - S2\n - S3\n - S4\n - S5\n - S6\n\nNote: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times `n_samples` (i.e. the sum of squares of each column totals 1).\n\nSource URL:\nhttps://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nFor more information see:\nBradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) \"Least Angle Regression,\" Annals of Statistics (with discussion), 407-499.\n(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)\n"
],
[
"# Convert the data to a pandas dataframe\ndf = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)\ndf['diabetes'] = diabetes.target\ndf.head()",
"_____no_output_____"
]
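,
[
"# Hedged sketch (assumptions: `df` from above; an ordinary train/test split and\n# plain linear regression, just to show a baseline fit on this toy dataset).\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n    df.drop(columns='diabetes'), df['diabetes'], random_state=0)\nreg = LinearRegression().fit(X_train, y_train)\nprint('R^2 on held-out data:', reg.score(X_test, y_test))",
"_____no_output_____"
]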
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
d00dd49ba6ac7809738661f86058dd50084f4d7a | 30,152 | ipynb | Jupyter Notebook | RandomForest.ipynb | AM-2018-2-dusteam/ML-poker | a8413a562e64854419d9713750fea8688aa66d2f | [
"MIT"
] | null | null | null | RandomForest.ipynb | AM-2018-2-dusteam/ML-poker | a8413a562e64854419d9713750fea8688aa66d2f | [
"MIT"
] | null | null | null | RandomForest.ipynb | AM-2018-2-dusteam/ML-poker | a8413a562e64854419d9713750fea8688aa66d2f | [
"MIT"
] | 2 | 2018-09-30T17:29:29.000Z | 2018-10-06T01:08:26.000Z | 60.668008 | 18,888 | 0.790727 | [
[
[
"# Random Forest\n\nAplicação do random forest em uma mão de poker\n\n***Dataset:*** https://archive.ics.uci.edu/ml/datasets/Poker+Hand\n\n***Apresentação:*** https://docs.google.com/presentation/d/1zFS4cTf9xwvcVPiCOA-sV_RFx_UeoNX2dTthHkY9Am4/edit",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.utils import column_or_1d\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.model_selection import cross_val_score\nimport seaborn as sn\nimport timeit\n\n\nfrom format import format_poker_data\n\n",
"_____no_output_____"
],
[
"train_data = pd.read_csv('train.csv')\n",
"_____no_output_____"
],
[
"test_data = pd.read_csv('test.csv')",
"_____no_output_____"
],
[
"X_train, y_train = np.split(train_data,[-1],axis=1)",
"_____no_output_____"
],
[
"test_data = test_data.dropna()",
"_____no_output_____"
],
[
"X_test, y_test = np.split(test_data,[-1],axis=1)",
"_____no_output_____"
],
[
"start_time = timeit.default_timer()\n\nX_train , equal_suit_train = format_poker_data(X_train)\nelapsed = timeit.default_timer() - start_time\nprint(str(elapsed)+\" ns\")",
"0.06867059499927564 ns\n"
],
[
"X_test , equal_suit_test = format_poker_data(X_test)",
"_____no_output_____"
],
[
"rf = RandomForestClassifier(n_estimators=50,random_state=42)\nrf2 = RandomForestClassifier(n_estimators=50,random_state=42)",
"_____no_output_____"
],
[
"y_train = column_or_1d(y_train)\ny_test = column_or_1d(y_test)",
"_____no_output_____"
],
[
"rf.fit(X_train,y_train)",
"_____no_output_____"
],
[
"rf.score(X_train,y_train)",
"_____no_output_____"
],
[
"rf.score(X_test,y_test)",
"_____no_output_____"
],
[
"n_data_train = pd.DataFrame()\nn_data_train['predict'] = rf.predict(X_train)\nn_data_train['is_the_same'] = equal_suit_train\nn_data_train.shape\n\nn_data_test = pd.DataFrame()\nn_data_test['predict'] = rf.predict(X_test)\nn_data_test['is_the_same'] = equal_suit_test\n\nn_data_train.head()",
"_____no_output_____"
],
[
"n_data_train = pd.get_dummies(n_data_train,columns=['predict']).astype('bool')\nn_data_test = pd.get_dummies(n_data_test,columns=['predict']).astype('bool')",
"_____no_output_____"
],
[
"rf2.fit(n_data_train,y_train)",
"_____no_output_____"
],
[
"rf2.score(n_data_train,y_train)",
"_____no_output_____"
],
[
"rf2.score(n_data_test,y_test)",
"_____no_output_____"
],
[
"#Confusion Matrixfor Test Data\nconf_array_test = confusion_matrix(y_test,rf2.predict(n_data_test))\nconf_array_test = conf_array_test / conf_array_test.astype(np.float).sum(axis=1)\ndf_class_test = pd.DataFrame(conf_array_test, range(10),range(10))\nsn.set(font_scale=0.7)#for label size\nsn.heatmap(df_class_test,annot=True)# font size\n",
"_____no_output_____"
]
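,
[
"# Hedged follow-up (assumption: `rf` was fitted above): inspect the feature\n# importances of the first forest; larger values mean a feature was used more\n# by the split criterion.\nprint(rf.feature_importances_)",
"_____no_output_____"
]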
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00de54c5c08ee07365fb79baa14264100c11e52 | 327,339 | ipynb | Jupyter Notebook | PythonCodes/Utilities/WeightingPlots/WeightingFunction2.ipynb | Nicolucas/C-Scripts | 2608df5c2e635ad16f422877ff440af69f98f960 | [
"MIT"
] | 1 | 2020-02-25T08:05:13.000Z | 2020-02-25T08:05:13.000Z | PythonCodes/Utilities/WeightingPlots/WeightingFunction2.ipynb | Nicolucas/TEAR | bbeb599cf2bab70fd7a82041336a1a918e8727f2 | [
"MIT"
] | null | null | null | PythonCodes/Utilities/WeightingPlots/WeightingFunction2.ipynb | Nicolucas/TEAR | bbeb599cf2bab70fd7a82041336a1a918e8727f2 | [
"MIT"
] | null | null | null | 1,098.45302 | 74,852 | 0.952251 | [
[
[
"import numpy as np\nimport matplotlib.pylab as plt",
"_____no_output_____"
],
[
"\ndef Weight(phi,A=5, phi_o=0):\n return 1-(0.5*np.tanh(A*((np.abs(phi)-phi_o)))+0.5)\n\ndef annot_max(x,y, ax=None):\n x=np.array(x)\n y=np.array(y) \n xmax = x[np.argmax(y)]\n ymax = y.max()\n text= \"x={:.3f}, y={:.3f}\".format(xmax, ymax)\n if not ax:\n ax=plt.gca()\n bbox_props = dict(boxstyle=\"square,pad=0.3\", fc=\"w\", ec=\"k\", lw=0.72)\n arrowprops=dict(arrowstyle=\"->\",connectionstyle=\"angle,angleA=0,angleB=60\")\n kw = dict(xycoords='data',textcoords=\"axes fraction\",\n arrowprops=arrowprops, bbox=bbox_props, ha=\"right\", va=\"top\")\n ax.annotate(text, xy=(xmax, ymax), xytext=(0.94,0.96), **kw)\n\ndef plotweighting(philist, A, p, delta, ctephi_o, enumeration):\n label=enumeration+r\" $w(\\phi,\\phi_o=$\"+\"$\\delta$\"+r\"$ \\cdot $\"+\"{cte}\".format(cte=ctephi_o)+r\"$,A = $\"+\"{A}\".format(A=A)+r\"$\\frac{p}{\\delta})$\"+\"\\np = {p}, $\\delta$ = {delta}m\".format(p=p,delta=delta)\n plt.plot(philist,[Weight(phi, A = A*p/delta, phi_o = phi_o) for phi in philist], label = label)",
"_____no_output_____"
],
[
"plt.figure(figsize= [10, 4],dpi=100)\n\ndelta = 50.05;\nphilist=np.arange(-(delta+10),(delta+10),.5).tolist()\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = 0.65; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(A)\")\n\n\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 12; p = 2; ctephi_o = 0.85; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(B)\")\n\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 8; p = 2; ctephi_o = 0.80; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(C)\")\n\n\nplt.axvline([delta],c=\"k\");plt.axvline([-delta],c=\"k\")\n\nplt.xlabel(\"$\\phi$\")\nplt.title(r\"$w(\\phi,\\phi_o,A)=1-(0.5\\ tanh(A(|\\phi|-\\phi_o))+0.5)$\")\nplt.grid()\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize= [10, 4],dpi=100)\n\ndelta = 50.05;\nphilist=np.arange(-(delta+10),(delta+10),.5).tolist()\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .999; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(D)\")\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .65; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(A)\")\n\n\nplt.axvline([delta],c=\"k\");plt.axvline([-delta],c=\"k\")\n\nplt.xlabel(\"$\\phi$\")\nplt.title(r\"$w(\\phi,\\phi_o,A)=1-(0.5\\ tanh(A(|\\phi|-\\phi_o))+0.5)$\")\nplt.grid()\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize= [10, 4],dpi=100)\n\ndelta = 50.05;\nphilist=np.arange(-(delta+10),(delta+10),.5).tolist()\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .65; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(A)\")\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .75; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(.75)\")\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .85; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(.85)\")\n\n\nplt.axvline([delta],c=\"k\");plt.axvline([-delta],c=\"k\")\n\nplt.xlabel(\"$\\phi$\")\nplt.title(r\"$w(\\phi,\\phi_o,A)=1-(0.5\\ tanh(A(|\\phi|-\\phi_o))+0.5)$\")\nplt.grid()\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize= [10, 4],dpi=100)\n\ndelta = 50.05;\nphilist=np.arange(-(delta+10),(delta+10),.5).tolist()\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 2; ctephi_o = .85; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(A)\")\n\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 6; p = 2; ctephi_o = .75; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(?)\")\n\n\nplt.axvline([delta],c=\"k\");plt.axvline([-delta],c=\"k\")\n\nplt.xlabel(\"$\\phi$\")\nplt.title(r\"$w(\\phi,\\phi_o,A)=1-(0.5\\ tanh(A(|\\phi|-\\phi_o))+0.5)$\")\nplt.grid()\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize= [10, 4],dpi=100)\n\ndelta = 50.05;\nphilist=np.arange(-(delta+10),(delta+10),.5).tolist()\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 6.5; p = 2; ctephi_o = .85; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(A)\")\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 6; p = 3; ctephi_o = .65; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(B)\")\n\n#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nA = 4; p = 3; ctephi_o = .65; phi_o = delta*ctephi_o;\nplotweighting(philist, A, p, delta, ctephi_o,\"(C)\")\n\n\nplt.axvline([delta],c=\"k\");plt.axvline([-delta],c=\"k\")\n\nplt.xlabel(\"$\\phi$\")\nplt.title(r\"$w(\\phi,\\phi_o,A)=1-(0.5\\ tanh(A(|\\phi|-\\phi_o))+0.5)$\")\nplt.grid()\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d00de87efe3bfe004ba24ef04b92f972c5fe8395 | 33,515 | ipynb | Jupyter Notebook | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject | 406f2c08042df7af0666ba65b0737e33690dc5f9 | [
"MIT"
] | 1 | 2021-09-30T05:50:59.000Z | 2021-09-30T05:50:59.000Z | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject | 406f2c08042df7af0666ba65b0737e33690dc5f9 | [
"MIT"
] | 6 | 2021-09-30T00:25:32.000Z | 2021-10-04T03:58:12.000Z | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject | 406f2c08042df7af0666ba65b0737e33690dc5f9 | [
"MIT"
] | 4 | 2021-09-30T04:33:14.000Z | 2021-10-03T19:05:14.000Z | 24.679676 | 431 | 0.422676 | [
[
[
"# Description",
"_____no_output_____"
],
[
"This task is to do an exploratory data analysis on the balance-scale dataset\n",
"_____no_output_____"
],
[
"## Data Set Information",
"_____no_output_____"
],
[
"This data set was generated to model psychological experimental results. Each example is classified as having the balance scale tip to the right, tip to the left, or be balanced. The attributes are the left weight, the left distance, the right weight, and the right distance. The correct way to find the class is the greater of (left-distance left-weight) and (right-distance right-weight). If they are equal, it is balanced.",
"_____no_output_____"
],
[
"### Attribute Information:-",
"_____no_output_____"
],
[
"1. Class Name: 3 (L, B, R)\n2. Left-Weight: 5 (1, 2, 3, 4, 5)\n3. Left-Distance: 5 (1, 2, 3, 4, 5)\n4. Right-Weight: 5 (1, 2, 3, 4, 5)\n5. Right-Distance: 5 (1, 2, 3, 4, 5)",
"_____no_output_____"
]
],
[
[
"#importing libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n",
"_____no_output_____"
],
[
"#reading the data\ndata=pd.read_csv('balance-scale.data')",
"_____no_output_____"
],
[
"#shape of the data\ndata.shape",
"_____no_output_____"
],
[
"#first five rows of the data\ndata.head()",
"_____no_output_____"
],
[
"#Generating the x values\nx=data.drop(['Class'],axis=1)",
"_____no_output_____"
],
[
"x.head()",
"_____no_output_____"
],
[
"#Generating the y values\ny=data['Class']\ny.head()",
"_____no_output_____"
],
[
"#Checking for any null data in x\nx.isnull().any()",
"_____no_output_____"
],
[
"#Checking for any null data in y\ny.isnull().any()",
"_____no_output_____"
],
[
"#Adding left and right torque as a new data frame\nx1=pd.DataFrame()\nx1['LT']=x['LW']*x['LD']\nx1['RT']=x['RW']*x['RD']\nx1.head()",
"_____no_output_____"
],
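[
"# (Added) Sanity check of the rule described in the dataset information above: the class\n# should be 'L' when the left torque is larger, 'R' when the right torque is larger, 'B' otherwise.\nrule = np.where(x1['LT'] > x1['RT'], 'L', np.where(x1['LT'] < x1['RT'], 'R', 'B'))\nprint((pd.Series(rule, index=x1.index) == data['Class']).mean())",
"_____no_output_____"
],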
[
"#Converting the results of \"Classs\" attribute ,i.e., Balanced(B), Left(L) and Right(R) to numerical values for computation in sklearn\ny=y.map(dict(B=0,L=1,R=2))\ny.head()",
"_____no_output_____"
]
],
[
[
"### Using the Weight and Distance parameters",
"_____no_output_____"
],
[
"Splitting the data set into a ratio of 70:30 by the built in 'train_test_split' function in sklearn to get a better idea of accuracy of the model",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(x,y,stratify=y, test_size=0.3, random_state=2)",
"_____no_output_____"
],
[
"X_train.describe()",
"_____no_output_____"
],
[
"#Importing decision tree classifier and creating it's object\nfrom sklearn.tree import DecisionTreeClassifier\nclf= DecisionTreeClassifier()",
"_____no_output_____"
],
[
"clf.fit(X_train,y_train)",
"_____no_output_____"
],
[
"y_pred=clf.predict(X_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(y_test,y_pred)",
"_____no_output_____"
]
],
[
[
"We observe that the accuracy score is pretty low. Thus, we need to find optimal parameters to get the best accuracy. We do that by using GridSearchCV",
"_____no_output_____"
]
],
[
[
"#Using GridSearchCV to find the maximun optimal depth\nfrom sklearn.model_selection import GridSearchCV\ntree_para={\"criterion\":[\"gini\",\"entropy\"], \"max_depth\":[3,4,5,6,7,8,9,10,11,12]}\ndt_model_grid= GridSearchCV(DecisionTreeClassifier(random_state=3),tree_para, cv=10)",
"_____no_output_____"
],
[
"dt_model_grid.fit(X_train,y_train)",
"_____no_output_____"
],
[
"# To print the optimum parameters computed by GridSearchCV required for best accuracy score\ndt_model=dt_model_grid.best_estimator_\nprint(dt_model)",
"DecisionTreeClassifier(criterion='entropy', max_depth=5, random_state=3)\n"
],
[
"#To find the best accuracy score for all possible combinations of parameters provided\ndt_model_grid.best_score_",
"_____no_output_____"
],
[
"dt_model_grid.best_params_",
"_____no_output_____"
],
[
"#Scoring the model\nfrom sklearn.metrics import classification_report\ny_pred1=dt_model.predict(X_test)\nprint(classification_report(y_test,y_pred1,target_names=[\"Balanced\",\"Left\",\"Right\"]))",
" precision recall f1-score support\n\n Balanced 0.09 0.07 0.08 15\n Left 0.75 0.83 0.79 87\n Right 0.81 0.77 0.79 86\n\n accuracy 0.74 188\n macro avg 0.55 0.55 0.55 188\nweighted avg 0.73 0.74 0.73 188\n\n"
],
[
"from sklearn import tree\n",
"_____no_output_____"
],
[
"!pip install graphviz",
"Collecting graphviz\n Downloading graphviz-0.17-py3-none-any.whl (18 kB)\nInstalling collected packages: graphviz\nSuccessfully installed graphviz-0.17\n"
],
[
"#Plotting the Tree\nfrom sklearn.tree import export_graphviz\nexport_graphviz(\ndt_model,\nout_file=(\"model1.dot\"),\nfeature_names=[\"Left Weight\",\"Left Distance\",\"Right Weight\",\"Right Distance\"],\nclass_names=[\"Balanced\",\"Left\",\"Right\"],\nfilled=True)\n\n#Run this to print png\n# !dot -Tpng model1.dot -o model1.png\n",
"_____no_output_____"
]
],
[
[
"## Using the created Torque",
"_____no_output_____"
]
],
[
[
"dt_model2 = DecisionTreeClassifier(random_state=31)\nX_train, X_test, y_train, y_test= train_test_split(x1,y, stratify=y, test_size=0.3, random_state=8)",
"_____no_output_____"
],
[
"X_train.head(\n)",
"_____no_output_____"
],
[
"X_train.shape",
"_____no_output_____"
],
[
"dt_model2.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_pred2= dt_model2.predict(X_test)\nprint(classification_report(y_test, y_pred2, target_names=[\"Balanced\",\"Left\",\"Right\"]))",
" precision recall f1-score support\n\n Balanced 0.65 0.73 0.69 15\n Left 1.00 1.00 1.00 86\n Right 0.95 0.93 0.94 87\n\n accuracy 0.95 188\n macro avg 0.87 0.89 0.88 188\nweighted avg 0.95 0.95 0.95 188\n\n"
],
[
"#Plotting the Tree\nfrom sklearn import export_graphviz\nexport_graphviz(\ndt_model2,\nout_file=(\"model2.dot\"),\nfeature_names=[\"Left Torque\", \"Right Torque\"],\nclass_names=[\"Balanced\",\"Left\",\"Right\"],\nfilled=True)\n\n# run this to make png\n# dot -Tpng model2.dot -o model2.png",
"_____no_output_____"
]
],
[
[
"## Increasing the optimization",
"_____no_output_____"
],
[
"After observing the trees, we conclude that differences are not being taken into account. Hence, we add the differences attribute to try and increase the accuracy.",
"_____no_output_____"
]
],
[
[
"x1['Diff']= x1['LT']- x1['RT']\nx1.head()",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test =train_test_split(x1,y, stratify=y, test_size=0.3,random_state=40)",
"_____no_output_____"
],
[
"dt_model3= DecisionTreeClassifier(random_state=40)\ndt_model3.fit(X_train, y_train)",
"_____no_output_____"
],
[
"#Create Classification Report\ny_pred3= dt_model3.predict(X_test)\nprint(classification_report(y_test, y_pred3, target_names=[\"Balanced\", \"Left\", \"Right\"]))",
" precision recall f1-score support\n\n Balanced 1.00 1.00 1.00 15\n Left 1.00 1.00 1.00 87\n Right 1.00 1.00 1.00 86\n\n accuracy 1.00 188\n macro avg 1.00 1.00 1.00 188\nweighted avg 1.00 1.00 1.00 188\n\n"
],
[
"#Plotting the tree\nfrom sklearn.tree import export_graphviz\nexport_graphviz(\ndt_model3\nout_file=(\"model3.dot\"),\nfeature_names=[\"Left Torque\",\"Right Torque\",\"Difference\"],\nclass_names=[\"Balanced\",\"Left\",\"Right\"]\nfilled=True)\n\n# run this to make png\n# dot -Tpng model3.dot -o model3.png\n",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\n\naccuracy_score(y_pred3,y_test)",
"_____no_output_____"
]
],
[
[
"## Final Conclusion",
"_____no_output_____"
],
[
"The model returns a perfect accuracy score as desired.",
"_____no_output_____"
]
],
[
[
"!pip install seaborn\n",
"Collecting seaborn\n Downloading seaborn-0.11.2-py3-none-any.whl (292 kB)\nRequirement already satisfied: numpy>=1.15 in c:\\python39\\lib\\site-packages (from seaborn) (1.21.2)\nRequirement already satisfied: scipy>=1.0 in c:\\python39\\lib\\site-packages (from seaborn) (1.7.1)\nRequirement already satisfied: matplotlib>=2.2 in c:\\python39\\lib\\site-packages (from seaborn) (3.4.3)\nRequirement already satisfied: pandas>=0.23 in c:\\python39\\lib\\site-packages (from seaborn) (1.3.3)\nRequirement already satisfied: pyparsing>=2.2.1 in c:\\python39\\lib\\site-packages (from matplotlib>=2.2->seaborn) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in c:\\python39\\lib\\site-packages (from matplotlib>=2.2->seaborn) (0.10.0)\nRequirement already satisfied: pillow>=6.2.0 in c:\\python39\\lib\\site-packages (from matplotlib>=2.2->seaborn) (8.3.2)\nRequirement already satisfied: python-dateutil>=2.7 in c:\\python39\\lib\\site-packages (from matplotlib>=2.2->seaborn) (2.8.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\python39\\lib\\site-packages (from matplotlib>=2.2->seaborn) (1.3.2)\nRequirement already satisfied: six in c:\\python39\\lib\\site-packages (from cycler>=0.10->matplotlib>=2.2->seaborn) (1.16.0)\nRequirement already satisfied: pytz>=2017.3 in c:\\python39\\lib\\site-packages (from pandas>=0.23->seaborn) (2021.1)\nInstalling collected packages: seaborn\nSuccessfully installed seaborn-0.11.2\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d00dfd8ed19fe08f141c50864e5f4fbb2c8e7eef | 2,780 | ipynb | Jupyter Notebook | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch | d0bf7947c91ae4e02214cc9ef53fc3da78d99e88 | [
"Apache-2.0"
] | 15,792 | 2019-02-25T01:10:30.000Z | 2022-03-31T20:31:46.000Z | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch | d0bf7947c91ae4e02214cc9ef53fc3da78d99e88 | [
"Apache-2.0"
] | 159 | 2019-03-28T09:32:55.000Z | 2022-03-18T09:07:44.000Z | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch | d0bf7947c91ae4e02214cc9ef53fc3da78d99e88 | [
"Apache-2.0"
] | 5,126 | 2019-03-07T03:41:08.000Z | 2022-03-31T11:55:27.000Z | 21.384615 | 100 | 0.51259 | [
[
[
"# 3.10 多层感知机的简洁实现",
"_____no_output_____"
]
],
[
[
"import torch\nfrom torch import nn\nfrom torch.nn import init\nimport numpy as np\nimport sys\nsys.path.append(\"..\") \nimport d2lzh_pytorch as d2l\n\nprint(torch.__version__)",
"0.4.1\n"
]
],
[
[
"## 3.10.1 定义模型",
"_____no_output_____"
]
],
[
[
"num_inputs, num_outputs, num_hiddens = 784, 10, 256\n \nnet = nn.Sequential(\n d2l.FlattenLayer(),\n nn.Linear(num_inputs, num_hiddens),\n nn.ReLU(),\n nn.Linear(num_hiddens, num_outputs), \n )\n \nfor params in net.parameters():\n init.normal_(params, mean=0, std=0.01)",
"_____no_output_____"
]
],
[
[
"## 3.10.2 读取数据并训练模型",
"_____no_output_____"
]
],
[
[
"batch_size = 256\ntrain_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)\nloss = torch.nn.CrossEntropyLoss()\n\noptimizer = torch.optim.SGD(net.parameters(), lr=0.5)\n\nnum_epochs = 5\nd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)",
"epoch 1, loss 0.0031, train acc 0.703, test acc 0.757\nepoch 2, loss 0.0019, train acc 0.824, test acc 0.822\nepoch 3, loss 0.0016, train acc 0.845, test acc 0.825\nepoch 4, loss 0.0015, train acc 0.855, test acc 0.811\nepoch 5, loss 0.0014, train acc 0.865, test acc 0.846\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00e0730d49103173050e3b856bd59350adbe2ca | 16,966 | ipynb | Jupyter Notebook | valid/CompareGMB.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | 1 | 2020-09-03T17:26:50.000Z | 2020-09-03T17:26:50.000Z | valid/CompareGMB.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | null | null | null | valid/CompareGMB.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | null | null | null | 40.108747 | 204 | 0.392668 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d00e1a84cf5f9c6842fc9de2bc8fa750f8338edb | 113,897 | ipynb | Jupyter Notebook | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary | 825731d9e306553eeb5f5112c8e85f3fc4e0dc1f | [
"MIT"
] | 4 | 2020-01-01T11:09:00.000Z | 2021-07-07T17:22:19.000Z | notebooks/Process_Emails.ipynb | nirnayroy/ML_Enron_email_summary | 825731d9e306553eeb5f5112c8e85f3fc4e0dc1f | [
"MIT"
] | null | null | null | notebooks/Process_Emails.ipynb | nirnayroy/ML_Enron_email_summary | 825731d9e306553eeb5f5112c8e85f3fc4e0dc1f | [
"MIT"
] | 3 | 2019-12-26T18:23:02.000Z | 2020-12-29T15:15:44.000Z | 49.889181 | 25,140 | 0.578584 | [
[
[
"# Summarizing Emails using Machine Learning: Data Wrangling\n## Table of Contents\n1. Imports & Initalization <br>\n2. Data Input <br>\n A. Enron Email Dataset <br>\n B. BC3 Corpus <br>\n3. Preprocessing <br>\n A. Data Cleaning. <br>\n B. Sentence Cleaning <br>\n C. Tokenizing <br>\n4. Store Data <br>\n A. Locally as pickle <br>\n B. Into database <br>\n5. Data Exploration <br>\n A. Enron Emails <br>\n B. BC3 Corpus <br>",
"_____no_output_____"
],
[
"The goal of this notebook is to clean both the Enron Email and BC3 Corpus data sets to perform email text summarization. The BC3 Corpus contains human summarizations that can be used to calculate ROUGE metrics to better understand how accurate the summarizations are. The Enron dataset is far more comprehensive, but lacks summaries to test against. \n\nYou can find the text summarization notebook that uses the preprocessed data [here.](https://github.com/dailykirt/ML_Enron_email_summary/blob/master/notebooks/Text_rank_summarization.ipynb)\n\nA visual summary of the preprocessing steps are in the figure below. ",
"_____no_output_____"
],
[
"<img src=\"./images/Preprocess_Flow.jpg\">",
"_____no_output_____"
],
[
"## 1. Imports & Initalization",
"_____no_output_____"
]
],
[
[
"import sys\nfrom os import listdir\nfrom os.path import isfile, join\nimport configparser\nfrom sqlalchemy import create_engine\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport email\nimport mailparser\nimport xml.etree.ElementTree as ET\nfrom talon.signature.bruteforce import extract_signature\nfrom nltk.tokenize import word_tokenize, sent_tokenize\nfrom nltk.corpus import stopwords\nimport re\n\nimport dask.dataframe as dd\nfrom distributed import Client\nimport multiprocessing as mp",
"/home/kirt/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.\n warnings.warn(msg, category=DeprecationWarning)\n"
],
[
"#Set local location of emails. \nmail_dir = '../data/maildir/'\n#mail_dir = '../data/testdir/'",
"_____no_output_____"
]
],
[
[
"## 2. Data Input \n### A. Enron Email Dataset\nThe raw enron email dataset contains a maildir directory that contains folders seperated by employee which contain the emails. The following processes the raw text of each email into a dask dataframe with the following columns: \n\nEmployee: The username of the email owner. <br>\nBody: Cleaned body of the email. <br>\nSubject: The title of the email. <br>\nFrom: The original sender of the email <br>\nMessage-ID: Used to remove duplicate emails, as each email has a unique ID. <br>\nChain: The parsed out email chain from a email that was forwarded. <br>\nSignature: The extracted signature from the body.<br>\nDate: Time the email was sent. <br>\n\nAll of the Enron emails were sent using the Multipurpose Internet Mail Extensions 1.0 (MIME) format. Keeping this in mind helps find the correct libraries and methods to clean the emails in a standardized fashion. ",
"_____no_output_____"
]
],
[
[
"def process_email(index):\n '''\n This function splits a raw email into constituent parts that can be used as features.\n '''\n email_path = index[0]\n employee = index[1]\n folder = index[2]\n \n mail = mailparser.parse_from_file(email_path)\n full_body = email.message_from_string(mail.body)\n \n #Only retrieve the body of the email. \n if full_body.is_multipart():\n return\n else:\n mail_body = full_body.get_payload() \n \n split_body = clean_body(mail_body)\n headers = mail.headers\n #Reformating date to be more pandas readable\n date_time = process_date(headers.get('Date'))\n\n email_dict = {\n \"employee\" : employee,\n \"email_folder\": folder,\n \"message_id\": headers.get('Message-ID'),\n \"date\" : date_time,\n \"from\" : headers.get('From'),\n \"subject\": headers.get('Subject'),\n \"body\" : split_body['body'],\n \"chain\" : split_body['chain'],\n \"signature\": split_body['signature'],\n \"full_email_path\" : email_path #for debug purposes. \n }\n \n #Append row to dataframe. \n return email_dict",
"_____no_output_____"
],
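[
"# (Added) Minimal illustration of MIME parsing with the standard library on a toy message\n# (not a real Enron email): headers and body are separated by a blank line.\nraw = 'Message-ID: <123@example.com>\\nFrom: alice@example.com\\nSubject: Hi\\n\\nBody text here.'\nmsg = email.message_from_string(raw)\nprint(msg.get('From'), '|', msg.get_payload())",
"_____no_output_____"
],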
[
"def clean_body(mail_body):\n '''\n This extracts both the email signature, and the forwarding email chain if it exists. \n '''\n delimiters = [\"-----Original Message-----\",\"To:\",\"From\"]\n \n #Trying to split string by biggest delimiter. \n old_len = sys.maxsize\n \n for delimiter in delimiters:\n split_body = mail_body.split(delimiter,1)\n new_len = len(split_body[0])\n if new_len <= old_len:\n old_len = new_len\n final_split = split_body\n \n #Then pull chain message\n if (len(final_split) == 1):\n mail_chain = None\n else:\n mail_chain = final_split[1] \n \n #The following uses Talon to try to get a clean body, and seperate out the rest of the email. \n clean_body, sig = extract_signature(final_split[0])\n \n return {'body': clean_body, 'chain' : mail_chain, 'signature': sig}",
"_____no_output_____"
],
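[
"# (Added) Toy demonstration of the chain split performed by clean_body: text after the\n# earliest delimiter is treated as the forwarded chain, and Talon strips signatures.\nsample = 'Thanks, see below.\\n-----Original Message-----\\nFrom: bob\\nOlder text.'\nprint(clean_body(sample))",
"_____no_output_____"
],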
[
"def process_date(date_time):\n '''\n Converts the MIME date format to a more pandas friendly type. \n '''\n try:\n date_time = email.utils.format_datetime(email.utils.parsedate_to_datetime(date_time))\n except:\n date_time = None\n return date_time",
"_____no_output_____"
],
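[
"# (Added) Example of the date normalization on an RFC 2822 date string of the kind found\n# in the MIME headers above; it relies only on the standard-library email.utils helpers.\nprint(process_date('Mon, 14 May 2001 16:39:00 -0700 (PDT)'))",
"_____no_output_____"
],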
[
"def generate_email_paths(mail_dir):\n '''\n Given a mail directory, this will generate the file paths to each email in each inbox. \n '''\n mailboxes = listdir(mail_dir)\n for mailbox in mailboxes:\n inbox = listdir(mail_dir + mailbox)\n for folder in inbox:\n path = mail_dir + mailbox + \"/\" + folder\n emails = listdir(path)\n for single_email in emails:\n full_path = path + \"/\" + single_email\n if isfile(full_path): #Skip directories.\n yield (full_path, mailbox, folder)\n ",
"_____no_output_____"
],
[
"#Use multiprocessing to speed up initial data load and processing. Also helps partition DASK dataframe. \ntry:\n cpus = mp.cpu_count()\nexcept NotImplementedError:\n cpus = 2\npool = mp.Pool(processes=cpus)\nprint(\"CPUS: \" + str(cpus))\n\nindexes = generate_email_paths(mail_dir)\nenron_email_df = pool.map(process_email,indexes)\n#Remove Nones from the list\nenron_email_df = [i for i in enron_email_df if i]\nenron_email_df = pd.DataFrame(enron_email_df)",
"CPUS: 6\n"
],
[
"enron_email_df.describe()",
"_____no_output_____"
]
],
[
[
"### B. BC3 Corpus",
"_____no_output_____"
],
[
"This dataset is split into two xml files. One contains the original emails split line by line, and the other contains the summarizations created by the annotators. Each email may contain several summarizations from different annotators and summarizations may also be over several emails. This will create a data frame for both xml files, then join them together using the thread number in combination of the email number for a single final dataframe. \n\nThe first dataframe will contain the wrangled original emails containing the following information:\n\nListno: Thread identifier <br>\nEmail_num: Email in thread sequence <br>\nFrom: The original sender of the email <br>\nTo: The recipient of the email. <br>\nRecieved: Time email was recieved. <br>\nSubject: Title of email. <br>\nBody: Original body. <br>",
"_____no_output_____"
]
],
[
[
"def parse_bc3_emails(root):\n '''\n This adds every BC3 email to a newly created dataframe. \n '''\n BC3_email_list = []\n #The emails are seperated by threads.\n for thread in root:\n email_num = 0\n #Iterate through the thread elements <name, listno, Doc>\n for thread_element in thread:\n #Getting the listno allows us to link the summaries to the correct emails\n if thread_element.tag == \"listno\":\n listno = thread_element.text\n #Each Doc element is a single email\n if thread_element.tag == \"DOC\":\n email_num += 1\n email_metadata = []\n for email_attribute in thread_element:\n #If the email_attri is text, then each child contains a line from the body of the email\n if email_attribute.tag == \"Text\":\n email_body = \"\"\n for sentence in email_attribute:\n email_body += sentence.text\n else:\n #The attributes of the Email <Recieved, From, To, Subject, Text> appends in this order. \n email_metadata.append(email_attribute.text)\n \n #Use same enron cleaning methods on the body of the email\n split_body = clean_body(email_body)\n \n email_dict = {\n \"listno\" : listno,\n \"date\" : process_date(email_metadata[0]),\n \"from\" : email_metadata[1],\n \"to\" : email_metadata[2],\n \"subject\" : email_metadata[3],\n \"body\" : split_body['body'],\n \"email_num\": email_num\n }\n \n BC3_email_list.append(email_dict) \n return pd.DataFrame(BC3_email_list)",
"_____no_output_____"
],
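[
"# (Added) Tiny hand-made thread showing the XML layout parse_bc3_emails walks over;\n# the real corpus nests <DOC> elements with the email metadata and <Text> children.\ntoy = ET.fromstring('<root><thread><listno>001-1</listno></thread></root>')\nfor thread in toy:\n    for element in thread:\n        print(element.tag, element.text)",
"_____no_output_____"
],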
[
"#load BC3 Email Corpus. Much smaller dataset has no need for parallel processing. \nparsedXML = ET.parse( \"../data/BC3_Email_Corpus/corpus.xml\" )\nroot = parsedXML.getroot()\n\n#Clean up BC3 emails the same way as the Enron emails. \nbc3_email_df = parse_bc3_emails(root)",
"_____no_output_____"
],
[
"bc3_email_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 261 entries, 0 to 260\nData columns (total 7 columns):\nbody 261 non-null object\ndate 261 non-null object\nemail_num 261 non-null int64\nfrom 261 non-null object\nlistno 261 non-null object\nsubject 261 non-null object\nto 260 non-null object\ndtypes: int64(1), object(6)\nmemory usage: 14.4+ KB\n"
],
[
"bc3_email_df.head(3)",
"_____no_output_____"
]
],
[
[
"The second dataframe contains the summarizations of each email:\n\nAnnotator: Person who created summarization. <br>\nEmail_num: Email in thread sequence. <br>\nListno: Thread identifier. <br>\nSummary: Human summarization of the email. <br>",
"_____no_output_____"
]
],
[
[
"def parse_bc3_summaries(root):\n '''\n This parses every BC3 Human summary that is contained in the dataset. \n '''\n BC3_summary_list = []\n for thread in root:\n #Iterate through the thread elements <listno, name, annotation>\n for thread_element in thread:\n if thread_element.tag == \"listno\":\n listno = thread_element.text\n #Each Doc element is a single email\n if thread_element.tag == \"annotation\":\n for annotation in thread_element:\n #If the email_attri is summary, then each child contains a summarization line\n if annotation.tag == \"summary\":\n summary_dict = {}\n for summary in annotation:\n #Generate the set of emails the summary sentence belongs to (often a single email)\n email_nums = summary.attrib['link'].split(',')\n s = set()\n for num in email_nums:\n s.add(num.split('.')[0].strip()) \n #Remove empty strings, since they summarize whole threads instead of emails. \n s = [x for x in set(s) if x]\n for email_num in s:\n if email_num in summary_dict:\n summary_dict[email_num] += ' ' + summary.text\n else:\n summary_dict[email_num] = summary.text\n #get annotator description\n elif annotation.tag == \"desc\":\n annotator = annotation.text\n #For each email summarizaiton create an entry\n for email_num, summary in summary_dict.items():\n email_dict = {\n \"listno\" : listno,\n \"annotator\" : annotator,\n \"email_num\" : email_num,\n \"summary\" : summary\n } \n BC3_summary_list.append(email_dict)\n return pd.DataFrame(BC3_summary_list)",
"_____no_output_____"
],
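[
"# (Added) The link attribute ties a summary sentence to email numbers such as '1.2, 2.1';\n# this mirrors the splitting logic above on a sample value.\nlink = '1.2, 2.1'\nprint({num.split('.')[0].strip() for num in link.split(',')})",
"_____no_output_____"
],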
[
"#Load summaries and process\nparsedXML = ET.parse( \"../data/BC3_Email_Corpus/annotation.xml\" )\nroot = parsedXML.getroot()\nbc3_summary_df = parse_bc3_summaries(root)\nbc3_summary_df['email_num'] = bc3_summary_df['email_num'].astype(int)",
"_____no_output_____"
],
[
"bc3_summary_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 669 entries, 0 to 668\nData columns (total 4 columns):\nannotator 669 non-null object\nemail_num 669 non-null int64\nlistno 669 non-null object\nsummary 669 non-null object\ndtypes: int64(1), object(3)\nmemory usage: 21.0+ KB\n"
],
[
"#merge the dataframes together\nbc3_df = pd.merge(bc3_email_df, \n bc3_summary_df[['annotator', 'email_num', 'listno', 'summary']],\n on=['email_num', 'listno'])\nbc3_df.head()",
"_____no_output_____"
]
],
[
[
"## 3. Preprocessing\n### A. Data Cleaning",
"_____no_output_____"
]
],
[
[
"#Convert date to pandas datetime.\nenron_email_df['date'] = pd.to_datetime(enron_email_df['date'], utc=True)\nbc3_df['date'] = pd.to_datetime(bc3_df.date, utc=True)\n\n#Look at the timeframe\nstart_date = str(enron_email_df.date.min())\nend_date = str(enron_email_df.date.max())\nprint(\"Start Date: \" + start_date)\nprint(\"End Date: \" + end_date)",
"Start Date: 1980-01-01 00:00:00+00:00\nEnd Date: 2024-05-26 10:49:57+00:00\n"
]
],
[
[
"Since the Enron data was collected in May 2002 according to wikipedia its a bit strange to see emails past that date. Reading some of the emails seem to suggest it's mostly spam. ",
"_____no_output_____"
]
],
[
[
"enron_email_df[(enron_email_df.date > '2003-01-01')].head()",
"_____no_output_____"
],
[
"#Quick look at emails before 1999, \nenron_email_df[(enron_email_df.date < '1999-01-01')].date.value_counts().head()",
"_____no_output_____"
],
[
"enron_email_df[(enron_email_df.date == '1980-01-01')].head()",
"_____no_output_____"
]
],
[
[
"There seems to be a glut of emails dated exactly on 1980-01-01. The emails seem legitimate, but these should be droped since without the true date we won't be able to figure out where the email fits in the context of a batch of summaries. Keep emails between Jan 1st 1999 and June 1st 2002. ",
"_____no_output_____"
]
],
[
[
"enron_email_df = enron_email_df[(enron_email_df.date > '1998-01-01') & (enron_email_df.date < '2002-06-01')]",
"_____no_output_____"
]
],
[
[
"### B. Sentence Cleaning",
"_____no_output_____"
],
[
"The raw enron email Corpus tends to have a large amount of unneeded characters that can interfere with tokenizaiton. It's best to do a bit more cleaning.",
"_____no_output_____"
]
],
[
[
"def clean_email_df(df):\n '''\n These remove symbols and character patterns that don't aid in producing a good summary. \n '''\n #Removing strings related to attatchments and certain non numerical characters.\n patterns = [\"\\[IMAGE\\]\",\"-\", \"_\", \"\\*\", \"+\",\"\\\".\\\"\"]\n for pattern in patterns:\n df['body'] = pd.Series(df['body']).str.replace(pattern, \"\")\n \n #Remove multiple spaces. \n df['body'] = df['body'].replace('\\s+', ' ', regex=True)\n\n #Blanks are replaced with NaN in the whole dataframe. Then rows with a 'NaN' in the body will be dropped. \n df = df.replace('',np.NaN)\n df = df.dropna(subset=['body'])\n\n #Remove all Duplicate emails \n #df = df.drop_duplicates(subset='body')\n return df",
"_____no_output_____"
],
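[
"# (Added) Effect of the cleaning patterns on a sample row: the [IMAGE] tag, stray symbols,\n# and repeated whitespace are removed.\nsample = pd.DataFrame({'body': ['[IMAGE] Quarterly  report __ attached *']})\nprint(clean_email_df(sample)['body'].iloc[0])",
"_____no_output_____"
],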
[
"#Apply clean to both datasets. \nenron_email_df = clean_email_df(enron_email_df)\nbc3_df = clean_email_df(bc3_df)",
"_____no_output_____"
]
],
[
[
"### C. Tokenizing",
"_____no_output_____"
],
[
"It's important to split up sentences into it's constituent parts for the ML algorithim that will be used for text summarization. This will aid in further processing like removing extra whitespace. We can also remove stopwords, which are very commonly used words that don't provide additional sentence meaning like 'and' 'or' and 'the'. This will be applied to both the Enron and BC3 datasets. ",
"_____no_output_____"
]
],
[
[
"def remove_stopwords(sen):\n '''\n This function removes stopwords\n '''\n stop_words = stopwords.words('english')\n sen_new = \" \".join([i for i in sen if i not in stop_words])\n return sen_new\n\ndef tokenize_email(text):\n '''\n This function splits up the body into sentence tokens and removes stop words. \n '''\n clean_sentences = sent_tokenize(text, language='english')\n #removing punctuation, numbers and special characters. Then lowercasing. \n clean_sentences = [re.sub('[^a-zA-Z ]', '',s) for s in clean_sentences]\n clean_sentences = [s.lower() for s in clean_sentences]\n clean_sentences = [remove_stopwords(r.split()) for r in clean_sentences]\n return clean_sentences",
"_____no_output_____"
]
],
[
[
"Starting with the Enron dataset. ",
"_____no_output_____"
]
],
[
[
"#This tokenizing will be the extracted sentences that may be chosen to form the email summaries. \nenron_email_df['extractive_sentences'] = enron_email_df['body'].apply(sent_tokenize)\n#Splitting the text in emails into cleaned sentences\nenron_email_df['tokenized_body'] = enron_email_df['body'].apply(tokenize_email)\n#Tokenizing the bodies might have revealed more duplicate emails that should be droped. \nenron_email_df = enron_email_df.loc[enron_email_df.astype(str).drop_duplicates(subset='tokenized_body').index]",
"_____no_output_____"
]
],
[
[
"Now working on the BC3 Dataset. ",
"_____no_output_____"
]
],
[
[
"bc3_df['extractive_sentences'] = bc3_df['body'].apply(sent_tokenize)\nbc3_df['tokenized_body'] = bc3_df['body'].apply(tokenize_email)\n#bc3_email_df = bc3_email_df.loc[bc3_email_df.astype(str).drop_duplicates(subset='tokenized_body').index]",
"_____no_output_____"
]
],
[
[
"## Store Data\n### A. Locally as pickle ",
"_____no_output_____"
],
[
"After all the preprocessing is finished its best to store the the data so it can be quickly and easily retrieved by other software. Pickles are best used if you are working locally and want a simple way to store and load data. You can also use a cloud database that can be accessed by other production services such as Heroku to retrieve the data. In this case, I load the data up into a AWS postgres database. ",
"_____no_output_____"
]
],
[
[
"#Local locations for pickle files. \nENRON_PICKLE_LOC = \"../data/dataframes/wrangled_enron_full_df.pkl\"\nBC3_PICKLE_LOC = \"../data/dataframes/wrangled_BC3_df.pkl\"",
"_____no_output_____"
],
[
"#Store dataframes to disk\nenron_email_df.to_pickle(ENRON_PICKLE_LOC)\nbc3_df.head()\nbc3_df.to_pickle(BC3_PICKLE_LOC)",
"_____no_output_____"
]
],
[
[
"### B. Into database",
"_____no_output_____"
],
[
"I used a Postgres database with the DB configurations stored in a config_notebook.ini file. This allows me to easily switch between local and AWS configurations. ",
"_____no_output_____"
]
],
[
[
"#Configure postgres database\nconfig = configparser.ConfigParser()\nconfig.read('config_notebook.ini')\n\n#database_config = 'LOCAL_POSTGRES'\ndatabase_config = 'AWS_POSTGRES'\n\nPOSTGRES_ADDRESS = config[database_config]['POSTGRES_ADDRESS']\nPOSTGRES_USERNAME = config[database_config]['POSTGRES_USERNAME']\nPOSTGRES_PASSWORD = config[database_config]['POSTGRES_PASSWORD']\nPOSTGRES_DBNAME = config[database_config]['POSTGRES_DBNAME']\n\n#now create database connection\npostgres_str = ('postgresql+psycopg2://{username}:{password}@{ipaddress}/{dbname}'\n .format(username=POSTGRES_USERNAME, \n password=POSTGRES_PASSWORD,\n ipaddress=POSTGRES_ADDRESS,\n dbname=POSTGRES_DBNAME))\n\ncnx = create_engine(postgres_str)",
"_____no_output_____"
],
[
"#Store data. \nenron_email_df.to_sql('full_enron_emails', cnx)",
"_____no_output_____"
]
],
[
[
"## 5. Data Exploration",
"_____no_output_____"
],
[
"Exploring the dataset can go a long way to building more accurate machine learning models and spotting any possible issues with the dataset. Since the Enron dataset is quite large, we can speed up some of our computations by using Dask. While not strictly necessary, iterating on this dataset should be much faster.",
"_____no_output_____"
],
[
"### A. Enron Emails",
"_____no_output_____"
]
],
[
[
"client = Client(processes = True)\nclient.cluster",
"_____no_output_____"
],
[
"#Make into dask dataframe. \nenron_email_df = dd.from_pandas(enron_email_df, npartitions=cpus)\nenron_email_df.columns",
"_____no_output_____"
],
[
"#Used to create a describe summary of the dataset. Ignoring tokenized columns. \nenron_email_df[['body', 'chain', 'date', 'email_folder', 'employee', 'from', 'full_email_path', 'message_id', 'signature', 'subject']].describe().compute()",
"/home/kirt/anaconda3/lib/python3.7/site-packages/distributed/worker.py:3165: UserWarning: Large object of size 142.67 MB detected in task graph: \n ( ... e', 'subject'])\nConsider scattering large objects ahead of time\nwith client.scatter to reduce scheduler burden and \nkeep data on workers\n\n future = client.submit(func, big_data) # bad\n\n big_future = client.scatter(big_data) # good\n future = client.submit(func, big_future) # good\n % (format_bytes(len(b)), s)\n"
],
[
"#Get word frequencies from tokenized word lists\ndef get_word_freq(df):\n freq_words=dict()\n for tokens in df.tokenized_words.compute():\n for token in tokens:\n if token in freq_words:\n freq_words[token] += 1\n else: \n freq_words[token] = 1\n return freq_words ",
"_____no_output_____"
],
[
"def tokenize_word(sentences):\n tokens = []\n for sentence in sentences:\n tokens = word_tokenize(sentence)\n return tokens",
"_____no_output_____"
],
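[
"# (Added) word_tokenize splits each sentence string into word tokens (NLTK punkt data\n# required); shown here on a toy input.\nprint(tokenize_word(['enron energy trading desk']))",
"_____no_output_____"
],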
[
"#Tokenize the sentences \nenron_email_df['tokenized_words'] = enron_email_df['tokenized_body'].apply(tokenize_word).compute()",
"/home/kirt/anaconda3/lib/python3.7/site-packages/dask/dataframe/core.py:2955: UserWarning: \nYou did not provide metadata, so Dask is running your function on a small dataset to guess output types. It is possible that Dask will guess incorrectly.\nTo provide an explicit output types or to silence this message, please provide the `meta=` keyword, as described in the map or apply function that you are using.\n Before: .apply(func)\n After: .apply(func, meta=('tokenized_body', 'object'))\n\n warnings.warn(meta_warning(meta))\n"
],
[
"#Creating word dictionary to understand word frequencies. \nfreq_words = get_word_freq(enron_email_df)\nprint('Unique words: {:,}'.format(len(freq_words)))",
"Unique words: 96,284\n"
],
[
"word_data = []\n#Sort dictionary by highest word frequency. \nfor key, value in sorted(freq_words.items(), key=lambda item: item[1], reverse=True):\n word_data.append([key, freq_words[key]])\n\n#Prepare to plot bar graph of top words. \n#Create dataframe with Word and Frequency, then sort in Descending order. \nfreq_words_df = pd.DataFrame.from_dict(freq_words, orient='index').reset_index()\nfreq_words_df = freq_words_df.rename(columns={\"index\": \"Word\", 0: \"Frequency\"})\nfreq_words_df = freq_words_df.sort_values(by=['Frequency'],ascending = False)\nfreq_words_df.reset_index(drop = True, inplace=True)\nfreq_words_df.head(30).plot(x='Word', kind='bar', figsize=(20,10))",
"_____no_output_____"
]
],
[
[
"### B. BC3 Corpus",
"_____no_output_____"
]
],
[
[
"bc3_df.head()",
"_____no_output_____"
],
[
"bc3_df['to'].value_counts().head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d00e1bf88f72695d3a47a9209f38a4eacffb64ff | 229,673 | ipynb | Jupyter Notebook | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis | 438205fc899eca67b33fb43d51bf538db6c734b4 | [
"MIT"
] | null | null | null | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis | 438205fc899eca67b33fb43d51bf538db6c734b4 | [
"MIT"
] | null | null | null | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis | 438205fc899eca67b33fb43d51bf538db6c734b4 | [
"MIT"
] | null | null | null | 429.295327 | 33,764 | 0.941391 | [
[
[
"# Load essential libraries\nimport csv\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport statistics \nimport numpy as np\nfrom scipy.signal import butter, lfilter, freqz\nfrom IPython.display import Image\n\nfrom datetime import datetime",
"_____no_output_____"
],
[
"# Time and robot egomotion\ntime = []\nstandardized_time = []\nstandardized_time2 = []\ncompass_heading = []\nspeed = []\n\n# sonde data\ntemp = []\nPH = []\ncond = [] # ms\nchlorophyll = [] \nODO = [] # mg/L\nsonar = []\nangular_z = []\n\n# wp data\nwp_time = []\nwp_seq = []",
"_____no_output_____"
],
[
"initial_time = None\ntime_crop = 4000\ntime_crop1 = 580\ntime_crop2 = 800\n\n# File loading from relative path\nfile = '../../../Data/ISER2021/Sunapee-20200715-path-1.csv'\n# File loading from relative path\nfile2 = '../../../Data/ISER2021/Sunapee-20200715-path-1-mavros.csv'\n\n# original data\nwith open(file, 'r') as csvfile:\n csvreader= csv.reader(csvfile, delimiter=',')\n header = next(csvreader)\n for row in csvreader:\n # robot data\n if initial_time is None:\n initial_time = float(row[0])\n \n current_time = float(row[0])\n \n if current_time - initial_time >= time_crop1 and current_time - initial_time < time_crop2:\n #if current_time - initial_time <= time_crop:\n time.append(float(row[0]))\n compass_heading.append(float(row[4]))\n speed.append(float(row[10]))\n angular_z.append(float(row[18]))\n\n # sonde data\n temp.append(float(row[23]))\n PH.append(float(row[26]))\n cond.append(float(row[25]))\n chlorophyll.append(float(row[29]))\n ODO.append(float(row[30]))\n sonar.append(float(row[8]))\n\n\n minimum_time = min(time)\n for time_stamp in time:\n standardized_time.append(time_stamp - minimum_time)\n\n# wp data \nwith open(file2, 'r') as csvfile2:\n csvreader2 = csv.reader(csvfile2, delimiter=',')\n header = next(csvreader2)\n for row in csvreader2:\n current_time = float(row[0])\n \n #if current_time - initial_time <= time_crop:\n if current_time - initial_time >= time_crop1 and current_time - initial_time < time_crop2:\n wp_time.append(float(row[0]))\n wp_seq.append(float(row[1]))\n \n for time_stamp in wp_time:\n standardized_time2.append(time_stamp - minimum_time)",
"_____no_output_____"
],
[
"# collision time around 790",
"_____no_output_____"
]
],
[
[
"### Compass heading",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('Heading [degree]', fontsize=16)\nax1.plot(standardized_time, compass_heading, label='compass heading')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()",
"_____no_output_____"
],
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('ground_speed_x [m/s]', fontsize=16)\nax1.plot(standardized_time, speed, label='ground_speed_x', color='m')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()",
"_____no_output_____"
],
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('angular_z [rad/s]', fontsize=16)\nax1.plot(standardized_time, angular_z, label='angular_z', color='r')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()",
"_____no_output_____"
]
],
[
[
"### Temperature",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('Temperature [degree]', fontsize=16)\nax1.plot(standardized_time, temp, label='temp', color='k')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()\n\nprint(\"Standard Deviation of the temp is % s \" %(statistics.stdev(temp)))\nprint(\"Mean of the temp is % s \" %(statistics.mean(temp))) ",
"_____no_output_____"
]
],
[
[
"### PH",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('PH', fontsize=16)\nax1.plot(standardized_time, PH, label='PH', color='r')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()\n\nprint(\"Standard Deviation of the temp is % s \" %(statistics.stdev(PH)))\nprint(\"Mean of the temp is % s \" %(statistics.mean(PH))) ",
"_____no_output_____"
]
],
[
[
"### Conductivity",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('Conductivity [ms]', fontsize=16)\nax1.plot(standardized_time, cond, label='conductivity', color='b')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()\n\nprint(\"Standard Deviation of the chlorophyll is % s \" %(statistics.stdev(cond)))\nprint(\"Mean of the chlorophyll is % s \" %(statistics.mean(cond)))",
"_____no_output_____"
]
],
[
[
"### Chlorophyll ",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('chlorophyll [RFU]', fontsize=16)\nax1.plot(standardized_time, chlorophyll, label='chlorophyll', color='g')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()\n\nprint(\"Standard Deviation of the chlorophyll is % s \" %(statistics.stdev(chlorophyll)))\nprint(\"Mean of the chlorophyll is % s \" %(statistics.mean(chlorophyll))) ",
"_____no_output_____"
]
],
[
[
"### ODO",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('ODO [%sat]', fontsize=16)\nax1.plot(standardized_time, ODO, label='ODO', color='m')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()\n\nprint(\"Standard Deviation of the DO is % s \" %(statistics.stdev(ODO)))\nprint(\"Mean of the DO is % s \" %(statistics.mean(ODO))) ",
"_____no_output_____"
]
],
[
[
"### Sonar depth",
"_____no_output_____"
]
],
[
[
"# Figure initialization\nfig, ax1 = plt.subplots()\n\nax1.set_xlabel('Time [sec]', fontsize=16)\nax1.set_ylabel('sonar [m]', fontsize=16)\nax1.plot(standardized_time, sonar, label='sonar', color='c')\nax1.legend()\n\nfor wp in standardized_time2:\n plt.axvline(x=wp, color='gray', linestyle='--')\n \nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d00e265a7763740ee050ae37160c571e842fba87 | 1,068 | ipynb | Jupyter Notebook | cheat-sheets/ml/classification/algorithms.ipynb | AElOuassouli/reading-notes | 59865a31afd9fcfb16a189c8f4bdaf59bc035d52 | [
"Apache-2.0"
] | null | null | null | cheat-sheets/ml/classification/algorithms.ipynb | AElOuassouli/reading-notes | 59865a31afd9fcfb16a189c8f4bdaf59bc035d52 | [
"Apache-2.0"
] | null | null | null | cheat-sheets/ml/classification/algorithms.ipynb | AElOuassouli/reading-notes | 59865a31afd9fcfb16a189c8f4bdaf59bc035d52 | [
"Apache-2.0"
] | null | null | null | 18.101695 | 77 | 0.549625 | [
[
[
"# Classification",
"_____no_output_____"
],
[
"## Binary classification",
"_____no_output_____"
],
[
"### Stochastic gradient descent (SGD)\n",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import SGDClassifier",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d00e2f91e07094f31ac97021369f471acab031a8 | 20,892 | ipynb | Jupyter Notebook | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning | a38e1e8bd044d6993361fad6741131531ab6ef4b | [
"Apache-2.0"
] | null | null | null | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning | a38e1e8bd044d6993361fad6741131531ab6ef4b | [
"Apache-2.0"
] | null | null | null | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning | a38e1e8bd044d6993361fad6741131531ab6ef4b | [
"Apache-2.0"
] | null | null | null | 98.54717 | 14,020 | 0.84463 | [
[
[
"# QSVM multiclass classification\n\nA [multiclass extension](https://qiskit.org/documentation/apidoc/qiskit.aqua.components.multiclass_extensions.html) works in conjunction with an underlying binary (two class) classifier to provide classification where the number of classes is greater than two.\n\nCurrently the following multiclass extensions are supported:\n\n* OneAgainstRest\n* AllPairs\n* ErrorCorrectingCode\n\nThese use different techniques to group the data from the binary classification to achieve the final multiclass classification.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nfrom qiskit import BasicAer\nfrom qiskit.circuit.library import ZZFeatureMap\nfrom qiskit.utils import QuantumInstance, algorithm_globals\nfrom qiskit_machine_learning.algorithms import QSVM\nfrom qiskit_machine_learning.multiclass_extensions import AllPairs\nfrom qiskit_machine_learning.utils.dataset_helper import get_feature_dimension",
"_____no_output_____"
]
],
[
[
"We want a dataset with more than two classes, so here we choose the `Wine` dataset that has 3 classes.",
"_____no_output_____"
]
],
[
[
"from qiskit_machine_learning.datasets import wine\n\nn = 2 # dimension of each data point\nsample_Total, training_input, test_input, class_labels = wine(training_size=24,\n test_size=6, n=n, plot_data=True)\ntemp = [test_input[k] for k in test_input]\ntotal_array = np.concatenate(temp)",
"_____no_output_____"
]
],
[
[
"To used a multiclass extension an instance thereof simply needs to be supplied, on the QSVM creation, using the `multiclass_extension` parameter. Although `AllPairs()` is used in the example below, the following multiclass extensions would also work:\n\n OneAgainstRest()\n ErrorCorrectingCode(code_size=5)",
"_____no_output_____"
]
],
[
[
"algorithm_globals.random_seed = 10598\n\nbackend = BasicAer.get_backend('qasm_simulator')\nfeature_map = ZZFeatureMap(feature_dimension=get_feature_dimension(training_input),\n reps=2, entanglement='linear')\nsvm = QSVM(feature_map, training_input, test_input, total_array,\n multiclass_extension=AllPairs())\nquantum_instance = QuantumInstance(backend, shots=1024,\n seed_simulator=algorithm_globals.random_seed,\n seed_transpiler=algorithm_globals.random_seed)\n\nresult = svm.run(quantum_instance)\nfor k,v in result.items():\n print(f'{k} : {v}')",
"testing_accuracy : 1.0\ntest_success_ratio : 1.0\npredicted_labels : [0 1 2 2 2 2]\npredicted_classes : ['A', 'B', 'C', 'C', 'C', 'C']\n"
],
[
"import qiskit.tools.jupyter\n%qiskit_version_table\n%qiskit_copyright",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d00e36e8f6a09694c47347abd3d469aecea3fae9 | 108,548 | ipynb | Jupyter Notebook | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning | 8067c61c734cde2db00f89a2626d993e0848f0b3 | [
"Unlicense"
] | null | null | null | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning | 8067c61c734cde2db00f89a2626d993e0848f0b3 | [
"Unlicense"
] | null | null | null | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning | 8067c61c734cde2db00f89a2626d993e0848f0b3 | [
"Unlicense"
] | null | null | null | 192.802842 | 31,960 | 0.895825 | [
[
[
"# Building Simple Neural Networks\n\nIn this section you will:\n\n* Import the MNIST dataset from Keras.\n* Format the data so it can be used by a Sequential model with Dense layers.\n* Split the dataset into training and test sections data.\n* Build a simple neural network using Keras Sequential model and Dense layers.\n* Train that model.\n* Evaluate the performance of that model.\n\nWhile we are accomplishing these tasks, we will also stop to discuss important concepts:\n\n* Splitting data into test and training sets.\n* Training rounds, batch size, and epochs.\n* Validation data vs test data.\n* Examining results.\n\n## Importing and Formatting the Data\n\nKeras has several built-in datasets that are already well formatted and properly cleaned. These datasets are an invaluable learning resource. Collecting and processing datasets is a serious undertaking, and deep learning tactics perform poorly without large high quality datasets. We will be leveraging the [Keras built in datasets](https://keras.io/datasets/) extensively, and you may wish to explore them further on your own.\n\nIn this exercise, we will be focused on the MNIST dataset, which is a set of 70,000 images of handwritten digits each labeled with the value of the written digit. Additionally, the images have been split into training and test sets.",
"_____no_output_____"
]
],
[
[
"# For drawing the MNIST digits as well as plots to help us evaluate performance we\n# will make extensive use of matplotlib\nfrom matplotlib import pyplot as plt\n\n# All of the Keras datasets are in keras.datasets\nfrom keras.datasets import mnist\n\n# Keras has already split the data into training and test data\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n\n# Training images is a list of 60,000 2D lists.\n# Each 2D list is 28 by 28—the size of the MNIST pixel data.\n# Each item in the 2D array is an integer from 0 to 255 representing its grayscale\n# intensity where 0 means white, 255 means black.\nprint(len(training_images), training_images[0].shape)\n\n# training_labels are a value between 0 and 9 indicating which digit is represented.\n# The first item in the training data is a 5\nprint(len(training_labels), training_labels[0])\n",
"Using TensorFlow backend.\n"
],
[
"# Lets visualize the first 100 images from the dataset\nfor i in range(100):\n ax = plt.subplot(10, 10, i+1)\n ax.axis('off')\n plt.imshow(training_images[i], cmap='Greys')\n",
"_____no_output_____"
]
],
[
[
"## Problems With This Data\n\nThere are (at least) two problems with this data as it is currently formatted, what do you think they are?",
"_____no_output_____"
],
[
"1. The input data is formatted as a 2D array, but our deep neural network needs to data as a 1D vector.\n * This is because of how deep neural networks are constructed, it is simply not possible to send anything but a vector as input.\n * These vectors can be/represent anything, but from the computer's perspective they must be a 1D vector.\n2. Our labels are numbers, but we're not performing regression. We need to use a 1-hot vector encoding for our labels.\n * This is important because if we use the number values we would be training our network to think of these values as continuous.\n * If the digit is supposed to be a 2, guessing 1 and guessing 9 are both equally wrong.\n * Training the network with numbers would imply that a prediction of 1 would be \"less wrong\" than a prediction of 9, when in fact both are equally wrong. ",
"_____no_output_____"
],
[
"### Fixing the data format\n\nLuckily, this is a common problem and we can use two methods to fix the data: `numpy.reshape` and `keras.utils.to_categorical`. This is nessesary because of how deep neural networks process data, there is no way to send 2D data to a `Sequential` model made of `Dense` layers.",
"_____no_output_____"
]
],
[
[
"from keras.utils import to_categorical\n\n# Preparing the dataset\n# Setup train and test splits\n(training_images, training_labels), (test_images, test_labels) = mnist.load_data()\n\n\n# 28 x 28 = 784, because that's the dimensions of the MNIST data.\nimage_size = 784\n\n# Reshaping the training_images and test_images to lists of vectors with length 784\n# instead of lists of 2D arrays. Same for the test_images\ntraining_data = training_images.reshape(training_images.shape[0], image_size) \ntest_data = test_images.reshape(test_images.shape[0], image_size)\n\n# [\n# [1,2,3]\n# [4,5,6]\n# ]\n\n# => [1,2,3,4,5,6]\n\n# Just showing the changes...\nprint(\"training data: \", training_images.shape, \" ==> \", training_data.shape)\nprint(\"test data: \", test_images.shape, \" ==> \", test_data.shape)",
"training data: (60000, 28, 28) ==> (60000, 784)\ntest data: (10000, 28, 28) ==> (10000, 784)\n"
],
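[
"# An optional extra step (a hedged aside, not used elsewhere in this notebook): neural\n# networks usually train faster and more stably when inputs are scaled to a small range.\n# Scaling the 0-255 pixel intensities down to 0-1 would look like the lines below; they\n# are left commented out so the results later in the notebook match the unscaled data.\n# training_data = training_data.astype('float32') / 255\n# test_data = test_data.astype('float32') / 255",
"_____no_output_____"
],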
[
"# Create 1-hot encoded vectors using to_categorical\nnum_classes = 10 # Because it's how many digits we have (0-9) \n\n# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors\ntraining_labels = to_categorical(training_labels, num_classes)\ntest_labels = to_categorical(test_labels, num_classes)",
"_____no_output_____"
],
[
"# Recall that before this transformation, training_labels[0] was the value 5. Look now:\nprint(training_labels[0])",
"[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]\n"
]
],
[
[
"## Building a Deep Neural Network\n\nNow that we've prepared our data, it's time to build a simple neural network. To start we'll make a deep network with 3 layers—the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1 dimensional. The input layer has to be the shape of our input data, meaning it must have 784 nodes. Similarly, the output layer must match our labels, meaning it must have 10 nodes. We can choose the number of nodes in our hidden layer, I've chosen 32 arbitrarally.",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense\n\n# Sequential models are a series of layers applied linearly.\nmodel = Sequential()\n\n# The first layer must specify it's input_shape.\n# This is how the first two layers are added, the input layer and the hidden layer.\nmodel.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))\n\n# This is how the output layer gets added, the 'softmax' activation function ensures\n# that the sum of the values in the output nodes is 1. Softmax is very\n# common in classification networks. \nmodel.add(Dense(units=num_classes, activation='softmax'))\n\n# This function provides useful text data for our network\nmodel.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 32) 25120 \n_________________________________________________________________\ndense_2 (Dense) (None, 10) 330 \n=================================================================\nTotal params: 25,450\nTrainable params: 25,450\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"## Compiling and Training a Model\n\nOur model must be compiled and trained before it can make useful predictions. Models are trainined with the training data and training labels. During this process Keras will use an optimizer, loss function, metrics of our chosing to repeatedly make predictions and recieve corrections. The loss function is used to train the model, the metrics are only used for human evaluation of the model during and after training.\n\nTraining happens in a series of epochs which are divided into a series of rounds. Each round the network will recieve `batch_size` samples from the training data, make predictions, and recieve one correction based on the errors in those predictions. In a single epoch, the model will look at every item in the training set __exactly once__, which means individual data points are sampled from the training data without replacement during each round of each epoch.\n\nDuring training, the training data itself will be broken into two parts according to the `validation_split` parameter. The proportion that you specify will be left out of the training process, and used to evaluate the accuracy of the model. This is done to preserve the test data, while still having a set of data left out in order to test against — and hopefully prevent — overfitting. At the end of each epoch, predictions will be made for all the items in the validation set, but those predictions won't adjust the weights in the model. Instead, if the accuracy of the predictions in the validation set stops improving then training will stop early, even if accuracy in the training set is improving. ",
"_____no_output_____"
]
],
[
[
"# sgd stands for stochastic gradient descent.\n# categorical_crossentropy is a common loss function used for categorical classification.\n# accuracy is the percent of predictions that were correct.\nmodel.compile(optimizer=\"sgd\", loss='categorical_crossentropy', metrics=['accuracy'])\n\n# The network will make predictions for 128 flattened images per correction.\n# It will make a prediction on each item in the training set 5 times (5 epochs)\n# And 10% of the data will be used as validation data.\nhistory = model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)",
"Train on 54000 samples, validate on 6000 samples\nEpoch 1/5\n54000/54000 [==============================] - 1s 17us/step - loss: 1.3324 - accuracy: 0.6583 - val_loss: 0.8772 - val_accuracy: 0.8407\nEpoch 2/5\n54000/54000 [==============================] - 1s 13us/step - loss: 0.7999 - accuracy: 0.8356 - val_loss: 0.6273 - val_accuracy: 0.8850\nEpoch 3/5\n54000/54000 [==============================] - 1s 12us/step - loss: 0.6350 - accuracy: 0.8643 - val_loss: 0.5207 - val_accuracy: 0.8940\nEpoch 4/5\n54000/54000 [==============================] - 1s 11us/step - loss: 0.5499 - accuracy: 0.8752 - val_loss: 0.4532 - val_accuracy: 0.9040\nEpoch 5/5\n54000/54000 [==============================] - 1s 11us/step - loss: 0.4950 - accuracy: 0.8837 - val_loss: 0.4233 - val_accuracy: 0.9045\n"
]
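,
[
"# A hedged aside (not part of the original notebook): Keras will not stop training early\n# on its own when validation accuracy plateaus. That behavior requires an EarlyStopping\n# callback, roughly like the sketch below. It is left commented out so the already-trained\n# model and the logged results above are not disturbed.\n# from keras.callbacks import EarlyStopping\n# early_stop = EarlyStopping(monitor='val_accuracy', patience=2, restore_best_weights=True)\n# history = model.fit(training_data, training_labels, batch_size=128, epochs=50,\n# verbose=True, validation_split=.1, callbacks=[early_stop])",
"_____no_output_____"
]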
],
[
[
"## Evaluating Our Model\n\nNow that we've trained our model, we want to evaluate its performance. We're using the \"test data\" here although in a serious experiment, we would likely not have done nearly enough work to warrent the application of the test data. Instead, we would rely on the validation metrics as a proxy for our test results until we had models that we believe would perform well. \n\nOnce we evaluate our model on the test data, any subsequent changes we make would be based on what we learned from the test data. Meaning, we would have functionally incorporated information from the test set into our training procedure which could bias and even invalidate the results of our research. In a non-research setting the real test might be more like putting this feature into production. \n\nNevertheless, it is always wise to create a test set that is not used as an evaluative measure until the very end of an experimental lifecycle. That is, once you have a model that you believe __should__ generalize well to unseen data you should test it on the test data to test that hypothosis. If your model performs poorly on the test data, you'll have to reevaluate your model, training data, and procedure. ",
"_____no_output_____"
]
],
[
[
"loss, accuracy = model.evaluate(test_data, test_labels, verbose=True)\n\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='best')\n\nplt.show()\n\nprint(f'Test loss: {loss:.3}')\nprint(f'Test accuracy: {accuracy:.3}')",
"10000/10000 [==============================] - 0s 15us/step\n"
]
],
[
[
"## How Did Our Network Do? \n\n* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?\n* Our model was more accurate on the validation data than it was on the training data. \n * Is this okay? Why or why not?\n * What if our model had been more accurate on the training data than the validation data?\n* Did our model get better during each epoch?\n * If not: why might that be the case?\n * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?",
"_____no_output_____"
],
[
"### Answers:\n\n\n* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?\n * __Because we only evaluate the test data once at the very end, but we evaluate training and validation scores once per epoch.__\n* Our model was more accurate on the validation data than it was on the training data. \n * Is this okay? Why or why not?\n * __Yes, this is okay, and even good. When our validation scores are better than our training scores, it's a sign that we are probably not overfitting__\n * What if our model had been more accurate on the training data than the validation data?\n * __This would concern us, because it would suggest we are probably overfitting.__\n* Did our model get better during each epoch?\n * If not: why might that be the case?\n * __Optimizers rely on the gradient to update our weights, but the 'function' we are optimizing (our neural network) is not a ground truth. A single batch, and even a complete epoch, may very well result in an adjustment that hurts overall performance.__\n * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss?\n * __Not at all, see the above answer.__",
"_____no_output_____"
],
[
"## Look at Specific Results\n\nOften, it can be illuminating to view specific results, both when the model is correct and when the model is wrong. Lets look at the images and our model's predictions for the first 16 samples in the test set.",
"_____no_output_____"
]
],
[
[
"from numpy import argmax\n\n# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.\npredictions = model.predict(test_data)\n\n# For pagination & style in second cell\npage = 0\nfontdict = {'color': 'black'}",
"_____no_output_____"
],
[
"# Repeatedly running this cell will page through the predictions\nfor i in range(16):\n ax = plt.subplot(4, 4, i+1)\n ax.axis('off')\n plt.imshow(test_images[i + page], cmap='Greys')\n prediction = argmax(predictions[i + page])\n true_value = argmax(test_labels[i + page])\n\n fontdict['color'] = 'black' if prediction == true_value else 'red'\n plt.title(\"{}, {}\".format(prediction, true_value), fontdict=fontdict)\n\npage += 16\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
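,
[
"# A short extra sketch (not part of the original notebook): chart the model's test accuracy\n# digit by digit. It reuses the `predictions` computed above; argmax recovers the predicted\n# and true digit for each test sample.\nimport numpy as np\n\npredicted_digits = np.argmax(predictions, axis=1)\ntrue_digits = np.argmax(test_labels, axis=1)\n\n# For each digit 0-9: the fraction of test samples of that digit predicted correctly\nper_digit_accuracy = [np.mean(predicted_digits[true_digits == d] == d) for d in range(10)]\n\nplt.bar(range(10), per_digit_accuracy)\nplt.title('Test accuracy by digit')\nplt.xlabel('digit')\nplt.ylabel('accuracy')\nplt.xticks(range(10))\nplt.show()",
"_____no_output_____"
]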
],
[
[
"## Will A Different Network Perform Better?\n\nGiven what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for training and validation data over time, as well as test accuracy and loss. ",
"_____no_output_____"
]
],
[
[
"# Your code here...\n",
"_____no_output_____"
]
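,
[
"# One possible starting point (a hedged sketch, not the notebook's official answer): a\n# deeper network with larger 'relu' hidden layers, which typically beats the single\n# 32-unit sigmoid layer on MNIST. All names reuse variables defined earlier.\nbigger_model = Sequential()\nbigger_model.add(Dense(units=128, activation='relu', input_shape=(image_size,)))\nbigger_model.add(Dense(units=64, activation='relu'))\nbigger_model.add(Dense(units=num_classes, activation='softmax'))\n\nbigger_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])\nbigger_history = bigger_model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)\n\nbigger_loss, bigger_accuracy = bigger_model.evaluate(test_data, test_labels, verbose=True)\nprint(f'Test loss: {bigger_loss:.3}')\nprint(f'Test accuracy: {bigger_accuracy:.3}')",
"_____no_output_____"
]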
],
[
[
"## Bonus questions: Go Further\n\nHere are some questions to help you further explore the concepts in this lab.\n\n* Does the original model, or your model, fail more often on a particular digit? \n * Write some code that charts the accuracy of our model's predictions on the test data by digit.\n * Is there a clear pattern? If so, speculate about why that could be...\n* Training for longer typically improves performance, up to a point.\n * For a simple model, try training it for 20 epochs, and 50 epochs.\n * Look at the charts of accuracy and loss over time, have you reached diminishing returns after 20 epochs? after 50?\n* More complex networks require more training time, but can outperform simpler networks.\n * Build a more complex model, with at least 3 hidden layers.\n * Like before, train it for 5, 20, and 50 epochs. \n * Evaluate the performance of the model against the simple model, and compare the total amount of time it took to train.\n * Was the extra complexity worth the additional training time? \n * Do you think your complex model would get even better with more time?\n* A little perspective on this last point: Some models train for [__weeks to months__](https://openai.com/blog/ai-and-compute/). ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
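"code",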
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
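"code",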
"code"
],
[
"markdown"
]
] |
d00e37e4267c25338117659f0b7c19c64dcdfb8d | 25,259 | ipynb | Jupyter Notebook | 01-demo1.ipynb | JiaxiangBU/conversion_metrics | 95b51c2a1a43e45078d64f1c6696ed8399987256 | [
"MIT"
] | null | null | null | 01-demo1.ipynb | JiaxiangBU/conversion_metrics | 95b51c2a1a43e45078d64f1c6696ed8399987256 | [
"MIT"
] | null | null | null | 01-demo1.ipynb | JiaxiangBU/conversion_metrics | 95b51c2a1a43e45078d64f1c6696ed8399987256 | [
"MIT"
] | null | null | null | 45.186047 | 8,772 | 0.63997 | [
[
[
"# default_exp ratio",
"_____no_output_____"
]
],
[
[
"> The email portion of this campaign was actually run as an A/B test. Half the emails sent out were generic upsells to your product while the other half contained personalized messaging around the users’ usage of the site.\n\n这是 AB Test 的实验内容。",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"# export\n'''Calculate conversion rates and related metrics.'''\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef conversion_rate(dataframe, column_names, converted = 'converted', id_name = 'user_id'):\n '''Calculate conversion rate.\n Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas\n\n Parmaters\n ---------\n dataframe: pandas.DataFrame\n column_names: str\n The conlumn(s) chosen to partition groups to \n calculate conversion rate.\n converted: str\n The column with True and False to determine \n whether users are converted. \n id_name: str\n The column saved user_id.\n \n \n Returns\n -------\n conversion_rate: conversion rate'''\n # Total number of converted users\n column_conv = dataframe[dataframe[converted] == True] \\\n .groupby(column_names)[id_name] \\\n .nunique()\n\n # Total number users\n column_total = dataframe \\\n .groupby(column_names)[id_name] \\\n .nunique() \n \n # Conversion rate \n conversion_rate = column_conv/column_total\n \n # Fill missing values with 0\n conversion_rate = conversion_rate.fillna(0)\n \n return conversion_rate",
"_____no_output_____"
],
[
"marketing = pd.read_csv(\"data/marketing.csv\",\n parse_dates = ['date_served', 'date_subscribed', 'date_canceled'])",
"_____no_output_____"
],
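[
"# A quick illustration (an editor's sketch, assuming data/marketing.csv has the columns the\n# docstring expects): overall conversion rate broken out by marketing channel.\nchannel_conversion = conversion_rate(marketing, ['marketing_channel'])\nprint(channel_conversion)",
"_____no_output_____"
],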
[
"# Subset the DataFrame\nemail = marketing[marketing.marketing_channel == 'Email']\n\n# Group the email DataFrame by variant \nalloc = email.groupby(['variant']).user_id.nunique()\n\n# Plot a bar chart of the test allocation\nalloc.plot(kind = 'bar')\nplt.title('Personalization test allocation')\nplt.ylabel('# participants')\nplt.show()",
"_____no_output_____"
]
],
[
[
"差异不大。",
"_____no_output_____"
]
],
[
[
"# Group marketing by user_id and variant\nsubscribers = email.groupby(['user_id', \n 'variant'])['converted'].max()\nsubscribers_df = pd.DataFrame(subscribers.unstack(level=1)) \n\n# Drop missing values from the control column\ncontrol = subscribers_df['control'].dropna()\n\n# Drop missing values from the personalization column\npersonalization = subscribers_df['personalization'].dropna()\n\nprint('Control conversion rate:', np.mean(control))\nprint('Personalization conversion rate:', np.mean(personalization))",
"Control conversion rate: 0.2814814814814815\nPersonalization conversion rate: 0.3908450704225352\n"
]
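,
[
"# A sanity check (not part of the original notebook): the lift implied by the two rates\n# above, computed directly as (treatment - control) / control. It matches the formula\n# shown next and the lift() helper defined below.\nmanual_lift = (np.mean(personalization) - np.mean(control)) / np.mean(control)\nprint('Lift: {:.2%}'.format(manual_lift))",
"_____no_output_____"
]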
],
[
[
"这种 Python 写法我觉得有点复杂。",
"_____no_output_____"
],
[
"$$\\begin{array}{l}{\\text { Calculating lift: }} \\\\ {\\qquad \\frac{\\text { Treatment conversion rate - Control conversion rate }}{\\text { Control conversion rate }}}\\end{array}$$",
"_____no_output_____"
],
[
"注意这里的 lift 是转化率的比较,因此是可以超过 100 %",
"_____no_output_____"
]
],
[
[
"# export\ndef lift(a,b, sig = 2):\n '''Calculate lift statistic for an AB test.\n Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas\n\n Parmaters\n ---------\n a: float.\n control group.\n b: float.\n test group.\n sig: integer.\n default 2.\n\n Returns\n -------\n lift: lift statistic'''\n # Calcuate the mean of a and b\n a_mean = np.mean(a)\n b_mean = np.mean(b)\n\n # Calculate the lift using a_mean and b_mean\n lift = b_mean/a_mean - 1\n\n return str(round(lift*100, sig)) + '%'",
"_____no_output_____"
],
[
"lift(control, personalization, sig = 3)",
"_____no_output_____"
]
],
[
[
"## 查看是否统计显著",
"_____no_output_____"
]
],
[
[
"# export\nfrom scipy import stats\n\ndef lift_sig(a,b):\n '''Calculate lift statistical significance for an AB test.\n Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas\n\n Parmaters\n ---------\n a: float.\n control group.\n b: float.\n test group.\n sig: integer.\n default 2.\n\n Returns\n -------\n lift: lift statistic'''\n\n output = stats.ttest_ind(a,b)\n t_value, p_value = output.statistic,output.pvalue\n print('The t value of the two variables is %.3f with p value %.3f' % (t_value, p_value))\n\n return (t_value, p_value)",
"_____no_output_____"
],
[
"t_value, p_value = lift_sig(control,personalization )",
"The t value of the two variables is -0.577 with p value 0.580\n"
]
],
[
[
"> In the next lesson, you will explore whether that holds up across all demographics.\n\n这真是做 AB test 一个成熟的思维,不代表每一个 group 都很好。\n",
"_____no_output_____"
]
],
[
[
"# export\ndef ab_test(df, segment, id_name = 'user_id', test_column = 'variant', converted = 'converted'):\n '''Calculate lift statistic by segmentation.\n Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas\n\n Parmaters\n ---------\n df: pandas.DataFrame.\n\n segment: str.\n group column.\n \n id_name: user_id\n \n test_column: str\n The column indentify test or ctrl groups.\n \n converted: logical.\n Whether converted or not.\n\n Returns\n -------\n lift: lift statistic'''\n # Build a for loop for each segment in marketing\n for subsegment in np.unique(marketing[segment].values):\n print('Group - %s: ' % subsegment)\n\n df1 = df[df[segment] == subsegment]\n\n df2 = df1.groupby([id_name, test_column])[converted].max()\n df2 = pd.DataFrame(df2.unstack(level=1)) \n ctrl = df2.iloc[:,0].dropna()\n test = df2.iloc[:,1].dropna()\n \n # information\n print('lift:', lift(ctrl, test))\n lift_sig(ctrl, test)",
"_____no_output_____"
],
[
"df = marketing[marketing['marketing_channel'] == 'Email']\nab_test(df, segment='language_displayed', id_name='user_id', test_column='variant', converted='converted')",
"Group - Arabic: \nlift: 50.0%\nThe t value of the two variables is -0.577 with p value 0.580\nGroup - English: \nlift: 39.0%\nThe t value of the two variables is -2.218 with p value 0.027\nGroup - German: \nlift: -1.62%\nThe t value of the two variables is 0.191 with p value 0.849\nGroup - Spanish: \nlift: 166.67%\nThe t value of the two variables is -2.357 with p value 0.040\n"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"> Often treatment will not affect all people uniformly. Some people will love a particular marketing campaign while others hate it. As a marketing data scientist, it's your responsibility to enable your marketing stakeholders to target users according to their preferences.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
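"code",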
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |