features = pd.DataFrame(features)
features.head()

train_data = features.loc[0 : train_split - 1]
val_data = features.loc[train_split:]
The selected parameters are: Pressure, Temperature, Saturation vapor pressure, Vapor pressure deficit, Specific humidity, Airtight, Wind speed

Training dataset
The training dataset labels start from the 792nd observation (720 + 72).
# The label for a window of past observations is the value `future` steps ahead.
start = past + future
end = start + train_split

# Inputs: the 7 selected features; labels: the temperature column (index 1).
x_train = train_data[[i for i in range(7)]].values
y_train = features.iloc[start:end][[1]]

# Each input window covers `past` observations, sampled every `step` records.
sequence_length = int(past / step)
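To make the offsets concrete, here is a quick sanity check, assuming the values defined earlier in the tutorial (past = 720, future = 72, step = 6):

assert past + future == 792      # index of the first training label
assert int(past / step) == 120   # timesteps per input window after sampling every 6th record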
The timeseries_dataset_from_array function takes in a sequence of data points gathered at equal intervals, along with time series parameters such as the length of the sequences/windows and the spacing between two sequences/windows, to produce batches of sub-timeseries inputs and targets sampled from the main timeseries.
dataset_train = keras.preprocessing.timeseries_dataset_from_array(
    x_train,
    y_train,
    sequence_length=sequence_length,
    sampling_rate=step,
    batch_size=batch_size,
)
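To see how timeseries_dataset_from_array pairs windows with targets, here is a standalone toy sketch (the arrays below are purely illustrative and are not part of the tutorial's data):

import numpy as np
from tensorflow import keras

series = np.arange(10)
toy_inputs = series[:-3]   # drop the tail that has no target
toy_targets = series[3:]   # the target for the window starting at t is series[t + 3]
toy_ds = keras.preprocessing.timeseries_dataset_from_array(
    toy_inputs, toy_targets, sequence_length=3, batch_size=4
)
for x, y in toy_ds.take(1):
    print(x.numpy())  # windows [0 1 2], [1 2 3], [2 3 4], [3 4 5]
    print(y.numpy())  # matching targets [3 4 5 6]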
Validation dataset
The validation dataset must not contain the last 792 rows, as we won't have label data for those records, hence 792 must be subtracted from the end of the data.
The validation labels must start 792 observations after train_split, hence label_start is set to train_split + past + future (792).
x_end = len(val_data) - past - future

label_start = train_split + past + future

x_val = val_data.iloc[:x_end][[i for i in range(7)]].values
y_val = features.iloc[label_start:][[1]]
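A quick optional check that the offsets are consistent, i.e. that every remaining validation row still has a matching label:

assert len(x_val) == len(y_val)  # dropping past + future rows mirrors the label offset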
dataset_val = keras.preprocessing.timeseries_dataset_from_array(
    x_val,
    y_val,
    sequence_length=sequence_length,
    sampling_rate=step,
    batch_size=batch_size,
)
for batch in dataset_train.take(1):
    inputs, targets = batch

print("Input shape:", inputs.numpy().shape)
print("Target shape:", targets.numpy().shape)

Input shape: (256, 120, 7)
Target shape: (256, 1)
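In other words, each batch holds 256 windows (batch_size), each window spans 120 timesteps (the 720 past records sampled every 6th record) with the 7 selected features, and there is one temperature target per window.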
Training
inputs = keras.layers.Input(shape=(inputs.shape[1], inputs.shape[2]))
lstm_out = keras.layers.LSTM(32)(inputs)
outputs = keras.layers.Dense(1)(lstm_out)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate), loss="mse")
model.summary()
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 120, 7)]          0
_________________________________________________________________
lstm (LSTM)                  (None, 32)                5120
_________________________________________________________________
dense (Dense)                (None, 1)                 33
=================================================================
Total params: 5,153
Trainable params: 5,153
Non-trainable params: 0
_________________________________________________________________
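The parameter counts follow from the layer sizes: the LSTM has four gates, each with a 7 → 32 input kernel, a 32 → 32 recurrent kernel, and a bias, giving 4 × (32 × 7 + 32 × 32 + 32) = 5,120 parameters; the Dense head adds 32 weights plus 1 bias for another 33, or 5,153 in total.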
We'll use the ModelCheckpoint callback to regularly save checkpoints, and the EarlyStopping callback to interrupt training when the validation loss is no longer improving.
path_checkpoint = "model_checkpoint.h5"
es_callback = keras.callbacks.EarlyStopping(monitor="val_loss", min_delta=0, patience=5)

modelckpt_callback = keras.callbacks.ModelCheckpoint(
    monitor="val_loss",
    filepath=path_checkpoint,
    verbose=1,
    save_weights_only=True,
    save_best_only=True,
)
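Because save_weights_only=True, the checkpoint file stores only the weights. After training (the fit call below), the best weights can be loaded back into the same architecture; a minimal sketch, assuming the model and path_checkpoint defined above:

# Restore the best weights recorded by the checkpoint callback (run after training).
model.load_weights(path_checkpoint)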
history = model.fit(
    dataset_train,
    epochs=epochs,
    validation_data=dataset_val,
    callbacks=[es_callback, modelckpt_callback],
)
Epoch 1/10
1172/1172 [==============================] - ETA: 0s - loss: 0.2059
Epoch 00001: val_loss improved from inf to 0.16357, saving model to model_checkpoint.h5
1172/1172 [==============================] - 101s 86ms/step - loss: 0.2059 - val_loss: 0.1636
Epoch 2/10
1172/1172 [==============================] - ETA: 0s - loss: 0.1271