Simple but easy to use: using keras 2 to realize multi-dimensional time series prediction based on LSTM
2022-07-24 06:14:00 【A small EZ】
Hello friends! After a long break, I am writing about time series again.
In past time series prediction posts I used Keras 2.2 with tensorflow-gpu 1.13.1. For the new year, the goals of this post are to:
Provide an LSTM time series prediction model suitable for beginners
That is effective and easy to use
Implemented with Keras 2 + TensorFlow 2, covering the full prediction and validation workflow.
Versions:
cuda 10.1
cudnn 8.0.5
keras 2.4.3
tensorflow-gpu 2.3.0
Keras 2.4 only supports TensorFlow as its backend (see the Keras 2.4 release notes; Keras has truly become TensorFlow's Keras), and the import statements have changed somewhat from earlier versions.
Data introduction
This dataset is an air pollution dataset. We use the multidimensional time series to predict the pollution dimension, with 80% of the data as the training set and 20% as the test set.
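As a minimal sketch of that split (not from the original post: the file name `pollution.csv` and the column layout are assumptions for illustration), loading the data and cutting it 80/20 in time order might look like this:

```python
import numpy as np
import pandas as pd

def load_and_split(path, train_ratio=0.8):
    """Load a CSV of numeric series and split it chronologically.

    Assumes the CSV (e.g. a hypothetical "pollution.csv") holds one row
    per timestep with numeric feature columns including "pollution".
    """
    df = pd.read_csv(path)
    values = df.select_dtypes("number").to_numpy(dtype="float32")
    split = int(len(values) * train_ratio)
    # Time series must be split in order, never shuffled.
    return values[:split], values[split:]
```

Note the split is chronological: shuffling before splitting would leak future information into the training set.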
Getting started
The first step of any deep learning project: imports.

```python
import numpy as np
import tensorflow as tf
# Compared with older standalone Keras, these now come from tensorflow.keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Activation, Dropout
from tensorflow.keras.callbacks import History, Callback, EarlyStopping
```
Model implementation
The model uses the simplest Sequential API: two LSTM layers followed by a fully connected layer produce the prediction. An EarlyStopping mechanism is added as well. The specific structure is as follows:
```python
def lstm_model(train_x, train_y, config):
    model = Sequential()
    model.add(LSTM(config.lstm_layers[0],
                   input_shape=(train_x.shape[1], train_x.shape[2]),
                   return_sequences=True))
    model.add(Dropout(config.dropout))
    model.add(LSTM(config.lstm_layers[1], return_sequences=False))
    model.add(Dropout(config.dropout))
    model.add(Dense(train_y.shape[1]))
    model.add(Activation("relu"))
    model.summary()

    cbs = [History(), EarlyStopping(monitor='val_loss',
                                    patience=config.patience,
                                    min_delta=config.min_delta,
                                    verbose=0)]
    model.compile(loss=config.loss_metric, optimizer=config.optimizer)
    model.fit(train_x,
              train_y,
              batch_size=config.lstm_batch_size,
              epochs=config.epochs,
              validation_split=config.validation_split,
              callbacks=cbs,
              verbose=True)
    return model
```
The model's input and output sizes are taken from the shapes of train_x and train_y, so this is an adaptive model.
As long as train_x is 3-dimensional (samples, timesteps, features) and train_y is 2-dimensional (samples, outputs), it runs smoothly.
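One common way to get arrays with those dimensions is a sliding window over the raw series. This is a minimal sketch (the helper name `make_windows` and the single-step target are my own illustration, not code from the post):

```python
import numpy as np

def make_windows(data, n_steps, target_col=0):
    """Turn a 2-D array (timesteps, features) into supervised pairs.

    Returns train_x with shape (samples, n_steps, features) and
    train_y with shape (samples, 1): each window of n_steps rows
    predicts the next value of the target column.
    """
    xs, ys = [], []
    for i in range(len(data) - n_steps):
        xs.append(data[i:i + n_steps])              # window of n_steps rows
        ys.append([data[i + n_steps, target_col]])  # next target value
    return np.array(xs), np.array(ys)
```

With `n_steps = config.n_predictions = 30`, this matches the "use the previous n steps to predict the next step" setup described in the parameters below.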
Specific parameters

```python
# Use a class to implement a configuration file
class Config:
    def __init__(self):
        self.path = './Model/'
        self.dimname = 'pollution'
        # Use the previous n_predictions steps to predict the next step
        self.n_predictions = 30
        # EarlyStopping: if val_loss fails to improve by at least min_delta
        # in an epoch, tolerate at most `patience` such epochs before stopping
        self.patience = 10
        self.min_delta = 0.00001
        # Number of neurons in each of the two LSTM layers
        self.lstm_layers = [80, 80]
        self.dropout = 0.2
        self.lstm_batch_size = 64
        self.optimizer = 'adam'
        self.loss_metric = 'mse'
        self.validation_split = 0.2
        self.verbose = 1
        self.epochs = 200

    # layers is an array such as [64, 64]
    def change_lstm_layers(self, layers):
        self.lstm_layers = layers
```
The structure of the model is shown in the figure.
Thanks to the EarlyStopping mechanism, training finished after 28 epochs:

```
Epoch 28/200
438/438 [==============================] - 10s 23ms/step - loss: 8.4697e-04 - val_loss: 4.9450e-04
```
Results
Let's look at the results:
RMSE: 24.096020043963737
MAE: 13.384563587562422
MAPE: 25.183164455025054
As you can see, the method is simple, but the prediction performance is quite good!
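For reference, the three metrics above can be computed from the true values and predictions with plain NumPy. This is a generic sketch of the standard formulas, not the post's exact evaluation code:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return (RMSE, MAE, MAPE-in-percent) for two 1-D arrays."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    # MAPE assumes y_true has no zeros (true for a pollution series)
    mape = np.mean(np.abs(err / y_true)) * 100
    return rmse, mae, mape
```

Note that MAPE is scale-relative, which is why it can sit at ~25% while RMSE and MAE look small relative to the pollution values themselves.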
Notes
The code has been uploaded to my GitHub.
An older version of this tutorial is also attached there (the gradient search part may have a small bug, so proofread carefully if you need it).
If this article reaches 1000 likes, or the GitHub project reaches 100 stars, I will open-source the advanced follow-up, "Advanced LSTM: simple time series anomaly detection with LSTM", a new and better implementation.
References
Anomaly Detection in Time Series Data Using LSTMs and Automatic Thresholding