z_t = \sigma\left(W_z x_t + U_z h_{t-1} + b_z\right)
r_t = \sigma\left(W_r x_t + U_r h_{t-1} + b_r\right)
h_t = \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tanh\left(W_h x_t + U_h\left(r_t \odot h_{t-1}\right) + b_h\right)
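For readers who prefer code, the following NumPy sketch implements one GRU update step exactly as written in the equations above; the function name gru_step and the params dictionary are illustrative choices, not code taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    # params holds the weight matrices W_z, U_z, W_r, U_r, W_h, U_h and the biases b_z, b_r, b_h
    z_t = sigmoid(params['W_z'] @ x_t + params['U_z'] @ h_prev + params['b_z'])   # update gate
    r_t = sigmoid(params['W_r'] @ x_t + params['U_r'] @ h_prev + params['b_r'])   # reset gate
    # candidate state: the reset gate modulates the previous hidden state
    h_tilde = np.tanh(params['W_h'] @ x_t + params['U_h'] @ (r_t * h_prev) + params['b_h'])
    # new hidden state interpolates between the previous state and the candidate
    return (1.0 - z_t) * h_prev + z_t * h_tilde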
z = \frac{x - \mu}{\sigma}, \qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2},
X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}
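A short NumPy restatement of both normalizations above, with illustrative function names:

import numpy as np

def z_score(x):
    # z = (x - mu) / sigma, with mu and sigma the sample mean and standard deviation
    return (x - x.mean()) / x.std()

def min_max(x):
    # X_norm = (X - X_min) / (X_max - X_min), rescaling the values into [0, 1]
    return (x - x.min()) / (x.max() - x.min())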

activation function will be used, the latter being characterized by an inner activation range equal to [−1; 1]. We underline that our prediction target consists in the difference of the close price of the last day with respect to a time window and an objective range, namely considering the next day, the next 5 days, and so on. The resulting differences are then binarized to better show whether the close price will go up or down, which leads to the use of the binary vectors [1;0] and [0;1]. Our data set is a set of normalized 30-minute time windows with corresponding labels indicating whether the price goes up or down. We split our dataset into a train set and a test set, with a division of 90% and 10%, respectively. Splits are done in historical order to simulate the real-world situation in which we train on past data and try to predict the future. To avoid overfitting, we shuffle our test and train sets after splitting.

Figure 4: Close prices of the GOOGL asset
After splitting we see that we have 45% up labels and 55% down labels in our test dataset. This provides a good check of whether our algorithm has overfit: if it shows 55% accuracy, this does not mean that it predicts better than a random guess; it means that it has merely learnt the label distribution of the test dataset, in other words, it has overfit the dataset.
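The dataset construction just described can be sketched as follows; the per-window min-max normalization, the horizon handling, and the tie-breaking rule for a zero difference are assumptions made only for illustration, not details taken from the paper.

import numpy as np

def make_dataset(close, window, horizon=1):
    # close: 1-D array of close prices; window: number of samples per 30-minute window (assumed)
    X, y = [], []
    for i in range(len(close) - window - horizon + 1):
        win = close[i:i + window]
        X.append((win - win.min()) / (win.max() - win.min()))     # normalized time window
        diff = close[i + window - 1 + horizon] - close[i + window - 1]
        y.append([1, 0] if diff > 0 else [0, 1])                  # binarized up / down label
    X, y = np.array(X), np.array(y)
    split = int(0.9 * len(X))                                     # 90% / 10% split in historical order
    return (X[:split], y[:split]), (X[split:], y[split:])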
VI. EXPERIMENTAL RESULTS
In this section we provide the computational results related to the training process. All NNs were trained using Keras, which is a NN library written in Python for deep learning. Every network was trained for 100 epochs. This high value has been chosen because experimental results show that training for fewer epochs causes the deep network to overfit the test set and just learn its distribution. Moreover, training for a longer time is necessary to better understand the convergence trend. If after some time the cross-entropy error starts to grow, we can choose the model that has the best performance. As optimization algorithm we have used the Adam approach, see [9], with the related gradients calculated with the BPTT algorithm, while we have used GPU hardware, namely an Nvidia GTX M, to reduce computational costs.
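As an illustration only, a Keras training run of this kind could look like the sketch below (Keras 2-style API); the model object, the batch size, and the use of the test set as validation data are assumptions and refer to objects built as in the other sketches of this section.

from keras.optimizers import Adam

# Adam optimizer with the cross-entropy loss; for recurrent layers Keras computes the
# gradients by backpropagation through time
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

# 100 epochs, monitoring the test loss to observe the convergence trend
history = model.fit(X_train, y_train, epochs=100, batch_size=128,
                    validation_data=(X_test, y_test))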
A - Performance Analysis
For all RNNs we use the same pattern, namely two stacked recurrent layers. In this model the output of the first layer constitutes the input of the second, and so on, with one affine layer on top and a softmax function on the output to resample it as a probability distribution. The activation function of the cells is the tanh function, while the function for the inner activations inside the cells is a sigmoid function. Moreover, we have used a hard approximation of the sigmoid function to speed up the whole procedure. The starting weights for the inputs are initialized exploiting the Glorot uniform initialization, see [17], while the inner weights in the cells are initialized with the orthogonal initialization described by Saxe, McClelland, and Ganguli. The latter implies that the weight matrix should be chosen as a random orthogonal matrix, namely a square matrix W such that W^T W = W W^T = I.
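To make this pattern concrete, here is a minimal Keras sketch of two stacked recurrent layers with an affine softmax layer on top; the choice of GRU cells, the 64 units per layer, and the input shape are assumptions for illustration (Keras 2-style argument names), while the activations and initializers follow the description above.

from keras.models import Sequential
from keras.layers import GRU, Dense

model = Sequential()
# first recurrent layer returns the full sequence so that it can feed the second layer
model.add(GRU(64, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              kernel_initializer='glorot_uniform',   # Glorot uniform for the input weights W
              recurrent_initializer='orthogonal',    # orthogonal initialization for the inner weights U
              input_shape=(30, 1)))                  # window length of 30 samples is an assumption
# second stacked recurrent layer
model.add(GRU(64, activation='tanh', recurrent_activation='hard_sigmoid',
              kernel_initializer='glorot_uniform',
              recurrent_initializer='orthogonal'))
# affine layer with softmax to resample the output as a probability distribution over {up, down}
model.add(Dense(2, activation='softmax'))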
Results for the predicted trend for the next day are shown in Figures 5 to 10. The results are then summarized in a table.

Figure 5: RNN loss within 100 epochs
Looking at the error plots we can easily see that regular RNNs tend to overfit. In fact, the cross-entropy value has its minimum around the 70th epoch and starts to grow again after this. It follows that it is better to use an early stopping technique to know when to optimally stop the training scheme (a minimal sketch is given below). In the case of LSTM and GRU, the loss tends to decrease, so these networks can be trained longer and on more data; in the case of GRU we can see that convergence is smoother than with LSTMs. The time consumption for training 100 epochs is reported in Table 13. We also tried to increase efficiency by applying the dropout technique, see [7], to the U and W weights of LSTM and GRU. Nevertheless, such
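For completeness, a minimal Keras sketch of the early-stopping idea and of dropout on the W and U weights is given below; the patience value and dropout rates are assumptions, and model, X_train, y_train, X_test, y_test refer to objects built as in the previous sketches.

from keras.callbacks import EarlyStopping
from keras.layers import GRU

# stop training once the validation cross-entropy stops improving,
# keeping the model close to its best-performing epoch
early_stop = EarlyStopping(monitor='val_loss', patience=10)
model.fit(X_train, y_train, epochs=100, batch_size=128,
          validation_data=(X_test, y_test), callbacks=[early_stop])

# dropout applied to the input weights W and the recurrent weights U, cf. [7]
gru_with_dropout = GRU(64, dropout=0.2, recurrent_dropout=0.2)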
