Keras Deep Learning and Financial Returns
I am experimenting with TensorFlow via the Keras library, and before diving into predicting uncertainty I thought it might be a good idea to predict something certain. Therefore, I tried to predict weekly returns using daily price-level data. My input shape looks like this: (1000, 5, 2), i.e. 1000 matrices of the form:
Stock A Stock B
110 100
95 101
90 100
89 99
100 110
For Stock A the price at day t=0 is 110, 95 at t-1 and 100 at t-5. Thus, the weekly return for Stock A would be 110/100 - 1 = 10%, and -10% for Stock B. Because I focus on only predicting Stock A's return for now, my y for this input matrix would just be the scalar 0.1. Furthermore, I want to make it a classification problem and thus build a one-hot encoded vector via to_categorical, with class 1 if y is above 5%, class 2 if it is below -5% and class 0 if it is in between. Hence my classification output for the aforementioned matrix would be:
0 1 0
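(For illustration only, not part of the original post: the labelling described above could be built roughly like this, with weekly_returns standing in for the real return series of Stock A.)

import numpy as np
from keras.utils import to_categorical

weekly_returns = np.array([0.10, -0.07, 0.02])      # illustrative weekly returns for Stock A
labels = np.zeros(len(weekly_returns), dtype=int)    # class 0: between -5% and +5%
labels[weekly_returns > 0.05] = 1                    # class 1: above +5%
labels[weekly_returns < -0.05] = 2                   # class 2: below -5%
y = to_categorical(labels, num_classes=3)            # +10% -> [0, 1, 0]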
To simplify: I want my model to learn to calculate returns, i.e. divide the first value in the input matrix by the last value of the input matrix for Stock A and ignore the input for Stock B. This would give the y. It is just a practice task before I get to more difficult problems, and the model should achieve a loss of zero because there is no uncertainty. What model do you propose for that? I tried the following and it does not converge at all. Training and validation weights are calculated via compute_sample_weight('balanced', ).
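One detail worth noting (my own reading, not stated in the post): the data is described as (1000, 5, 2), but the model below expects a channels-first input of shape (batch_size, 1, 5, 2), so presumably a channel axis is added first, e.g.:

import numpy as np

X = np.random.rand(1000, 5, 2)      # placeholder for the real (samples, days, stocks) price data
X = X[:, np.newaxis, :, :]          # add a channel axis -> shape (1000, 1, 5, 2)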
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.layers import Input, LocallyConnected2D, LeakyReLU, Dense, Flatten
from keras.models import Model
from keras import optimizers

# Callbacks: early stopping, checkpointing and learning-rate reduction on plateau
Earlystop = EarlyStopping(monitor='val_loss', patience=150, mode='min', verbose=1, min_delta=0.0002, restore_best_weights=True)
checkpoint = ModelCheckpoint('nn', monitor='val_loss', verbose=1, save_best_only=True, mode='min', save_weights_only=False)
Plateau = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=30, verbose=1)
optimizer = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, amsgrad=True)

# Channels-first input: (batch, 1 channel, 5 days, 2 stocks)
input_ = Input(batch_shape=(batch_size, 1, 5, 2))
model = LocallyConnected2D(16, kernel_size=(5, 1), padding='valid', data_format="channels_first")(input_)
model = LeakyReLU(alpha=0.01)(model)
model = Dense(128)(model)
model = LeakyReLU(alpha=0.01)(model)
model = Flatten()(model)
x1 = Dense(3, activation='softmax', name='0')(model)

final_model = Model(inputs=input_, outputs=[x1])
final_model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
history = final_model.fit(X_train, y_train, epochs=1000, batch_size=batch_size, verbose=2, shuffle=False,
                          validation_data=(X_valid, y_valid, valid_weight), sample_weight=train_weight,
                          callbacks=[Earlystop, checkpoint, Plateau])
I thought convolution might be good for this, and because every return is calculated individually I decided to go for a LocallyConnected layer. Do I need to add more layers for such a simple task?
EDIT: I transformed my input matrix to returns and the model converges successfully. So the input must be correct, but the model fails to find the division function. Are there any layers that would be suited to do that?
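One possible direction (an assumption of mine, not something from the thread): a plain dense stack struggles with division, but log-prices turn the ratio into a difference, log(p_t / p_{t-5}) = log p_t - log p_{t-5}, which a single linear layer can represent; alternatively, the ratio can be computed explicitly inside the graph with a Lambda layer. A minimal sketch under those assumptions, reusing the (batch_size, 1, 5, 2) layout and assuming row 0 is the most recent day and row 4 the oldest, as in the example matrix:

from keras.layers import Input, Lambda, Dense
from keras.models import Model

inp = Input(batch_shape=(batch_size, 1, 5, 2))
# explicit return: most recent price (row 0) divided by oldest price (row 4), per stock
ratio = Lambda(lambda x: x[:, 0, 0, :] / x[:, 0, 4, :] - 1.0)(inp)   # shape (batch, 2)
out = Dense(3, activation='softmax')(ratio)
toy_model = Model(inputs=inp, outputs=out)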
python keras neural-network finance quantitative-finance
edited Mar 28 at 11:38
freddy888
asked Mar 28 at 9:20
What are the results from your test set?
– Nathan McCoy
Mar 28 at 9:30
It is not converging. Basically it stays at this for all epochs: Epoch 5/1000 - 5s - loss: 12.3980 - acc: 0.5916 - val_loss: 11.8097 - val_acc: 0.5625
– freddy888
Mar 28 at 9:39
Not sure here, but some suggestions: remove the Plateau callback and rerun to see if it is causing issues. Also, try generating data similar to your inputs to rule out an input issue (quality, quantity). You have lots of hyperparameters, so try modifying them. It is hard to know what the source of your issues is.
– Nathan McCoy
Mar 28 at 11:42
I just transformed the input to match the output by taking the percent change instead of price levels. The model works very nicely now. Hence, the general data should be correct. Do you think LocallyConnected2D makes sense as the first layer?
– freddy888
Mar 28 at 11:44
It will model filters independently, which assumes the weights are independent. Depending on your input context you could also try a Conv2D layer. I would just try both tbh.
– Nathan McCoy
Mar 28 at 12:47
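For reference, the swap suggested in the last comment is a one-line change (a sketch reusing the input_ tensor from the question; with a (5, 1) kernel on the 5x2 input there are only two spatial positions, one per stock column, so the practical difference is whether the two stocks share filter weights):

from keras.layers import Conv2D, LocallyConnected2D

# unshared weights: a separate (5, 1) filter bank at each spatial position
x_local = LocallyConnected2D(16, kernel_size=(5, 1), padding='valid', data_format="channels_first")(input_)

# shared weights: one (5, 1) filter bank slid across the input
x_conv = Conv2D(16, kernel_size=(5, 1), padding='valid', data_format="channels_first")(input_)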
0 Answers