How to normalize a multiple input neural network?
I have a question about how to normalize, and especially how to denormalize, a neural network's data when there are multiple inputs and only one output.
Do I need to normalize the input variables independently of each other, and then just use the scale of the variable I also want as output to rescale my data?
For example: I have the input variables a and b.
a has a scale of 100-1000
b has a scale of 1-10
After normalization both variables are on a scale of 0-1.
My output now needs to be the prediction for tomorrow's a (a at t+1) and therefore again has a scale of 100-1000.
Do I therefore simply denormalize by inverting a's normalization, or do I need to consider something else?
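Yes: a min-max scaler is invertible, so mapping predictions back to the original units only requires the scaler that was fitted on a. A minimal sketch with made-up values for a (synthetic numbers, purely for illustration):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic values for a on its 100-1000 scale
a = np.array([[100.0], [400.0], [1000.0]])

scaler_a = MinMaxScaler(feature_range=(0, 1))
normalized = scaler_a.fit_transform(a)   # 100 maps near 0.0, 1000 near 1.0

# A model prediction lives on the normalized scale; inverting it
# with the *same* scaler restores the original units.
restored = scaler_a.inverse_transform(normalized)
assert np.allclose(restored, a)
```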
For normalizing both variables my code looks as follows:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# df is an existing DataFrame with columns "a" and "b"
values1 = df["a"].values.reshape((-1, 1))
values2 = df["b"].values.reshape((-1, 1))

# One scaler per variable, each fitted on that variable's own range
scaler1 = MinMaxScaler(feature_range=(0, 1)).fit(values1)
scaler2 = MinMaxScaler(feature_range=(0, 1)).fit(values2)

df["Normalized_a"] = scaler1.transform(values1)
df["Normalized_b"] = scaler2.transform(values2)

### Combine the two normalized variables into one NumPy array
normalizeddata = df[["Normalized_a", "Normalized_b"]].values
Then I split the data:
### Split the data
X_train, y_train = [], []
for i in range(3, len(normalizeddata) - 3):
    y_train.append(normalizeddata[i, 0])              # target: normalized a at time i
    X_train.append(normalizeddata[i + 1:i + 4][::-1]) # window of the 3 following rows, reversed
X_train = np.array(X_train).reshape(-1, 3, 2)
y_train = np.array(y_train)

X_test, y_test = [], []
for i in range(0, 3):
    y_test.append(normalizeddata[i, 0])
    X_test.append(normalizeddata[i + 1:i + 4][::-1])
X_test = np.array(X_test).reshape(-1, 3, 2)
y_test = np.array(y_test)
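The windowing logic above can be sanity-checked on stand-in data: with 10 rows, the training loop should produce 4 windows of shape (3, 2). The values here are random, used only to verify shapes (a sketch, not part of the real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
normalizeddata = rng.random((10, 2))   # stand-in for the real normalized data

X_train, y_train = [], []
for i in range(3, len(normalizeddata) - 3):
    y_train.append(normalizeddata[i, 0])
    X_train.append(normalizeddata[i + 1:i + 4][::-1])

X_train = np.array(X_train).reshape(-1, 3, 2)
y_train = np.array(y_train)

assert X_train.shape == (4, 3, 2)   # 4 windows, 3 timesteps, 2 features
assert y_train.shape == (4,)
```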
The model itself looks as follows, taking both variables into account (see the input shape of the NumPy array):
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(100, activation="relu", input_shape=(3, 2), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(100, activation="relu", return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(1))  # an LSTM(1) here would fail: after return_sequences=False the input is no longer 3D
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, batch_size=2, epochs=10)
And last but not least, I denormalized the output using scaler1:
### Predicting y_test data
y_pred = model.predict(X_test)   # shape (3, 1), on the normalized scale
df_pred = df[:3].copy()          # .copy() avoids a pandas SettingWithCopyWarning
df_pred["a_predicted"] = scaler1.inverse_transform(y_pred.reshape(-1, 1))
Thanks a lot!
keras neural-network lstm normalization
You can use two different scalers: one to normalize the input features and another to normalize the target output. Then later use the target feature's scaler to inverse-scale the predictions.
– Sreeram TP
Mar 28 at 11:51
@SreeramTP Thanks a lot! I provided a code example where I used two scalers. Is it right this way?
– J.Weiser
Mar 28 at 12:19
So, you've got 2 features, and you have to forecast one feature using the lagged values of both features. Am I correct?
– Sreeram TP
Mar 28 at 12:26
@SreeramTP Yes, exactly
– J.Weiser
Mar 28 at 12:28
It will be better to use two scalers, say scaler a and scaler b. Then scale feature a with scaler a and b with scaler b. Then prepare the dataset using lagged features. If feature b is the one you are forecasting, make the prediction and inverse-scale it with scaler b.
– Sreeram TP
Mar 28 at 12:30
edited Mar 28 at 12:27 by Sreeram TP
asked Mar 28 at 11:37 by J.Weiser
2 Answers
It will be better to use two scalers, say scaler a and scaler b.
Scale feature a with scaler a and feature b with scaler b, then prepare the dataset using lagged features. If feature b is the one you are forecasting, make the prediction and inverse-scale it with scaler b.
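Sketched in code (with made-up numbers and a stand-in for the model's prediction), the two-scaler workflow looks like this. Note that the same scaler serves both the input feature and the target when the target is tomorrow's value of that feature:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data: feature a (roughly 100-1000) and feature b (roughly 1-10)
a = np.array([[120.0], [550.0], [980.0], [300.0]])
b = np.array([[1.5], [9.0], [4.2], [7.7]])

scaler_a = MinMaxScaler().fit(a)   # used for feature a AND for the target,
scaler_b = MinMaxScaler().fit(b)   # since the target is tomorrow's a

# Normalized feature matrix, one column per feature
X = np.hstack([scaler_a.transform(a), scaler_b.transform(b)])

# Stand-in for model.predict(): some value on a's normalized 0-1 scale
y_pred_norm = np.array([[0.5]])
y_pred = scaler_a.inverse_transform(y_pred_norm)   # back to a's original range
```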
That depends on the activation function in the output layer and on the target output you use for training. As you seem to want the output to be of the same kind as one of the inputs, it seems natural to me to normalize the target output the same way you normalize a and, when you use the network for recall, to apply the inverse of a's normalization.
However, consider editing your question to include some data and sample code. See How to create a Minimal, Complete, and Verifiable example.
Thanks a lot! Now I have provided some of the code in my example. So the way I did it, using two scalers and inverting the target's scaler the same way I normalized "a" in the beginning, seems correct?
– J.Weiser
Mar 28 at 12:23
Looks good to me. BTW, if you are satisfied with the answer, consider accepting it.
– Igor F.
Mar 28 at 12:38
answered Mar 28 at 12:55 by Sreeram TP
answered Mar 28 at 11:48 by Igor F.