Understanding metrics computation in Keras
I have tried to implement a true positive metric in Keras:
from keras import backend as K

def TP(y_true, y_pred):
    estimated = K.argmax(y_pred, axis=1)
    truth = K.argmax(y_true, axis=1)
    TP = K.sum(truth * estimated)
    return TP
based on my last layer's output shape: (batch, 2).
The function has been tested against a numpy argmax equivalent and works well.
I use a categorical cross-entropy loss, and each epoch Keras reports the metric value. But how can this value be a decimal number? What am I doing wrong? Thanks!
Edit: here is sample code for the Keras model:
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation

def TP(y_true, y_pred):
    estimated = K.argmax(y_pred, axis=1)
    truth = K.argmax(y_true, axis=1)
    TP = K.sum(truth * estimated)
    return TP

epochs = 10
batch_size = 2

model = Sequential([
    Dense(32, input_shape=(4,)),
    Activation('relu'),
    Dense(2),
    Activation('softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy', TP])
model.summary()

train = np.array([[17,0,1,0],[17,0,1,0],[17,0,1,0],[17,0,1,0],[17,0,1,0],
                  [2,1,0,1],[0,1,0,1],[0,1,0,1],[0,1,0,1],[0,1,0,1]])
labels = np.array([[1,0],[1,0],[1,0],[1,0],[1,0],
                   [0,1],[0,1],[0,1],[0,1],[0,1]])

model.fit(train, labels, epochs=epochs, batch_size=batch_size, verbose=2)
And here is a test showing that the TP function works as expected:
import numpy as np

def npTP(y_true, y_pred):
    estimated = np.argmax(y_pred, axis=1)
    truth = np.argmax(y_true, axis=1)
    TP = np.sum(truth * estimated)
    return TP

y_true = np.array([[1,0],[1,0],[1,0],[1,0],[1,0],
                   [0,1],[0,1],[0,1],[0,1],[0,1]])
y_pred = np.array([[0,1],[0,1],[0,1],[0,1],[0,1],
                   [0,1],[0,1],[0,1],[0,1],[0,1]])

print("np check : ")
print(npTP(y_true, y_pred))
Running this code gives the following output:
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 32) 160
_________________________________________________________________
activation_1 (Activation) (None, 32) 0
_________________________________________________________________
dense_2 (Dense) (None, 2) 66
_________________________________________________________________
activation_2 (Activation) (None, 2) 0
=================================================================
Total params: 226
Trainable params: 226
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
- 0s - loss: 0.3934 - acc: 0.6000 - TP: 0.2000
Epoch 2/10 ^^^^^^^^^^ here are the decimal values
- 0s - loss: 0.3736 - acc: 0.6000 - TP: 0.2000
Epoch 3/10 ^^^^^^^^^^
- 0s - loss: 0.3562 - acc: 0.6000 - TP: 0.2000
Epoch 4/10 ^^^^^^^^^^
- 0s - loss: 0.3416 - acc: 0.7000 - TP: 0.4000
Epoch 5/10 ^^^^^^^^^^
- 0s - loss: 0.3240 - acc: 1.0000 - TP: 1.0000
Epoch 6/10
- 0s - loss: 0.3118 - acc: 1.0000 - TP: 1.0000
Epoch 7/10
- 0s - loss: 0.2960 - acc: 1.0000 - TP: 1.0000
Epoch 8/10
- 0s - loss: 0.2806 - acc: 1.0000 - TP: 1.0000
Epoch 9/10
- 0s - loss: 0.2656 - acc: 1.0000 - TP: 1.0000
Epoch 10/10
- 0s - loss: 0.2535 - acc: 1.0000 - TP: 1.0000
np check :
5
Thanks!
keras
Please notice that posting questions is not a fire-and-forget thing, and the best moment to post is not before going away for lunch/coffee/whatever. The first 20-30 mins are of great importance if you want to get your question answered, and you are expected to be available to answer to comments & clarification requests; from How to ask: "After you post, leave the question open in your browser for a bit, and see if anyone comments. If you missed an obvious piece of information, be ready to respond by editing your question to include it".
– desertnaut
Mar 26 at 10:51
So, you have indeed 5 TP's (the last 5 elements of your y_pred & y_true); what exactly is the issue here and what is this "decimal" you refer to?
– desertnaut
Mar 26 at 10:59
The 5 TP's are when I use the numpy function. With the Keras metric included in the fit training, the first 4 epochs give 0.2 and 0.4 as the number of true positives. I don't get why.
– etiennedm
Mar 26 at 11:02
This is a running average between batches & epochs, so it can take decimal values indeed: stackoverflow.com/questions/48831242/…
– desertnaut
Mar 26 at 11:08
Thank you for pointing that out, that is exactly what I was looking for.
– etiennedm
Mar 26 at 11:29
edited Mar 26 at 11:06
asked Mar 21 at 18:06 – etiennedm
1 Answer
As desertnaut pointed out, the answer is explained in this thread.
Keras computes a running average of the metric between batches and epochs.
Here, with batch_size=2 and 10 samples, each epoch runs 5 batches (10/2 = 5).
To understand the metric reported for epoch 1: the total number of TPs over those 5 batches must have been 1, so the metric gives 1/5 = 0.2. Epoch 4 had 2 TPs over its 5 batches, giving 2/5 = 0.4.
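This averaging can be sketched outside Keras with plain NumPy. The per-batch TP counts below are hypothetical, chosen to match the epoch-1 log above (one true positive across the 5 batches):

```python
import numpy as np

# Hypothetical per-batch TP counts for epoch 1 (batch_size=2, 5 batches):
# only one of the 5 batches yielded a true positive.
per_batch_tp = np.array([1, 0, 0, 0, 0], dtype=float)

# The progress bar shows the running mean over the batches seen so far;
# at the end of the epoch this is simply the mean over all 5 batches.
epoch_metric = per_batch_tp.mean()
print(epoch_metric)  # 0.2, matching "TP: 0.2000" in the epoch-1 log
```

A sum-type metric like TP counts therefore gets divided by the number of batches, which is why it appears as a fraction rather than an integer.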
answered Mar 26 at 11:54 – etiennedm