
InceptionV3+LSTM activity recognition, accuracy grows for 10 epochs and then drops down































I'm trying to build a model for activity recognition, using InceptionV3 as the backbone and an LSTM for the detection, starting from pre-trained weights.



train_generator = datagen.flow_from_directory(
    'dataset/train',
    target_size=(1, 224, 224),
    batch_size=batch_size,
    class_mode='categorical',  # yield one-hot encoded labels together with each batch
    shuffle=True,
    classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])

validation_generator = datagen.flow_from_directory(
    'dataset/validate',
    target_size=(1, 224, 224),
    batch_size=batch_size,
    class_mode='categorical',  # yield one-hot encoded labels together with each batch
    shuffle=True,
    classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])
return train_generator, validation_generator
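
For flow_from_directory to discover the five classes, the data presumably lives in one sub-folder per class; this layout is inferred from the classes list above, it is not shown in the original:

dataset/
    train/
        PlayingPiano/
        HorseRiding/
        Skiing/
        Basketball/
        BaseballPitch/
    validate/
        PlayingPiano/
        HorseRiding/
        Skiing/
        Basketball/
        BaseballPitch/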



I have five classes, so I split my data into train and validate folders as above.
This is my CNN+LSTM architecture:



image = Input(shape=(None, 224, 224, 3), name='image_input')
cnn = applications.inception_v3.InceptionV3(
    weights='imagenet',
    include_top=False,
    pooling='avg')
cnn.trainable = False  # freeze the pre-trained backbone
encoded_frame = TimeDistributed(Lambda(lambda x: cnn(x)))(image)  # one 2048-d feature per frame
encoded_vid = LSTM(256)(encoded_frame)
layer1 = Dense(512, activation='relu')(encoded_vid)
dropout1 = Dropout(0.5)(layer1)
layer2 = Dense(256, activation='relu')(dropout1)
dropout2 = Dropout(0.5)(layer2)
layer3 = Dense(64, activation='relu')(dropout2)
dropout3 = Dropout(0.5)(layer3)
outputs = Dense(5, activation='softmax')(dropout3)
model = Model(inputs=[image], outputs=outputs)
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

model.fit_generator(train_generator,
                    validation_data=validation_generator,
                    steps_per_epoch=300,
                    epochs=nb_epoch,
                    callbacks=callbacks,
                    shuffle=True,
                    verbose=1)
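
One aside on the fit call: steps_per_epoch is hard-coded to 300 here. It is usually derived from the generator so that each epoch covers the data exactly once; a sketch, assuming the standard Keras DirectoryIterator attributes:

import math

# DirectoryIterator exposes .samples and .batch_size
steps_per_epoch = math.ceil(train_generator.samples / train_generator.batch_size)
validation_steps = math.ceil(validation_generator.samples / validation_generator.batch_size)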



_________________________________________________________________
Layer (type)                  Output Shape                Param #
=================================================================
image_input (InputLayer)      (None, None, 224, 224, 3)   0
_________________________________________________________________
time_distributed_1 (TimeDist  (None, None, 2048)          0
_________________________________________________________________
lstm_1 (LSTM)                 (None, 256)                 2360320
_________________________________________________________________
dense_1 (Dense)               (None, 512)                 131584
_________________________________________________________________
dropout_1 (Dropout)           (None, 512)                 0
_________________________________________________________________
dense_2 (Dense)               (None, 256)                 131328
_________________________________________________________________
dropout_2 (Dropout)           (None, 256)                 0
_________________________________________________________________
dense_3 (Dense)               (None, 64)                  16448
_________________________________________________________________
dropout_3 (Dropout)           (None, 64)                  0
_________________________________________________________________
dense_4 (Dense)               (None, 5)                   325
_________________________________________________________________
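
As a sanity check, the LSTM row's parameter count is consistent with 256 units reading 2048-dimensional frame features, with four gates each holding input, recurrent, and bias weights:

4 × 256 × (2048 + 256 + 1) = 2,360,320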


The model compiles without problems.
The problem starts during training: it reaches val_acc = 0.50, then drops back to val_acc = 0.30, and the loss just freezes around 0.80 and mostly doesn't move.



Here are the logs from training. As you can see, the model improves for some time, then slowly drops and later just freezes.
Any idea what the reason could be?



Epoch 00002: val_loss improved from 1.56471 to 1.55652, saving model to ./weights_inception/Inception_V3.02-0.28.h5
Epoch 3/500
300/300 [==============================] - 66s 219ms/step - loss: 1.5436 - acc: 0.3281 - val_loss: 1.5476 - val_acc: 0.2981

Epoch 00003: val_loss improved from 1.55652 to 1.54757, saving model to ./weights_inception/Inception_V3.03-0.30.h5
Epoch 4/500
300/300 [==============================] - 66s 220ms/step - loss: 1.5109 - acc: 0.3593 - val_loss: 1.5284 - val_acc: 0.3588

Epoch 00004: val_loss improved from 1.54757 to 1.52841, saving model to ./weights_inception/Inception_V3.04-0.36.h5
Epoch 5/500
300/300 [==============================] - 66s 221ms/step - loss: 1.4167 - acc: 0.4167 - val_loss: 1.4945 - val_acc: 0.3553

Epoch 00005: val_loss improved from 1.52841 to 1.49446, saving model to ./weights_inception/Inception_V3.05-0.36.h5
Epoch 6/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2941 - acc: 0.4683 - val_loss: 1.4735 - val_acc: 0.4443

Epoch 00006: val_loss improved from 1.49446 to 1.47345, saving model to ./weights_inception/Inception_V3.06-0.44.h5
Epoch 7/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2096 - acc: 0.5116 - val_loss: 1.3738 - val_acc: 0.5186

Epoch 00007: val_loss improved from 1.47345 to 1.37381, saving model to ./weights_inception/Inception_V3.07-0.52.h5
Epoch 8/500
300/300 [==============================] - 66s 221ms/step - loss: 1.1477 - acc: 0.5487 - val_loss: 1.2337 - val_acc: 0.5788

Epoch 00008: val_loss improved from 1.37381 to 1.23367, saving model to ./weights_inception/Inception_V3.08-0.58.h5
Epoch 9/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0809 - acc: 0.5831 - val_loss: 1.2247 - val_acc: 0.5658

Epoch 00009: val_loss improved from 1.23367 to 1.22473, saving model to ./weights_inception/Inception_V3.09-0.57.h5
Epoch 10/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0362 - acc: 0.6089 - val_loss: 1.1704 - val_acc: 0.5774

Epoch 00010: val_loss improved from 1.22473 to 1.17035, saving model to ./weights_inception/Inception_V3.10-0.58.h5
Epoch 11/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9811 - acc: 0.6317 - val_loss: 1.1612 - val_acc: 0.5616

Epoch 00011: val_loss improved from 1.17035 to 1.16121, saving model to ./weights_inception/Inception_V3.11-0.56.h5
Epoch 12/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9444 - acc: 0.6471 - val_loss: 1.1533 - val_acc: 0.5613

Epoch 00012: val_loss improved from 1.16121 to 1.15330, saving model to ./weights_inception/Inception_V3.12-0.56.h5
Epoch 13/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9072 - acc: 0.6650 - val_loss: 1.1843 - val_acc: 0.5361

Epoch 00013: val_loss did not improve from 1.15330
Epoch 14/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8747 - acc: 0.6744 - val_loss: 1.2135 - val_acc: 0.5258

Epoch 00014: val_loss did not improve from 1.15330
Epoch 15/500
300/300 [==============================] - 67s 222ms/step - loss: 0.8666 - acc: 0.6829 - val_loss: 1.1585 - val_acc: 0.5443

Epoch 00015: val_loss did not improve from 1.15330
Epoch 16/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8386 - acc: 0.6926 - val_loss: 1.1503 - val_acc: 0.5482

Epoch 00016: val_loss improved from 1.15330 to 1.15026, saving model to ./weights_inception/Inception_V3.16-0.55.h5
Epoch 17/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8199 - acc: 0.7023 - val_loss: 1.2162 - val_acc: 0.5288

Epoch 00017: val_loss did not improve from 1.15026
Epoch 18/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8018 - acc: 0.7150 - val_loss: 1.1995 - val_acc: 0.5179

Epoch 00018: val_loss did not improve from 1.15026
Epoch 19/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7923 - acc: 0.7186 - val_loss: 1.2218 - val_acc: 0.5137

Epoch 00019: val_loss did not improve from 1.15026
Epoch 20/500
300/300 [==============================] - 67s 222ms/step - loss: 0.7748 - acc: 0.7268 - val_loss: 1.2880 - val_acc: 0.4574

Epoch 00020: val_loss did not improve from 1.15026
Epoch 21/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7604 - acc: 0.7330 - val_loss: 1.2658 - val_acc: 0.4861









keras lstm yolo

edited Apr 2 at 12:27 by Machavity
asked Mar 28 at 7:35 by Dmitry
2 Answers


































The model is starting to overfit. Ideally, as you increase the number of epochs, the training loss will decrease (depending on the learning rate); if it is not able to decrease, your model may have a high bias for the data, and you can use a bigger model (more parameters or a deeper one).

You can also reduce the learning rate; if the loss still freezes, then the model may have a low bias.
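
A minimal sketch of how that learning-rate reduction could be wired up with Keras callbacks (not part of the original answer; it assumes the model and generators from the question):

from keras.callbacks import ReduceLROnPlateau, EarlyStopping

# Halve the learning rate whenever val_loss plateaus for 3 epochs,
# and stop once it has not improved for 10 epochs.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=3, min_lr=1e-6, verbose=1)
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

model.fit_generator(train_generator,
                    validation_data=validation_generator,
                    steps_per_epoch=300,
                    epochs=500,
                    callbacks=[reduce_lr, early_stop],
                    verbose=1)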






answered Mar 28 at 10:28 by newlearnershiv
Thank you for the help. Yes, the problem was overfitting, so I made the dropout on the LSTM more aggressive, and it helped. But val_loss and val_acc are still very low.



video = Input(shape=(None, 224, 224, 3))
cnn_base = VGG16(input_shape=(224, 224, 3),
                 weights="imagenet",
                 include_top=False)
cnn_out = GlobalAveragePooling2D()(cnn_base.output)
cnn = Model(inputs=cnn_base.input, outputs=cnn_out)
cnn.trainable = False  # keep the VGG16 backbone frozen
encoded_frames = TimeDistributed(cnn)(video)
encoded_sequence = LSTM(32, dropout=0.5, recurrent_dropout=0.5,
                        W_regularizer=l2(0.01))(encoded_frames)  # W_regularizer is the legacy name (kernel_regularizer in Keras 2)
hidden_layer = Dense(units=64, activation="relu")(encoded_sequence)
dropout = Dropout(0.2)(hidden_layer)
outputs = Dense(5, activation="softmax")(dropout)
model = Model([video], outputs)


Here are the logs:



Epoch 00033: val_loss improved from 1.62041 to 1.57951, saving model to ./weights_inception/Inception_V3.33-0.76.h5
Epoch 34/500
100/100 [==============================] - 54s 537ms/step - loss: 0.6301 - acc: 0.9764 - val_loss: 1.6190 - val_acc: 0.7627

Epoch 00034: val_loss did not improve from 1.57951
Epoch 35/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5907 - acc: 0.9840 - val_loss: 1.5927 - val_acc: 0.7608

Epoch 00035: val_loss did not improve from 1.57951
Epoch 36/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5783 - acc: 0.9812 - val_loss: 1.3477 - val_acc: 0.7769

Epoch 00036: val_loss improved from 1.57951 to 1.34772, saving model to ./weights_inception/Inception_V3.36-0.78.h5
Epoch 37/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5618 - acc: 0.9802 - val_loss: 1.6545 - val_acc: 0.7384

Epoch 00037: val_loss did not improve from 1.34772
Epoch 38/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5382 - acc: 0.9818 - val_loss: 1.8298 - val_acc: 0.7421

Epoch 00038: val_loss did not improve from 1.34772
Epoch 39/500
100/100 [==============================] - 54s 536ms/step - loss: 0.5080 - acc: 0.9844 - val_loss: 1.7948 - val_acc: 0.7290

Epoch 00039: val_loss did not improve from 1.34772
Epoch 40/500
100/100 [==============================] - 54s 537ms/step - loss: 0.4800 - acc: 0.9892 - val_loss: 1.8036 - val_acc: 0.7522
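
With training accuracy near 0.98 against validation accuracy around 0.75, the gap in these logs still points to overfitting; augmenting only the training frames is one plausible lever. A sketch, assuming the datagen in the question is a Keras ImageDataGenerator:

from keras.preprocessing.image import ImageDataGenerator

# Augment the training stream only; keep validation deterministic.
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1. / 255)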





answered Mar 29 at 0:29 by Dmitry