

Predicting rare events and their strength with LSTM autoencoder



I’m currently building an LSTM to predict rare events. I’ve seen this paper, which suggests a two-stage approach: first train an LSTM autoencoder to extract features, then feed its embeddings to a second LSTM that makes the actual prediction. According to the authors, the autoencoder learns features (this is usually true) that are then useful for the prediction layers.



In my case, I need to predict whether an extreme event will occur (this is the most important part) and then how strong it will be. Following their advice, I built the model, but instead of adding one LSTM from the embeddings to the predictions, I added two: one for the binary prediction (event or no event), ending in a sigmoid layer, and a second one for predicting the strength. So I have three losses: the reconstruction loss (MSE), the strength prediction loss (MSE), and the binary loss (binary cross-entropy).
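
Roughly, the model looks like this (a minimal Keras sketch; the window length, layer sizes, and names are illustrative, not my exact values):

```python
from tensorflow.keras import layers, models

timesteps, n_features, latent_dim = 30, 1, 16  # illustrative sizes

# Encoder: compress an input window into a fixed-size embedding.
inputs = layers.Input(shape=(timesteps, n_features))
embedding = layers.LSTM(latent_dim, name="embedding")(inputs)

# Decoder head: reconstruct the input window (reconstruction loss, MSE).
x = layers.RepeatVector(timesteps)(embedding)
x = layers.LSTM(latent_dim, return_sequences=True)(x)
reconstruction = layers.TimeDistributed(
    layers.Dense(n_features), name="reconstruction")(x)

# Binary head: extreme event or not (binary cross-entropy, sigmoid output).
b = layers.LSTM(latent_dim)(layers.RepeatVector(timesteps)(embedding))
event = layers.Dense(1, activation="sigmoid", name="event")(b)

# Strength head: how strong the event will be (MSE).
s = layers.LSTM(latent_dim)(layers.RepeatVector(timesteps)(embedding))
strength = layers.Dense(1, name="strength")(s)

model = models.Model(inputs, [reconstruction, event, strength])
model.compile(
    optimizer="adam",
    loss={"reconstruction": "mse",
          "event": "binary_crossentropy",
          "strength": "mse"},
)
```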



The problem is that I’m not sure it is learning anything. The binary loss stays at 0.5, and even the reconstruction loss is not very good. And of course the data is heavily imbalanced: the time series is mostly zeros, with occasional values from 1 to 10, so MSE is probably not a good loss here.
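
One thing I’m experimenting with against the imbalance is per-sample weights on the binary head, along these lines (the weight formula is just one common heuristic; `X`, `y_event`, and `y_strength` stand in for my training arrays):

```python
import numpy as np

# Upweight the rare positive class on the binary head.
# pos_weight = (#negatives / #positives) is a common starting point.
pos = max((y_event == 1).sum(), 1)
neg = (y_event == 0).sum()
sample_weight = np.where(y_event.ravel() == 1, neg / pos, 1.0)

model.fit(
    X,
    {"reconstruction": X, "event": y_event, "strength": y_strength},
    sample_weight={"event": sample_weight},  # weights only the binary loss
    epochs=50,
    batch_size=32,
)
```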



What do you think about this approach?



  1. Is this a good architecture for predicting rare events? If not, which one would be better?

  2. Should I add a CNN or fully connected layers between the embeddings and the two prediction LSTMs, to extract 1D patterns from the embedding, or should the embeddings feed the prediction LSTMs directly?

  3. Should there be just a single prediction LSTM, trained only with the MSE loss?

  4. Would it be a good idea to multiply the two predictions, so that on days predicted to have no event both outputs agree (i.e. the strength is forced towards zero)? See the snippet after this list.
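
What I mean by multiplying the two predictions is a gating layer like this (hypothetical; it would replace `strength` as the regression output in the sketch above):

```python
from tensorflow.keras import layers

# Gate the strength by the event probability: when the sigmoid output is
# near 0 (no event predicted), the predicted strength is pushed towards 0.
gated_strength = layers.Multiply(name="gated_strength")([event, strength])
```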

Thanks,










      deep-learning time-series lstm feature-extraction autoencoder






      asked Mar 22 at 10:11









      Xbel

      968





















