How to deal with the non-differentiability of argmax


I have a neural network whose final layer is a flattened layer that outputs a curve. The curve may have one or more local maxima, but I am only interested in the global maximum. The ground truth I have is the integer index (the argument) of that global maximum. I tried to write a custom loss in Keras like this:



def custom_loss(y_output, y_idx_pred):
    # y_output: the predicted curve; y_idx_pred: the ground-truth peak index.
    # K.argmax has no defined gradient, which is what triggers the error below.
    return K.mean(K.sum((K.argmax(y_output) - y_idx_pred) ** 2))


I also cast the integers to float32, but I get an error saying the gradient is None. I searched for an answer and found that argmax does not have a defined gradient. The suggestions I found were to either create a custom Argmax layer or to use softmax instead.



How do I even use softmax here? Softmax only gives me an approximation of a one-hot vector like [0 0 1 ...], not the integer index itself. How am I supposed to work with that? I also tried treating the problem as classification by turning the ground truth into [0 1 0 ...] and using cross-entropy, but the network could not learn anything; it did better when I just added a Dense(1) layer and trained the model. The problem seems to be that classification treats all indices as equally wrong, which is not the case here: I need the Euclidean (L2) distance between indices (one idea along those lines is sketched below).
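One direction I am considering (only a sketch, not something I have verified; the helper names, the sigma value, and the use of cross-entropy on soft targets are my own assumptions): instead of a hard one-hot target, build a distance-aware soft target, e.g. a Gaussian bump centred on the true index, so that predictions near the true peak are penalised less than distant ones.

import numpy as np
from tensorflow.keras import backend as K

def make_soft_targets(true_indices, n, sigma=2.0):
    # Turn each integer index into a Gaussian bump over the n curve positions,
    # so indices close to the true peak get almost as much target mass as the peak itself.
    positions = np.arange(n)[None, :]                                # shape (1, n)
    centres = np.asarray(true_indices, dtype=np.float64)[:, None]    # shape (batch, 1)
    bumps = np.exp(-0.5 * ((positions - centres) / sigma) ** 2)
    return bumps / bumps.sum(axis=1, keepdims=True)                  # normalise to a distribution

def soft_target_loss(y_true, y_pred):
    # y_true: (batch, n) Gaussian soft targets; y_pred: (batch, n) raw curve values (logits)
    return K.categorical_crossentropy(y_true, y_pred, from_logits=True)

The targets would be precomputed with make_soft_targets(y_idx, n) and the model compiled with loss=soft_target_loss, so cross-entropy itself stays unchanged and only the targets encode the distance information.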



Where can I find proper instructions for creating a custom Argmax layer? Would it even help in my situation? Is it possible to implement a custom loss function here that is differentiable (something like the soft-argmax sketch below)? What should I do?
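For reference, this is the kind of differentiable loss I have in mind (a minimal sketch assuming the curve length is static; BETA and all names here are hypothetical choices of mine, not anything that already exists in my model): softmax turns the curve into weights, the weighted sum of the position vector gives a differentiable estimate of the peak index (a "soft argmax"), and that estimate is compared to the true index with a squared error.

from tensorflow.keras import backend as K

BETA = 10.0  # temperature: larger values push the softmax closer to a hard argmax

def soft_argmax_loss(y_true, y_pred):
    # y_pred: (batch, n) raw curve values from the flattened final layer
    # y_true: (batch, 1) integer index of the true global maximum
    n = K.int_shape(y_pred)[-1]                          # curve length (assumed static)
    positions = K.cast(K.arange(0, n), K.floatx())       # [0, 1, ..., n-1]
    weights = K.softmax(BETA * y_pred)                   # differentiable, nearly one-hot
    expected_idx = K.sum(weights * positions, axis=-1)   # soft argmax: expected index
    true_idx = K.cast(K.flatten(y_true), K.floatx())
    return K.mean(K.square(expected_idx - true_idx))     # squared index distance

Compiling with model.compile(optimizer='adam', loss=soft_argmax_loss) would then train the curve so that its soft peak lines up with the target index; whether the gradient signal from this is strong enough for my curves is exactly what I am unsure about.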














































python tensorflow keras






edited Mar 26 at 17:44 by Arkistarvh Kltzuonstev

asked Mar 26 at 17:38 by happypanda







Comments:

  • Take a look stackoverflow.com/questions/46926809/… – Sharky, Mar 26 at 17:46

  • That was really helpful. Thank you very much. – happypanda, Mar 26 at 19:54












0 Answers









