How to calculate the derivative of a NN output w.r.t. one input variable with Keras?


I'm using a neural network to learn an equation (the Black-Scholes formula for option pricing). I've managed to get the NN to approximate the equation's output well, but I also want it to learn the equation's derivative. To examine the result, I need to calculate the derivative of the NN output w.r.t. the first variable of my input.



My model is a 5-layer fully connected NN. I've already got the gradients for each layer, but I'm not sure whether I'm calculating the derivative the right way.



In the code below I calculate the derivative with respect to the first variable by taking the dot product of the evaluated gradient matrices across all layers, ignoring the biases. Is this the right way to do it? After calculating the derivatives for all the test cases, I found them quite far from their theoretical values, so either my code is wrong or I'm simply failing to learn the equation correctly.
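
(For reference: by the chain rule, the exact derivative of a ReLU network is dy/dx = W1 D1 W2 D2 W3 D3 W4 D4 W5, where each Di is the diagonal 0/1 ReLU mask at the evaluation point, so a plain product of weight or weight-gradient matrices leaves out the Di masking terms. A minimal sketch of getting the exact input gradient directly, assuming the same TF1-style Keras backend and the `nn_model` / `trainingExample` names defined in the code below:)

# Minimal sketch (assumes `nn_model` and `trainingExample` as defined below):
# differentiate the output w.r.t. the *input* tensor itself, so the ReLU
# activation pattern at the evaluation point is included automatically.
from keras import backend as K

grad_wrt_input = K.gradients(nn_model.output, nn_model.input)[0]  # shape (?, 2)
dy_dx = K.get_session().run(grad_wrt_input,
                            feed_dict={nn_model.input: trainingExample})
d_first_var = dy_dx[0, 0]  # derivative w.r.t. the first input variable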



P.S. This is what the gradient structure looks like:



[<tf.Tensor 'gradients/dense_1/MatMul_grad/MatMul_1:0' shape=(2, 128) dtype=float32>,
<tf.Tensor 'gradients/dense_1/BiasAdd_grad/BiasAddGrad:0' shape=(128,) dtype=float32>,
<tf.Tensor 'gradients/dense_2/MatMul_grad/MatMul_1:0' shape=(128, 64) dtype=float32>,
<tf.Tensor 'gradients/dense_2/BiasAdd_grad/BiasAddGrad:0' shape=(64,) dtype=float32>,
<tf.Tensor 'gradients/dense_3/MatMul_grad/MatMul_1:0' shape=(64, 32) dtype=float32>,
<tf.Tensor 'gradients/dense_3/BiasAdd_grad/BiasAddGrad:0' shape=(32,) dtype=float32>,
<tf.Tensor 'gradients/dense_4/MatMul_grad/MatMul_1:0' shape=(32, 16) dtype=float32>,
<tf.Tensor 'gradients/dense_4/BiasAdd_grad/BiasAddGrad:0' shape=(16,) dtype=float32>,
<tf.Tensor 'gradients/dense_5/MatMul_grad/MatMul_1:0' shape=(16, 1) dtype=float32>,
<tf.Tensor 'gradients/dense_5/BiasAdd_grad/BiasAddGrad:0' shape=(1,) dtype=float32>]



[screenshot of the results][1]



# My NN structure:

import tensorflow as tf
from keras import models, layers
from keras import backend as K

def build_nn_model(feats):
    model = models.Sequential()
    model.add(layers.Dense(128, activation='relu',
                           input_shape=(len(feats),)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(32, activation='relu'))
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mse'])
    return model

# to get the gradients of the output w.r.t. all trainable weights:

outputs = nn_model.output
trainable_variables = nn_model.trainable_weights
gradients = K.gradients(outputs, trainable_variables)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

# to calculate the derivative:

trainingExample = X_val[['var_1', 'var_2']].iloc[0].values.reshape(1, 2)
evaluated_gradients = sess.run(gradients,
                               feed_dict={nn_model.input: trainingExample})

# chain the kernel gradients together, taking only the row for the
# first input variable and skipping the bias gradients (odd indices)
layer_1 = evaluated_gradients[0][0].reshape(1, 128)  # (2, 128) -> first row
layer_2 = evaluated_gradients[2]                     # (128, 64)
layer_3 = evaluated_gradients[4]                     # (64, 32)
layer_4 = evaluated_gradients[6]                     # (32, 16)
layer_5 = evaluated_gradients[8]                     # (16, 1)

derivative = layer_1.dot(layer_2).dot(layer_3).dot(layer_4).dot(layer_5)


[1]: https://i.stack.imgur.com/Kuidb.png
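
(A quick way to sanity-check any analytic derivative is a central finite difference on the model itself; a minimal sketch, assuming `nn_model` and `trainingExample` from above:)

import numpy as np

# Central finite difference w.r.t. the first input variable,
# as a sanity check against the analytic `derivative` above.
eps = 1e-4
x_plus = trainingExample.copy()
x_minus = trainingExample.copy()
x_plus[0, 0] += eps
x_minus[0, 0] -= eps

fd = (nn_model.predict(x_plus) - nn_model.predict(x_minus)) / (2 * eps)
print(fd)  # compare with `derivative`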









Tags: python, keras






asked Mar 27 at 13:55 by autoencoder