
Error in using K.function together with K.gradients


I'm writing a test model in Keras where I want to do some mathematics that depends on the numerical values of a layer's output and of its derivatives.

I'm using the TensorFlow backend.
I use K.function to get the values of the outputs of the Lambda layer and of the derivative layers. However, I get a strange error if the function in the Lambda layer is a power function, e.g. x**2. If I change x**2 to sin(x), it works fine.



import numpy as np
from keras.models import Model
from keras.layers import Input, Layer, Lambda
from keras import backend as K

x = Input(shape=(1,))

# the Lambda layer
c = Lambda(lambda x: x**2)(x)  # this causes the error
#c = Lambda(lambda x: K.sin(x))(x)  # but this works fine


class dc_layer(Layer):

    def __init__(self, *args, **kwargs):
        self.is_placeholder = True
        super(dc_layer, self).__init__(*args, **kwargs)

    def call(self, inputs):
        x = inputs[0]
        c0 = inputs[1]
        c1 = K.gradients(c0, x)  # returns a list of gradient tensors
        return c1

# the derivatives of the Lambda layer
c1 = dc_layer()([x, c])
c2 = dc_layer()([x, c1])


Then I use K.function to define a function that returns the layer outputs:



# define a function to get the derivatives
get_layer_outputs = K.function([x],[c2])

x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)


I get the following error message in a Jupyter notebook:



InvalidArgumentError: data[0].shape = [1] does not start with indices[0].shape = [2]


which traces back to



---> 36 val = get_layer_outputs([x_data])[0]


However, if I instead evaluate the c1 layer:



# define a function to get the derivatives
get_layer_outputs = K.function([x],[c1])

x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)


it works fine.



I guess something is going wrong in how I use K.function. Any solutions/suggestions would be appreciated.



======================================================



Additional question:



Even with a very simple piece of code I get an error when using K.function:



from keras.layers import Dense

x = Input(shape=(1,))
h = Dense(10, activation='sigmoid')(x)
c = Dense(1)(h)

get_layer_outputs = K.function([x], [c])

x_data = np.linspace(0,1,6)
val = get_layer_outputs([x_data])[0]
print(val)


I get:



InvalidArgumentError: In[0] is not a matrix
[[Node: dense_24/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_19_0_0, dense_24/kernel/read)]]
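
For reference, comparing the static shape Keras assigns to the Input tensor with the shape of x_data shows the mismatch:

print(K.int_shape(x))   # -> (None, 1): a batch dimension plus one feature
print(x_data.shape)     # -> (6,): a flat 1-D vector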


Now I'm really confused about how to use K.function properly. Please help if you have any idea. Thanks!










tensorflow keras jupyter-notebook python-3.5






asked Mar 23 at 6:00 by Lihui Chai, edited Mar 23 at 7:05






















1 Answer
For me this works. Your x_data array was 1-D, while Keras expects a 2-D batch of shape (samples, features):



import numpy as np

from keras.models import Model
from keras.layers import Input, Layer, Lambda, Dense
from keras import backend as K

x = Input(shape=(1,))

# the Lambda layer
c = Lambda(lambda x: x**2)(x)  # the power function from the question
#c = Lambda(lambda x: K.sin(x))(x)  # this works fine too


class dc_layer(Layer):

    def __init__(self, *args, **kwargs):
        self.is_placeholder = True
        super(dc_layer, self).__init__(*args, **kwargs)

    def call(self, inputs):
        x = inputs[0]
        c0 = inputs[1]
        c1 = K.gradients(c0, x)
        return c1

# the derivatives of the Lambda layer
c1 = dc_layer()([x, c])  # in Keras 2.0.2 the result needs to be unpacked; Keras 2.2.4 seems fine
c2 = dc_layer()([x, c1])

# define a function to get the derivatives
get_layer_outputs = K.function([x], [c2])

x_data = np.linspace(0, 1, 6)[:, None]  # reshape from (6,) to (6, 1): one sample per row
val = get_layer_outputs([x_data])[0]
print(val)


output:

[[2.]
 [2.]
 [2.]
 [2.]
 [2.]
 [2.]]
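
The same reshape also fixes the simpler Dense-only example from the question, since the MatMul inside Dense expects a 2-D input of shape (samples, features). A minimal sketch of that fix (assuming Keras 2.x with the TensorFlow backend, as in the question):

import numpy as np
from keras.layers import Input, Dense
from keras import backend as K

x = Input(shape=(1,))
h = Dense(10, activation='sigmoid')(x)
c = Dense(1)(h)

get_layer_outputs = K.function([x], [c])

# reshape from (6,) to (6, 1): one sample per row, one feature per column
x_data = np.linspace(0, 1, 6).reshape(-1, 1)
val = get_layer_outputs([x_data])[0]
print(val.shape)  # (6, 1)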





answered Mar 23 at 9:33 by Kai Aeberli, edited Mar 23 at 10:29