
I'm looking for the reverse of the functions BatchNormalization, LeakyReLU, Lambda, and Reshape, to do a visualization of my CNN


I'm trying to implement a DeconvNet to visualize my CNN, in order to see which features the different layers of my network are looking for. For that I need the reverse of the functions used in the network (like ReLU and BatchNormalization).



You can check this paper to understand what I'm trying to do: https://arxiv.org/abs/1311.2901



This is the deconvolution code that I found on the internet:



import numpy as np
import tensorflow as tf
import keras
import keras.backend as K


class DConvolution2D(object):

    def __init__(self, layer):
        self.layer = layer

        weights = layer.get_weights()
        W = weights[0]  # kernel, shape (rows, cols, depth, filters)
        b = weights[1]  # bias, shape (filters,)

        # Forward ("up") function: a copy of the original convolution.
        filters = W.shape[3]
        up_row = W.shape[0]
        up_col = W.shape[1]
        input_img = keras.layers.Input(shape=layer.input_shape[1:])
        output = keras.layers.Conv2D(filters, (up_row, up_col),
                                     kernel_initializer=tf.constant_initializer(W),
                                     bias_initializer=tf.constant_initializer(b),
                                     padding='same')(input_img)
        self.up_func = K.function([input_img, K.learning_phase()], [output])

        # Deconv filter: exchange the number of filters and the depth of each filter...
        W = np.transpose(W, (0, 1, 3, 2))
        # ...and reverse the rows and columns.
        W = W[::-1, ::-1, :, :]
        down_filters = W.shape[3]
        down_row = W.shape[0]
        down_col = W.shape[1]
        b = np.zeros(down_filters)  # no bias on the backward pass
        input_d = keras.layers.Input(shape=layer.output_shape[1:])
        output = keras.layers.Conv2D(down_filters, (down_row, down_col),
                                     kernel_initializer=tf.constant_initializer(W),
                                     bias_initializer=tf.constant_initializer(b),
                                     padding='same')(input_d)
        self.down_func = K.function([input_d, K.learning_phase()], [output])

    def up(self, data, learning_phase=0):
        # Forward pass; K.function returns a list, so take its first element
        self.up_data = self.up_func([data, learning_phase])[0]
        return self.up_data

    def down(self, data, learning_phase=0):
        # Backward (deconvolution) pass
        self.down_data = self.down_func([data, learning_phase])[0]
        return self.down_data
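For reference, this is roughly how I'm using it (a minimal usage sketch; the model file name and the layer index are just placeholders, not my actual setup):

# Hypothetical usage sketch: the file name and layer index are placeholders.
model = keras.models.load_model('yolo.h5')   # placeholder model file
dconv = DConvolution2D(model.layers[1])      # assuming layers[1] is a Conv2D
features = dconv.up(batch_of_images)         # forward pass through the copied conv
projection = dconv.down(features)            # map the features back toward pixel space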


So I'm looking to do the same for the other functions (BatchNormalization, LeakyReLU, Lambda, Reshape) used in the YOLO architecture.
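Following the same up/down pattern as DConvolution2D above, here is how far I've gotten for the remaining layers (a minimal sketch, assuming the DeconvNet conventions from the Zeiler & Fergus paper: the rectification is re-applied on the backward pass, BatchNormalization is inverted with the layer's learned parameters and moving statistics, and Reshape is inverted by reshaping back to the layer's input shape):

class DActivation(object):
    # DeconvNet "reverse" of ReLU / LeakyReLU: per Zeiler & Fergus, the same
    # rectification is applied on both the up (forward) and down (backward) pass.
    def __init__(self, alpha=0.0):
        self.alpha = alpha  # alpha = 0.0 gives plain ReLU, > 0 gives LeakyReLU

    def up(self, data, learning_phase=0):
        return np.where(data > 0, data, self.alpha * data)

    def down(self, data, learning_phase=0):
        return np.where(data > 0, data, self.alpha * data)


class DBatchNormalization(object):
    # Inverts y = gamma * (x - mean) / sqrt(var + eps) + beta, assuming the
    # layer was built with scale=True and center=True so that get_weights()
    # returns [gamma, beta, moving_mean, moving_variance].
    def __init__(self, layer):
        self.gamma, self.beta, self.mean, self.var = layer.get_weights()
        self.eps = layer.epsilon

    def up(self, data, learning_phase=0):
        return self.gamma * (data - self.mean) / np.sqrt(self.var + self.eps) + self.beta

    def down(self, data, learning_phase=0):
        return (data - self.beta) * np.sqrt(self.var + self.eps) / self.gamma + self.mean


class DReshape(object):
    # Reshape is trivially invertible: "down" just reshapes back to the
    # layer's input shape.
    def __init__(self, layer):
        self.input_shape = layer.input_shape[1:]
        self.output_shape = layer.output_shape[1:]

    def up(self, data, learning_phase=0):
        return data.reshape((-1,) + self.output_shape)

    def down(self, data, learning_phase=0):
        return data.reshape((-1,) + self.input_shape)

A Lambda layer has no generic inverse, though; the inverse of whatever function it wraps would have to be hand-coded case by case, which is part of what I'm asking about.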



Thanks for helping, and sorry for my English if I wasn't clear.










      conv-neural-network yolo






edited Mar 28 at 2:34 by Pedro Rodrigues
asked Mar 27 at 18:00 by Abderrahmane
























