How do I access a specific value in a tensor?

I have an input layer wtm = Input((4, 4, 1)) and I want to access each value of this layer during training. To access wtm[1, 1] (the value at row 1, column 1) I use a = Kr.layers.Lambda(lambda x: x[1, 1])(wtm), but the output shape is TensorShape([Dimension(4), Dimension(1)]), not (1, 1), and I think it returns a whole column. Is that right? If I only need one value at a specific row and column, what should I do and how can I change this? I know this may be easy, but I am a beginner and do not know how to handle it.
Edit:
Suppose

wtm =
1 0 0 1
1 1 1 0
1 0 1 0
1 0 1 1

We know wtm(0, 0) = 1. Now I want to produce a new tensor of shape (28, 28, 1) filled with the value 1, and I want to do this for every value in wtm.



from keras.layers import Input, Conv2D, BatchNormalization, GaussianNoise
from keras.models import Model
import keras as Kr

wtm = Input((4, 4, 1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN = BatchNormalization()(conv3)
encoded = Conv2D(1, (5, 5), activation='relu', padding='same', name='encoded_I')(BN)

# The problematic part: x[1, 1] indexes the batch axis first, so this does not
# select the value at row 1, column 1 of each sample.
rep = Kr.layers.Lambda(lambda x: Kr.backend.repeat(x, 28))
a = rep(Kr.layers.Lambda(lambda x: x[1, 1])(wtm))

add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded, a])

# ----------------------- decoder -----------------------
deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl4d')(deconv3)
BNd = BatchNormalization()(deconv4)
#DrO2 = Dropout(0.25, name='DrO2')(BNd)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd)
#model = Model(inputs=image, outputs=decoded)

model = Model(inputs=[image, wtm], outputs=decoded)

decoded_noise = GaussianNoise(0.5)(decoded)

# ---------------------- w extraction ----------------------
convw1 = Conv2D(64, (5, 5), activation='relu', name='conl1w')(decoded_noise)  # 24
convw2 = Conv2D(64, (5, 5), activation='relu', name='convl2w')(convw1)  # 20
#Avw1 = AveragePooling2D(pool_size=(2, 2))(convw2)
convw3 = Conv2D(64, (5, 5), activation='relu', name='conl3w')(convw2)  # 16
convw4 = Conv2D(64, (5, 5), activation='relu', name='conl4w')(convw3)  # 12
#Avw2 = AveragePooling2D(pool_size=(2, 2))(convw4)
convw5 = Conv2D(64, (5, 5), activation='relu', name='conl5w')(convw4)  # 8
convw6 = Conv2D(64, (5, 5), activation='relu', name='conl6w')(convw5)  # 4
convw7 = Conv2D(64, (5, 5), activation='relu', padding='same', name='conl7w', dilation_rate=(2, 2))(convw6)  # 4
convw8 = Conv2D(64, (5, 5), activation='relu', padding='same', name='conl8w', dilation_rate=(2, 2))(convw7)  # 4
convw9 = Conv2D(64, (5, 5), activation='relu', padding='same', name='conl9w', dilation_rate=(2, 2))(convw8)  # 4
convw10 = Conv2D(64, (5, 5), activation='relu', padding='same', name='conl10w', dilation_rate=(2, 2))(convw9)  # 4
BNed = BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W', dilation_rate=(2, 2))(BNed)

w_extraction = Model(inputs=[image, wtm], outputs=[decoded, pred_w])

w_extraction.summary()
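The shape reported in the question can be reproduced outside Keras. A minimal NumPy sketch (array names here are illustrative; NumPy stands in for the backend tensor) shows why x[1, 1] yields a (4, 1) slice while x[:, 1, 1, :] picks one value per sample — a Keras input declared as (4, 4, 1) actually carries a leading batch axis:

```python
import numpy as np

# A tensor declared as Input((4, 4, 1)) really has shape (batch, 4, 4, 1).
batch = np.arange(2 * 4 * 4).reshape(2, 4, 4, 1)  # two fake 4x4 single-channel maps

wrong = batch[1, 1]        # sample 1, row 1 -> shape (4, 1), as observed in the question
right = batch[:, 1, 1, :]  # row 1, column 1 of every sample -> shape (2, 1)

print(wrong.shape)  # (4, 1)
print(right.shape)  # (2, 1)
```

The same slicing, written inside a Lambda layer as `lambda x: x[:, 1, 1, :]`, would keep the batch axis and select the single intended entry per sample.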









  • Can you please show us a Minimal, Complete, and Verifiable example of what you have tried so far?

    – IonicSolutions
    Mar 21 at 17:56












  • I put the complete code above, but my problem is the part I described before: I do not know whether I can access each value in a tensor.

    – david
    Mar 21 at 18:11











  • Why is the output shape of Kr.layers.Lambda(lambda x: x[1,1])(wtm) equal to (4, 1)? I think it should be (1, 1). What is the problem?

    – david
    Mar 21 at 21:33











  • Can you create a smaller example of what you are trying to accomplish, please? It's quite difficult to see from your long code snippet what exactly you're asking. Also, please include all necessary import statements so that we can run the code on our own systems.

    – IonicSolutions
    Mar 23 at 15:26











  • I added an example that shows what I need.

    – david
    Mar 25 at 17:40















Tags: python, keras, tensor






asked Mar 21 at 17:23 by david (246)
edited Mar 25 at 17:39 by david












1 Answer
I believe you're not taking into account that the first dimension is the batch dimension.



If you run



from keras.layers import Input, Lambda

def inspector(x):
    print(x.shape)
    return x

inp = Input((4, 4, 1))
lmb = Lambda(inspector)(inp)


you will see that it prints



(?, 4, 4, 1)
(?, 4, 4, 1)


indicating that x is four-dimensional.






  • I know the first dimension is the batch, the second and third are the image size, and the last is the number of channels (my images are grayscale, so it is 1). But I need to access the values of the image for each sample in the batch.

    – david
    Mar 26 at 14:05












  • I suggest you try to work your way forward from here, carefully checking the dimensions. If I'm not mistaken, x[1] will return the first entry of the batch, hence x[1, 1] will (as you discovered) return a (4,1)-dimensional tensor.

    – IonicSolutions
    Mar 26 at 17:15











  • Could you answer this question? Suppose wtm is the same as the example above. If we consider wtm as a 4x4 image, then wtm(0, 0) is 1. Now if wtm = Input((4, 4, 1)), what is the value of wtm[:, 0, 0, :]? Is it 1 or not?

    – david
    Mar 26 at 19:30
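Building on the batch-axis point in the answer above, here is a hedged NumPy sketch (shapes are illustrative; NumPy stands in for backend ops, which in Keras would live inside a Lambda layer) of slicing one wtm entry per sample and tiling it up to a (28, 28, 1) constant map, as the question's edit asks:

```python
import numpy as np

# wtm carries a leading batch axis: (batch, 4, 4, 1).
wtm = np.random.randint(0, 2, size=(3, 4, 4, 1))  # batch of 3 binary 4x4 maps

v = wtm[:, 0:1, 0:1, :]             # (3, 1, 1, 1): wtm(0, 0) of each sample, all axes kept
tiled = np.tile(v, (1, 28, 28, 1))  # (3, 28, 28, 1): one constant map per sample

print(tiled.shape)  # (3, 28, 28, 1)
```

Slicing with ranges (0:1 rather than 0) keeps the axes, so the tile step only has to repeat along the spatial dimensions; the analogous backend call would be a tiling/broadcast of the (batch, 1, 1, 1) slice.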











answered Mar 26 at 10:45 by IonicSolutions











