Getting 5 random crops - TypeError: pic should be PIL Image or ndarray. Got
I apply transformations to images as shown below (this works with RandomCrop). The code is from this dataloader script: https://github.com/jeffreyhuang1/two-stream-action-recognition/blob/master/dataloader/motion_dataloader.py
def train(self):
    training_set = motion_dataset(dic=self.dic_video_train, in_channel=self.in_channel, root_dir=self.data_path,
        mode='train',
        transform=transforms.Compose([
            transforms.Resize([256, 256]),
            transforms.FiveCrop([224, 224]),
            #transforms.RandomCrop([224, 224]),
            transforms.ToTensor(),
            #transforms.Normalize([0.5], [0.5])
        ]))
    print '==> Training data :', len(training_set), ' videos', training_set[1][0].size()
    train_loader = DataLoader(
        dataset=training_set,
        batch_size=self.BATCH_SIZE,
        shuffle=True,
        num_workers=self.num_workers,
        pin_memory=True
    )
    return train_loader
But when I try to use FiveCrop, I get this error:
Traceback (most recent call last):
  File "motion_cnn.py", line 267, in <module>
    main()
  File "motion_cnn.py", line 51, in main
    train_loader, test_loader, test_video = data_loader.run()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 120, in run
    train_loader = self.train()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 156, in train
    print '==> Training data :', len(training_set), ' videos', training_set[1][0].size()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 77, in __getitem__
    data = self.stackopf()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 51, in stackopf
    H = self.transform(imgH)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/transforms.py", line 91, in __call__
    return F.to_tensor(pic)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/functional.py", line 50, in to_tensor
    raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
TypeError: pic should be PIL Image or ndarray. Got <type 'tuple'>
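For reference, the FiveCrop example in the torchvision documentation wraps the crops with a Lambda, so the transform I switched to looks roughly like this (the exact stacking lambda below is my reconstruction, so treat it as a sketch rather than the script's original code):

    transform = transforms.Compose([
        transforms.Resize([256, 256]),
        transforms.FiveCrop([224, 224]),  # returns a tuple of 5 PIL images
        # stack the 5 crops into one tensor of shape [5, C, 224, 224]
        transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(crop) for crop in crops])),
    ])

With this, each call to self.transform returns a 4-D tensor with a leading crop dimension instead of a single 3-D image tensor.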
Since FiveCrop returns a tuple of images rather than a single PIL image, I need to handle the tuple myself, so I use a Lambda roughly like the sketch above. But then I get this error at line 55 in stackopf:
    flow[2*(j),:,:] = H
RuntimeError: expand(torch.FloatTensor[5, 1, 224, 224], size=[224, 224]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (4)
When I instead allocate flow = torch.FloatTensor(5, 2*self.in_channel, self.img_rows, self.img_cols), I get:

  File "motion_dataloader.py", line 55, in stackopf
    flow[:,2*(j),:,:] = H
RuntimeError: expand(torch.FloatTensor[5, 1, 224, 224], size=[5, 224, 224]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
When I multiply the training batch size that is returned by 5, I also get the same error.
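If I read the RuntimeErrors correctly, after the stacking Lambda each transformed frame H has shape [5, 1, 224, 224] (5 crops of a single-channel flow image), so for the shapes to line up the assignment in stackopf would have to keep the leading crop dimension and drop the channel dimension, something like this (my own untested guess, not code from the original repo; V here stands for the vertical flow frame transformed the same way as H):

    # assumes H and V each have shape [5, 1, 224, 224] after FiveCrop + Lambda
    flow = torch.FloatTensor(5, 2*self.in_channel, self.img_rows, self.img_cols)
    for j in range(self.in_channel):
        # ... load imgH / imgV and apply self.transform ...
        flow[:, 2*j, :, :] = H.squeeze(1)      # [5, 224, 224] matches the slice
        flow[:, 2*j + 1, :, :] = V.squeeze(1)

Even then, each dataset item would come out as [5, 2*in_channel, 224, 224], so the batches from the DataLoader would carry an extra crop dimension that the model (or a reshape before the forward pass) would still have to handle.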
conv-neural-network pytorch vision
asked Mar 24 at 12:31 by dusa, edited Mar 24 at 13:25
0 Answers