Getting 5 random crops - TypeError: pic should be PIL Image or ndarray. Got


I apply the transformations below to my images (this works fine with RandomCrop). The code is from this dataloader script: https://github.com/jeffreyhuang1/two-stream-action-recognition/blob/master/dataloader/motion_dataloader.py



def train(self):
    training_set = motion_dataset(dic=self.dic_video_train, in_channel=self.in_channel, root_dir=self.data_path,
                                  mode='train',
                                  transform=transforms.Compose([
                                      transforms.Resize([256, 256]),
                                      transforms.FiveCrop([224, 224]),
                                      #transforms.RandomCrop([224, 224]),
                                      transforms.ToTensor(),
                                      #transforms.Normalize([0.5], [0.5])
                                  ]))
    print '==> Training data :', len(training_set), ' videos', training_set[1][0].size()

    train_loader = DataLoader(
        dataset=training_set,
        batch_size=self.BATCH_SIZE,
        shuffle=True,
        num_workers=self.num_workers,
        pin_memory=True
    )

    return train_loader


But when I try to get the five crops, I get this error:



Traceback (most recent call last):
  File "motion_cnn.py", line 267, in <module>
    main()
  File "motion_cnn.py", line 51, in main
    train_loader, test_loader, test_video = data_loader.run()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 120, in run
    train_loader = self.train()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 156, in train
    print '==> Training data :', len(training_set), ' videos', training_set[1][0].size()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 77, in __getitem__
    data = self.stackopf()
  File "/media/d/DATA_2/two-stream-action-recognition-master/dataloader/motion_dataloader.py", line 51, in stackopf
    H = self.transform(imgH)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/transforms.py", line 91, in __call__
    return F.to_tensor(pic)
  File "/media/d/DATA_2/two-stream-action-recognition-master/venv/local/lib/python2.7/site-packages/torchvision/transforms/functional.py", line 50, in to_tensor
    raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
TypeError: pic should be PIL Image or ndarray. Got <type 'tuple'>
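
The same error can be reproduced outside the dataloader; the snippet below is only an illustration with a placeholder image, not the repository's code:

    from PIL import Image
    from torchvision import transforms

    img = Image.new('L', (256, 256))          # placeholder grayscale frame
    crops = transforms.FiveCrop(224)(img)     # FiveCrop returns a tuple of 5 PIL images
    transforms.ToTensor()(crops)              # TypeError: pic should be PIL Image or ndarray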


Getting the five crops means I have to handle a tuple of images instead of a single PIL image, so I use a Lambda (see the sketch after the error below). But then I get this error at line 55 in stackopf, at flow[2*(j),:,:] = H:




RuntimeError: expand(torch.FloatTensor[5, 1, 224, 224], size=[224, 224]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (4)
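
The Lambda I mean follows the stacking pattern from the torchvision FiveCrop documentation; the composition below is a sketch rather than the repository's exact code:

    import torch
    from torchvision import transforms

    transform = transforms.Compose([
        transforms.Resize([256, 256]),
        transforms.FiveCrop([224, 224]),                # returns a tuple of 5 PIL images
        transforms.Lambda(lambda crops: torch.stack(    # stack the 5 crops into one tensor
            [transforms.ToTensor()(crop) for crop in crops])),
    ])

With this, each transformed flow frame that reaches stackopf is a 4-D tensor of shape [5, 1, 224, 224], whereas flow[2*(j),:,:] expects a 2-D [224, 224] slice, which is where the expand error above comes from.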




And when I instead try to allocate flow = torch.FloatTensor(5, 2*self.in_channel, self.img_rows, self.img_cols),



I get the following, again at motion_dataloader.py, line 55, in stackopf, at flow[:,2*(j),:,:] = H:




RuntimeError: expand(torch.FloatTensor[5, 1, 224, 224], size=[5, 224, 224]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)




When I instead multiply the training batch size by 5 (to account for the five crops being returned), I also get the same error.
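
For context, my understanding of how the extra crop dimension is supposed to be consumed comes from the FiveCrop example in the torchvision documentation; the snippet below is that pattern with placeholder names (model, data), not code from this repository:

    # In the training loop: fold the crop dimension into the batch dimension,
    # run the model on every crop, then average the predictions per sample.
    # `model` and `data` are placeholders; data has shape [batch, 5, C, H, W].
    bs, ncrops, c, h, w = data.size()
    output = model(data.view(-1, c, h, w))            # every crop becomes its own sample
    output_avg = output.view(bs, ncrops, -1).mean(1)  # average over the 5 crops

The part I'm stuck on is getting stackopf to build the optical-flow stack so that the crops survive in that [batch, 5, ...] layout in the first place.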










Tags: conv-neural-network pytorch vision






asked Mar 24 at 12:31 by dusa
edited Mar 24 at 13:25 by dusa





















