



ValueError: Error when checking input: expected embedding_13_input to have 2 dimensions, but got array with shape (1, 1, 0)































My code looks like this:



from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

lr = 1e-3
window_length = 1
emb_size = 100
look_back = 10

# num_classes is defined elsewhere in the script
expert_model = Sequential()
expert_model.add(Embedding(num_classes + 1, emb_size, input_length=look_back, mask_zero=True))
expert_model.add(LSTM(64, input_shape=(look_back, window_length)))
expert_model.add(Dense(num_classes, activation='softmax'))


All I want is to pass a list of 10 class ids to an embedding layer and then to an LSTM to predict the next class. Since the list may be shorter than 10, I set the mask_zero attribute to True and gave the embedding layer's vocabulary one extra value. Is this correct?
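For reference, a minimal NumPy-only sketch (an assumption about the intended preprocessing, with zero reserved as the padding id so it is compatible with mask_zero=True) of turning a variable-length id list into the 2-D (samples, look_back) array an Embedding layer expects:

```python
import numpy as np

look_back = 10

def left_pad(ids, look_back):
    """Left-pad a list of class ids with zeros to a fixed length.

    Zero is reserved as the padding id, which is why the embedding
    vocabulary needs num_classes + 1 entries when mask_zero=True.
    """
    ids = list(ids)[-look_back:]                 # keep the most recent ids
    return [0] * (look_back - len(ids)) + ids

# One sample with only 3 observed ids, padded up to look_back:
batch = np.array([left_pad([3, 7, 2], look_back)])
print(batch.shape)  # (1, 10): 2 dimensions, as the Embedding layer expects
```

Note the ids here are 1-based in this scheme, since 0 is taken by the mask.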



In addition, I'm not sure what window_length means. Does it mean the number of sequences passed to the embedding? When I try to run this I get this error:



ValueError: Error when checking input: expected embedding_13_input to have 2 dimensions, but got array with shape (1, 1, 0)


To preprocess the data I'm using a Processor object, as this model is for an OpenAI Gym environment called "RecoGym". The class is as follows:



class RecoProcessor(Processor):
    def process_observation(self, observation):
        if observation is None:
            return np.array([], dtype='float32')
        return np.array(observation, dtype='float32')

    def process_state_batch(self, batch):
        return np.array(batch).astype('float32')

    def process_reward(self, reward):
        return np.array(reward).astype('float32')

    def process_demo_data(self, demo_data):
        for step in demo_data:
            step[0] = self.process_observation(step[0])
            step[2] = self.process_reward(step[2])
        return demo_data


Please, I need some help. Even a pointer to a tutorial on this would be much appreciated.
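As a side note on where the (1, 1, 0) shape can come from: process_observation returns an empty array for a None observation, and an empty array has shape (0,). If the agent then wraps that single observation into a batch of one state (an assumption about how the batching works here, not something confirmed by the traceback), the result is a 3-D array with a zero-sized last axis:

```python
import numpy as np

# What process_observation returns when the observation is None:
empty = np.array([], dtype='float32')   # shape (0,)

# Wrapping that single observation into a batch of one state:
state_batch = np.array([[empty]])       # shape (1, 1, 0)
print(state_batch.shape)
```

That matches the shape in the error message, so checking what process_observation receives is a reasonable first step.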





























  • 1





    Please also post the code you have used for preprocessing your data. There is a shape mismatch happening at the preprocessing step.

    – user110327
    Mar 25 at 9:52











  • @user110327 There you go. It might look a little strange, but the OpenAI library requires it that way. You can find the environment easily by searching for RecoGym on Google if you need to.

    – Angelo
    Mar 25 at 9:58











  • Window size is the length of the sequence window. For example, if your sequence is "1,2,3,4" then a window size of 2 gives "1,2", "2,3", "3,4". I am sorry I am unable to help you with the actual issue because the input shape is still not clear to me.

    – user110327
    Mar 25 at 10:04
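The sliding-window idea described in the comment above can be sketched as (assuming a plain Python list of ids):

```python
def windows(seq, size):
    """Return all contiguous windows of the given size, sliding by one step."""
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

print(windows([1, 2, 3, 4], 2))  # [[1, 2], [2, 3], [3, 4]]
```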

















python keras lstm openai-gym






asked Mar 25 at 8:45 by Angelo (edited Mar 25 at 9:56)


