
AttributeError: The layer has never been called and thus has no defined input shape


I'm trying to build an autoencoder in TensorFlow 2.0 by creating three classes: Encoder, Decoder and AutoEncoder.
Since I don't want to set input shapes manually, I'm trying to infer the output shape of the decoder from the encoder's input_shape.



import os
import shutil

import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Layer


def mse(model, original):
    return tf.reduce_mean(tf.square(tf.subtract(model(original), original)))


def train_autoencoder(loss, model, opt, original):
    with tf.GradientTape() as tape:
        gradients = tape.gradient(
            loss(model, original), model.trainable_variables)
        gradient_variables = zip(gradients, model.trainable_variables)
        opt.apply_gradients(gradient_variables)


def log_results(model, X, max_outputs, epoch, prefix):
    loss_values = mse(model, X)

    sample_img = X[sample(range(X.shape[0]), max_outputs), :]
    original = tf.reshape(sample_img, (max_outputs, 28, 28, 1))
    encoded = tf.reshape(
        model.encode(sample_img), (sample_img.shape[0], 8, 8, 1))
    decoded = tf.reshape(
        model(tf.constant(sample_img)), (sample_img.shape[0], 28, 28, 1))
    tf.summary.scalar("{}_loss".format(prefix), loss_values, step=epoch + 1)
    tf.summary.image(
        "{}_original".format(prefix),
        original,
        max_outputs=max_outputs,
        step=epoch + 1)
    tf.summary.image(
        "{}_encoded".format(prefix),
        encoded,
        max_outputs=max_outputs,
        step=epoch + 1)
    tf.summary.image(
        "{}_decoded".format(prefix),
        decoded,
        max_outputs=max_outputs,
        step=epoch + 1)

    return loss_values


def preprocess_mnist(batch_size):
    (X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

    X_train = X_train / np.max(X_train)
    X_train = X_train.reshape(X_train.shape[0],
                              X_train.shape[1] * X_train.shape[2]).astype(
                                  np.float32)
    train_dataset = tf.data.Dataset.from_tensor_slices(X_train).batch(
        batch_size)

    y_train = y_train.astype(np.int32)
    train_labels = tf.data.Dataset.from_tensor_slices(y_train).batch(
        batch_size)

    X_test = X_test / np.max(X_test)
    X_test = X_test.reshape(
        X_test.shape[0], X_test.shape[1] * X_test.shape[2]).astype(np.float32)

    y_test = y_test.astype(np.int32)

    return X_train, X_test, train_dataset, y_train, y_test, train_labels


class Encoder(Layer):
    def __init__(self, units):
        super(Encoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.output_layer = Dense(units=self.units, activation=tf.nn.relu)

    @tf.function
    def call(self, X):
        return self.output_layer(X)


class Decoder(Layer):
    def __init__(self, encoder):
        super(Decoder, self).__init__()
        self.encoder = encoder

    def build(self, input_shape):
        self.output_layer = Dense(units=self.encoder.input_shape)

    @tf.function
    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.encoder.build(input_shape)
        self.decoder = Decoder(encoder=self.encoder)

    @tf.function
    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    @tf.function
    def encode(self, X):
        return self.encoder(X)

    @tf.function
    def decode(self, Z):
        return self.decode(Z)


def test_autoencoder(batch_size,
                     learning_rate,
                     epochs,
                     max_outputs=4,
                     seed=None):

    tf.random.set_seed(seed)

    X_train, X_test, train_dataset, _, _, _ = preprocess_mnist(
        batch_size=batch_size)

    autoencoder = AutoEncoder(units=64)
    opt = tf.optimizers.Adam(learning_rate=learning_rate)

    log_path = 'logs/autoencoder'
    if os.path.exists(log_path):
        shutil.rmtree(log_path)

    writer = tf.summary.create_file_writer(log_path)

    with writer.as_default():
        with tf.summary.record_if(True):
            for epoch in range(epochs):
                for step, batch in enumerate(train_dataset):
                    train_autoencoder(mse, autoencoder, opt, batch)

                # logs (train)
                train_loss = log_results(
                    model=autoencoder,
                    X=X_train,
                    max_outputs=max_outputs,
                    epoch=epoch,
                    prefix='train')

                # logs (test)
                test_loss = log_results(
                    model=autoencoder,
                    X=X_test,
                    max_outputs=max_outputs,
                    epoch=epoch,
                    prefix='test')

                writer.flush()

                template = 'Epoch {}, Train loss: {:.5f}, Test loss: {:.5f}'
                print(
                    template.format(epoch + 1, train_loss.numpy(),
                                    test_loss.numpy()))

    if not os.path.exists('saved_models'):
        os.makedirs('saved_models')
    np.savez_compressed('saved_models/encoder.npz',
                        *autoencoder.encoder.get_weights())


if __name__ == '__main__':
    test_autoencoder(batch_size=128, learning_rate=1e-3, epochs=20, seed=42)


Since the encoder's input shape is used in the build function of the decoder, I'd expect that when I train the autoencoder the encoder is built first and then the decoder, but that doesn't seem to be the case. I've also tried to build the encoder from within the decoder by calling self.encoder.build() at the start of the decoder's build function, but it made no difference. What am I doing wrong?



Error I am receiving:



AttributeError: The layer has never been called and thus has no defined input shape.
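For context, this message comes from reading a layer's input_shape attribute before the layer has ever been called on an input; calling build() alone is not enough. A minimal sketch reproducing it, assuming TF 2.x and independent of the autoencoder code above:

import tensorflow as tf
from tensorflow.keras.layers import Dense

layer = Dense(64)
layer.build((None, 784))   # creates the layer's weights
print(layer.built)         # True
print(layer.input_shape)   # raises the same AttributeError: input_shape is
                           # only defined once the layer has been called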









tensorflow tf.keras tensorflow2.0

asked Mar 24 at 8:16 by Ivan Lorusso; edited Apr 8 at 9:36 by Szymon Maszke


  • Can you post your complete code? – DecentGradient, Mar 24 at 23:06

  • I added the complete code. – Ivan Lorusso, Mar 24 at 23:29

  • It may need to be self.output_layer = Dense(units=input_shape[-1]) – DecentGradient, Mar 26 at 19:53

  • You're probably right, but it still wouldn't solve my issue. – Ivan Lorusso, Mar 26 at 20:43

  • Yeah :), hopefully it's a step in the right direction. – DecentGradient, Mar 26 at 21:45

1 Answer
You were almost there, just overcomplicating things a bit. You are receiving this error because the Decoder layer depends on the Encoder layer, which was never successfully built, so its input_shape attribute was never set: a layer's input_shape is only defined once the layer has actually been called on an input.



The solution is to pass the correct output shape down from the AutoEncoder object, like this:



class Decoder(Layer):
    def __init__(self, units):
        super(Decoder, self).__init__()
        self.units = units

    def build(self, _):
        self.output_layer = Dense(units=self.units)

    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.decoder = Decoder(units=input_shape[-1])


Notice I have removed the @tf.function decorator, as you are unlikely to get any efficiency boost from it (Keras already creates the static graph under the hood for you).
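If graph compilation is wanted anyway, a more typical place to apply it is the whole training step rather than each layer's call. A sketch of that variant, where train_step is a hypothetical helper name and not part of the original code:

# Compile the entire forward + backward pass into one graph.
@tf.function
def train_step(model, opt, batch):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(batch) - batch))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss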



Furthermore, as one can see, your Encoder's build does not actually depend on the input_shape information, so all of the layer creation can safely be moved to the constructor, like this:



class Encoder(Layer):
    def __init__(self, units):
        super(Encoder, self).__init__()
        self.output_layer = Dense(units=units, activation=tf.nn.relu)

    def call(self, X):
        return self.output_layer(X)


class Decoder(Layer):
    def __init__(self, units):
        super(Decoder, self).__init__()
        self.output_layer = Dense(units=units)

    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.decoder = Decoder(units=input_shape[-1])

    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    def encode(self, X):
        return self.encoder(X)

    def decode(self, Z):
        # delegate to the decoder layer (self.decode would recurse forever)
        return self.decoder(Z)


The above begs the question of whether separate Decoder and Encoder layers are really needed. IMO they should be left out, which leaves us with only this short and readable snippet:



class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Dense(units=self.units, activation=tf.nn.relu)
        self.decoder = Dense(units=input_shape[-1])

    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    def encode(self, X):
        return self.encoder(X)

    def decode(self, Z):
        return self.decoder(Z)
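A quick sanity check of this final version; a sketch assuming TF 2.x and the AutoEncoder class directly above:

import tensorflow as tf

model = AutoEncoder(units=64)
X = tf.random.normal((8, 784))  # dummy batch of flattened 28x28 images
reconstruction = model(X)       # first call triggers build(); decoder gets 784 units
print(reconstruction.shape)     # (8, 784): output dimension matches the input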


BTW, you have an error around sample in log_results (it is used but never imported), but that's a minor one you can no doubt handle on your own.






answered Apr 8 at 9:59 by Szymon Maszke


  • Thanks, that's exactly what I was looking for. I know I could just create the AutoEncoder directly (or use a sequential model), but considering it was my first attempt with TF 2.0, I wanted to experiment and complicate things a bit. – Ivan Lorusso, Apr 9 at 17:34

  • Sure, no pun intended, glad I could help. – Szymon Maszke, Apr 9 at 17:40