AttributeError: The layer has never been called and thus has no defined input shape
I'm trying to build an autoencoder in TensorFlow 2.0 by creating three classes: Encoder, Decoder and AutoEncoder.
Since I don't want to set input shapes manually, I'm trying to infer the output shape of the decoder from the encoder's input_shape.
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Layer


def mse(model, original):
    return tf.reduce_mean(tf.square(tf.subtract(model(original), original)))


def train_autoencoder(loss, model, opt, original):
    with tf.GradientTape() as tape:
        gradients = tape.gradient(
            loss(model, original), model.trainable_variables)
        gradient_variables = zip(gradients, model.trainable_variables)
        opt.apply_gradients(gradient_variables)


def log_results(model, X, max_outputs, epoch, prefix):
    loss_values = mse(model, X)
    sample_img = X[sample(range(X.shape[0]), max_outputs), :]
    original = tf.reshape(sample_img, (max_outputs, 28, 28, 1))
    encoded = tf.reshape(
        model.encode(sample_img), (sample_img.shape[0], 8, 8, 1))
    decoded = tf.reshape(
        model(tf.constant(sample_img)), (sample_img.shape[0], 28, 28, 1))
    tf.summary.scalar("{}_loss".format(prefix), loss_values, step=epoch + 1)
    tf.summary.image(
        "{}_original".format(prefix),
        original,
        max_outputs=max_outputs,
        step=epoch + 1)
    tf.summary.image(
        "{}_encoded".format(prefix),
        encoded,
        max_outputs=max_outputs,
        step=epoch + 1)
    tf.summary.image(
        "{}_decoded".format(prefix),
        decoded,
        max_outputs=max_outputs,
        step=epoch + 1)
    return loss_values


def preprocess_mnist(batch_size):
    (X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
    X_train = X_train / np.max(X_train)
    X_train = X_train.reshape(X_train.shape[0],
                              X_train.shape[1] * X_train.shape[2]).astype(
                                  np.float32)
    train_dataset = tf.data.Dataset.from_tensor_slices(X_train).batch(
        batch_size)
    y_train = y_train.astype(np.int32)
    train_labels = tf.data.Dataset.from_tensor_slices(y_train).batch(
        batch_size)
    X_test = X_test / np.max(X_test)
    X_test = X_test.reshape(
        X_test.shape[0], X_test.shape[1] * X_test.shape[2]).astype(np.float32)
    y_test = y_test.astype(np.int32)
    return X_train, X_test, train_dataset, y_train, y_test, train_labels


class Encoder(Layer):
    def __init__(self, units):
        super(Encoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.output_layer = Dense(units=self.units, activation=tf.nn.relu)

    @tf.function
    def call(self, X):
        return self.output_layer(X)


class Decoder(Layer):
    def __init__(self, encoder):
        super(Decoder, self).__init__()
        self.encoder = encoder

    def build(self, input_shape):
        self.output_layer = Dense(units=self.encoder.input_shape)

    @tf.function
    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.encoder.build(input_shape)
        self.decoder = Decoder(encoder=self.encoder)

    @tf.function
    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    @tf.function
    def encode(self, X):
        return self.encoder(X)

    @tf.function
    def decode(self, Z):
        return self.decode(Z)


def test_autoencoder(batch_size,
                     learning_rate,
                     epochs,
                     max_outputs=4,
                     seed=None):
    tf.random.set_seed(seed)
    X_train, X_test, train_dataset, _, _, _ = preprocess_mnist(
        batch_size=batch_size)
    autoencoder = AutoEncoder(units=64)
    opt = tf.optimizers.Adam(learning_rate=learning_rate)

    log_path = 'logs/autoencoder'
    if os.path.exists(log_path):
        shutil.rmtree(log_path)
    writer = tf.summary.create_file_writer(log_path)
    with writer.as_default():
        with tf.summary.record_if(True):
            for epoch in range(epochs):
                for step, batch in enumerate(train_dataset):
                    train_autoencoder(mse, autoencoder, opt, batch)
                # logs (train)
                train_loss = log_results(
                    model=autoencoder,
                    X=X_train,
                    max_outputs=max_outputs,
                    epoch=epoch,
                    prefix='train')
                # logs (test)
                test_loss = log_results(
                    model=autoencoder,
                    X=X_test,
                    max_outputs=max_outputs,
                    epoch=epoch,
                    prefix='test')
                writer.flush()
                template = 'Epoch {}, Train loss: {:.5f}, Test loss: {:.5f}'
                print(
                    template.format(epoch + 1, train_loss.numpy(),
                                    test_loss.numpy()))

    if not os.path.exists('saved_models'):
        os.makedirs('saved_models')
    np.savez_compressed('saved_models/encoder.npz',
                        *autoencoder.encoder.get_weights())


if __name__ == '__main__':
    test_autoencoder(batch_size=128, learning_rate=1e-3, epochs=20, seed=42)
Since the encoder's input shape is used in the build function of the decoder, I'd expect that when I train the autoencoder the encoder would be built first and then the decoder, but that doesn't seem to be the case. I've also tried to build the encoder inside the decoder's build function by calling self.encoder.build() at its start, but it didn't make any difference. What am I doing wrong?
The error I am receiving:
AttributeError: The layer has never been called and thus has no defined input shape.
tensorflow tf.keras tensorflow2.0
asked Mar 24 at 8:16 by Ivan Lorusso (edited Apr 8 at 9:36 by Szymon Maszke)
Can you post your complete code?
– DecentGradient, Mar 24 at 23:06
I added the complete code.
– Ivan Lorusso, Mar 24 at 23:29
It may need to be self.output_layer = Dense(units=input_shape[-1]).
– DecentGradient, Mar 26 at 19:53
You're probably right, but it still wouldn't solve my issue.
– Ivan Lorusso, Mar 26 at 20:43
Yeah :), hopefully it's a step in the right direction.
– DecentGradient, Mar 26 at 21:45
1 Answer
You were almost there, just overcomplicated things a bit. You are receiving this error because the Decoder layer depends on the Encoder layer, which wasn't built successfully, and whose input_shape attribute was therefore never set.
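You can reproduce the behaviour in isolation. Here is a minimal sketch (the shapes are chosen arbitrarily for illustration) showing that building a layer by hand does not define its input_shape; only actually calling the layer does:

import tensorflow as tf
from tensorflow.keras.layers import Dense

layer = Dense(units=64)
layer.build((None, 784))   # building by hand does not register an input
# layer.input_shape        # still raises: AttributeError: The layer has never
#                          # been called and thus has no defined input shape.
layer(tf.keras.Input(shape=(784,)))   # a real (symbolic) call registers it
print(layer.input_shape)              # (None, 784)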
The solution is to pass the correct output shape from the AutoEncoder object, like this:
class Decoder(Layer):
    def __init__(self, units):
        super(Decoder, self).__init__()
        self.units = units

    def build(self, _):
        self.output_layer = Dense(units=self.units)

    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.decoder = Decoder(units=input_shape[-1])
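To make the shape flow concrete, a hypothetical check (the dummy batch is an assumption for illustration): on the first call Keras passes the incoming batch shape to build, so input_shape[-1] is the flattened image width and the decoder is sized to reconstruct it:

ae = AutoEncoder(units=64)
_ = ae(tf.zeros((1, 784)))   # first call triggers ae.build(TensorShape([1, 784]))
print(ae.decoder.units)      # 784 -- the decoder mirrors the input width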
Notice I have removed the @tf.function decorator, as you are unlikely to get any efficiency boost (keras already creates the static graph under the hood for you).
Furthermore, as one can see, the build methods of Encoder and Decoder do not depend on the input_shape information, so all the layer creation can safely be moved to their constructors, like this:
class Encoder(Layer):
    def __init__(self, units):
        super(Encoder, self).__init__()
        self.output_layer = Dense(units=units, activation=tf.nn.relu)

    def call(self, X):
        return self.output_layer(X)


class Decoder(Layer):
    def __init__(self, units):
        super(Decoder, self).__init__()
        self.output_layer = Dense(units=units)

    def call(self, X):
        return self.output_layer(X)


class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Encoder(units=self.units)
        self.decoder = Decoder(units=input_shape[-1])

    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    def encode(self, X):
        return self.encoder(X)

    def decode(self, Z):
        return self.decoder(Z)  # note: not self.decode(Z), which would recurse forever
The above begs the question of whether separate Decoder and Encoder layers are really needed. IMO they should be left out, which leaves us with only this short and readable snippet:
class AutoEncoder(Model):
    def __init__(self, units):
        super(AutoEncoder, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.encoder = Dense(units=self.units, activation=tf.nn.relu)
        self.decoder = Dense(units=input_shape[-1])

    def call(self, X):
        Z = self.encoder(X)
        return self.decoder(Z)

    def encode(self, X):
        return self.encoder(X)

    def decode(self, Z):
        return self.decoder(Z)  # fixed from self.decode(Z)
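As a quick sanity check of the compact version (the dummy batch below is again just for illustration), it round-trips a flattened MNIST-sized batch:

ae = AutoEncoder(units=64)
X = tf.zeros((128, 784))     # a dummy batch of flattened 28x28 images
X_hat = ae(X)                # build happens here, on the first call
print(ae.encode(X).shape)    # (128, 64)  -- the bottleneck
print(X_hat.shape)           # (128, 784) -- same width as the input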
BTW, you have an error in sample, but that's a minor one you can no doubt handle on your own.
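For completeness, the likely one-line fix (an inference, since that traceback isn't shown): sample is used in log_results but never imported, so the script needs

from random import sample

next to the other imports at the top.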
answered Apr 8 at 9:59 by Szymon Maszke
Thanks, that's exactly what I was looking for. I know I could just create the AutoEncoder directly (or use a sequential model), but considering it was my first attempt with TF 2.0, I wanted to experiment and complicate things a bit.
– Ivan Lorusso, Apr 9 at 17:34
Sure, no pun intended, glad I could help.
– Szymon Maszke, Apr 9 at 17:40