



How to fix 'Segmentation fault (core dumped)' error in Keras


I'm having issues with Keras. Basically, it gives me a "Segmentation fault (core dumped)" error when I try to fit a model that contains a Conv2D layer.

My code works on the CPU. It also works without any Conv2D layers (even though that is ineffective for my use case). I have CUDA, cuDNN and TensorFlow installed, and I have tried reinstalling Keras and TensorFlow.



Code:



import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# swisher (a custom activation), env() and env_size() are defined elsewhere in my code.

def model_build():
    model = Sequential()
    model.add(Conv2D(input_shape=(env_size()[0], env_size()[1], 1), filters=4, kernel_size=(3, 3), strides=1, activation=swisher))
    model.add(Conv2D(filters=4, kernel_size=(5, 5), strides=1, activation=swisher))
    model.add(Conv2D(filters=4, kernel_size=(5, 5), strides=1, activation=swisher))
    model.add(Conv2D(filters=4, kernel_size=(5, 5), strides=1, activation=swisher))
    model.add(Flatten())
    model.add(Dense(128, activation='softmax'))
    model.add(Dense(4, activation='softmax'))
    return model

if __name__ == '__main__':
    y = model_build()
    y.compile(loss="mean_squared_error", optimizer='adam')
    y.fit(x=env(), y=np.array([[0, 0, 0, 0]]))


Error:



Using TensorFlow backend.
Epoch 1/1
2019-03-27 05:52:27.687323: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-27 05:52:27.789975: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-27 05:52:27.790819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: GeForce RTX 2060 major: 7 minor: 5 memoryClockRate(GHz): 1.83
pciBusID: 0000:01:00.0
totalMemory: 5.73GiB freeMemory: 5.40GiB
2019-03-27 05:52:27.790834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2019-03-27 05:52:28.068080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-27 05:52:28.068115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2019-03-27 05:52:28.068121: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2019-03-27 05:52:28.068487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5147 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2060, pci bus id: 0000:01:00.0, compute capability: 7.5)
2019-03-27 05:52:28.177752: W tensorflow/core/framework/allocator.cc:113] Allocation of 518619136 exceeds 10% of system memory.
2019-03-27 05:52:28.337277: W tensorflow/core/framework/allocator.cc:113] Allocation of 518619136 exceeds 10% of system memory.
2019-03-27 05:52:28.500486: W tensorflow/core/framework/allocator.cc:113] Allocation of 518619136 exceeds 10% of system memory.
2019-03-27 05:52:28.586280: W tensorflow/core/framework/allocator.cc:113] Allocation of 518619136 exceeds 10% of system memory.
2019-03-27 05:52:28.675738: W tensorflow/core/framework/allocator.cc:113] Allocation of 518619136 exceeds 10% of system memory.
Segmentation fault (core dumped)


EDIT:



Self-contained example.



import numpy as np
import keras

model = keras.models.Sequential() #Sequential model type.
model.add(keras.layers.Conv2D(filters=1, kernel_size=(3,3), strides = 1, activation="sigmoid")) #Convolutional layer.
model.add(keras.layers.Flatten()) #Flatten layer.
model.add(keras.layers.Dense(4)) #Dense layer of 4 units.
model.compile(loss='mean_squared_error', optimizer='adam') #compile model.
y = np.random.rand(1,4) #Random expected output
x = np.random.rand(1, 38, 21, 1) # Random input.
model.fit(x, y) #And fit...


EDIT2:



Keras version: 2.1.6-tf
TensorFlow-GPU version: 1.12
Python version: 3.5.2
CUDA version: 9.0.176
cuDNN version: 7.2.1.38-1+cuda9.0
Ubuntu version: 16.04
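One way to confirm that the crash only occurs on the GPU path is to rerun the self-contained example above with the GPU hidden, so that TensorFlow falls back to the CPU. This is an illustrative sketch, not part of the original post; CUDA_VISIBLE_DEVICES is a standard CUDA environment variable and must be set before TensorFlow is imported:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide all GPUs before TensorFlow is imported

import numpy as np
import keras

model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters=1, kernel_size=(3, 3), strides=1,
                              activation="sigmoid", input_shape=(38, 21, 1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(4))
model.compile(loss="mean_squared_error", optimizer="adam")

x = np.random.rand(1, 38, 21, 1)
y = np.random.rand(1, 4)
model.fit(x, y)  # if this finishes while the GPU run segfaults, suspect the CUDA/cuDNN stack

If this CPU-only run completes cleanly, the model code itself is fine and the problem lies in the GPU stack.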


























  • What is env() returning? What is the size of it in memory?

    – Sreeram TP
    Mar 27 at 13:15

  • @SreeramTP I have edited my post with the code for env() and a related function...

    – ZeroMaxinumXZ
    Mar 27 at 14:35

  • Your images are just too big; you should downscale them to something like 320x180 or a similar size, and it might start working.

    – Matias Valdenegro
    Mar 28 at 11:43

  • @MatiasValdenegro, I tried it all the way down to 96 by 54 and it still gives a segfault error.

    – ZeroMaxinumXZ
    Mar 28 at 12:00

  • Then maybe the problem is somewhere in your code (which we haven't seen) and it is not related to TensorFlow running out of memory at all.

    – Matias Valdenegro
    Mar 28 at 12:01
python tensorflow keras

asked Mar 27 at 10:58 by ZeroMaxinumXZ, edited Mar 30 at 16:42

2 Answers
Your MWE works fine for me (if I add input_shape=(38, 21, 1) to the first convolution layer):



import numpy as np
import keras

model = keras.models.Sequential() #Sequential model type.
model.add(keras.layers.Conv2D(filters=1, kernel_size=(3,3), strides = 1, activation="sigmoid", input_shape=(38, 21, 1))) #Convolutional layer.
model.add(keras.layers.Flatten()) #Flatten layer.
model.add(keras.layers.Dense(4)) #Dense layer of 4 units.
model.compile(loss='mean_squared_error', optimizer='adam') #compile model.
y = np.random.rand(2, 4) #Random expected output
x = np.random.rand(2, 38, 21, 1) # Random input.
model.fit(x, y)


That means that the issue most likely comes from your system or installation.

Looking at TensorFlow's compatibility chart shows that your Python, TensorFlow and CUDA versions should be compatible.

For your configuration, cuDNN 7.0.x is recommended. The cuDNN 7.2 that you are using is probably incompatible; try installing and using cuDNN 7.0.x instead.
answered Mar 30 at 17:44 by Spen
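If an installation mismatch is suspected, a quick sanity check (a generic TF 1.x sketch, not from the answer above) is to print the versions in use and the devices TensorFlow can actually see:

import tensorflow as tf
import keras
from tensorflow.python.client import device_lib

print("TensorFlow:", tf.__version__)                  # should report the installed tensorflow-gpu version
print("Keras:", keras.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())

# List every device TensorFlow registers; a healthy setup shows a /device:GPU:0 entry.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type, device.memory_limit)

If the GPU does not show up here, or the reported versions differ from the ones listed in the question, the CUDA/cuDNN installation is the first thing to fix.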


























It seems that your GPU does not have enough memory. Your model does not seem to be too big, so I would guess that the problem comes from the line:

y.fit(x=env(), y=np.array([[0, 0, 0, 0]]))

The output of env() might be too big to fit in your GPU memory.
answered Mar 27 at 11:06 by Romain Thalineau
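If GPU memory pressure is indeed the cause, a common mitigation with the TensorFlow 1.x backend (a generic sketch, not part of this answer) is to let the session allocate GPU memory on demand instead of reserving almost all of it up front, which sometimes turns a hard crash into a clearer out-of-memory error:

import tensorflow as tf
import keras.backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                       # allocate GPU memory as needed
# config.gpu_options.per_process_gpu_memory_fraction = 0.8   # or cap the fraction instead
K.set_session(tf.Session(config=config))

# Build, compile and fit the Keras model only after the session has been set.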















  • Oh ok. Is there any way I can sort of "chop it up" and feed the array in batches?

    – ZeroMaxinumXZ
    Mar 27 at 11:17

  • Absolutely. You can use minibatches produced by a "generator"; Keras, for instance, comes with an image generator (see the sketch after this comment thread).

    – Romain Thalineau
    Mar 27 at 11:39

  • Thanks, but the same error is still occurring, no matter the batch size.

    – ZeroMaxinumXZ
    Mar 28 at 5:19

  • I also had the same issue due to not enough RAM. Are you loading a lot of data?

    – dzang
    Mar 30 at 16:21
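A minimal sketch of the generator idea mentioned in the comments, using the Keras 2.x fit_generator API; the toy arrays and the batch_generator helper are illustrative stand-ins, not the asker's actual env() data:

import numpy as np
import keras

def batch_generator(x, y, batch_size=8):
    # Yield (x_batch, y_batch) pairs forever, so the full arrays never have to be fed at once.
    n = len(x)
    while True:
        for start in range(0, n, batch_size):
            yield x[start:start + batch_size], y[start:start + batch_size]

# Toy data standing in for the output of env().
x = np.random.rand(32, 38, 21, 1).astype("float32")
y = np.random.rand(32, 4).astype("float32")

model = keras.models.Sequential([
    keras.layers.Conv2D(4, (3, 3), activation="relu", input_shape=(38, 21, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(4),
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Train from the generator one batch at a time.
model.fit_generator(batch_generator(x, y, batch_size=8),
                    steps_per_epoch=len(x) // 8,
                    epochs=1)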
















