Redis-ha Helm chart error - NOREPLICAS Not enough good replicas to write


I am trying to set up the redis-ha Helm chart on my local Kubernetes cluster (Docker for Windows).

The Helm values file I am using is:



## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent

## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels:

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5              # Value in seconds
    maxmemory: "0"                     # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"   # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave.
    # The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as
    # intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
  #   Define configuration here

  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
  #   Define configuration here

  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity:

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources:

  # Additional args for redis exporter
  extraArgs:

podDisruptionBudget:
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations:

init:
  resources:

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true


redis-ha deploys correctly, and when I run kubectl get all I see:



NAME                       READY   STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2     Running   0          1h
pod/rc-redis-ha-server-1   2/2     Running   0          1h
pod/rc-redis-ha-server-2   2/2     Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h


I try to access redis-ha from a Java application that uses the Lettuce driver to connect to Redis. Sample Java code:



package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());

    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();

        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}
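
For reference, the same Sentinel connection can also be expressed with Lettuce's RedisURI builder instead of the URI string. The sketch below is not part of the original application (the class name is invented), but it makes the Sentinel host, port and master group name from the values file explicit:

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;

public class RedisSentinelUriExample {

    public static void main(String[] args) {
        // Host, port and master name mirror the chart defaults used above:
        // the rc-redis-ha headless service, sentinel.port 26379 and masterGroupName "mymaster".
        RedisURI sentinelUri = RedisURI.Builder
                .sentinel("rc-redis-ha", 26379)
                .withSentinelMasterId("mymaster")
                .withDatabase(0)
                .build();

        RedisClient client = RedisClient.create(sentinelUri);
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            connection.sync().set("Hello", "World");
            System.out.println(connection.sync().get("Hello"));
        } finally {
            client.shutdown();
        }
    }
}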




I packaged the application as a runnable JAR, built a container image, and deployed it to the same Kubernetes cluster where Redis is running. The application throws the following error:



Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
at com.sun.proxy.$Proxy0.set(Unknown Source)
at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)


I tried the Jedis driver and a Spring Boot application as well, and got the same error from the redis-ha cluster.



** UPDATE **

When I run the info command inside redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It seems the slaves are not considered healthy: connected_slaves is 2 but min_slaves_good_slaves is 0. When I switch to min-slaves-to-write: 0, I am able to read from and write to the Redis cluster.
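
To see what the master itself reports about its replicas, the same Lettuce connection can run INFO replication. This is only a diagnostic sketch (the class name is invented, the service and master names are the ones from the question); it prints connected_slaves, min_slaves_good_slaves and the per-slave lag fields that min-slaves-to-write / min-slaves-max-lag are evaluated against:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class ReplicationInfoCheck {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            // INFO replication on the current master lists connected_slaves,
            // min_slaves_good_slaves and each slave's reported lag, which is
            // exactly what the NOREPLICAS check is based on.
            String replicationInfo = connection.sync().info("replication");
            System.out.println(replicationInfo);
        } finally {
            client.shutdown();
        }
    }
}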



Any help on this is appreciated.










  • Hi Bala - Can you run the following and detail the output: helm version.

    – Mike Ubezzi MSFT
    Mar 28 at 19:40











  • running latest version of helm. Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

    – Bala
    Mar 29 at 14:52












  • I am running all this on Docker for Windows, and the Kubernetes that comes with it is not the latest. The K8s version is v1.10.11.

    – Bala
    Mar 29 at 14:58

















redis kubernetes kubernetes-helm






asked Mar 26 at 20:33 by Bala, edited Mar 28 at 18:13

1 Answer
When I deployed the Helm chart with the same values to a Kubernetes cluster running on AWS, it worked fine.

It seems to be an issue with Kubernetes on Docker for Windows.
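
One way to narrow down whether the environment is the problem is to ask the Sentinels directly what they see. The sketch below (class name invented, assuming Lettuce's Sentinel API and the service names from the question) prints the master address the Sentinels announce and their view of the slaves, so you can check whether the announce-service IPs are reachable and the slaves carry healthy flags:

import java.net.SocketAddress;
import java.util.List;
import java.util.Map;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

public class SentinelStateCheck {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        // Connect to one of the sentinels exposed by the rc-redis-ha service.
        RedisURI sentinelUri = RedisURI.Builder.redis("rc-redis-ha", 26379).build();

        try (StatefulRedisSentinelConnection<String, String> connection =
                     client.connectSentinel(sentinelUri)) {
            // Which address do the sentinels currently announce for the master?
            SocketAddress master = connection.sync().getMasterAddrByName("mymaster");
            System.out.println("master = " + master);

            // What do the sentinels know about the slaves (address, flags, link status)?
            List<Map<String, String>> slaves = connection.sync().slaves("mymaster");
            slaves.forEach(s ->
                    System.out.println(s.get("name") + " flags=" + s.get("flags")));
        } finally {
            client.shutdown();
        }
    }
}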






  • The issue could be with 2.x of Helm. github.com/helm/charts/issues/9065 Can you try 3.x?

    – Mike Ubezzi MSFT
    Mar 28 at 20:08











  • I am using the latest version of the Helm chart. It's 3.3.3.

    – Bala
    Mar 29 at 14:59









