mirror of https://github.com/k3s-io/kubernetes.git, synced 2025-07-25 04:33:26 +00:00
Add an example of running Cloud Native Cassandra on k8s.
This commit is contained in:
parent 8cdaab5a10
commit 04f51b60de
examples/cassandra/README.md (new file, 320 lines)
@@ -0,0 +1,320 @@
## Cloud Native Deployments of Cassandra using Kubernetes

The following document describes the development of a _cloud native_ [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say _cloud native_, we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Cassandra ```SeedProvider``` is used to enable Cassandra to dynamically discover new Cassandra nodes as they join the cluster.

This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` and ```kubecfg``` command line tools somewhere in your path. Please see the [getting started guides](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.
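As a quick sanity check before proceeding (a hypothetical verification step, with output that varies by version), you can confirm that ```kubectl``` is on your path and can reach the cluster:

```sh
# Print client and server versions; a server version confirms connectivity.
kubectl version
```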
### A note for the impatient

This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

### Simple Single Pod Cassandra Node

In Kubernetes, the atomic unit of an application is a [_Pod_](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. In this simple case, we define a single container running Cassandra for our pod:

```yaml
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra
```

There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However, it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).

You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZE``` and ```HEAP_NEWSIZE```). We also tell Kubernetes that the container exposes both the ```CQL``` and ```Thrift``` API ports. Finally, we tell the cluster manager that we need 1000 milli-cpus (1 core).

Given this configuration, we can create the pod as follows:

```sh
$ kubectl create -f cassandra.yaml
```

After a few moments, you should be able to see the pod running:

```sh
$ kubectl get pods cassandra
POD         CONTAINER(S)   IMAGE(S)               HOST                          LABELS           STATUS
cassandra   cassandra      kubernetes/cassandra   kubernetes-minion-1/1.2.3.4   name=cassandra   Running
```

### Adding a Cassandra Service

In Kubernetes, a _Service_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is how we initially use Services with Cassandra.

Here is the service description:

```yaml
id: cassandra
kind: Service
apiVersion: v1beta1
port: 9042
containerPort: 9042
selector:
  name: cassandra
```

The important thing to note here is the ```selector```. A label selector is a query that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
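Because the selector is an ordinary label query, you can run the equivalent query by hand to see exactly which pods the Service will track (a quick check, assuming your ```kubectl``` supports the ```-l``` selector flag):

```sh
# List only the pods whose labels match the Service's selector.
kubectl get pods -l name=cassandra
```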
Create this service as follows:

```sh
$ kubectl create -f cassandra-service.yaml
```

Once the service is created, you can query its endpoints:

```sh
$ kubectl get endpoints cassandra -o yaml
apiVersion: v1beta1
creationTimestamp: 2015-01-05T05:51:50Z
endpoints:
- 10.244.1.10:9042
id: cassandra
kind: Endpoints
namespace: default
resourceVersion: 69130
selfLink: /api/v1beta1/endpoints/cassandra?namespace=default
uid: f1937b47-949e-11e4-8a8b-42010af0e23e
```

You can see that the _Service_ has found the pod we created in step one.
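This endpoints list is exactly what the ```KubernetesSeedProvider``` consumes. As a rough sketch of what the provider does (run from inside a pod, where Kubernetes populates the ```KUBERNETES_RO_SERVICE_HOST``` and ```KUBERNETES_RO_SERVICE_PORT``` environment variables that the Java source below uses as defaults), you can fetch the same data from the read-only API:

```sh
# Fetch the cassandra service's endpoints from the v1beta1 read-only API,
# the same URL the seed provider builds in getSeeds().
curl "http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta1/endpoints/cassandra"
```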
### Adding replicated nodes

Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, resizable Cassandra cluster.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Cassandra Pod.

```yaml
id: cassandra
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 1
  replicaSelector:
    name: cassandra
  # This is identical to the pod config above
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: cassandra
        containers:
          - name: cassandra
            image: kubernetes/cassandra
            command:
              - /run.sh
            cpu: 1000
            ports:
              - name: cql
                containerPort: 9042
              - name: thrift
                containerPort: 9160
            env:
              - key: MAX_HEAP_SIZE
                value: 512M
              - key: HEAP_NEWSIZE
                value: 100M
    labels:
      name: cassandra
```

The bulk of the replication controller config is actually identical to the Cassandra pod declaration above; it simply gives the controller a recipe to use when creating new pods. The other parts are the ```replicaSelector```, which contains the controller's selector query, and the ```replicas``` parameter, which specifies the desired number of replicas, in this case 1.
Create this controller:

```sh
$ kubectl create -f cassandra-controller.yaml
```
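You can verify that the controller was created and has adopted the existing pod (a quick check; the exact output columns vary by client version):

```sh
# Show the replication controller, its selector, and its replica count.
kubectl get replicationcontrollers cassandra
```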
So far this isn't very interesting, since we haven't actually done anything new. The interesting part comes when we resize the cluster.

Let's resize our cluster to 2:

```sh
$ kubecfg resize cassandra 2
```

Now if you list the pods in your cluster, you should see two Cassandra pods:

```sh
$ kubectl get pods
POD                                    CONTAINER(S)   IMAGE(S)               HOST                                                   LABELS           STATUS
cassandra                              cassandra      kubernetes/cassandra   kubernetes-minion-1.c.my-cloud-code.internal/1.2.3.4   name=cassandra   Running
16b2beab-94a1-11e4-8a8b-42010af0e23e   cassandra      kubernetes/cassandra   kubernetes-minion-3.c.my-cloud-code.internal/2.3.4.5   name=cassandra   Running
```

Notice that one of the pods has the human-readable name ```cassandra``` that you specified in your config before, and one has a random string, since it was named by the replication controller.

To prove that this all works, you can use the ```nodetool``` command to examine the status of the cluster. For example:

```sh
$ ssh 1.2.3.4
$ docker exec <cassandra-container-id> nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.3.29  72.07 KB   256     100.0%            f736f0b5-bd1f-46f1-9b9d-7e8f22f37c9e  rack1
UN  10.244.1.10  41.14 KB   256     100.0%            42617acd-b16e-4ee3-9486-68a6743657b1  rack1
```

Now let's resize our cluster to 4 nodes:

```sh
$ kubecfg resize cassandra 4
```

Examining the status again:

```sh
$ docker exec <cassandra-container-id> nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.3.29  72.07 KB   256     49.5%             f736f0b5-bd1f-46f1-9b9d-7e8f22f37c9e  rack1
UN  10.244.2.14  61.62 KB   256     52.6%             3e9981a6-6919-42c4-b2b8-af50f23a68f2  rack1
UN  10.244.1.10  41.14 KB   256     49.5%             42617acd-b16e-4ee3-9486-68a6743657b1  rack1
UN  10.244.4.8   63.83 KB   256     48.3%             eeb73967-d1e6-43c1-bb54-512f8117d372  rack1
```

### tl; dr

For those of you who are impatient, here is the summary of the commands we ran in this tutorial.

```sh
# create a single cassandra node
kubectl create -f cassandra.yaml

# create a service to track all cassandra nodes
kubectl create -f cassandra-service.yaml

# create a replication controller to replicate cassandra nodes
kubectl create -f cassandra-controller.yaml

# scale up to 2 nodes
kubecfg resize cassandra 2

# validate the cluster
docker exec <container-id> nodetool status

# scale up to 4 nodes
kubecfg resize cassandra 4
```

### Seed Provider Source

```java
package io.k8s.cassandra;

import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.net.URL;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.map.ObjectMapper;
import org.apache.cassandra.locator.SeedProvider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KubernetesSeedProvider implements SeedProvider {
    // Minimal shape of the v1beta1 Endpoints resource; other fields are ignored.
    @JsonIgnoreProperties(ignoreUnknown = true)
    static class Endpoints {
        public String[] endpoints;
    }

    private static String getEnvOrDefault(String var, String def) {
        String val = System.getenv(var);
        if (val == null) {
            val = def;
        }
        return val;
    }

    private static final Logger logger = LoggerFactory.getLogger(KubernetesSeedProvider.class);

    private List<InetAddress> defaultSeeds;

    public KubernetesSeedProvider(Map<String, String> params) {
        // Taken from SimpleSeedProvider.java
        // These are used as a fallback, if we get nothing from k8s.
        String[] hosts = params.get("seeds").split(",", -1);
        defaultSeeds = new ArrayList<InetAddress>(hosts.length);
        for (String host : hosts) {
            try {
                defaultSeeds.add(InetAddress.getByName(host.trim()));
            } catch (UnknownHostException ex) {
                // not fatal... DatabaseDescriptor will bark if there end up being zero seeds.
                logger.warn("Seed provider couldn't lookup host " + host);
            }
        }
    }

    public List<InetAddress> getSeeds() {
        List<InetAddress> list = new ArrayList<InetAddress>();
        String protocol = getEnvOrDefault("KUBERNETES_API_PROTOCOL", "http");
        String hostName = getEnvOrDefault("KUBERNETES_RO_SERVICE_HOST", "localhost");
        String hostPort = getEnvOrDefault("KUBERNETES_RO_SERVICE_PORT", "8080");

        String host = protocol + "://" + hostName + ":" + hostPort;
        String serviceName = getEnvOrDefault("CASSANDRA_SERVICE", "cassandra");
        String path = "/api/v1beta1/endpoints/";
        try {
            // Ask the read-only API for the current endpoints of the cassandra service.
            URL url = new URL(host + path + serviceName);
            ObjectMapper mapper = new ObjectMapper();
            Endpoints endpoints = mapper.readValue(url, Endpoints.class);
            if (endpoints != null) {
                for (String endpoint : endpoints.endpoints) {
                    // Endpoints are "ip:port" strings; only the IP matters for gossip seeds.
                    String[] parts = endpoint.split(":");
                    list.add(InetAddress.getByName(parts[0]));
                }
            }
        } catch (IOException ex) {
            logger.warn("Request to kubernetes apiserver failed");
        }
        if (list.size() == 0) {
            // If we got nothing, we might be the first instance; in that case
            // fall back on the seeds that were passed in cassandra.yaml.
            return defaultSeeds;
        }
        return list;
    }

    // Simple main to test the implementation
    public static void main(String[] args) {
        SeedProvider provider = new KubernetesSeedProvider(new HashMap<String, String>());
        System.out.println(provider.getSeeds());
    }
}
```
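The class also ships a small ```main``` for smoke-testing the provider outside of Cassandra. As a rough sketch (a hypothetical invocation; it assumes the jar location from the image below and the Debian package's library directory for the jackson and slf4j dependencies), you could run it inside a node's container:

```sh
# Run the provider's test main; it should print the discovered seed addresses.
docker exec <cassandra-container-id> \
  java -cp "/kubernetes-cassandra.jar:/usr/share/cassandra/lib/*" io.k8s.cassandra.KubernetesSeedProvider
```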
examples/cassandra/cassandra-controller.yaml (new file, 30 lines)
@@ -0,0 +1,30 @@
id: cassandra
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 1
  replicaSelector:
    name: cassandra
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: cassandra
        containers:
          - name: cassandra
            image: kubernetes/cassandra
            command:
              - /run.sh
            cpu: 1000
            ports:
              - name: cql
                containerPort: 9042
              - name: thrift
                containerPort: 9160
            env:
              - key: MAX_HEAP_SIZE
                value: 512M
              - key: HEAP_NEWSIZE
                value: 100M
    labels:
      name: cassandra
examples/cassandra/cassandra-service.yaml (new file, 7 lines)
@@ -0,0 +1,7 @@
id: cassandra
kind: Service
apiVersion: v1beta1
port: 9042
containerPort: 9042
selector:
  name: cassandra
examples/cassandra/cassandra.yaml (new file, 29 lines)
@@ -0,0 +1,29 @@
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
          - key: KUBERNETES_API_PROTOCOL
            value: http
labels:
  name: cassandra
examples/cassandra/image/Dockerfile (new file, 21 lines)
@@ -0,0 +1,21 @@
FROM google/debian:wheezy

COPY cassandra.list /etc/apt/sources.list.d/cassandra.list

# Import the Apache Cassandra release signing keys.
RUN gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
RUN gpg --export --armor F758CE318D77295D | apt-key add -

RUN gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00
RUN gpg --export --armor 2B5C1B00 | apt-key add -

RUN gpg --keyserver pgp.mit.edu --recv-keys 0353B12C
RUN gpg --export --armor 0353B12C | apt-key add -

RUN apt-get update
RUN apt-get -qq -y install cassandra

COPY cassandra.yaml /etc/cassandra/cassandra.yaml
COPY run.sh /run.sh
COPY kubernetes-cassandra.jar /kubernetes-cassandra.jar

CMD /run.sh
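To rebuild this image yourself (a hypothetical local build; the tag matches the image name referenced by the manifests above), run docker build from this directory:

```sh
# Build the Cassandra image that the pod and replication controller reference.
docker build -t kubernetes/cassandra .
```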
examples/cassandra/image/cassandra.list (new file, 3 lines)
@@ -0,0 +1,3 @@
deb http://www.apache.org/dist/cassandra/debian 21x main
deb-src http://www.apache.org/dist/cassandra/debian 21x main
examples/cassandra/image/cassandra.yaml (new file, 764 lines)
@@ -0,0 +1,764 @@
# Cassandra storage config YAML

# NOTE:
# See http://wiki.apache.org/cassandra/StorageConfiguration for
# full explanations of configuration directives
# /NOTE

# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'

# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
# that this node will store. You probably want all nodes to have the same number
# of tokens assuming they have equal hardware capability.
#
# If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
# and will use the initial_token as described below.
#
# Specifying initial_token will override this setting on the node's initial start,
# on subsequent starts, this setting will apply even if initial token is set.
#
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
num_tokens: 256

# initial_token allows you to specify tokens manually. While you can use # it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes # to legacy clusters
# that do not have vnodes enabled.
# initial_token:

# See http://wiki.apache.org/cassandra/HintedHandoff
# May either be "true" or "false" to enable globally, or contain a list
# of data centers to enable per-datacenter.
# hinted_handoff_enabled: DC1,DC2
hinted_handoff_enabled: true
# this defines the maximum amount of time a dead host will have hints
# generated. After it has been dead this long, new hints for it will not be
# created until it has been seen alive and gone down again.
max_hint_window_in_ms: 10800000 # 3 hours
# Maximum throttle in KBs per second, per delivery thread. This will be
# reduced proportionally to the number of nodes in the cluster. (If there
# are two nodes in the cluster, each delivery thread will use the maximum
# rate; if there are three, each will throttle to half of the maximum,
# since we expect two nodes to be delivering hints simultaneously.)
hinted_handoff_throttle_in_kb: 1024
# Number of threads with which to deliver hints;
# Consider increasing this number when you have multi-dc deployments, since
# cross-dc handoff tends to be slower
max_hints_delivery_threads: 2

# Maximum throttle in KBs per second, total. This will be
# reduced proportionally to the number of nodes in the cluster.
batchlog_replay_throttle_in_kb: 1024

# Authentication backend, implementing IAuthenticator; used to identify users
# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
# PasswordAuthenticator}.
#
# - AllowAllAuthenticator performs no checks - set it to disable authentication.
# - PasswordAuthenticator relies on username/password pairs to authenticate
#   users. It keeps usernames and hashed passwords in system_auth.credentials table.
#   Please increase system_auth keyspace replication factor if you use this authenticator.
authenticator: AllowAllAuthenticator

# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
# CassandraAuthorizer}.
#
# - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
#   increase system_auth keyspace replication factor if you use this authorizer.
authorizer: AllowAllAuthorizer

# Validity period for permissions cache (fetching permissions can be an
# expensive operation depending on the authorizer, CassandraAuthorizer is
# one example). Defaults to 2000, set to 0 to disable.
# Will be disabled automatically for AllowAllAuthorizer.
permissions_validity_in_ms: 2000

# The partitioner is responsible for distributing groups of rows (by
# partition key) across nodes in the cluster. You should leave this
# alone for new clusters. The partitioner can NOT be changed without
# reloading all data, so when upgrading you should set this to the
# same partitioner you were already using.
#
# Besides Murmur3Partitioner, partitioners included for backwards
# compatibility include RandomPartitioner, ByteOrderedPartitioner, and
# OrderPreservingPartitioner.
#
partitioner: org.apache.cassandra.dht.Murmur3Partitioner

# Directories where Cassandra should store data on disk. Cassandra
# will spread data evenly across them, subject to the granularity of
# the configured compaction strategy.
# If not set, the default directory is $CASSANDRA_HOME/data/data.
data_file_directories:
    - /var/lib/cassandra/data

# commit log. when running on magnetic HDD, this should be a
# separate spindle than the data directories.
# If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
commitlog_directory: /var/lib/cassandra/commitlog

# policy for data disk failures:
# die: shut down gossip and Thrift and kill the JVM for any fs errors or
#      single-sstable errors, so the node can be replaced.
# stop_paranoid: shut down gossip and Thrift even for single-sstable errors.
# stop: shut down gossip and Thrift, leaving the node effectively dead, but
#       can still be inspected via JMX.
# best_effort: stop using the failed disk and respond to requests based on
#              remaining available sstables. This means you WILL see obsolete
#              data at CL.ONE!
# ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
disk_failure_policy: stop

# policy for commit disk failures:
# die: shut down gossip and Thrift and kill the JVM, so the node can be replaced.
# stop: shut down gossip and Thrift, leaving the node effectively dead, but
#       can still be inspected via JMX.
# stop_commit: shutdown the commit log, letting writes collect but
#              continuing to service reads, as in pre-2.0.5 Cassandra
# ignore: ignore fatal errors and let the batches fail
commit_failure_policy: stop

# Maximum size of the key cache in memory.
#
# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
# minimum, sometimes more. The key cache is fairly tiny for the amount of
# time it saves, so it's worthwhile to use it at large numbers.
# The row cache saves even more time, but must contain the entire row,
# so it is extremely space-intensive. It's best to only use the
# row cache if you have hot rows or static rows.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
key_cache_size_in_mb:

# Duration in seconds after which Cassandra should
# save the key cache. Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and are relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 14400 or 4 hours.
key_cache_save_period: 14400

# Number of keys from the key cache to save
# Disabled by default, meaning all keys are going to be saved
# key_cache_keys_to_save: 100

# Maximum size of the row cache in memory.
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is 0, to disable row caching.
row_cache_size_in_mb: 0

# Duration in seconds after which Cassandra should
# save the row cache. Caches are saved to saved_caches_directory as specified
# in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and are relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 0 to disable saving the row cache.
row_cache_save_period: 0

# Number of keys from the row cache to save
# Disabled by default, meaning all keys are going to be saved
# row_cache_keys_to_save: 100

# Maximum size of the counter cache in memory.
#
# Counter cache helps to reduce counter locks' contention for hot counter cells.
# In case of RF = 1 a counter cache hit will cause Cassandra to skip the read before
# write entirely. With RF > 1 a counter cache hit will still help to reduce the duration
# of the lock hold, helping with hot counter cell updates, but will not allow skipping
# the read entirely. Only the local (clock, count) tuple of a counter cell is kept
# in memory, not the whole counter, so it's relatively cheap.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache.
# NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache.
counter_cache_size_in_mb:

# Duration in seconds after which Cassandra should
# save the counter cache (keys only). Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Default is 7200 or 2 hours.
counter_cache_save_period: 7200

# Number of keys from the counter cache to save
# Disabled by default, meaning all keys are going to be saved
# counter_cache_keys_to_save: 100

# The off-heap memory allocator. Affects storage engine metadata as
# well as caches. Experiments show that JEMAlloc saves some memory
# compared to the native GCC allocator (i.e., JEMalloc is more
# fragmentation-resistant).
#
# Supported values are: NativeAllocator, JEMallocAllocator
#
# If you intend to use JEMallocAllocator you have to install JEMalloc as library and
# modify cassandra-env.sh as directed in the file.
#
# Defaults to NativeAllocator
# memory_allocator: NativeAllocator

# saved caches
# If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
saved_caches_directory: /var/lib/cassandra/saved_caches

# commitlog_sync may be either "periodic" or "batch."
# When in batch mode, Cassandra won't ack writes until the commit log
# has been fsynced to disk. It will wait up to
# commitlog_sync_batch_window_in_ms milliseconds for other writes, before
# performing the sync.
#
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 50
#
# the other option is "periodic" where writes may be acked immediately
# and the CommitLog is simply synced every commitlog_sync_period_in_ms
# milliseconds. commitlog_periodic_queue_size allows 1024*(CPU cores) pending
# entries on the commitlog queue by default. If you are writing very large
# blobs, you should reduce that; 16*cores works reasonably well for 1MB blobs.
# It should be at least as large as the concurrent_writes setting.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
# commitlog_periodic_queue_size:

# The size of the individual commitlog file segments. A commitlog
# segment may be archived, deleted, or recycled once all the data
# in it (potentially from each columnfamily in the system) has been
# flushed to sstables.
#
# The default size is 32, which is almost always fine, but if you are
# archiving commitlog segments (see commitlog_archiving.properties),
# then you probably want a finer granularity of archiving; 8 or 16 MB
# is reasonable.
commitlog_segment_size_in_mb: 32

# any class that implements the SeedProvider interface and has a
# constructor that takes a Map<String, String> of parameters will do.
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: io.k8s.cassandra.KubernetesSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "%%ip%%"

# For workloads with more data than can fit in memory, Cassandra's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them. Same applies to
# "concurrent_counter_writes", since counter writes read the current
# values before incrementing and writing them back.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32

# Total memory to use for sstable-reading buffers. Defaults to
# the smaller of 1/4 of heap or 512MB.
# file_cache_size_in_mb: 512

# Total permitted memory to use for memtables. Cassandra will stop
# accepting writes when the limit is exceeded until a flush completes,
# and will trigger a flush based on memtable_cleanup_threshold
# If omitted, Cassandra will set both to 1/4 the size of the heap.
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048

# Ratio of occupied non-flushing memtable size to total permitted size
# that will trigger a flush of the largest memtable. Larger mct will
# mean larger flushes and hence less compaction, but also less concurrent
# flush activity which can make it difficult to keep your disks fed
# under heavy write load.
#
# memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1)
# memtable_cleanup_threshold: 0.11

# Specify the way Cassandra allocates and manages memtable memory.
# Options are:
#   heap_buffers:    on heap nio buffers
#   offheap_buffers: off heap (direct) nio buffers
#   offheap_objects: native memory, eliminating nio buffer heap overhead
memtable_allocation_type: heap_buffers

# Total space to use for commitlogs. Since commitlog segments are
# mmapped, and hence use up address space, the default size is 32
# on 32-bit JVMs, and 8192 on 64-bit JVMs.
#
# If space gets above this value (it will round up to the next nearest
# segment multiple), Cassandra will flush every dirty CF in the oldest
# segment and remove it. So a small total commitlog space will tend
# to cause more flush activity on less-active columnfamilies.
# commitlog_total_space_in_mb: 8192

# This sets the amount of memtable flush writer threads. These will
# be blocked by disk io, and each one will hold a memtable in memory
# while blocked.
#
# memtable_flush_writers defaults to the smaller of (number of disks,
# number of cores), with a minimum of 2 and a maximum of 8.
#
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#memtable_flush_writers: 8

# A fixed memory pool size in MB for SSTable index summaries. If left
# empty, this will default to 5% of the heap size. If the memory usage of
# all index summaries exceeds this limit, SSTables with low read rates will
# shrink their index summaries in order to meet this limit. However, this
# is a best-effort process. In extreme conditions Cassandra may need to use
# more than this amount of memory.
index_summary_capacity_in_mb:

# How frequently index summaries should be resampled. This is done
# periodically to redistribute memory from the fixed-size pool to sstables
# proportional to their recent read rates. Setting to -1 will disable this
# process, leaving existing index summaries at their current sampling level.
index_summary_resize_interval_in_minutes: 60

# Whether to, when doing sequential writing, fsync() at intervals in
# order to force the operating system to flush the dirty
# buffers. Enable this to avoid sudden dirty buffer flushing from
# impacting read latencies. Almost always a good idea on SSDs; not
# necessarily on platters.
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240

# TCP port, for commands and data
storage_port: 7000

# SSL port, for encrypted communication. Unused unless enabled in
# encryption_options
ssl_storage_port: 7001

# Address or interface to bind to and tell other Cassandra nodes to connect to.
# You _must_ change this if you want multiple nodes to be able to communicate!
#
# Set listen_address OR listen_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
# will always do the Right Thing _if_ the node is properly configured
# (hostname, name resolution, etc), and the Right Thing is to use the
# address associated with the hostname (it might not be).
#
# Setting listen_address to 0.0.0.0 is always wrong.
listen_address: %%ip%%
# listen_interface: eth0

# Address to broadcast to other Cassandra nodes
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4

# Internode authentication backend, implementing IInternodeAuthenticator;
# used to allow/disallow connections from peer nodes.
# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator

# Whether to start the native transport server.
# Please note that the address on which the native transport is bound is the
# same as the rpc_address. The port however is different and specified below.
start_native_transport: true
# port for the CQL native transport to listen for clients on
native_transport_port: 9042
# The maximum threads for handling requests when the native transport is used.
# This is similar to rpc_max_threads though the default differs slightly (and
# there is no native_transport_min_threads, idle threads will always be stopped
# after 30 seconds).
# native_transport_max_threads: 128
#
# The maximum size of allowed frame. Frame (requests) larger than this will
# be rejected as invalid. The default is 256MB.
# native_transport_max_frame_size_in_mb: 256

# Whether to start the thrift rpc server.
start_rpc: true

# The address or interface to bind the Thrift RPC service and native transport
# server to.
#
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving rpc_address blank has the same effect as on listen_address
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
# set broadcast_rpc_address to a value other than 0.0.0.0.
rpc_address: %%ip%%
# rpc_interface: eth1

# port for Thrift to listen for clients on
rpc_port: 9160

# RPC address to broadcast to drivers and other Cassandra nodes. This cannot
# be set to 0.0.0.0. If left blank, this will be set to the value of
# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
# be set.
# broadcast_rpc_address: 1.2.3.4

# enable or disable keepalive on rpc/native connections
rpc_keepalive: true

# Cassandra provides two out-of-the-box options for the RPC Server:
#
# sync  -> One thread per thrift connection. For a very large number of clients, memory
#          will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size
#          per thread, and that will correspond to your use of virtual memory (but physical memory
#          may be limited depending on use of stack space).
#
# hsha  -> Stands for "half synchronous, half asynchronous." All thrift clients are handled
#          asynchronously using a small number of threads that does not vary with the amount
#          of thrift clients (and thus scales well to many clients). The rpc requests are still
#          synchronous (one thread per active request). If hsha is selected then it is essential
#          that rpc_max_threads is changed from the default value of unlimited.
#
# The default is sync because on Windows hsha is about 30% slower. On Linux,
# sync/hsha performance is about the same, with hsha of course using less memory.
#
# Alternatively, you can provide your own RPC server by providing the fully-qualified class name
# of an o.a.c.t.TServerFactory that can create an instance of it.
rpc_server_type: sync

# Uncomment rpc_min|max_thread to set request pool size limits.
#
# Regardless of your choice of RPC server (see above), the number of maximum requests in the
# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync
# RPC server, it also dictates the number of clients that can be connected at all).
#
# The default is unlimited and thus provides no protection against clients overwhelming the server. You are
# encouraged to set a maximum that makes sense for you in production, but do keep in mind that
# rpc_max_threads represents the maximum number of client requests this server may execute concurrently.
#
# rpc_min_threads: 16
# rpc_max_threads: 2048

# uncomment to set socket buffer sizes on rpc connections
# rpc_send_buff_size_in_bytes:
# rpc_recv_buff_size_in_bytes:

# Uncomment to set socket buffer size for internode communication
# Note that when setting this, the buffer size is limited by net.core.wmem_max
# and when not setting it it is defined by net.ipv4.tcp_wmem
# See:
# /proc/sys/net/core/wmem_max
# /proc/sys/net/core/rmem_max
# /proc/sys/net/ipv4/tcp_wmem
# /proc/sys/net/ipv4/tcp_rmem
# and: man tcp
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes:

# Frame size for thrift (maximum message length).
thrift_framed_transport_size_in_mb: 15

# Set to true to have Cassandra create a hard link to each sstable
# flushed or streamed locally in a backups/ subdirectory of the
# keyspace data. Removing these links is the operator's
# responsibility.
incremental_backups: false

# Whether or not to take a snapshot before each compaction. Be
# careful using this option, since Cassandra won't clean up the
# snapshots for you. Mostly useful if you're paranoid when there
# is a data format change.
snapshot_before_compaction: false

# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
# lose data on truncation or drop.
auto_snapshot: true

# When executing a scan, within or across a partition, we need to keep the
# tombstones seen in memory so we can return them to the coordinator, which
# will use them to make sure other replicas also know about the deleted rows.
# With workloads that generate a lot of tombstones, this can cause performance
# problems and even exhaust the server heap.
# (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
# Adjust the thresholds here if you understand the dangers and want to
# scan more tombstones anyway. These thresholds may also be adjusted at runtime
# using the StorageService mbean.
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000

# Granularity of the collation index of rows within a partition.
# Increase if your rows are large, or if you have a very large
# number of rows per partition. The competing goals are these:
#   1) a smaller granularity means more index entries are generated
#      and looking up rows within the partition by collation column
#      is faster
#   2) but, Cassandra will keep the collation index in memory for hot
#      rows (as part of the key cache), so a larger granularity means
#      you can cache more hot rows
column_index_size_in_kb: 64

# Log WARN on any batch size exceeding this value. 5kb per batch by default.
# Caution should be taken on increasing the size of this threshold as it can lead to node instability.
batch_size_warn_threshold_in_kb: 5

# Number of simultaneous compactions to allow, NOT including
# validation "compactions" for anti-entropy repair. Simultaneous
# compactions can help preserve read performance in a mixed read/write
# workload, by mitigating the tendency of small sstables to accumulate
# during a single long running compaction. The default is usually
# fine and if you experience problems with compaction running too
# slowly or too fast, you should look at
# compaction_throughput_mb_per_sec first.
#
# concurrent_compactors defaults to the smaller of (number of disks,
# number of cores), with a minimum of 2 and a maximum of 8.
#
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#concurrent_compactors: 1

# Throttles compaction to the given total throughput across the entire
# system. The faster you insert data, the faster you need to compact in
# order to keep the sstable count down, but in general, setting this to
# 16 to 32 times the rate you are inserting data is more than sufficient.
# Setting this to 0 disables throttling. Note that this accounts for all types
# of compaction, including validation compaction.
compaction_throughput_mb_per_sec: 16

# When compacting, the replacement sstable(s) can be opened before they
# are completely written, and used in place of the prior sstables for
# any range that has been written. This helps to smoothly transfer reads
# between the sstables, reducing page cache churn and keeping hot rows hot
sstable_preemptive_open_interval_in_mb: 50

# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Cassandra does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the network connection and degrading rpc performance.
# When unset, the default is 200 Mbps or 25 MB/s.
# stream_throughput_outbound_megabits_per_sec: 200

# Throttles all streaming file transfer between the datacenters,
# this setting allows users to throttle inter dc stream throughput in addition
# to throttling all network stream traffic as configured with
# stream_throughput_outbound_megabits_per_sec
# inter_dc_stream_throughput_outbound_megabits_per_sec:

# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 10000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 5000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 1000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 10000

# Enable operation timeout information exchange between nodes to accurately
# measure request timeouts. If disabled, replicas will assume that requests
# were forwarded to them instantly by the coordinator, which means that
# under overload conditions we will waste that much extra time processing
# already-timed-out requests.
#
# Warning: before enabling this property make sure that ntp is installed
# and the times are synchronized between the nodes.
cross_node_timeout: false

# Enable socket timeout for streaming operation.
# When a timeout occurs during streaming, streaming is retried from the start
# of the current file. This _can_ involve re-streaming an important amount of
# data, so you should avoid setting the value too low.
# Default value is 0, which never times out streams.
# streaming_socket_timeout_in_ms: 0

# phi value that must be reached for a host to be marked down.
# most users should never need to adjust this.
# phi_convict_threshold: 8

# endpoint_snitch -- Set this to a class that implements
# IEndpointSnitch. The snitch has two functions:
# - it teaches Cassandra enough about your network topology to route
#   requests efficiently
# - it allows Cassandra to spread replicas around your cluster to avoid
#   correlated failures. It does this by grouping machines into
#   "datacenters" and "racks." Cassandra will do its best not to have
#   more than one replica on the same "rack" (which may not actually
#   be a physical location)
#
# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER,
# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS
# ARE PLACED.
#
# Out of the box, Cassandra provides
#  - SimpleSnitch:
#    Treats Strategy order as proximity. This can improve cache
#    locality when disabling read repair. Only appropriate for
#    single-datacenter deployments.
#  - GossipingPropertyFileSnitch
#    This should be your go-to snitch for production use. The rack
#    and datacenter for the local node are defined in
#    cassandra-rackdc.properties and propagated to other nodes via
#    gossip. If cassandra-topology.properties exists, it is used as a
#    fallback, allowing migration from the PropertyFileSnitch.
#  - PropertyFileSnitch:
#    Proximity is determined by rack and data center, which are
#    explicitly configured in cassandra-topology.properties.
#  - Ec2Snitch:
#    Appropriate for EC2 deployments in a single Region. Loads Region
#    and Availability Zone information from the EC2 API. The Region is
#    treated as the datacenter, and the Availability Zone as the rack.
#    Only private IPs are used, so this will not work across multiple
#    Regions.
#  - Ec2MultiRegionSnitch:
#    Uses public IPs as broadcast_address to allow cross-region
#    connectivity. (Thus, you should set seed addresses to the public
#    IP as well.) You will need to open the storage_port or
#    ssl_storage_port on the public IP firewall. (For intra-Region
#    traffic, Cassandra will switch to the private IP after
#    establishing a connection.)
#  - RackInferringSnitch:
#    Proximity is determined by rack and data center, which are
#    assumed to correspond to the 3rd and 2nd octet of each node's IP
#    address, respectively. Unless this happens to match your
#    deployment conventions, this is best used as an example of
#    writing a custom Snitch class and is provided in that spirit.
#
# You can use a custom Snitch by setting this to the full class name
# of the snitch, which will be assumed to be on your classpath.
endpoint_snitch: SimpleSnitch

# controls how often to perform the more expensive part of host score
# calculation
dynamic_snitch_update_interval_in_ms: 100
# controls how often to reset all host scores, allowing a bad host to
# possibly recover
dynamic_snitch_reset_interval_in_ms: 600000
# if set greater than zero and read_repair_chance is < 1.0, this will allow
# 'pinning' of replicas to hosts in order to increase cache capacity.
# The badness threshold will control how much worse the pinned host has to be
# before the dynamic snitch will prefer other replicas over it. This is
# expressed as a double which represents a percentage. Thus, a value of
# 0.2 means Cassandra would continue to prefer the static snitch values
# until the pinned host was 20% worse than the fastest.
dynamic_snitch_badness_threshold: 0.1

# request_scheduler -- Set this to a class that implements
# RequestScheduler, which will schedule incoming client requests
# according to the specific policy. This is useful for multi-tenancy
# with a single Cassandra cluster.
# NOTE: This is specifically for requests from the client and does
# not affect inter node communication.
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
# client requests to a node with a separate queue for each
# request_scheduler_id. The scheduler is further customized by
# request_scheduler_options as described below.
request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Scheduler Options vary based on the type of scheduler
# NoScheduler - Has no options
# RoundRobin
#  - throttle_limit -- The throttle_limit is the number of in-flight
#                      requests per client. Requests beyond
#                      that limit are queued up until
#                      running requests can complete.
#                      The value of 80 here is twice the number of
#                      concurrent_reads + concurrent_writes.
#  - default_weight -- default_weight is optional and allows for
#                      overriding the default which is 1.
#  - weights -- Weights are optional and will default to 1 or the
#               overridden default_weight. The weight translates into how
#               many requests are handled during each turn of the
#               RoundRobin, based on the scheduler id.
#
# request_scheduler_options:
#    throttle_limit: 80
#    default_weight: 5
#    weights:
#      Keyspace1: 1
#      Keyspace2: 5

# request_scheduler_id -- An identifier based on which to perform
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace

# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are : all, none, dc, rack
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when generating
# the keystore and truststore. For instructions on generating these files, see:
# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    # require_client_auth: false

# enable or disable client/server encryption.
client_encryption_options:
    enabled: false
    keystore: conf/.keystore
    keystore_password: cassandra
    # require_client_auth: false
    # Set trustore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]

# internode_compression controls whether traffic between nodes is
# compressed.
# can be:  all  - all traffic is compressed
#          dc   - traffic between different datacenters is compressed
#          none - nothing is compressed.
internode_compression: all

# Enable or disable tcp_nodelay for inter-dc communication.
# Disabling it will result in larger (but fewer) network packets being sent,
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
inter_dc_tcp_nodelay: false
examples/cassandra/image/kubernetes-cassandra.jar (new binary file, not shown)
examples/cassandra/image/run.sh (new file, 19 lines)
@@ -0,0 +1,19 @@
#!/bin/bash

# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Substitute this pod's IP address for the %%ip%% placeholder in the
# Cassandra config, then start Cassandra in the foreground.
perl -pi -e "s/%%ip%%/$(hostname -I)/g" /etc/cassandra/cassandra.yaml
export CLASSPATH=/kubernetes-cassandra.jar
cassandra -f
io/k8s/cassandra/KubernetesSeedProvider.java (new file, 94 lines)
@@ -0,0 +1,94 @@
package io.k8s.cassandra;

import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.net.URL;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.map.ObjectMapper;
import org.apache.cassandra.locator.SeedProvider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KubernetesSeedProvider implements SeedProvider {
    // Minimal shape of the v1beta1 Endpoints resource; other fields are ignored.
    @JsonIgnoreProperties(ignoreUnknown = true)
    static class Endpoints {
        public String[] endpoints;
    }

    private static String getEnvOrDefault(String var, String def) {
        String val = System.getenv(var);
        if (val == null) {
            val = def;
        }
        return val;
    }

    private static final Logger logger = LoggerFactory.getLogger(KubernetesSeedProvider.class);

    private List<InetAddress> defaultSeeds;

    public KubernetesSeedProvider(Map<String, String> params) {
        // Taken from SimpleSeedProvider.java
        // These are used as a fallback, if we get nothing from k8s.
        String[] hosts = params.get("seeds").split(",", -1);
        defaultSeeds = new ArrayList<InetAddress>(hosts.length);
        for (String host : hosts) {
            try {
                defaultSeeds.add(InetAddress.getByName(host.trim()));
            } catch (UnknownHostException ex) {
                // not fatal... DatabaseDescriptor will bark if there end up being zero seeds.
                logger.warn("Seed provider couldn't lookup host " + host);
            }
        }
    }

    public List<InetAddress> getSeeds() {
        List<InetAddress> list = new ArrayList<InetAddress>();
        String protocol = getEnvOrDefault("KUBERNETES_API_PROTOCOL", "http");
        String hostName = getEnvOrDefault("KUBERNETES_RO_SERVICE_HOST", "localhost");
        String hostPort = getEnvOrDefault("KUBERNETES_RO_SERVICE_PORT", "8080");

        String host = protocol + "://" + hostName + ":" + hostPort;
        String serviceName = getEnvOrDefault("CASSANDRA_SERVICE", "cassandra");
        String path = "/api/v1beta1/endpoints/";
        try {
            // Ask the read-only API for the current endpoints of the cassandra service.
            URL url = new URL(host + path + serviceName);
            ObjectMapper mapper = new ObjectMapper();
            Endpoints endpoints = mapper.readValue(url, Endpoints.class);
            if (endpoints != null) {
                for (String endpoint : endpoints.endpoints) {
                    // Endpoints are "ip:port" strings; only the IP matters for gossip seeds.
                    String[] parts = endpoint.split(":");
                    list.add(InetAddress.getByName(parts[0]));
                }
            }
        } catch (IOException ex) {
            logger.warn("Request to kubernetes apiserver failed");
        }
        if (list.size() == 0) {
            // If we got nothing, we might be the first instance; in that case
            // fall back on the seeds that were passed in cassandra.yaml.
            return defaultSeeds;
        }
        return list;
    }

    // Simple main to test the implementation
    public static void main(String[] args) {
        SeedProvider provider = new KubernetesSeedProvider(new HashMap<String, String>());
        System.out.println(provider.getSeeds());
    }
}