Spelling fixes inspired by github.com/client9/misspell
@@ -47,7 +47,7 @@ By default, there are only three pods (hence replication controllers) for this
When the replication controller is created, the corresponding container starts and runs an entrypoint script that installs the mysql system tables, sets up users, and builds up the list of servers that is used with the galera parameter ```wsrep_cluster_address```. This is the list of running nodes that galera uses to select a node to obtain SST (State Snapshot Transfer) from.
-Note: Kubernetes best practice is to pre-create the services for each controller; the configuration file for each node contains both its service and its replication controller, so processing it results in both running for that node. It is important that initally pxc-node1.yaml be processed first, and that no pxc-nodeN service exist without a corresponding replication controller: a node listed in ```wsrep_cluster_address``` without a backing galera node leaves nothing to obtain SST from, which causes the joining node to shut itself down and its container to exit (and soon be relaunched, repeatedly).
+Note: Kubernetes best practice is to pre-create the services for each controller; the configuration file for each node contains both its service and its replication controller, so processing it results in both running for that node. It is important that initially pxc-node1.yaml be processed first, and that no pxc-nodeN service exist without a corresponding replication controller: a node listed in ```wsrep_cluster_address``` without a backing galera node leaves nothing to obtain SST from, which causes the joining node to shut itself down and its container to exit (and soon be relaunched, repeatedly).
First, create the overall cluster service that will be used to connect to the cluster:
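The ordering constraint from the note above can be made concrete. The sketch below uses hypothetical file names for the cluster service and per-node configs; it brings each node's service and replication controller up together, starting with node 1, so no entry in ```wsrep_cluster_address``` ever lacks a backing galera node:

```sh
# Hypothetical file names; the example's actual layout may differ.
kubectl create -f pxc-cluster-service.yaml   # overall cluster service, created first

# Node 1 must be processed first: it bootstraps the cluster and serves
# as the SST donor for the nodes that join afterwards.
kubectl create -f pxc-node1.yaml             # service + replication controller for node 1
kubectl create -f pxc-node2.yaml
kubectl create -f pxc-node3.yaml
```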
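The entrypoint's list-building step described earlier in this hunk can also be sketched. This is a minimal illustration, not the actual script: the NUM_NODES variable and the pxc-nodeN hostnames are assumptions.

```sh
#!/bin/bash
# Minimal sketch: assemble a wsrep_cluster_address list from per-node
# service names. NUM_NODES and the naming scheme are hypothetical.
NUM_NODES="${NUM_NODES:-3}"

ADDRS=""
for i in $(seq 1 "$NUM_NODES"); do
    # Each pxc-nodeN service name is resolvable inside the cluster.
    ADDRS="${ADDRS:+${ADDRS},}pxc-node${i}"
done

# Galera expects a comma-separated host list after the gcomm:// scheme.
echo "gcomm://${ADDRS}"   # e.g. gcomm://pxc-node1,pxc-node2,pxc-node3
```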
@@ -37,7 +37,7 @@ if [ "$1" = 'mysqld' ]; then
DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"
# only check if system tables not created from mysql_install_db and permissions
-# set with initial SQL script before proceding to build SQL script
+# set with initial SQL script before proceeding to build SQL script
if [ ! -d "$DATADIR/mysql" ]; then
# fail if user didn't supply a root password
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" ]; then
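Assembled from the fragments in this hunk, the first-run guard works roughly as below. This is a condensed sketch, not the full entrypoint; the mysql_install_db call, the SQL bootstrap, and the final exec are elided into comments.

```sh
#!/bin/bash
set -e

if [ "$1" = 'mysqld' ]; then
    # Ask mysqld itself where its data directory is rather than hard-coding it.
    DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"

    if [ ! -d "$DATADIR/mysql" ]; then
        # No system tables yet, so this is a first run: a root password
        # policy must be supplied before mysql_install_db runs.
        if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" ]; then
            echo >&2 'error: database is uninitialized and MYSQL_ROOT_PASSWORD is not set'
            exit 1
        fi
        # ... mysql_install_db and the initial SQL script would run here ...
    fi
fi
# ... the real entrypoint would finish with: exec "$@"
```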
@@ -95,7 +95,7 @@ EOSQL
chown -R mysql:mysql "$DATADIR"
fi
-# if cluster is turned on, then procede to build cluster setting strings
+# if cluster is turned on, then proceed to build cluster setting strings
# that will be interpolated into the config files
if [ -n "$GALERA_CLUSTER" ]; then
# this is the State Snapshot Transfer user (SST, initial dump or xtrabackup user)
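The cluster branch shown above builds the wsrep setting strings and interpolates them into the config files. A minimal sketch of that interpolation, assuming hypothetical @TOKEN@ placeholders in a template file and sed-based substitution (the real script's mechanism may differ):

```sh
# Hypothetical template path and placeholders; defaults are illustrative only.
if [ -n "$GALERA_CLUSTER" ]; then
    WSREP_CLUSTER_ADDRESS="gcomm://pxc-node1,pxc-node2,pxc-node3"  # built as sketched earlier
    WSREP_SST_USER="${WSREP_SST_USER:-sst}"              # assumed SST user name
    WSREP_SST_PASSWORD="${WSREP_SST_PASSWORD:-changeme}" # assumed; set via env in practice

    # Substitute the computed values into the galera config file.
    sed -e "s|@WSREP_CLUSTER_ADDRESS@|${WSREP_CLUSTER_ADDRESS}|" \
        -e "s|@WSREP_SST_USER@|${WSREP_SST_USER}|" \
        -e "s|@WSREP_SST_PASSWORD@|${WSREP_SST_PASSWORD}|" \
        /etc/mysql/conf.d/cluster.cnf.tpl > /etc/mysql/conf.d/cluster.cnf
fi
```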