Mirror of https://github.com/k3s-io/kubernetes.git, synced 2025-09-17 15:13:08 +00:00.
Replaced (or defined first instance of) GKE/GCE with Google Container Engine/Google Compute Engine
Fixes #10354
```diff
@@ -38,7 +38,7 @@ these configurations the secure port is typically set to 6443.
 
 A firewall rule is typically configured to allow external HTTPS access to port 443.
 
-The above are defaults and reflect how Kubernetes is deployed to GCE using
+The above are defaults and reflect how Kubernetes is deployed to Google Compute Engine using
 kube-up.sh. Other cloud providers may vary.
 
 ## Use Cases vs IP:Ports
```
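The defaults the hunk above describes can be reproduced by hand. A hedged sketch of a matching GCE firewall rule follows; the rule name and target tag are assumptions, not values taken from kube-up.sh:

```shell
# Sketch of a firewall rule allowing external HTTPS access to port 443,
# as described above. Rule name and tag are hypothetical placeholders.
RULE=$(cat <<'EOF'
gcloud compute firewall-rules create kubernetes-master-https \
  --allow tcp:443 \
  --target-tags kubernetes-master
EOF
)
echo "$RULE"
```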
```diff
@@ -56,7 +56,7 @@ If you want more control over the upgrading process, you may use the following w
 `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'`.
 If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
 be created automatically when you create a new VM instance (if you're using a cloud provider that supports
-node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
+node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node.md).
 
 
 []()
```
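The `kubectl update nodes --patch` invocation quoted in the hunk above is v1-era syntax. On current clusters the same merge patch looks roughly like this; the node name is a placeholder:

```shell
# The JSON merge patch from the doc above; $NODENAME is a placeholder.
NODENAME=node-1
PATCH='{"spec": {"unschedulable": false}}'
# Against a live cluster this would mark the node schedulable again
# (modern syntax; the doc shows the older `kubectl update` form):
# kubectl patch node "$NODENAME" -p "$PATCH"
echo "$PATCH"
```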
```diff
@@ -4,7 +4,7 @@ There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com)
 
 * [Single Node Cluster](coreos/coreos_single_node_cluster.md)
 * [Multi-node Cluster](coreos/coreos_multinode_cluster.md)
-* [Setup Multi-node Cluster on GCE in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
+* [Setup Multi-node Cluster on Google Compute Engine in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
 * [Multi-node cluster using cloud-config and Weave on Vagrant](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
 * [Multi-node cluster using cloud-config and Vagrant](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
 * [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility)
```
````diff
@@ -59,9 +59,9 @@ aws ec2 run-instances \
     --user-data file://node.yaml
 ```
 
-### GCE
+### Google Compute Engine (GCE)
 
-*Attention:* Replace ```<gce_image_id>``` below for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
+*Attention:* Replace ```<gce_image_id>``` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
 
 #### Provision the Master
 
````
````diff
@@ -28,9 +28,9 @@ aws ec2 run-instances \
     --user-data file://standalone.yaml
 ```
 
-### GCE
+### Google Compute Engine (GCE)
 
-*Attention:* Replace ```<gce_image_id>``` bellow for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
+*Attention:* Replace ```<gce_image_id>``` bellow for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
 
 ```
 gcloud compute instances create standalone \
````
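The `gcloud` command in the hunk above is cut off mid-invocation by the extraction. A hedged sketch of what it plausibly expands to follows; the flags mirror the AWS cloud-config example above but are assumptions, not the doc's actual continuation:

```shell
# Hypothetical expansion of the truncated command above. Flags and the
# standalone.yaml file name are assumptions based on the AWS example.
CMD=$(cat <<'EOF'
gcloud compute instances create standalone \
  --image-project coreos-cloud \
  --image <gce_image_id> \
  --metadata-from-file user-data=standalone.yaml
EOF
)
echo "$CMD"
```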
```diff
@@ -23,7 +23,7 @@ The example below creates a Kubernetes cluster with 4 worker node Virtual Machin
 
 ### Before you start
 
-If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) for hosted cluster installation and management.
+If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
 
 If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
 
```
```diff
@@ -1,6 +1,6 @@
 # Cluster Level Logging with Elasticsearch and Kibana
 
-On the GCE platform the default cluster level logging support targets
+On the Google Compute Engine (GCE) platform the default cluster level logging support targets
 [Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging.md) getting
 started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
 alternative to Google Cloud Logging.
```
```diff
@@ -19,7 +19,7 @@ Here is the same information in a picture which shows how the pods might be plac
 
 
 
-This diagram shows four nodes created on a GCE cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
+This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
 [cluster DNS service](/docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
 
 To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](/examples/blog-logging/counter-pod.yaml):
```
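The counter pod referenced above is a small synthetic log generator; the file itself is not reproduced in this diff. A minimal manifest of roughly that shape, where the image and the exact loop are assumptions:

```shell
# A minimal synthetic log generator of roughly the shape counter-pod.yaml
# describes; image and command are assumptions, not the actual file.
MANIFEST=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
EOF
)
echo "$MANIFEST"
```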
````diff
@@ -167,7 +167,7 @@ Here is some sample output:
 
 
 
-We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a GCE project called `myproject`. Only logs for the date 2015-06-11 are fetched.
+We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.
 
 
 ```
````
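The command that follows the paragraph above is not included in this diff. A hedged sketch of fetching such logs locally, where the bucket layout is an assumption rather than the doc's actual command:

```shell
# Hypothetical gsutil fetch matching the description above; the bucket
# name and path layout are assumptions.
CMD=$(cat <<'EOF'
gsutil -m cp -r 'gs://<your-logs-bucket>/2015-06-11/**counter**' ./counter-logs/
EOF
)
echo "$CMD"
```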
```diff
@@ -15,7 +15,7 @@ Getting started on Rackspace
 
 * Supported Version: v0.18.1
 
-In general, the dev-build-and-up.sh workflow for Rackspace is the similar to GCE. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
+In general, the dev-build-and-up.sh workflow for Rackspace is the similar to Google Compute Engine. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
 
 These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification.
 
```
````diff
@@ -34,7 +34,7 @@ $ export CONTAINER_RUNTIME=rkt
 $ hack/local-up-cluster.sh
 ```
 
-### CoreOS cluster on GCE
+### CoreOS cluster on Google Compute Engine (GCE)
 
 To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
 ```shell
````
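The shell block that follows the sentence above is cut off by the extraction. A hedged sketch of the kind of environment a rkt-on-GCE kube-up run plausibly expects; the variable names here are assumptions, not confirmed by this diff:

```shell
# Hypothetical environment for bringing up a rkt CoreOS cluster on GCE;
# variable names are assumptions, not taken from the truncated doc block.
ENV=$(cat <<'EOF'
export KUBE_OS_DISTRIBUTION=coreos
export KUBE_CONTAINER_RUNTIME=rkt
cluster/kube-up.sh
EOF
)
echo "$ENV"
```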
```diff
@@ -17,7 +17,7 @@ Private registries may require keys to read images from them.
 Credentials can be provided in several ways:
   - Using Google Container Registry
     - Per-cluster
-    - automatically configured on GCE/GKE
+    - automatically configured on Google Compute Engine or Google Container Engine
     - all pods can read the project's private registry
   - Configuring Nodes to Authenticate to a Private Registry
     - all pods can read any configured private registries
```
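Beyond the node-level options listed above, credentials can also be attached per pod. A hedged sketch using imagePullSecrets (modern kubectl syntax; registry, secret, and image names are placeholders):

```shell
# Hedged sketch of wiring a private-registry credential to a pod.
# On a live cluster the secret would be created first, e.g.:
# kubectl create secret docker-registry myregistrykey \
#   --docker-server=registry.example.com --docker-username=... --docker-password=...
SPEC=$(cat <<'EOF'
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1
  imagePullSecrets:
  - name: myregistrykey
EOF
)
echo "$SPEC"
```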
```diff
@@ -85,7 +85,7 @@ as an introduction to various technologies and serves as a jumping-off point.
 If some techniques become vastly preferable to others, we might detail them more
 here.
 
-### Google Compute Engine
+### Google Compute Engine (GCE)
 
 For the Google Compute Engine cluster configuration scripts, we use [advanced
 routing](https://developers.google.com/compute/docs/networking#routing) to
```
```diff
@@ -69,7 +69,7 @@ Kubernetes from the node.
 ## Node Management
 
 Unlike [Pods](pods.md) and [Services](services.md), a Node is not inherently
-created by Kubernetes: it is either created from cloud providers like GCE,
+created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
 or from your physical or virtual machines. What this means is that when
 Kubernetes creates a node, it only creates a representation for the node.
 After creation, Kubernetes will check whether the node is valid or not.
```
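The "representation" the hunk above describes is just an API object. A minimal sketch of such a Node manifest, where the name and label values are illustrative placeholders:

```shell
# Minimal sketch of the node representation described above;
# the IP-style name and label are placeholders.
NODE=$(cat <<'EOF'
apiVersion: v1
kind: Node
metadata:
  name: 10.240.79.157
  labels:
    name: my-first-k8s-node
EOF
)
echo "$NODE"
```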
```diff
@@ -385,7 +385,7 @@ This makes some kinds of firewalling impossible.
 LoadBalancers only support TCP, not UDP.
 
 The `Type` field is designed as nested functionality - each level adds to the
-previous. This is not strictly required on all cloud providers (e.g. GCE does
+previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
 not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
 but the current API requires it.
 
```
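The nested `Type` behavior described above can be illustrated with a LoadBalancer Service: requesting `LoadBalancer` implies a `NodePort` allocation in the API even on providers that do not strictly need one. A sketch, with names and ports as placeholders:

```shell
# Sketch of a LoadBalancer Service illustrating the nested Type field;
# names and ports are placeholders. The nodePort is left for the system
# to allocate, but the API records one regardless of provider.
SVC=$(cat <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
)
echo "$SVC"
```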