Updating docs/ to v1

parent 640c40da65
commit 6e83eb2636
@@ -32,22 +32,22 @@ To retrieve the address of the Kubernetes master cluster and the proxy URLs for
 $ kubectl cluster-info
 
 Kubernetes master is running at https://104.197.5.247
-elasticsearch-logging is running at https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging
-kibana-logging is running at https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/kibana-logging
-kube-dns is running at https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/kube-dns
-grafana is running at https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/monitoring-grafana
-heapster is running at https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/monitoring-heapster
+elasticsearch-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging
+kibana-logging is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kibana-logging
+kube-dns is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/kube-dns
+grafana is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-grafana
+heapster is running at https://104.197.5.247/api/v1/proxy/namespaces/default/services/monitoring-heapster
 ```
 
-**Note**: Currently, adding trailing forward slashes '.../' to proxy URLs is required, for example: `https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/`.
+**Note**: Currently, adding a trailing forward slash '/' to proxy URLs is required, for example: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`.
 
 #### Manually constructing proxy URLs
 
 As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
 
 `http://`*`kubernetes_master_address`*`/`*`service_path`*`/`*`service_name`*`/`*`service_endpoint-suffix-parameter`*
 
 ##### Examples
 
-* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy`
+* To access the Elasticsearch service endpoint `_search?q=user:kimchy`, you would use: `http://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?q=user:kimchy`
 
-* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true`
+* To access the Elasticsearch cluster health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_cluster/health?pretty=true`
 ```
 {
   "cluster_name" : "kubernetes_logging",
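To make the pattern concrete, the snippet below composes a proxy URL from its parts; the master address and credentials are the illustrative placeholders used throughout this doc, not a real cluster.

```sh
# Sketch: compose a proxy URL from its parts. The master address and
# credentials are the doc's placeholder values, not a real cluster.
MASTER=https://104.197.5.247
SVC_PATH=api/v1/proxy/namespaces/default/services
SVC=elasticsearch-logging

# Note the trailing slash between the service name and the suffix.
curl -k -u admin:4mty0Vl9nNFfwLJz "$MASTER/$SVC_PATH/$SVC/_cluster/health?pretty=true"
```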
@@ -79,10 +79,10 @@ Run `curl` commands using the following formats:
 For example, to get status information about the Elasticsearch logging service, you would run one of the following commands:
 
 * Basic authentication:
-`$ curl -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/`
+`$ curl -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`
 
 * Token authentication:
-`$ curl -k -H "Authorization: Bearer cvIH2BYtNS85QG0KSLHgl5Oba4YNQOrx" https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/`
+`$ curl -k -H "Authorization: Bearer cvIH2BYtNS85QG0KSLHgl5Oba4YNQOrx" https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/`
 
 The result for either authentication method:
 ```
@@ -104,7 +104,7 @@ The result for either authentication method:
 #### Using web browsers
 In a web browser, navigate to the proxy URL and then enter your username and password when prompted. For example, you would copy and paste the following proxy URL into the address bar of your browser:
 ```
-https://104.197.5.247/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/
+https://104.197.5.247/api/v1/proxy/namespaces/default/services/elasticsearch-logging/
 ```
 
 ## <a name="redirect"></a>Requesting redirects
@@ -123,7 +123,7 @@ To request a redirect and then verify the address that gets returned, let's run
 
 To request a redirect for the Elasticsearch service, we can run the following `curl` command:
 ```
-user@oban:~$ curl -L -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1beta3/redirect/namespaces/default/services/elasticsearch-logging/
+user@oban:~$ curl -L -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/
 {
   "status" : 200,
   "name" : "Skin",
@@ -140,9 +140,9 @@ user@oban:~$ curl -L -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1be
 ```
 **Note**: We use the `-L` flag in the request so that `curl` follows the returned redirect address and retrieves the Elasticsearch service information.
 
-If we examine the actual redirect header (instead run the same `curl` command with `-v`), we see that the request to `https://104.197.5.247/api/v1beta3/redirect/namespaces/default/services/elasticsearch-logging/` is redirected to `http://10.244.2.7:9200`:
+If we examine the actual redirect header (by running the same `curl` command with `-v`), we see that the request to `https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/` is redirected to `http://10.244.2.7:9200`:
 ```
-user@oban:~$ curl -v -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1beta3/redirect/namespaces/default/services/elasticsearch-logging/
+user@oban:~$ curl -v -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/
 * About to connect() to 104.197.5.247 port 443 (#0)
 *   Trying 104.197.5.247...
 * connected
@@ -168,7 +168,7 @@ user@oban:~$ curl -v -k -u admin:4mty0Vl9nNFfwLJz https://104.197.5.247/api/v1be
 * issuer: CN=104.197.5.247@1425498024
 * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
 * Server auth using Basic with user 'admin'
-> GET /api/v1beta3/redirect/namespaces/default/services/elasticsearch-logging HTTP/1.1
+> GET /api/v1/redirect/namespaces/default/services/elasticsearch-logging HTTP/1.1
 > Authorization: Basic YWRtaW46M210eTBWbDluTkZmd0xKeg==
 > User-Agent: curl/7.26.0
 > Host: 104.197.5.247
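If only the redirect target is of interest, `curl` can print it without the full verbose trace; a sketch using the same placeholder address and credentials as above:

```sh
# Sketch: print only the redirect target returned by the redirect API.
curl -k -s -o /dev/null -w '%{redirect_url}\n' \
  -u admin:4mty0Vl9nNFfwLJz \
  https://104.197.5.247/api/v1/redirect/namespaces/default/services/elasticsearch-logging/
# Expected output, per the trace above: http://10.244.2.7:9200
```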

@@ -218,7 +218,7 @@ spec:
 ...and we POST that to the server (as JSON). Then let's say we want to *add* a container to this Pod.
 
 ```yaml
-PATCH /api/v1beta3/namespaces/default/pods/pod-name
+PATCH /api/v1/namespaces/default/pods/pod-name
 spec:
   containers:
     - name: log-tailer
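A sketch of how such a patch could be sent over HTTP, assuming the bearer-token setup from the earlier examples; the `log-tailer` image name is hypothetical, and the content type selects a strategic merge so the new container is appended to the list rather than replacing it:

```sh
# Sketch: send the patch above as JSON. Server address and token are the
# doc's placeholders; the image name "log-tailer" is hypothetical.
curl -k -X PATCH \
  -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"spec": {"containers": [{"name": "log-tailer", "image": "log-tailer"}]}}' \
  https://10.240.122.184:443/api/v1/namespaces/default/pods/pod-name
```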
@@ -461,9 +461,9 @@ The status object is encoded as JSON and provided as the body of the response.
 
 **Example:**
 ```
-$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1beta3/namespaces/default/pods/grafana
+$ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana
 
-> GET /api/v1beta3/namespaces/default/pods/grafana HTTP/1.1
+> GET /api/v1/namespaces/default/pods/grafana HTTP/1.1
 > User-Agent: curl/7.26.0
 > Host: 10.240.122.184
 > Accept: */*
@@ -477,13 +477,13 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/
 <
 {
   "kind": "Status",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {},
   "status": "Failure",
   "message": "pods \"grafana\" not found",
   "reason": "NotFound",
   "details": {
     "id": "grafana",
     "name": "grafana",
     "kind": "pods"
   },
   "code": 404
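Because every failure body is a `Status` object, a script can branch on its machine-readable fields; a minimal sketch, assuming `jq` is available:

```sh
# Sketch: branch on the "reason" field of a Status response.
# Address and token are the placeholders from the example above.
reason=$(curl -k -s \
  -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" \
  https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana \
  | jq -r .reason)

if [ "$reason" = "NotFound" ]; then
  echo "pod does not exist; safe to create it"
fi
```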
@@ -511,7 +511,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * The kind attribute of the unauthorized resource (on some operations may differ from the requested resource).
-    * `id string`
+    * `name string`
       * The identifier of the unauthorized resource.
   * HTTP status code: `401 StatusUnauthorized`
 * `Forbidden`
@@ -519,7 +519,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * The kind attribute of the forbidden resource (on some operations may differ from the requested resource).
-    * `id string`
+    * `name string`
       * The identifier of the forbidden resource.
   * HTTP status code: `403 StatusForbidden`
 * `NotFound`
@@ -527,7 +527,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * The kind attribute of the missing resource (on some operations may differ from the requested resource).
-    * `id string`
+    * `name string`
       * The identifier of the missing resource.
   * HTTP status code: `404 StatusNotFound`
 * `AlreadyExists`
@@ -535,7 +535,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * The kind attribute of the conflicting resource.
-    * `id string`
+    * `name string`
       * The identifier of the conflicting resource.
   * HTTP status code: `409 StatusConflict`
 * `Conflict`
@@ -546,7 +546,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * the kind attribute of the invalid resource
-    * `id string`
+    * `name string`
       * the identifier of the invalid resource
     * `causes`
       * One or more `StatusCause` entries indicating the data in the provided resource that was invalid. The `reason`, `message`, and `field` attributes will be set.
@@ -560,7 +560,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * Details (optional):
     * `kind string`
       * The kind attribute of the resource being acted on.
-    * `id string`
+    * `name string`
       * The operation that is being attempted.
   * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default.
   * HTTP status code: `504 StatusServerTimeout`
@@ -30,7 +30,7 @@ A request has 4 attributes that can be considered for authorization:
 - whether the request is readonly (GETs are readonly)
 - what resource is being accessed
   - applies only to the API endpoints, such as
-    `/api/v1beta3/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
+    `/api/v1/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
     resource is the empty string.
 - the namespace of the object being accessed, or the empty string if the
   endpoint does not support namespaced objects.
@@ -18,8 +18,8 @@ There is a sequence of steps to upgrade to a new API version.
 
 ### Turn on or off an API version for your cluster
 
-Specific API versions can be turned on or off by passing --runtime-config=api/<version> flag while bringing up the server. For example: to turn off v1beta3 API, pass --runtime-config=api/v1beta3=false.
-runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all api versions except v1beta3, pass --runtime-config=api/all=false,api/v1beta3=true.
+Specific API versions can be turned on or off by passing the --runtime-config=api/<version> flag while bringing up the server. For example, to turn off the v1 API, pass --runtime-config=api/v1=false.
+runtime-config also supports 2 special keys: api/all and api/legacy, to control all and legacy APIs respectively. For example, to turn off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true.
 
 ### Switching your cluster's storage API version
 
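For example, an apiserver invocation restricting the cluster to the v1 API alone might look like this sketch (all other apiserver flags elided):

```sh
# Sketch: serve only the v1 API; api/all=false disables everything
# else, then api/v1=true re-enables v1 (other flags elided).
kube-apiserver --runtime-config=api/all=false,api/v1=true
```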
@@ -31,7 +31,7 @@ You can use the kube-version-change utility to convert config files between diff
 
 ```
 $ hack/build-go.sh cmd/kube-version-change
-$ _output/local/go/bin/kube-version-change -i myPod.v1beta1.yaml -o myPod.v1beta3.yaml
+$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
 ```
 
 ### Maintenance on a Node
@@ -44,7 +44,7 @@ pods are replicated, upgrades can be done without special coordination.
 
 If you want more control over the upgrading process, you may use the following workflow:
 1. Mark the node to be rebooted as unschedulable:
-   `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": true}}'`.
+   `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'`.
    This keeps new pods from landing on the node while you are trying to get them off.
 1. Get the pods off the machine, via any of the following strategies:
    1. wait for finite-duration pods to complete
@@ -53,7 +53,7 @@ If you want more control over the upgrading process, you may use the following w
    1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
 1. Work on the node
 1. Make the node schedulable again:
-   `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": false}}'`.
+   `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'`.
    If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
    be created automatically when you create a new VM instance (if you're using a cloud provider that supports
    node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
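Putting the workflow together, a minimal maintenance wrapper might look like the sketch below; it assumes the pods drain via one of the strategies listed above.

```sh
# Sketch: cordon a node, do the work, then uncordon it.
NODENAME=10.1.2.3   # placeholder node name

kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
# ...evict or wait out the pods using one of the strategies above,
# then perform the maintenance or reboot...
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
```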
@@ -348,7 +348,7 @@ No other variables are defined.
 Notice the `$(var)` syntax.
 
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   name: expansion-pod
@@ -366,7 +366,7 @@ spec:
 #### In a pod: building a URL using downward API
 
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   name: expansion-pod
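Filling out the truncated manifest, a complete pod exercising `$(var)` expansion might look like this sketch; the image and variable values are illustrative only.

```sh
# Sketch: a self-contained pod demonstrating $(var) expansion in a
# command. The image and the MESSAGE value are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: expansion-pod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello world"
  restartPolicy: Never
EOF
```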
@@ -231,7 +231,7 @@ OpenShift creates a Namespace in Kubernetes
 
 ```
 {
-  "apiVersion":"v1beta3",
+  "apiVersion":"v1",
   "kind": "Namespace",
   "metadata": {
     "name": "development",
@@ -256,7 +256,7 @@ User deletes the Namespace in Kubernetes, and Namespace now has following state:
 
 ```
 {
-  "apiVersion":"v1beta3",
+  "apiVersion":"v1",
   "kind": "Namespace",
   "metadata": {
     "name": "development",
@@ -281,7 +281,7 @@ removing *kubernetes* from the list of finalizers:
 
 ```
 {
-  "apiVersion":"v1beta3",
+  "apiVersion":"v1",
   "kind": "Namespace",
   "metadata": {
     "name": "development",
@@ -309,7 +309,7 @@ This results in the following state:
 
 ```
 {
-  "apiVersion":"v1beta3",
+  "apiVersion":"v1",
   "kind": "Namespace",
   "metadata": {
     "name": "development",
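The finalizer edit described above goes through the namespace's `finalize` subresource; a hedged sketch of that call, using the doc's placeholder address and credentials, where `namespace.json` holds the Namespace object above with *kubernetes* removed from `finalizers`:

```sh
# Sketch: submit the updated Namespace (finalizers emptied) through the
# finalize subresource. Address and credentials are placeholders.
curl -k -u admin:4mty0Vl9nNFfwLJz \
  -X PUT -H "Content-Type: application/json" \
  -d @namespace.json \
  https://104.197.5.247/api/v1/namespaces/development/finalize
```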
@@ -98,7 +98,7 @@ An administrator provisions storage by posting PVs to the API. Various way to a
 POST:
 
 kind: PersistentVolume
-apiVersion: v1beta3
+apiVersion: v1
 metadata:
   name: pv0001
 spec:
@@ -128,7 +128,7 @@ The user must be within a namespace to create PVCs.
 
 POST:
 kind: PersistentVolumeClaim
-apiVersion: v1beta3
+apiVersion: v1
 metadata:
   name: myclaim-1
 spec:
@@ -179,7 +179,7 @@ The claim holder owns the claim and its data for as long as the claim exists. T
 POST:
 
 kind: Pod
-apiVersion: v1beta3
+apiVersion: v1
 metadata:
   name: mypod
 spec:
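Once both objects are posted, the binder pairs a claim with a suitable volume; a sketch of the round trip, where `pv.yaml` and `pvc.yaml` hold the two manifests above:

```sh
# Sketch: create the PV and PVC shown above, then watch them bind.
# pv.yaml and pvc.yaml are files holding the two manifests.
kubectl create -f pv.yaml
kubectl create -f pvc.yaml

# STATUS moves from Available to Bound once the claim is matched.
kubectl get persistentvolumes
kubectl get persistentvolumeclaims
```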
@@ -394,7 +394,7 @@ To create a pod that uses an ssh key stored as a secret, we first need to create
 ```json
 {
   "kind": "Secret",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "ssh-key-secret"
   },
@@ -414,7 +414,7 @@ Now we can create a pod which references the secret with the ssh key and consume
 ```json
 {
   "kind": "Pod",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "secret-test-pod",
     "labels": {
@@ -464,12 +464,12 @@ The secrets:
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "List",
   "items":
   [{
     "kind": "Secret",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "prod-db-secret"
     },
@@ -480,7 +480,7 @@ The secrets:
   },
   {
     "kind": "Secret",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "test-db-secret"
     },
@@ -496,12 +496,12 @@ The pods:
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "List",
   "items":
   [{
     "kind": "Pod",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "prod-db-client-pod",
       "labels": {
@@ -534,7 +534,7 @@ The pods:
   },
   {
     "kind": "Pod",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "test-db-client-pod",
       "labels": {
@@ -11,7 +11,7 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will
 
 Create a replication controller with the following config:
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: ReplicationController
 metadata:
   name: flakecontroller
@@ -25,7 +25,7 @@ field is the version of the API schema that the `fieldPath` is written in terms
 This is an example of a pod that consumes its name and namespace via the downward API:
 
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   name: dapi-test-pod
@@ -147,7 +147,7 @@ Create a pod manifest: `pod.json`
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "Pod",
   "metadata": {
     "name": "hello",
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: Service
 metadata:
   labels:
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: ReplicationController
 metadata:
   labels:
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: ReplicationController
 metadata:
   labels:
@@ -27,7 +27,7 @@ spec:
         name: grafana
         env:
         - name: INFLUXDB_EXTERNAL_URL
-          value: /api/v1beta3/proxy/namespaces/default/services/monitoring-grafana/db/
+          value: /api/v1/proxy/namespaces/default/services/monitoring-grafana/db/
         - name: INFLUXDB_HOST
           value: monitoring-influxdb
         - name: INFLUXDB_PORT
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: Service
 metadata:
   labels:
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: ReplicationController
 metadata:
   name: elasticsearch-logging-v1
@@ -34,4 +34,4 @@ spec:
           mountPath: /data
       volumes:
       - name: es-persistent-storage
-        emptyDir: {}
\ No newline at end of file
+        emptyDir: {}
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: Service
 metadata:
   name: elasticsearch-logging
@@ -1,4 +1,4 @@
-apiVersion: v1beta3
+apiVersion: v1
 kind: ReplicationController
 metadata:
   name: kibana-logging-v1
@@ -1,5 +1,5 @@
 
-apiVersion: v1beta3
+apiVersion: v1
 kind: Service
 metadata:
   name: kibana-logging
@@ -37,7 +37,7 @@ write_files:
     permissions: '0755'
     owner: root
     content: |
-      apiVersion: v1beta3
+      apiVersion: v1
       kind: Pod
       metadata:
         name: fluentd-elasticsearch
@@ -109,7 +109,7 @@ sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0
 On the master you created above, create a file named ```node.yaml``` and make its contents:
 
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: Node
 metadata:
   name: ${NODE_IP}
@@ -175,7 +175,7 @@ iptables -nvL
 cat << EOF > apache.json
 {
   "kind": "Pod",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "fedoraapache",
     "labels": {
@@ -98,7 +98,7 @@ done
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "Node",
   "metadata": {
     "name": "fed-node",
@@ -96,7 +96,7 @@ We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
 
 ```
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "Pod",
   "metadata": {
     "name": "hello",
@@ -4,7 +4,7 @@ All objects in the Kubernetes REST API are unambiguously identified by a Name an
 For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
 
 ## Names
-Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are the used to refer to an object in a resource URL, such as `/api/v1beta3/pods/some-name`. By convention, the names of Kubernetes resources should be up to maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restructions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
+Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
 
 ## UIDs
 UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
@@ -11,7 +11,7 @@ https://github.com/GoogleCloudPlatform/kubernetes/issues/1755
 apiVersion: v1
 clusters:
 - cluster:
-    api-version: v1beta3
+    api-version: v1
     server: http://cow.org:8080
   name: cow-cluster
 - cluster:
@@ -138,7 +138,7 @@ users:
 #### Commands for the example file
 ```
 $kubectl config set preferences.colors true
-$kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1beta3
+$kubectl config set-cluster cow-cluster --server=http://cow.org:8080 --api-version=v1
 $kubectl config set-cluster horse-cluster --server=https://horse.org:4443 --certificate-authority=path/to/my/cafile
 $kubectl config set-cluster pig-cluster --server=https://pig.org:443 --insecure-skip-tls-verify=true
 $kubectl config set-credentials blue-user --token=blue-token
@@ -32,7 +32,7 @@ $ kubectl get replicationcontroller web
 $ kubectl get -o json pod web-pod-13je7
 
 // Return only the phase value of the specified pod.
-$ kubectl get -o template web-pod-13je7 --template={{.status.phase}} --api-version=v1beta3
+$ kubectl get -o template web-pod-13je7 --template={{.status.phase}} --api-version=v1
 
 // List all replication controllers and services together in ps output format.
 $ kubectl get rc,services
@@ -87,6 +87,6 @@ $ kubectl get rc/web service/frontend pods/web-pod-13je7
 ### SEE ALSO
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-05-29 22:39:51.164275749 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.511279339 +0000 UTC
 
 []()
@@ -33,7 +33,7 @@ kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-pref
 $ kubectl proxy --port=8011 --www=./local/www/
 
 // Run a proxy to kubernetes apiserver, changing the api prefix to k8s-api
-// This makes e.g. the pods api available at localhost:8011/k8s-api/v1beta3/pods/
+// This makes e.g. the pods api available at localhost:8011/k8s-api/v1/pods/
 $ kubectl proxy --api-prefix=/k8s-api
 ```
 
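With that proxy running, the apiserver is reachable on localhost under the new prefix; a sketch, assuming the proxy listens on its default port 8001:

```sh
# Sketch: list pods through the re-prefixed proxy started above
# (assumes the default proxy port 8001).
curl http://localhost:8001/k8s-api/v1/pods/
```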
@@ -79,6 +79,6 @@ $ kubectl proxy --api-prefix=/k8s-api
 ### SEE ALSO
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-06-04 01:34:05.594492715 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.513099878 +0000 UTC
 
 []()
@@ -25,7 +25,7 @@ $ kubectl run nginx --image=nginx --replicas=5
 $ kubectl run nginx --image=nginx --dry-run
 
 // Start a single instance of nginx, but overload the spec of the replication controller with a partial set of values parsed from JSON.
-$ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1beta3", "spec": { ... } }'
+$ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'
 ```
 
 ### Options
@@ -78,6 +78,6 @@ $ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1beta3", "spec"
 ### SEE ALSO
 * [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
 
-###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.189857293 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-06-05 21:08:36.513272503 +0000 UTC
 
 []()
@@ -17,8 +17,8 @@ As of Kubernetes 0.11, when you create a cluster the console output reports the
 a URL for a [Kibana](http://www.elasticsearch.org/overview/kibana/) dashboard viewer for the logs that have been ingested
 into Elasticsearch.
 ```
-Elasticsearch is running at https://104.197.10.10/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging
-Kibana is running at https://104.197.10.10/api/v1beta3/proxy/namespaces/default/services/kibana-logging
+Elasticsearch is running at https://104.197.10.10/api/v1/proxy/namespaces/default/services/elasticsearch-logging
+Kibana is running at https://104.197.10.10/api/v1/proxy/namespaces/default/services/kibana-logging
 ```
 Visiting the Kibana dashboard URL in a browser should give a display like this:
 
@@ -27,7 +27,7 @@ To learn how to query, filter etc. using Kibana you might like to look at this [
 
 You can check to see if any logs are being ingested into Elasticsearch by curling against its URL. You will need to provide the username and password that was generated when your cluster was created. This can be found in the `kubernetes_auth` file for your cluster.
 ```
-$ curl -k -u admin:Drt3KdRGnoQL6TQM https://130.211.152.93/api/v1beta3/proxy/namespaces/default/services/elasticsearch-logging/_search?size=10
+$ curl -k -u admin:Drt3KdRGnoQL6TQM https://130.211.152.93/api/v1/proxy/namespaces/default/services/elasticsearch-logging/_search?size=10
 ```
 A [demonstration](../examples/logging-demo/README.md) of two synthetic logging sources can be used
 to check that logging is working correctly.
@@ -178,7 +178,7 @@ $ kubectl get replicationcontroller web
 $ kubectl get \-o json pod web\-pod\-13je7
 
 // Return only the phase value of the specified pod.
-$ kubectl get \-o template web\-pod\-13je7 \-\-template=\{\{.status.phase\}\} \-\-api\-version=v1beta3
+$ kubectl get \-o template web\-pod\-13je7 \-\-template=\{\{.status.phase\}\} \-\-api\-version=v1
 
 // List all replication controllers and services together in ps output format.
 $ kubectl get rc,services
@@ -166,7 +166,7 @@ The above lets you 'curl localhost:8001/custom/api/v1/pods'
 $ kubectl proxy \-\-port=8011 \-\-www=./local/www/
 
 // Run a proxy to kubernetes apiserver, changing the api prefix to k8s\-api
-// This makes e.g. the pods api available at localhost:8011/k8s\-api/v1beta3/pods/
+// This makes e.g. the pods api available at localhost:8011/k8s\-api/v1/pods/
 $ kubectl proxy \-\-api\-prefix=/k8s\-api
 
 .fi
@@ -185,7 +185,7 @@ $ kubectl run nginx \-\-image=nginx \-\-replicas=5
 $ kubectl run nginx \-\-image=nginx \-\-dry\-run
 
 // Start a single instance of nginx, but overload the spec of the replication controller with a partial set of values parsed from JSON.
-$ kubectl run nginx \-\-image=nginx \-\-overrides='\{ "apiVersion": "v1beta3", "spec": \{ ... \} \}'
+$ kubectl run nginx \-\-image=nginx \-\-overrides='\{ "apiVersion": "v1", "spec": \{ ... \} \}'
 
 .fi
 .RE
@@ -63,7 +63,7 @@ For example, if you try to create a node from the following content:
 ```json
 {
   "kind": "Node",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "10.240.79.157",
     "labels": {
@@ -132,7 +132,7 @@ node, but will not affect any existing pods on the node. This is useful as a
 preparatory step before a node reboot, etc. For example, to mark a node
 unschedulable, run this command:
 ```
-kubectl update nodes 10.1.2.3 --patch='{"apiVersion": "v1beta3", "unschedulable": true}'
+kubectl update nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
 ```
 
 
@@ -54,7 +54,7 @@ Kubectl supports creating, updating, and viewing quotas
 $ kubectl namespace myspace
 $ cat <<EOF > quota.json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "ResourceQuota",
   "metadata": {
     "name": "quota",
@@ -14,7 +14,7 @@ To make use of secrets requires at least two steps:
 This is an example of a simple secret, in json format:
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "Secret",
   "metadata" : {
     "name": "mysecret",
@@ -34,7 +34,7 @@ The values are arbitrary data, encoded using base64.
 This is an example of a pod that uses a secret, in json format:
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "Pod",
   "metadata": {
     "name": "mypod",
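As the context above notes, the values under `data` are base64-encoded; a quick sketch of encoding a value for the manifest:

```sh
# Sketch: base64-encode a secret value for the "data" field.
echo -n "value-1" | base64
# -> dmFsdWUtMQ==  (decode with: echo dmFsdWUtMQ== | base64 -d)
```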
@@ -116,7 +116,7 @@ To create a pod that uses an ssh key stored as a secret, we first need to create
 ```json
 {
   "kind": "Secret",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "ssh-key-secret"
   },
@@ -137,7 +137,7 @@ consumes it in a volume:
 ```json
 {
   "kind": "Pod",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "secret-test-pod",
     "labels": {
@@ -187,12 +187,12 @@ The secrets:
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "List",
   "items":
   [{
     "kind": "Secret",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "prod-db-secret"
     },
@@ -203,7 +203,7 @@ The secrets:
   },
   {
     "kind": "Secret",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "test-db-secret"
     },
@@ -219,12 +219,12 @@ The pods:
 
 ```json
 {
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "kind": "List",
   "items":
   [{
     "kind": "Pod",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "prod-db-client-pod",
       "labels": {
@@ -257,7 +257,7 @@ The pods:
   },
   {
     "kind": "Pod",
-    "apiVersion": "v1beta3",
+    "apiVersion": "v1",
     "metadata": {
       "name": "test-db-client-pod",
       "labels": {
@@ -41,7 +41,7 @@ port 9376 and carry a label "app=MyApp".
 ```json
 {
   "kind": "Service",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "my-service"
   },
@@ -62,7 +62,7 @@ port 9376 and carry a label "app=MyApp".
 
 This specification will create a new `Service` object named "my-service" which
 targets TCP port 9376 on any `Pod` with the "app=MyApp" label. This `Service`
-will also be assigned an IP address (sometimes called the "portal IP"), which
+will also be assigned an IP address (sometimes called the "cluster IP"), which
 is used by the service proxies (see below). The `Service`'s selector will be
 evaluated continuously and the results will be posted in an `Endpoints` object
 also named "my-service".
@@ -96,7 +96,7 @@ In any of these scenarios you can define a service without a selector:
 ```json
 {
   "kind": "Service",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "my-service"
   },
@@ -118,7 +118,7 @@ created. You can manually map the service to your own specific endpoints:
 ```json
 {
   "kind": "Endpoints",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "my-service"
   },
@@ -174,7 +174,7 @@ disambiguated. For example:
 ```json
 {
   "kind": "Service",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "my-service"
   },
@@ -203,14 +203,13 @@ disambiguated. For example:
 ## Choosing your own IP address
 
-A user can specify their own cluster IP address as part of a `Service` creation
-request. To do this, set the `spec.clusterIP` field (called `portalIP` in
-v1beta3 and earlier APIs). For example, if they already have an existing DNS
-entry that they wish to replace, or legacy systems that are configured for a
-specific IP address and difficult to re-configure. The IP address that a user
-chooses must be a valid IP address and within the service_cluster_ip_range CIDR
-range that is specified by flag to the API server. If the IP address value is
-invalid, the apiserver returns a 422 HTTP status code to indicate that the
-value is invalid.
+A user can specify their own cluster IP address as part of a `Service` creation
+request. To do this, set the `spec.clusterIP` field. For example, if they
+already have an existing DNS entry that they wish to replace, or legacy systems
+that are configured for a specific IP address and difficult to re-configure.
+The IP address that a user chooses must be a valid IP address and within the
+service_cluster_ip_range CIDR range that is specified by flag to the API server.
+If the IP address value is invalid, the apiserver returns a 422 HTTP status code
+to indicate that the value is invalid.
 
 ### Why not use round-robin DNS?
 
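As a hedged sketch, here is the earlier `my-service` example with a pinned cluster IP; the address shown is illustrative and must fall inside your cluster's service CIDR:

```sh
# Sketch: the my-service example with an explicit clusterIP.
# 10.0.171.200 is illustrative; it must lie in service_cluster_ip_range.
kubectl create -f - <<'EOF'
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "my-service"
  },
  "spec": {
    "clusterIP": "10.0.171.200",
    "selector": {
      "app": "MyApp"
    },
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9376
      }
    ]
  }
}
EOF
```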
@@ -280,7 +279,7 @@ records.
 
 Sometimes you don't need or want load-balancing and a single service IP. In
 this case, you can create "headless" services by specifying `"None"` for the
-cluster IP (`spec.clusterIP` or `spec.portalIP` in v1beta3 and earlier APIs).
+cluster IP (`spec.clusterIP`).
 For such `Service`s, a cluster IP is not allocated and service-specific
 environment variables for `Pod`s are not created. DNS is configured to return
 multiple A records (addresses) for the `Service` name, which point directly to
@@ -304,7 +303,7 @@ address. Kubernetes supports two ways of doing this: `NodePort`s and
 Every `Service` has a `Type` field which defines how the `Service` can be
 accessed. Valid values for this field are:
 
-* `ClusterIP`: use a cluster-internal IP (portal) only - this is the default
+* `ClusterIP`: use a cluster-internal IP only - this is the default
 * `NodePort`: use a cluster IP, but also expose the service on a port on each
   node of the cluster (the same port on each)
 * `LoadBalancer`: use a ClusterIP and a NodePort, but also ask the cloud
@@ -336,7 +335,7 @@ information about the provisioned balancer will be published in the `Service`'s
 ```json
 {
   "kind": "Service",
-  "apiVersion": "v1beta3",
+  "apiVersion": "v1",
   "metadata": {
     "name": "my-service"
   },
@@ -352,7 +351,7 @@ information about the provisioned balancer will be published in the `Service`'s
         "nodePort": 30061
       }
     ],
-    "portalIP": "10.0.171.239",
+    "clusterIP": "10.0.171.239",
    "type": "LoadBalancer"
  },
  "status": {
@@ -3,7 +3,7 @@ This document describes the current state of Volumes in kubernetes. Familiarity
 
 A Volume is a directory, possibly with some data in it, which is accessible to a Container. Kubernetes Volumes are similar to but not the same as [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/).
 
-A Pod specifies which Volumes its containers need in its [spec.volumes](http://kubernetes.io/third_party/swagger-ui/#!/v1beta3/createPod) property.
+A Pod specifies which Volumes its containers need in its [spec.volumes](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod) property.
 
 A process in a Container sees a filesystem view composed from two sources: a single Docker image and zero or more Volumes. A [Docker image](https://docs.docker.com/userguide/dockerimages/) is at the root of the file hierarchy. Any Volumes are mounted at points on the Docker image; Volumes do not mount on other Volumes and do not have hard links to other Volumes. Each container in the Pod independently specifies where on its image to mount each Volume. This is specified in each container's VolumeMounts property.
@@ -61,7 +61,7 @@ gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
 
 #### GCE PD Example configuration:
 ```yaml
-apiVersion: v1beta3
+apiVersion: v1
 kind: Pod
 metadata:
   name: testpd
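Completing the truncated example, a full GCE PD pod might look like this sketch; the container image is illustrative, and the disk name matches the `gcloud` command above.

```sh
# Sketch: a complete GCE PD pod. The image is illustrative; pdName
# matches the my-data-disk created by the gcloud command above.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: testpd
spec:
  containers:
  - name: testpd-container
    image: nginx
    volumeMounts:
    - name: testpd-volume
      mountPath: /test-pd
  volumes:
  - name: testpd-volume
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
EOF
```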
@@ -48,7 +48,7 @@ $ kubectl get replicationcontroller web
 $ kubectl get -o json pod web-pod-13je7
 
 // Return only the phase value of the specified pod.
-$ kubectl get -o template web-pod-13je7 --template={{.status.phase}} --api-version=v1beta3
+$ kubectl get -o template web-pod-13je7 --template={{.status.phase}} --api-version=v1
 
 // List all replication controllers and services together in ps output format.
 $ kubectl get rc,services
@@ -32,7 +32,7 @@ const (
 $ kubectl proxy --port=8011 --www=./local/www/
 
 // Run a proxy to kubernetes apiserver, changing the api prefix to k8s-api
-// This makes e.g. the pods api available at localhost:8011/k8s-api/v1beta3/pods/
+// This makes e.g. the pods api available at localhost:8011/k8s-api/v1/pods/
 $ kubectl proxy --api-prefix=/k8s-api`
 )
 
@@ -40,7 +40,7 @@ $ kubectl run nginx --image=nginx --replicas=5
 $ kubectl run nginx --image=nginx --dry-run
 
 // Start a single instance of nginx, but overload the spec of the replication controller with a partial set of values parsed from JSON.
-$ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1beta3", "spec": { ... } }'`
+$ kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'`
 )
 
 func NewCmdRun(f *cmdutil.Factory, out io.Writer) *cobra.Command {