NickrenREN
df4e71ffe1
auto generated code
2017-08-26 13:03:30 +08:00
Lars Lehtonen
e8a3424c1c
Fix swallowed error in fc
2017-08-25 21:34:40 -07:00
Lars Lehtonen
5abbd0f96f
Fix swallowed error in tests for flocker package
2017-08-25 21:28:30 -07:00
Lars Lehtonen
15dccee12d
Fix swallowed errors in tests of gce_pd
2017-08-25 21:16:34 -07:00
Lars Lehtonen
f482646372
Fix swallowed error in tests of host_path package
2017-08-25 21:06:34 -07:00
NickrenREN
194418986f
Add local storage to downwards API
2017-08-26 11:58:21 +08:00
Lars Lehtonen
47ee11437d
Fix swallowed error in iscsi package
2017-08-25 20:57:58 -07:00
Kubernetes Submit Queue
c112dbcab4
Merge pull request #51341 from mtaufen/fix-port-disable
...
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
fix ReadOnlyPort defaulting, CAdvisorPort documentation
The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.
The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.
The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.
fixes #51345
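To illustrate the fix pattern, here is a minimal hedged sketch: default only when the field is unset (nil) rather than when it is zero, so an explicit 0 survives defaulting. Type, field, and port values are illustrative, not the actual kubelet code.
```go
package main

import "fmt"

// Hypothetical stand-in for the real KubeletConfiguration type.
type KubeletConfiguration struct {
	// A pointer distinguishes "unset" (nil) from an explicit 0 ("disable").
	ReadOnlyPort *int32
}

func setDefaults(cfg *KubeletConfiguration) {
	// Default only when the field is unset, not when it is zero.
	if cfg.ReadOnlyPort == nil {
		def := int32(10255) // assumed default port, for illustration only
		cfg.ReadOnlyPort = &def
	}
}

func main() {
	zero := int32(0)
	cfg := &KubeletConfiguration{ReadOnlyPort: &zero}
	setDefaults(cfg)
	fmt.Println(*cfg.ReadOnlyPort) // 0: an explicit disable is preserved
}
```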
2017-08-25 20:43:40 -07:00
Kubernetes Submit Queue
21aa8cacc5
Merge pull request #50730 from andrewsykim/49836
...
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
Cloud Controller Manager now sets Node.Spec.ProviderID
**What this PR does / why we need it**:
Cloud Controller Manager now sets `Node.Spec.ProviderID`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/49836 .
**Special notes for your reviewer**:
* As part of an effort to move cloud controller manager into beta https://github.com/kubernetes/kubernetes/issues/48690 .
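Roughly, the controller fills in the field during node initialization. A hedged sketch with stand-in types (the real code goes through the cloud provider's Instances interface and the Kubernetes API types):
```go
package main

import "fmt"

// Hypothetical stand-ins for the real API types.
type NodeSpec struct{ ProviderID string }
type Node struct {
	Name string
	Spec NodeSpec
}

// initializeNode fills in Spec.ProviderID using the conventional
// <provider-name>://<instance-id> format if it is not already set.
func initializeNode(n *Node, providerName, instanceID string) {
	if n.Spec.ProviderID == "" {
		n.Spec.ProviderID = fmt.Sprintf("%s://%s", providerName, instanceID)
	}
}

func main() {
	n := &Node{Name: "node-1"}
	initializeNode(n, "openstack", "abc-123")
	fmt.Println(n.Spec.ProviderID) // openstack://abc-123
}
```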
2017-08-25 20:43:37 -07:00
Kubernetes Submit Queue
9e69d5b8f0
Merge pull request #50595 from k82cn/k8s_50594
...
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
NodeConditionPredicates should return NodeOutOfDisk error.
**What this PR does / why we need it**:
In https://github.com/kubernetes/kubernetes/pull/49932 , I moved the node condition check into a predicate, but it returned an incorrect error.
We also need to add more cases to `TestNodeShouldRunDaemonPod`, which is a key function of DaemonSet.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50594
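The gist of the fix, as a hedged sketch (names are illustrative, not the exact scheduler predicate API): return the dedicated out-of-disk error so callers can tell failure reasons apart.
```go
package main

import (
	"errors"
	"fmt"
)

// Dedicated error so callers such as the DaemonSet controller can
// distinguish this failure from other predicate failures.
var ErrNodeOutOfDisk = errors.New("node(s) were out of disk space")

type NodeCondition struct {
	Type   string // e.g. "OutOfDisk"
	Status string // "True", "False", or "Unknown"
}

func CheckNodeCondition(conds []NodeCondition) (bool, error) {
	for _, c := range conds {
		if c.Type == "OutOfDisk" && c.Status == "True" {
			return false, ErrNodeOutOfDisk // not a generic error
		}
	}
	return true, nil
}

func main() {
	ok, err := CheckNodeCondition([]NodeCondition{{Type: "OutOfDisk", Status: "True"}})
	fmt.Println(ok, err) // false node(s) were out of disk space
}
```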
**Release note**:
```release-note
None
```
2017-08-25 20:43:35 -07:00
Kubernetes Submit Queue
21ca7f7eec
Merge pull request #47782 from php-coder/fix_reverse_in_tests
...
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
Fix benchmarks to really test reverse order of the keys
**What this PR does / why we need it**:
This PR modifies the code to do what the comment says: reverse the order of the keys (see the sketch after the links below). It also fixes logic that was wrong and didn't allow stale data.
**Special notes for your reviewer**:
This change resolves the following review comments:
- https://github.com/kubernetes/kubernetes/pull/41939#discussion_r117068104
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122763350
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122764000
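A hedged sketch of what "reverse order of the keys" means here (key format is illustrative):
```go
package main

import "fmt"

// reverseKeys generates n keys in descending order, which is what the
// benchmark comment promised but the old code did not deliver.
func reverseKeys(n int) []string {
	keys := make([]string, 0, n)
	for i := n - 1; i >= 0; i-- {
		keys = append(keys, fmt.Sprintf("key-%04d", i))
	}
	return keys
}

func main() {
	fmt.Println(reverseKeys(3)) // [key-0002 key-0001 key-0000]
}
```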
**Release note**:
```release-note
NONE
```
PTAL @smarterclayton
2017-08-25 20:43:33 -07:00
Kubernetes Submit Queue
b65f3cc8dd
Merge pull request #49850 from m1093782566/service-session-timeout
...
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
Parameterize `stickyMaxAgeMinutes` for service in API
**What this PR does / why we need it**:
Currently, `stickyMaxAgeMinutes` for a session-affinity service is hard-coded to 180 minutes. There is a TODO comment, see
https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L205
I think the session sticky max time varies from service to service, and users may not be aware of it since it's hard-coded in every proxier.go - iptables, userspace, and winuserspace.
Once we parameterize it in API, users can set/get the values for their different services.
Perhaps, we can introduce a new field `api.ClientIPAffinityConfig` in `api.ServiceSpec`.
There is an initial discussion about it in sig-network group. See,
https://groups.google.com/forum/#!topic/kubernetes-sig-network/i-LkeHrjs80
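A sketch of what that proposal could look like, using the field name suggested above (hypothetical shape; the final API may differ):
```go
package main

import "fmt"

// Hypothetical shape of the proposed field.
type ClientIPAffinityConfig struct {
	// TimeoutSeconds replaces the hard-coded 180-minute sticky window.
	TimeoutSeconds int32
}

type ServiceSpec struct {
	SessionAffinity        string // "ClientIP" or "None"
	ClientIPAffinityConfig *ClientIPAffinityConfig
}

func main() {
	spec := ServiceSpec{
		SessionAffinity:        "ClientIP",
		ClientIPAffinityConfig: &ClientIPAffinityConfig{TimeoutSeconds: 3 * 60 * 60},
	}
	fmt.Println(spec.ClientIPAffinityConfig.TimeoutSeconds) // 10800: the old 180 min, now settable
}
```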
**Which issue this PR fixes**:
fixes #49831
**Special notes for your reviewer**:
**Release note**:
```release-note
Parameterize session affinity timeout seconds in service API for Client IP based session affinity.
```
2017-08-25 20:43:30 -07:00
Lars Lehtonen
f77dd0ebac
Fix swallowed errors in tests of photon_pd package
2017-08-25 20:37:05 -07:00
Lars Lehtonen
4c86c891de
Fix swallowed errors in portworx tests
2017-08-25 20:28:24 -07:00
Lars Lehtonen
7ee91d6d54
Fix swallowed error in scaleio package tests
...
Test log improvement
2017-08-25 20:18:44 -07:00
Lars Lehtonen
d564d06967
Fix swallowed error in storageos
2017-08-25 20:05:06 -07:00
Klaus Ma
717cee04df
Refresh equivalence cache if node condition changed.
2017-08-26 11:03:57 +08:00
Lars Lehtonen
14ad3b566e
Fix swallowed errs in volume util package
2017-08-25 19:56:47 -07:00
Kubernetes Submit Queue
85f963310e
Merge pull request #50504 from yastij/fcVolume-handleFailedMount
...
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)
handle failed mounts for fc volumes
**What this PR does / why we need it**: handles failed mounts for fc
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50502
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
2017-08-25 19:40:38 -07:00
Kubernetes Submit Queue
c170f5bfa2
Merge pull request #50972 from FengyunPan/external-loadBalancerIP
...
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)
Support for specifying external LoadBalancerIP on openstack
1. Support ServiceAnnotationLoadBalancerFloatingNetworkId for LB v1
2. Support for specifying external LoadBalancerIP on openstack
Add ServiceAnnotationLoadBalancerInternal annotation to distinguish
between internal LoadBalancerIP and external LoadBalancerIP.
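Conceptually, the provider branches on that annotation. A hedged sketch (the constant name is from this PR; the annotation key string and lookup logic are illustrative):
```go
package main

import "fmt"

// ServiceAnnotationLoadBalancerInternal is the constant named in the PR;
// the concrete key string here is a placeholder, not necessarily the real one.
const ServiceAnnotationLoadBalancerInternal = "service.beta.kubernetes.io/openstack-internal-load-balancer"

// isInternalLB reports whether the Service asked for an internal
// LoadBalancerIP rather than an external floating IP.
func isInternalLB(annotations map[string]string) bool {
	return annotations[ServiceAnnotationLoadBalancerInternal] == "true"
}

func main() {
	svc := map[string]string{ServiceAnnotationLoadBalancerInternal: "true"}
	fmt.Println(isInternalLB(svc)) // true: skip allocating an external floating IP
}
```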
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #50851
**Release note**:
```release-note
NONE
```
2017-08-25 19:40:36 -07:00
Kubernetes Submit Queue
9d7bdb6a5f
Merge pull request #51274 from yastij/clean-cinder-detachLogError
...
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)
Clean cinder detachlogerror
**What this PR does / why we need it**:
**Which issue this PR fixes** : fixes #50441
**Special notes for your reviewer**:
**Release note**:
```release-note
```
2017-08-25 19:40:32 -07:00
Kubernetes Submit Queue
e923f2ba5c
Merge pull request #50819 from NickrenREN/remove-overlay-scheduler
...
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)
Changing scheduling part to manage one single local storage resource
**What this PR does / why we need it**:
Finally decided to manage a single local storage resource
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #50818
**Special notes for your reviewer**:
Since we finally decided to manage a single local storage resource, remove the overlay-related code in the scheduling part and rename scratch to ephemeral storage.
**Release note**:
```release-note
Changing scheduling part of the alpha feature 'LocalStorageCapacityIsolation' to manage one single local ephemeral storage resource
```
/assign @jingxu97
cc @ddysher
2017-08-25 19:40:29 -07:00
Cao Shufeng
ab09186737
Fix forbidden message format
...
Before this change:
# kubectl get pods --as=tom
Error from server (Forbidden): pods "" is forbidden: User "tom" cannot list pods in the namespace "default".
After this change:
# kubectl get pods --as=tom
Error from server (Forbidden): pods is forbidden: User "tom" cannot list pods in the namespace "default".
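A hedged sketch of the message change: quote the resource name only when it is non-empty, so list requests print `pods` instead of `pods ""` (function shape is illustrative, not the actual apiserver helper):
```go
package main

import "fmt"

func forbiddenMessage(resource, name, user, verb, namespace string) string {
	ref := resource
	if name != "" {
		// Only named requests (e.g. get on a single pod) include the name.
		ref = fmt.Sprintf("%s %q", resource, name)
	}
	return fmt.Sprintf("%s is forbidden: User %q cannot %s %s in the namespace %q",
		ref, user, verb, resource, namespace)
}

func main() {
	fmt.Println(forbiddenMessage("pods", "", "tom", "list", "default"))
	// pods is forbidden: User "tom" cannot list pods in the namespace "default"
}
```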
2017-08-26 10:27:35 +08:00
Kubernetes Submit Queue
65da3ce246
Merge pull request #51235 from cheftako/aggregator
...
Automatic merge from submit-queue
Fixed gke auth update wait condition.
Look up whoami on GKE using `gcloud auth list`.
Make sure we do not run the test on any cluster older than 1.7.
**What this PR does / why we need it**: Fixes issue with aggregator e2e test on GKE
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50945
**Special notes for your reviewer**: There is a TODO, follow up will be provided when the immediate problem is resolved.
**Release note**: ```release-note
NONE
```
2017-08-25 18:52:46 -07:00
Josh Horwitz
6ec738a8ec
generated files
2017-08-25 21:39:17 -04:00
Josh Horwitz
fab6044a31
Allow PSPs to specify a whitelist of allowed paths for host volumes
...
removed files that were not supposed to be there
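A hedged sketch of the whitelist check named in the subject above (type and matching rule are illustrative of the feature, not the exact PSP validation code):
```go
package main

import (
	"fmt"
	"strings"
)

// AllowedHostPath is a hypothetical stand-in for the PSP whitelist entry.
type AllowedHostPath struct{ PathPrefix string }

// hostPathAllowed reports whether a hostPath volume falls under one of the
// allowed path prefixes; an empty whitelist means no restriction.
func hostPathAllowed(path string, allowed []AllowedHostPath) bool {
	if len(allowed) == 0 {
		return true
	}
	for _, a := range allowed {
		if strings.HasPrefix(path, a.PathPrefix) {
			return true
		}
	}
	return false
}

func main() {
	allowed := []AllowedHostPath{{PathPrefix: "/var/log"}}
	fmt.Println(hostPathAllowed("/var/log/pods", allowed)) // true
	fmt.Println(hostPathAllowed("/etc", allowed))          // false
}
```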
2017-08-25 21:35:55 -04:00
Klaus Ma
18dc690c7c
Moved node condition filter into predicates.
2017-08-26 09:08:07 +08:00
Nick Sardo
0d55f6bdcb
Revert "GCE: Consume new config value for network project id"
2017-08-25 18:02:10 -07:00
Christoph Blecker
4f1106c8a5
Modify rsync filter to retain output across runs
2017-08-25 16:58:59 -07:00
Christoph Blecker
4d63e13c9f
Add option to copy output when running the build shell
2017-08-25 16:58:06 -07:00
Christoph Blecker
68232c328f
Create kube::util::create-fake-git-tree function
2017-08-25 16:51:51 -07:00
andrewsykim
fd86022714
add deprecation warnings for auto detecting cloud providers
2017-08-25 19:30:52 -04:00
Lantao Liu
a0ae7fac2b
Implement stop function in streaming server.
...
Signed-off-by: Lantao Liu <lantaol@google.com>
2017-08-25 23:24:30 +00:00
Lucas Käldström
b1fb289f0f
kubeadm: Move the uploadconfig phase right in the beginning of cluster init
2017-08-26 01:50:24 +03:00
Josh Horwitz
cf75c49883
change godoc based on feedback from luxas
2017-08-25 18:04:10 -04:00
Eric Chiang
9caff69027
generated: update API resources
...
./hack/update-codegen.sh
./hack/update-codecgen.sh
./hack/update-generated-protobuf.sh
2017-08-25 14:40:02 -07:00
Lantao Liu
b760fa95e5
Fix NoNewPrivs and also allow remote runtime to provide the support.
2017-08-25 21:32:33 +00:00
Matt Moyer
c7996a7236
kubeadm: add extra group info to token list.
...
This adds an `EXTRA GROUPS` column to the output of `kubeadm token list`. This displays any extra `system:bootstrappers:*` groups that are specified in the token's `auth-extra-groups` key.
2017-08-25 16:26:28 -05:00
Matt Moyer
77f1b72a40
kubeadm: add --groups flag for kubeadm token create.
...
This adds support for creating a bootstrap token that authenticates with extra `system:bootstrappers:*` groups in addition to `system:bootstrappers`.
2017-08-25 16:26:20 -05:00
Matt Moyer
fd5c00b38d
Implement auth-extra-groups in bootstrap token authenticator.
...
This implements support for the new `auth-extra-groups` key in `bootstrap.kubernetes.io/token` secrets by adding extra groups to the user info returned for valid bootstrap tokens.
2017-08-25 16:23:01 -05:00
NickrenREN
9730e3d302
Change validation for local ephemeral storage
2017-08-26 05:15:16 +08:00
NickrenREN
27901ad5df
Change eviction policy to manage one single local storage resource
2017-08-26 05:14:49 +08:00
Tim Hockin
e73b27cbce
Add debugging to the codegen process
2017-08-25 14:08:42 -07:00
Matt Moyer
33e02aff60
Add extra group constants and validation to pkg/bootstrap/api.
...
This adds constants and validation for a new `auth-extra-groups` key on `bootstrap.kubernetes.io/token` secrets. This key allows a bootstrap token to authenticate to extra groups in addition to the `system:bootstrappers` group.
Extra groups are always applied in addition to the `system:bootstrappers` group, must begin with a `system:bootstrappers:` prefix, are limited in length, and are limited to a restricted set of characters (alphanumeric, colons, and dashes without a trailing colon/dash).
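A hedged sketch of such validation (the pattern encodes the rules described above; it is illustrative, not copied from the package):
```go
package main

import (
	"fmt"
	"regexp"
)

// Alphanumerics, colons, and dashes after the required prefix, with a
// length cap and no trailing colon/dash (the last character is alphanumeric).
var bootstrapGroupRegexp = regexp.MustCompile(`^system:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]$`)

func ValidateBootstrapGroupName(name string) error {
	if bootstrapGroupRegexp.MatchString(name) {
		return nil
	}
	return fmt.Errorf("bootstrap group %q is invalid (must match %s)", name, bootstrapGroupRegexp)
}

func main() {
	fmt.Println(ValidateBootstrapGroupName("system:bootstrappers:kubeadm:default-node-token")) // <nil>
	fmt.Println(ValidateBootstrapGroupName("admins"))                                          // error: missing prefix
}
```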
2017-08-25 16:04:53 -05:00
Kubernetes Submit Queue
a235ba4e49
Merge pull request #51327 from wasylkowski/ensure-ca-is-on
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
Made the tests ensure that Cluster Autoscaler is on before running.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
2017-08-25 14:01:36 -07:00
Kubernetes Submit Queue
b5bb8099e7
Merge pull request #50971 from CaoShuFeng/audit_json
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
set --audit-log-format default to json
Updates: https://github.com/kubernetes/kubernetes/issues/48561
**Release note**:
```
set --audit-log-format default to json for kube-apiserver
```
2017-08-25 14:01:33 -07:00
Kubernetes Submit Queue
ccae631ff9
Merge pull request #50562 from atlassian/call-cleanup-properly
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
Call the right cleanup function
**What this PR does / why we need it**:
`defer cleanup()` will always call the function that was returned by the first call to `r.resyncChan()` but it should call the one returned by the last call.
**Special notes for your reviewer**:
This will print `c1`, not `c2`. See https://play.golang.org/p/FDjDbUxOvI
```go
package main

import "fmt"

func main() {
	var c func()
	c = c1
	defer c() // c is evaluated here: c1 is captured even though c changes below
	c = c2
}

func c1() {
	fmt.Println("c1")
}

func c2() {
	fmt.Println("c2")
}
```
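For reference, a hedged sketch of the fix pattern: defer a closure so `c` is read when the deferred call actually runs.
```go
package main

import "fmt"

func main() {
	var c func()
	c = c1
	defer func() { c() }() // c is evaluated at return time, so this prints "c2"
	c = c2
}

func c1() { fmt.Println("c1") }
func c2() { fmt.Println("c2") }
```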
**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
2017-08-25 14:01:30 -07:00
Kubernetes Submit Queue
39581ac9bf
Merge pull request #51122 from luxas/kubeadm_impl_dryrun
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
kubeadm: Fully implement --dry-run
**What this PR does / why we need it**:
Finishes the work begun in #50631
- Implements dry-run functionality for the certs/kubeconfig/controlplane/etcd phases as well by making the outDir configurable (a sketch of the pattern follows below)
- Prints the control plane manifests to stdout, but not the certs/kubeconfig files due to their sensitive nature. However, kubeadm outputs the directory to go and look in for those.
- Fixes a small yaml marshal error where `apiVersion` and `kind` weren't printed earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/389
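A hedged sketch of the dry-run idea visible in the log below: route "perform action" through an interface whose dry-run implementation only prints what would be done (illustrative, not kubeadm's actual types):
```go
package main

import "fmt"

type Action struct{ Verb, Resource, Name string }

// Client abstracts "perform an action against the cluster".
type Client interface{ Do(a Action) error }

// dryRunClient prints the action instead of calling the API server.
type dryRunClient struct{}

func (dryRunClient) Do(a Action) error {
	fmt.Printf("[dryrun] Would perform action %s on resource %q\n", a.Verb, a.Resource)
	fmt.Printf("[dryrun] Resource name: %q\n", a.Name)
	return nil
}

func main() {
	var c Client = dryRunClient{}
	c.Do(Action{Verb: "CREATE", Resource: "configmaps", Name: "kubeadm-config"})
}
```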
**Special notes for your reviewer**:
Full `kubeadm init --dry-run` output:
```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --allow-privileged=true
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
- --experimental-bootstrap-token-auth=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --secure-port=6443
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --insecure-port=0
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-allowed-names=front-proxy-client
- --advertise-address=192.168.200.101
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --authorization-mode=Node,RBAC
- --etcd-servers=http://127.0.0.1:2379
image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: ca-certs-etc-pki
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/pki
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
name: ca-certs
- hostPath:
path: /etc/pki
name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --address=127.0.0.1
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --leader-elect=true
- --use-service-account-credentials=true
- --controllers=*,bootstrapsigner,tokencleaner
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /etc/pki
name: ca-certs-etc-pki
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/pki
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
name: ca-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
name: kubeconfig
- hostPath:
path: /etc/pki
name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --leader-elect=true
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --address=127.0.0.1
image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
name: kubeconfig
status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
token-id: OTZlZmQ2
token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
usage-bootstrap-authentication: dHJ1ZQ==
usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
creationTimestamp: null
name: bootstrap-token-96efd6
type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- kind: Group
name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
creationTimestamp: null
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups:
- certificates.k8s.io
resources:
- certificatesigningrequests/nodeclient
verbs:
- create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.200.101:6443
name: ""
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
kind: ConfigMap
metadata:
creationTimestamp: null
name: cluster-info
namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
creationTimestamp: null
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
rules:
- apiGroups:
- ""
resourceNames:
- cluster-info
resources:
- configmaps
verbs:
- get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
creationTimestamp: null
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- kind: User
name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
MasterConfiguration: |
api:
advertiseAddress: 192.168.200.101
bindPort: 6443
apiServerCertSANs: []
apiServerExtraArgs: null
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: ""
controllerManagerExtraArgs: null
etcd:
caFile: ""
certFile: ""
dataDir: /var/lib/etcd
endpoints: []
extraArgs: null
image: ""
keyFile: ""
featureFlags: null
imageRepository: gcr.io/google_containers
kubernetesVersion: v1.7.4
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
nodeName: thegopher
schedulerExtraArgs: null
token: 96efd6.98bbb2f4603c026b
tokenTTL: 86400000000000
unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
creationTimestamp: null
name: kubeadm-config
namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
name: kube-dns
namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
name: kube-dns
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
containers:
- args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
volumeMounts:
- mountPath: /kube-dns-config
name: kube-dns-config
- args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- mountPath: /etc/k8s/dns/dnsmasq-nanny
name: kube-dns-config
- args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
successThreshold: 1
timeoutSeconds: 5
name: sidecar
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: Default
serviceAccountName: kube-dns
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
volumes:
- configMap:
name: kube-dns
optional: true
name: kube-dns-config
status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: KubeDNS
name: kube-dns
namespace: kube-system
resourceVersion: "0"
spec:
clusterIP: 10.96.0.10
ports:
- name: dns
port: 53
protocol: UDP
targetPort: 53
- name: dns-tcp
port: 53
protocol: TCP
targetPort: 53
selector:
k8s-app: kube-dns
status:
loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
name: kube-proxy
namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
kubeconfig.conf: |
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://192.168.200.101:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
- name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
app: kube-proxy
name: kube-proxy
namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
name: kube-proxy
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-proxy
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
spec:
containers:
- command:
- /usr/local/bin/kube-proxy
- --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
imagePullPolicy: IfNotPresent
name: kube-proxy
resources: {}
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/kube-proxy
name: kube-proxy
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
volumes:
- configMap:
name: kube-proxy
name: kube-proxy
- hostPath:
path: /run/xtables.lock
name: xtables-lock
updateStrategy:
type: RollingUpdate
status:
currentNumberScheduled: 0
desiredNumberScheduled: 0
numberMisscheduled: 0
numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-proxier
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier
subjects:
- kind: ServiceAccount
name: kube-proxy
namespace: kube-system
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1
```
**Release note**:
```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Kubernetes Submit Queue
cfbd97c4f9
Merge pull request #51134 from stevekuznetsov/skuznets/verbose-bazel-error
...
Automatic merge from submit-queue
Don't silence `go get` during verify scripts
When the verify scripts fail to install a component using `go get`
today, they silence output to `stderr` and leave the user without any
actionable feedback on failure.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
```release-note
NONE
```
/assign @ixdy
2017-08-25 13:59:51 -07:00
Eric Chiang
85491f1578
Audit policy v1beta1 now supports matching subresources and resource names.
...
policy:
- level: Metadata
  resources:
  - group: ""
    resources: ["pods/logs"]
- level: None
  resources:
  - group: ""
    resources: ["configmaps"]
    resourceNames: ["controller-leader"]
The top-level resource no longer matches the subresource. For example, "pods"
no longer matches requests to the logs subresource on pods.
```release-note
Audit policy supports matching subresources and resource names, but the top-level resource no longer matches the subresource. For example, "pods" no longer matches requests to the logs subresource of pods. Use "pods/logs" to match subresources.
```
2017-08-25 13:59:16 -07:00