Merge pull request #49072 from xilabao/wait-rbac-in-local-cluster

Automatic merge from submit-queue (batch tested with PRs 49058, 49072, 49137, 49182, 49045)

use https to check healthz in hack/local-up-cluster.sh

**What this PR does / why we need it**:
```
# PSP_ADMISSION=true ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true ALLOW_ANY_TOKEN=true ENABLE_RBAC=true RUNTIME_CONFIG="extensions/v1beta1=true,extensions/v1beta1/podsecuritypolicy=true" hack/local-up-cluster.sh
...
Waiting for apiserver to come up
+++ [0718 09:34:38] On try 5, apiserver: : 
Cluster "local-up-cluster" set.
use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
Creating kube-system namespace
clusterrolebinding "system:kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
error: unable to recognize "kubedns-deployment.yaml": no matches for extensions/, Kind=Deployment
service "kube-dns" created
Kube-dns deployment and service successfully deployed.
kubelet ( 10952 ) is running.
Create podsecuritypolicy policies for RBAC.
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/policies.yaml": no matches for extensions/, Kind=PodSecurityPolicy
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/policies.yaml": no matches for extensions/, Kind=PodSecurityPolicy
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/roles.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRole
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/bindings.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/bindings.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding
unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/examples/podsecuritypolicy/rbac/bindings.yaml": no matches for rbac.authorization.k8s.io/, Kind=ClusterRoleBinding
Create default storage class for 
error: unable to recognize "/home/nfs/mygo/src/k8s.io/kubernetes/cluster/addons/storage-class/local/default.yaml": no matches for storage.k8s.io/, Kind=StorageClass
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log
  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log
...
```
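The failure mode visible above is that the readiness probe against the secure port was done over plain HTTP: it appears to "succeed" on try 5 with an empty response, the script moves on, and the addon and RBAC `kubectl` calls then run before the `extensions`, `rbac.authorization.k8s.io`, and `storage.k8s.io` API groups are discoverable, producing the `no matches for ...` errors. Switching the probe to HTTPS (and raising the retry count from 10 to 20) makes the script genuinely wait for the secure endpoint. A minimal standalone sketch of what that check amounts to, assuming the script's `API_HOST_IP`/`API_SECURE_PORT` variables with illustrative default values:

```bash
# Rough equivalent of the health check local-up-cluster.sh now performs via
# kube::util::wait_for_url. Host/port defaults below are illustrative, not
# taken from this PR; -k is only acceptable here because the local cluster
# serves a self-signed certificate.
API_HOST_IP=${API_HOST_IP:-127.0.0.1}
API_SECURE_PORT=${API_SECURE_PORT:-6443}
WAIT_FOR_URL_API_SERVER=${WAIT_FOR_URL_API_SERVER:-20}

for i in $(seq 1 "${WAIT_FOR_URL_API_SERVER}"); do
  if out=$(curl -fsk "https://${API_HOST_IP}:${API_SECURE_PORT}/healthz" 2>/dev/null); then
    # /healthz returns "ok" once the apiserver is actually serving requests.
    echo "On try ${i}, apiserver: ${out}"
    break
  fi
  sleep 1
done
```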

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47739

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
Kubernetes Submit Queue 2017-07-19 10:27:23 -07:00 committed by GitHub
commit 92d310eddc


```diff
--- a/hack/local-up-cluster.sh
+++ b/hack/local-up-cluster.sh
@@ -59,7 +59,7 @@ ENABLE_CLUSTER_DNS=${KUBE_ENABLE_CLUSTER_DNS:-true}
 DNS_SERVER_IP=${KUBE_DNS_SERVER_IP:-10.0.0.10}
 DNS_DOMAIN=${KUBE_DNS_NAME:-"cluster.local"}
 KUBECTL=${KUBECTL:-cluster/kubectl.sh}
-WAIT_FOR_URL_API_SERVER=${WAIT_FOR_URL_API_SERVER:-10}
+WAIT_FOR_URL_API_SERVER=${WAIT_FOR_URL_API_SERVER:-20}
 ENABLE_DAEMON=${ENABLE_DAEMON:-false}
 HOSTNAME_OVERRIDE=${HOSTNAME_OVERRIDE:-"127.0.0.1"}
 CLOUD_PROVIDER=${CLOUD_PROVIDER:-""}
@@ -536,7 +536,7 @@ function start_apiserver {
 # this uses the API port because if you don't have any authenticator, you can't seem to use the secure port at all.
 # this matches what happened with the combination in 1.4.
 # TODO change this conditionally based on whether API_PORT is on or off
-kube::util::wait_for_url "http://${API_HOST_IP}:${API_SECURE_PORT}/healthz" "apiserver: " 1 ${WAIT_FOR_URL_API_SERVER} \
+kube::util::wait_for_url "https://${API_HOST_IP}:${API_SECURE_PORT}/healthz" "apiserver: " 1 ${WAIT_FOR_URL_API_SERVER} \
 || { echo "check apiserver logs: ${APISERVER_LOG}" ; exit 1 ; }
 
 # Create kubeconfigs for all components, using client certs
```
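Since `WAIT_FOR_URL_API_SERVER` keeps its `${VAR:-default}` form, slow machines can still raise the retry count through the environment instead of editing the script; an illustrative invocation (values chosen here for the example, combined with the `ENABLE_RBAC=true` flag from the original repro):

```bash
# Hypothetical override: allow up to 60 probe attempts instead of the new default of 20.
WAIT_FOR_URL_API_SERVER=60 ENABLE_RBAC=true hack/local-up-cluster.sh
```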