Christoph Blecker
4d63e13c9f
Add option to copy output when running the build shell
2017-08-25 16:58:06 -07:00
Christoph Blecker
68232c328f
Create kube::util::create-fake-git-tree function
2017-08-25 16:51:51 -07:00
andrewsykim
fd86022714
add deprecation warnings for auto detecting cloud providers
2017-08-25 19:30:52 -04:00
Lantao Liu
a0ae7fac2b
Implement stop function in streaming server.
...
Signed-off-by: Lantao Liu <lantaol@google.com>
2017-08-25 23:24:30 +00:00
Lucas Käldström
b1fb289f0f
kubeadm: Move the uploadconfig phase right in the beginning of cluster init
2017-08-26 01:50:24 +03:00
Josh Horwitz
cf75c49883
change godoc based on feedback from luxas
2017-08-25 18:04:10 -04:00
Eric Chiang
9caff69027
generated: update API resources
...
./hack/update-codegen.sh
./hack/update-codecgen.sh
./hack/update-generated-protobuf.sh
2017-08-25 14:40:02 -07:00
Lantao Liu
b760fa95e5
Fix NoNewPrivs and also allow remote runtime to provide the support.
2017-08-25 21:32:33 +00:00
Matt Moyer
c7996a7236
kubeadm: add extra group info to token list.
...
This adds an `EXTRA GROUPS` column to the output of `kubeadm token list`. This displays any extra `system:bootstrappers:*` groups that are specified in the token's `auth-extra-groups` key.
2017-08-25 16:26:28 -05:00
Matt Moyer
77f1b72a40
kubeadm: add `--groups` flag for `kubeadm token create`.
...
This adds support for creating a bootstrap token that authenticates with extra `system:bootstrappers:*` groups in addition to `system:bootstrappers`.
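For illustration only (the group name here is hypothetical; `--groups` is the flag this commit adds), an invocation could look like:
```
kubeadm token create --groups system:bootstrappers:my-worker-pool
```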
2017-08-25 16:26:20 -05:00
Matt Moyer
fd5c00b38d
Implement `auth-extra-groups` in bootstrap token authenticator.
...
This implements support for the new `auth-extra-groups` key in `bootstrap.kubernetes.io/token` secrets by adding extra groups to the user info returned for valid bootstrap tokens.
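As a hedged sketch (token values and the extra group are illustrative, not taken from this commit), a token secret carrying the new key could look like:
```
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-96efd6   # hypothetical token ID
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: 96efd6
  token-secret: 98bbb2f4603c026b
  usage-bootstrap-authentication: "true"
  auth-extra-groups: system:bootstrappers:my-worker-pool   # hypothetical extra group
```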
2017-08-25 16:23:01 -05:00
NickrenREN
9730e3d302
Change validation for local ephemeral storage
2017-08-26 05:15:16 +08:00
NickrenREN
27901ad5df
Change eviction policy to manage one single local storage resource
2017-08-26 05:14:49 +08:00
Tim Hockin
e73b27cbce
Add debugging to the codegen process
2017-08-25 14:08:42 -07:00
Matt Moyer
33e02aff60
Add extra group constants and validation to `pkg/bootstrap/api`.
...
This adds constants and validation for a new `auth-extra-groups` key on `bootstrap.kubernetes.io/token` secrets. This key allows a bootstrap token to authenticate to extra groups in addition to the `system:bootstrappers` group.
Extra groups are always applied in addition to the `system:bootstrappers` group, must begin with a `system:bootstrappers:` prefix, are limited in length, and are limited to a restricted set of characters (alphanumeric, colons, and dashes without a trailing colon/dash).
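A minimal Go sketch of the rules described above (the exact pattern and length limit live in `pkg/bootstrap/api` and may differ):
```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative only: system:bootstrappers: prefix, restricted charset
// (alphanumeric, colons, dashes), bounded length, no trailing colon/dash.
var extraGroupPattern = regexp.MustCompile(`^system:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]$`)

func validateExtraGroup(group string) error {
	if !extraGroupPattern.MatchString(group) {
		return fmt.Errorf("bootstrap group %q is invalid", group)
	}
	return nil
}

func main() {
	fmt.Println(validateExtraGroup("system:bootstrappers:workers"))  // <nil>
	fmt.Println(validateExtraGroup("system:bootstrappers:workers-")) // invalid: trailing dash
}
```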
2017-08-25 16:04:53 -05:00
Kubernetes Submit Queue
a235ba4e49
Merge pull request #51327 from wasylkowski/ensure-ca-is-on
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
Made the tests ensure that Cluster Autoscaler is on before running.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
2017-08-25 14:01:36 -07:00
Kubernetes Submit Queue
b5bb8099e7
Merge pull request #50971 from CaoShuFeng/audit_json
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
set --audit-log-format default to json
Updates: https://github.com/kubernetes/kubernetes/issues/48561
**Release note**:
```
set --audit-log-format default to json for kube-apiserver
```
2017-08-25 14:01:33 -07:00
Kubernetes Submit Queue
ccae631ff9
Merge pull request #50562 from atlassian/call-cleanup-properly
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
Call the right cleanup function
**What this PR does / why we need it**:
`defer cleanup()` will always call the function that was returned by the first call to `r.resyncChan()` but it should call the one returned by the last call.
**Special notes for your reviewer**:
This will print `c1`, not `c2`. See https://play.golang.org/p/FDjDbUxOvI
```go
package main

import "fmt"

func main() {
	var c func()
	c = c1
	defer c() // the value of c (c1) is captured here, when the defer statement runs
	c = c2
}

func c1() { fmt.Println("c1") }

func c2() { fmt.Println("c2") }
```
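A minimal sketch of the corresponding fix (not necessarily the PR's exact code): defer a closure so the variable is read when the deferred call runs, not when `defer` executes:
```go
package main

import "fmt"

func main() {
	var c func()
	c = func() { fmt.Println("c1") }
	defer func() { c() }() // c is read when the closure runs, so this prints "c2"
	c = func() { fmt.Println("c2") }
}
```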
**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
2017-08-25 14:01:30 -07:00
Kubernetes Submit Queue
39581ac9bf
Merge pull request #51122 from luxas/kubeadm_impl_dryrun
...
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)
kubeadm: Fully implement --dry-run
**What this PR does / why we need it**:
Finishes the work begun in #50631
- Implements dry-run functionality for phases certs/kubeconfig/controlplane/etcd as well by making the outDir configurable
- Prints the controlplane manifests to stdout, but not the certs/kubeconfig files due to the sensitive nature. However, kubeadm outputs the directory to go and look in for those.
- Fixes a small yaml marshal error where `apiVersion` and `kind` weren't printed earlier.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes: https://github.com/kubernetes/kubeadm/issues/389
**Special notes for your reviewer**:
Full `kubeadm init --dry-run` output:
```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --experimental-bootstrap-token-auth=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --secure-port=6443
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --insecure-port=0
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-allowed-names=front-proxy-client
    - --advertise-address=192.168.200.101
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --leader-elect=true
    - --use-service-account-credentials=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
    name: kubeconfig
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --leader-elect=true
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --address=127.0.0.1
    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
  token-id: OTZlZmQ2
  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: null
  name: bootstrap-token-96efd6
type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/nodeclient
  verbs:
  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://192.168.200.101:6443
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cluster-info
  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
rules:
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- kind: User
  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.200.101
      bindPort: 6443
    apiServerCertSANs: []
    apiServerExtraArgs: null
    authorizationModes:
    - Node
    - RBAC
    certificatesDir: /etc/kubernetes/pki
    cloudProvider: ""
    controllerManagerExtraArgs: null
    etcd:
      caFile: ""
      certFile: ""
      dataDir: /var/lib/etcd
      endpoints: []
      extraArgs: null
      image: ""
      keyFile: ""
    featureFlags: null
    imageRepository: gcr.io/google_containers
    kubernetesVersion: v1.7.4
    networking:
      dnsDomain: cluster.local
      podSubnet: ""
      serviceSubnet: 10.96.0.0/12
    nodeName: thegopher
    schedulerExtraArgs: null
    token: 96efd6.98bbb2f4603c026b
    tokenTTL: 86400000000000
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: kubeadm-config
  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-dns
  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - mountPath: /kube-dns-config
          name: kube-dns-config
      - args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - mountPath: /etc/k8s/dns/dnsmasq-nanny
          name: kube-dns-config
      - args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: sidecar
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
      serviceAccountName: kube-dns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          name: kube-dns
          optional: true
        name: kube-dns-config
status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "0"
spec:
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
status:
  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.200.101:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      serviceAccountName: kube-proxy
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
      volumes:
      - configMap:
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
        name: xtables-lock
  updateStrategy:
    type: RollingUpdate
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-proxier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1
```
**Release note**:
```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Kubernetes Submit Queue
cfbd97c4f9
Merge pull request #51134 from stevekuznetsov/skuznets/verbose-bazel-error
...
Automatic merge from submit-queue
Don't silence `go get` during verify scripts
When the verify scripts fail to install a component using `go get`
today, they silence output to `stderr` and cause the user to not get any
actionable feedback on failure.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
```release-note
NONE
```
/assign @ixdy
2017-08-25 13:59:51 -07:00
Eric Chiang
85491f1578
Audit policy v1beta1 now supports matching subresources and resource names.
...
policy:
- level: Metadata
  resources:
  - group: ""
    resources: ["pods/logs"]
- level: None
  resources:
  - group: ""
    resources: ["configmaps"]
    resourceNames: ["controller-leader"]
The top level resource no longer matches the subresource. For example "pods"
no longer matches requests to the logs subresource on pods.
```release-note
Audit policy supports matching subresources and resource names, but the top level resource no longer matches the subresource. For example "pods" no longer matches requests to the logs subresource of pods. Use "pods/logs" to match subresources.
```
2017-08-25 13:59:16 -07:00
Yassine TIJANI
588fe268dc
handle iscsi failed mounts
2017-08-25 22:32:13 +02:00
Josh Horwitz
82a69b2815
refactor method name as per comments
2017-08-25 16:25:19 -04:00
Steve Leon
c682d7cd74
Add host mountpath for controller-manager for flexvolume dir
...
The controller manager needs access to the Flexvolume plugins when
using the attach-detach controller interface.
This PR adds a host mount path for the default directory of Flexvolume
plugins.
Fixes https://github.com/kubernetes/kubeadm/issues/410
2017-08-25 13:23:53 -07:00
Josh Horwitz
3528ceb27f
address test & doc comments
2017-08-25 16:15:55 -04:00
Michael Taufen
6918ab1d70
fix ReadOnlyPort, HealthzPort, CAdvisorPort defaulting/documentation
...
The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.
The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.
The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.
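For illustration (assuming the kubelet command-line flags of this era, `--read-only-port`, `--healthz-port`, and `--cadvisor-port`), disabling all three ports would look like:
```
kubelet --read-only-port=0 --healthz-port=0 --cadvisor-port=0
```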
2017-08-25 13:15:36 -07:00
Yang Guo
f9767d2f71
Change StatsProvider interface to provide container stats from either cadvisor or CRI and implement this interface using cadvisor
2017-08-25 13:11:26 -07:00
Lars Lehtonen
d73b3d049d
Unshadow error in registrytest
2017-08-25 12:59:57 -07:00
Kubernetes Submit Queue
acdf625e46
Merge pull request #51143 from oomichi/issue/43051
...
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
Add signal handler for catching Ctrl-C on hack/e2e
**What this PR does / why we need it**:
When running e2e tests, the hack/e2e.go process creates a kubetest process.
To kill the kubetest process when stopping an e2e test with Ctrl-C, we need
to forward the signal to it, because kubetest in turn creates another
process that it needs to kill.
This PR adds a signal handler to hack/e2e.go that kills the kubetest
process.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #43051
**Special notes for your reviewer**:
https://github.com/kubernetes/test-infra/pull/4154 is the part of kubetest.
**Release note**:
`NONE`
2017-08-25 12:31:10 -07:00
Kubernetes Submit Queue
08c2071bec
Merge pull request #47171 from xilabao/validate-nonResourceURL-in-create-clusterrole
...
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
validate nonResourceURL in create clusterrole
**Release note**:
```release-note
NONE
```
2017-08-25 12:31:07 -07:00
Kubernetes Submit Queue
cd908f3e59
Merge pull request #51257 from NickrenREN/validation-bugfix
...
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
Fix validation return value
Errors returned by some validation functions may be wrong
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51256
**Release note**:
```release-note
NONE
```
2017-08-25 12:31:05 -07:00
Kubernetes Submit Queue
16a438b56e
Merge pull request #50063 from dixudx/manifests_use_hostpath_type
...
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)
update related manifest files to use hostpath type
**What this PR does / why we need it**:
Per [discussion in #46597](https://github.com/kubernetes/kubernetes/pull/46597#pullrequestreview-53568947)
Depends on #46597
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes: https://github.com/kubernetes/kubeadm/issues/298
**Special notes for your reviewer**:
/cc @euank @thockin @tallclair @Random-Liu
**Release note**:
```release-note
None
```
2017-08-25 12:31:02 -07:00
Kubernetes Submit Queue
c02afa6e39
Merge pull request #51038 from nicksardo/gce-netproj
...
Automatic merge from submit-queue
GCE: Consume new config value for network project id
This PR will allow users to specify the network's project ID in gce.conf. If it's not specified, it will be filled with `ProjectID`. This means that `network-project-id` is a required field for building a cluster on a shared VPC network. However, this means the field does not need to be specified for GKE clusters on non-shared networks.
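A minimal gce.conf sketch under the assumption of gcfg-style syntax (the project names are hypothetical; `network-project-id` is the field this PR consumes):
```
[global]
project-id = my-service-project
network-project-id = my-xpn-host-project
```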
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #48515
**Special notes for your reviewer**:
/assign @bowei @freehan
**Release note**:
```release-note
NONE
```
2017-08-25 12:25:50 -07:00
Matthew Wong
19ebaf2870
Don't update pvc.status.capacity if pvc is already Bound
2017-08-25 15:23:25 -04:00
Jordan Liggitt
c59c54b247
Update fixture data
2017-08-25 15:01:08 -04:00
Jordan Liggitt
c7defb806f
Generated files
2017-08-25 15:01:08 -04:00
Jordan Liggitt
1bb19dfcc5
Revert "Ensure empty serialized slices are zero-length, not null"
2017-08-25 14:59:32 -04:00
Ryan Hitchman
a7e64aaa66
Make coreos test images sshd not allow password login.
...
Configuration is based on:
https://coreos.com/os/docs/latest/customizing-sshd.html
The specific SSHD config is:
# Use most defaults for sshd configuration.
UsePrivilegeSeparation sandbox
Subsystem sftp internal-sftp
ClientAliveInterval 180
UseDNS no
UsePAM yes
PrintLastLog no # handled by PAM
PrintMotd no # handled by PAM
AuthenticationMethods publickey
This will prevent security scanners from triggering.
2017-08-25 11:49:34 -07:00
Cheng Xing
396c3c7c6f
Adding dynamic Flexvolume plugin discovery capability, using filesystem watch.
2017-08-25 11:42:32 -07:00
Walter Fender
3b9485bba3
Fixed gke auth update wait condition.
...
Lookup whoami on gke using gcloud auth list.
Make sure we do not run the test on any cluster older than 1.7.
Fix for Mehdy
Fixes for LavaLamp
2017-08-25 11:11:59 -07:00
Kubernetes Submit Queue
5f805a5e66
Merge pull request #51207 from yguo0905/uc
...
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
Update cos image to cos-stable-60-9592-84-0
cos-m60 has been stable for a long time. This image contains a docker upgrade, which has been validated in https://github.com/kubernetes/kubernetes/issues/42926 .
**Release note**:
```
None
```
/assign @yujuhong
/cc @dchen1107
2017-08-25 11:07:17 -07:00
Kubernetes Submit Queue
c19785cfea
Merge pull request #49674 from crimsonfaith91/rollout
...
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
StatefulSet kubectl rollout command
**What this PR does / why we need it**: This PR implements StatefulSet kubectl rollout command, covering `history`, `status`, and `undo`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49890
**Special notes for your reviewer**:
**Release note**:
```release-note
kubectl rollout `history`, `status`, and `undo` subcommands now support StatefulSets.
```
2017-08-25 11:07:15 -07:00
Kubernetes Submit Queue
fe0c519f49
Merge pull request #51132 from ConnorDoyle/cpuset-helpers
...
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
Add cpuset helper library.
Blocker for CPU manager #49186 (1 of 6)
@sjenning @derekwaynecarr
```release-note
NONE
```
2017-08-25 11:07:12 -07:00
Kubernetes Submit Queue
dfd1ca7728
Merge pull request #49988 from sbezverk/fsgroup_check
...
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
Adding fsGroup check before mounting a volume
The fsGroup check enforces that if a volume has already been mounted by
one pod, and another pod wants to mount it with a different fsGroup
value, the second mount operation is not allowed.
Closes #45053
2017-08-25 11:07:09 -07:00
Kubernetes Submit Queue
c04e516373
Merge pull request #50033 from cmluciano/cml/addnpcidrselector
...
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
Add IPBlock to Network Policy
**What this PR does / why we need it**:
Add ipBlockRule to NetworkPolicyPeer.
**Which issue this PR fixes**
fixes #49978
**Special notes for your reviewer**:
- I added this directly as a field on the existing API per guidance from API-Machinery/lazy SIG-Network consensus.
Todo:
- [ ] Documentation comments to mention this is beta, unless we want to go straight to GA
- [ ] e2e tests
**Release note**:
```
Support ipBlock in NetworkPolicy
```
2017-08-25 11:07:07 -07:00
Antoine Pelisse
fd5775c192
client-go: Update RoundTrippers to be Unwrappable
2017-08-25 11:05:43 -07:00
Matthew Wong
8b5b2e9927
Set flexvolumeplugin.host so that it's not nil
2017-08-25 13:36:17 -04:00
Lucas Käldström
2c71814344
kubeadm: Fully implement 'kubeadm init --dry-run'
2017-08-25 20:31:14 +03:00
Kubernetes Submit Queue
cb6f32e8ba
Merge pull request #50841 from zjj2wry/kubectl-set-image-ignoring
...
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)
Fix issue (#49695): kubectl set image deployment is ignoring --selector
**What this PR does / why we need it**:
closes #49695
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
2017-08-25 10:10:13 -07:00
Kubernetes Submit Queue
0a51ba116a
Merge pull request #51285 from jordanlewis/patch-1
...
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)
Update CockroachDB tag to v1.0.5
cc @a-robinson
2017-08-25 10:10:10 -07:00