Commit Graph

42110 Commits

Author SHA1 Message Date
David Ashpole
a7393339ff licenses 2017-01-18 12:44:58 -08:00
David Ashpole
ca23355aa0 updated bazel 2017-01-18 11:08:07 -08:00
David Ashpole
9737b41189 updated cadvisor to latest version; updated aws dependency to same as cadvisor 2017-01-18 10:55:35 -08:00
Kubernetes Submit Queue
6dfe5c49f6 Merge pull request #38865 from vwfs/ext4_no_lazy_init
Automatic merge from submit-queue

Enable lazy initialization of ext3/ext4 filesystems

**What this PR does / why we need it**: It enables lazy inode table and journal initialization in ext3 and ext4.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #30752, fixes #30240

**Release note**:
```release-note
Enable lazy inode table and journal initialization for ext3 and ext4
```

**Special notes for your reviewer**:
This PR removes the extended options to mkfs.ext3/mkfs.ext4, so that the defaults (enabled) for lazy initialization are used.

These extended options come from a script that was historically located at */usr/share/google/safe_format_and_mount* and was later ported to Go so that the dependency on the script could be removed. After some searching, I found the original script here: https://github.com/GoogleCloudPlatform/compute-image-packages/blob/legacy/google-startup-scripts/usr/share/google/safe_format_and_mount

Checking the history of this script, I found the commit [Disable lazy init of inode table and journal.](4d7346f7f5), which introduces the extended flags with this description:
```
Now that discard with guaranteed zeroing is supported by PD,
initializing them is really fast and prevents perf from being affected
when the filesystem is first mounted.
```

The problem is that this is not true for all cloud providers and all disk types, e.g. Azure and AWS. I only tested with magnetic disks on Azure and AWS, so it may be different for SSDs on these cloud providers. The result is that this performance optimization dramatically increases the time needed to format a disk in such cases.

When mkfs.ext4 is told not to lazily initialize the inode tables and the check for guaranteed zeroing on discard fails, it falls back to a very naive implementation that simply loops and writes zeroed buffers to the disk. Performance then depends heavily on free memory, and the operation consumes all of that free memory for write caching, reducing the performance of everything else in the system.

According to https://github.com/kubernetes/kubernetes/issues/30752, there is also something inside the kubelet that further degrades performance here. It's not known exactly what it is, but I'd assume it has something to do with cgroups throttling IO or memory.

I checked the kernel code for lazy inode table initialization. The nice thing is that the kernel also performs the guaranteed-zeroing-on-discard check. If zeroing is guaranteed, the kernel uses discard for the lazy initialization, which should finish in just a few seconds. If it is not guaranteed, it falls back to using *bio*s, which does not require the write cache. As a result, free memory is neither required nor touched, so performance stays high and the system does not suffer.

As the original reason for disabling lazy init was a performance optimization, and the kernel already does this optimization by default (and in a much better way), I'd suggest completely removing these flags and relying on the kernel to do it in the best way.
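
For illustration, a minimal Go sketch of what dropping the extended options means for the generated mkfs.ext4 invocation (this is not the actual kubernetes mount code; the helper name and argument layout are assumptions):

```go
package main

import "fmt"

// mkfsExt4Args is a hypothetical helper showing the effect of this change:
// with disableLazyInit=true (the old behavior) the extended options are
// passed and lazy initialization is turned off; with false (the new
// behavior) no -E flag is emitted and the mkfs.ext4 defaults apply.
func mkfsExt4Args(device string, disableLazyInit bool) []string {
	args := []string{"mkfs.ext4", "-F"}
	if disableLazyInit {
		args = append(args, "-E", "lazy_itable_init=0,lazy_journal_init=0")
	}
	return append(args, device)
}

func main() {
	fmt.Println(mkfsExt4Args("/dev/sdc", true))  // old: explicit -E ...=0 options
	fmt.Println(mkfsExt4Args("/dev/sdc", false)) // new: rely on the mkfs/kernel defaults
}
```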
2017-01-18 09:09:52 -08:00
Kubernetes Submit Queue
788804142f Merge pull request #39036 from juanvallejo/jvallejo/do-not-list-deleted-pull-secrets
Automatic merge from submit-queue (batch tested with PRs 40038, 40041, 39036)

don't show deleted pull secrets - kubectl describe

This patch filters out any image pull secrets that have been deleted
when printing the describer output for a service account.

Related downstream bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1403376
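
A minimal sketch of the filtering idea, using hypothetical names and plain string types rather than the real describer code:

```go
package main

import "fmt"

// visiblePullSecrets keeps only the image pull secret names that still exist;
// deleted secrets are dropped from the describe output. Purely illustrative.
func visiblePullSecrets(pullSecrets []string, existing map[string]bool) []string {
	kept := make([]string, 0, len(pullSecrets))
	for _, name := range pullSecrets {
		if existing[name] {
			kept = append(kept, name)
		}
	}
	return kept
}

func main() {
	secrets := []string{"registry-cred", "deleted-cred"}
	existing := map[string]bool{"registry-cred": true} // "deleted-cred" no longer exists
	fmt.Println(visiblePullSecrets(secrets, existing)) // [registry-cred]
}
```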

**Release note**:
```release-note
release-note-none
```

@fabianofranz @AdoHe
2017-01-18 08:37:54 -08:00
Kubernetes Submit Queue
95cca9c558 Merge pull request #40041 from deads2k/generic-25-undo-admission
Automatic merge from submit-queue (batch tested with PRs 40038, 40041, 39036)

move admission to genericapiserver

I disconnected the type-specific initialization for later assessment.

@sttts
2017-01-18 08:37:53 -08:00
Kubernetes Submit Queue
8decccfbb2 Merge pull request #40038 from deads2k/client-06-move-one
Automatic merge from submit-queue

Start making `k8s.io/client-go` authoritative for generic client packages

Right now, client-go has copies of various generic client packages which produces golang type incompatibilities when you want to switch between different kinds of clients.  In many cases, there's no reason to have two sets of packages.  This pull eliminates the copy for `pkg/client/transport` and makes `client-go` the authoritative copy.
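
A small usage sketch, assuming `k8s.io/client-go/transport` is now the single authoritative package (the config values are placeholders):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/transport"
)

func main() {
	// Build an http.RoundTripper from the shared transport package instead of
	// a per-repo copy of pkg/client/transport.
	rt, err := transport.New(&transport.Config{BearerToken: "placeholder-token"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("round tripper type: %T\n", rt)
}
```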

I recommend reviewing by commits; the first one just synchronizes the client-go code again so that I could test the copy script and make sure it correctly preserves the original package.

@kubernetes/sig-api-machinery-misc @lavalamp @sttts
2017-01-18 08:24:27 -08:00
Kubernetes Submit Queue
d946751924 Merge pull request #40026 from shyamjvs/heapster-rbac-fix
Automatic merge from submit-queue

Added RBAC for heapster in kubemark

Fixes #39952 

cc @wojtek-t @gmarek @deads2k
2017-01-18 06:18:57 -08:00
deads2k
01b3b2b461 move admission to genericapiserver 2017-01-18 08:15:19 -05:00
deads2k
52ec66ee85 remove api dependency from admission 2017-01-18 08:09:48 -05:00
deads2k
4f915039e4 move pkg/client/transport to client-go 2017-01-18 07:56:01 -05:00
deads2k
18d22a4f38 update client-go 2017-01-18 07:47:23 -05:00
Shyam Jeedigunta
9b0d8b9747 Added RBAC for heapster in kubemark 2017-01-18 13:47:08 +01:00
Kubernetes Submit Queue
fe69dcf861 Merge pull request #40018 from sttts/sttts-genericapiserver-storage-dep
Automatic merge from submit-queue (batch tested with PRs 40008, 40005, 40018)

Fix wrong rename pkg/storage.Is{TestFail -> Conflict}
2017-01-18 04:04:49 -08:00
Kubernetes Submit Queue
3a77dd18c5 Merge pull request #40005 from sttts/sttts-pkg-auth-handlers-genericapiserver
Automatic merge from submit-queue (batch tested with PRs 40008, 40005, 40018)

genericapiserver: move pkg/auth/handlers into filters

Move authn filters to the other api related filters.
2017-01-18 04:04:47 -08:00
Kubernetes Submit Queue
6895518177 Merge pull request #40008 from apprenda/kubeadm_112_init_token
Automatic merge from submit-queue

kubeadm: init must validate or generate token before anything else.

**What this PR does / why we need it**: `kubeadm init` must validate or generate a token before anything else. Otherwise, if token validation or generation fails, one will need to run `kubeadm reset && systemctl restart kubelet` before re-running `kubeadm init`.
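
An illustrative Go sketch of the reordering (function names are hypothetical, not the actual kubeadm code):

```go
package main

import (
	"fmt"
	"regexp"
)

// tokenRe matches the token form reported in the error output below.
var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}:[a-z0-9]{16}$`)

// validateOrGenerateToken stands in for the step that now runs first:
// a bad token aborts init before certificates, kubeconfig files, or the
// control plane are touched, so there is nothing to reset afterwards.
func validateOrGenerateToken(token string) (string, error) {
	if token == "" {
		// Real kubeadm generates a random token here; omitted in this sketch.
		return "", fmt.Errorf("token generation omitted in this sketch")
	}
	if !tokenRe.MatchString(token) {
		return "", fmt.Errorf("token %q was not of form %q", token, tokenRe.String())
	}
	return token, nil
}

func main() {
	if _, err := validateOrGenerateToken("12345:12345"); err != nil {
		fmt.Println("init aborted early:", err) // nothing to `kubeadm reset`
	}
}
```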

**Which issue this PR fixes**: fixes kubernetes/kubeadm#112

**Special notes for your reviewer**: /cc @luxas

Tested manually.

### With no token

```
$ sudo ./kubeadm init
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.2
[token-discovery] A token has not been provided, generating one
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 7.762803 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003148 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Using token: 8321b6:a535ba541af7623c
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.003423 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --discovery token://8321b6:a535ba541af7623c@10.142.0.6:9898
```

### With invalid token

```
$ sudo ./kubeadm init --discovery token://12345:12345
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.2
[token-discovery] A token has been provided, validating [&{ID:12345 Secret:12345 Addresses:[]}]
token ["12345:12345"] was not of form ["^([a-z0-9]{6})\\:([a-z0-9]{16})$"]
```

### With valid token

```
$ sudo ./kubeadm ex token generate
cd540e:c0e0318e2f4a63b1

$ sudo ./kubeadm init --discovery token://cd540e:c0e0318e2f4a63b1
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.2
[token-discovery] A token has been provided, validating [&{ID:cd540e Secret:c0e0318e2f4a63b1 Addresses:[]}]
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 13.513305 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 0.502656 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Using token: cd540e:c0e0318e2f4a63b1
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.002457 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --discovery token://cd540e:c0e0318e2f4a63b1@10.142.0.6:9898
```

**Release note**:
```release-note
NONE
```
2017-01-18 04:03:48 -08:00
Dr. Stefan Schimanski
331d96539a genericapiserver: move pkg/auth/handlers into filters 2017-01-18 10:20:41 +01:00
Kubernetes Submit Queue
8f99b74466 Merge pull request #40030 from colemickens/colemickens-dyn-disk-name-length
Automatic merge from submit-queue (batch tested with PRs 39826, 40030)

azure disk: restrict length of name

**What this PR does / why we need it**:
Fixes dynamic disk provisioning on Azure by properly truncating the disk name to conform to the Azure API spec.
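
A minimal sketch of the idea; the length limit and helper name below are assumptions for illustration, not taken from the Azure provider code:

```go
package main

import "fmt"

// maxDiskNameLen is an illustrative limit; the real value comes from the
// Azure API specification referenced above.
const maxDiskNameLen = 80

// truncateDiskName clamps a generated disk name so it conforms to the limit.
func truncateDiskName(name string) string {
	if len(name) <= maxDiskNameLen {
		return name
	}
	return name[:maxDiskNameLen]
}

func main() {
	long := "dynamically-provisioned-pvc-with-a-very-long-generated-name-that-exceeds-the-limit-abcdef123456"
	fmt.Println(len(truncateDiskName(long))) // 80
}
```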

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
n/a

**Special notes for your reviewer**:
n/a

**Release note**:
```release-note
azure disk: restrict name length for Azure specifications
```

cc: @rootfs
2017-01-17 21:37:02 -08:00
Kubernetes Submit Queue
180936f8df Merge pull request #39826 from shyamjvs/fake-docker-client-fix
Automatic merge from submit-queue

Made tracing of calls and container lifecycle steps in FakeDockerClient optional

Fixes #39717 

Slightly refactored the FakeDockerClient code and made tracing optional (but enabled by default).
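
A rough sketch of the shape of the change (types and method names are illustrative, not the actual FakeDockerClient API):

```go
package main

import "fmt"

// fakeClient records calls only when enableTrace is set; the default
// constructor keeps tracing on so existing tests are unaffected.
type fakeClient struct {
	enableTrace bool
	calls       []string
}

func newFakeClient() *fakeClient { return &fakeClient{enableTrace: true} }

func (c *fakeClient) withoutTracing() *fakeClient {
	c.enableTrace = false
	return c
}

func (c *fakeClient) record(call string) {
	if c.enableTrace {
		c.calls = append(c.calls, call)
	}
}

func main() {
	traced := newFakeClient()
	traced.record("CreateContainer")
	fmt.Println(traced.calls) // [CreateContainer]

	quiet := newFakeClient().withoutTracing()
	quiet.record("CreateContainer")
	fmt.Println(quiet.calls) // []
}
```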

@yujuhong @Random-Liu
2017-01-17 21:11:36 -08:00
Kubernetes Submit Queue
f56b606985 Merge pull request #36520 from apelisse/owners-pkg-volume
Automatic merge from submit-queue

Curating Owners: pkg/volume

cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in which directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull-request is editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who
shouldn't be reviewing code in the future. You probably do NOT need to
modify the **approvers** section. Names are sorted by relevance, using
some secret statistics.

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use an ALIAS if you want to use the same list of people over
and over again (don't hesitate to ask me for help, or use the
pull-request above as an example).
2017-01-17 19:56:39 -08:00
Kubernetes Submit Queue
c8d1b37a59 Merge pull request #40047 from apelisse/create-sig-node-alias
Automatic merge from submit-queue

OWNERS: Create sig-node alias

Create an alias group for sig-node

This will be used by `pkg/kubelet/OWNERS` and `test/e2e_node/OWNERS`.

**Release note**: NONE
2017-01-17 18:11:01 -08:00
Saad Ali
9d08c9a3d9 Update OWNERS 2017-01-17 17:37:51 -08:00
Saad Ali
526578679b Update OWNERS 2017-01-17 17:36:34 -08:00
Saad Ali
6500f08e2f Update OWNERS 2017-01-17 17:32:49 -08:00
Saad Ali
5783a0ec4c Update OWNERS 2017-01-17 17:24:35 -08:00
Saad Ali
a420973187 Update OWNERS 2017-01-17 16:36:50 -08:00
Saad Ali
d25c20bc3f Update OWNERS 2017-01-17 16:35:21 -08:00
Saad Ali
d6ba2fc37a Update OWNERS 2017-01-17 16:34:34 -08:00
Saad Ali
7c67211734 Update OWNERS 2017-01-17 16:33:13 -08:00
Saad Ali
b0e588eec2 Update OWNERS 2017-01-17 16:31:46 -08:00
Saad Ali
cb1bbf14af Update OWNERS 2017-01-17 16:31:03 -08:00
Saad Ali
7918a8e8c2 Update OWNERS 2017-01-17 16:28:05 -08:00
Saad Ali
8e371e6dbf Update OWNERS 2017-01-17 16:26:16 -08:00
Antoine Pelisse
f0540b3c4c OWNERS: Create sig-node alias
Create an alias group for sig-node
2017-01-17 16:25:40 -08:00
Saad Ali
c8fbfd93df Update OWNERS 2017-01-17 16:24:46 -08:00
Saad Ali
04f20a06a6 Update OWNERS 2017-01-17 16:24:25 -08:00
Saad Ali
9f1181dc55 Update OWNERS 2017-01-17 16:24:01 -08:00
Saad Ali
dc9eea2f3c Update OWNERS 2017-01-17 16:22:36 -08:00
Saad Ali
16cbb574e4 Update OWNERS 2017-01-17 16:20:24 -08:00
Kubernetes Submit Queue
d357a72161 Merge pull request #40039 from timstclair/api-redirect
Automatic merge from submit-queue

Enable streaming proxy redirects by default (beta)

Prerequisite to moving CRI to Beta.

I'd like to enable this early in our 1.6 cycle to get plenty of test coverage before release.

@yujuhong @liggitt 

```release-note
Follow redirects for streaming requests (exec/attach/port-forward) in the apiserver by default (alpha -> beta).
```
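
Illustrative only: the shape of flipping a feature gate's default from alpha (off) to beta (on). The gate name matches the feature in the release note; the map literal is a sketch, not the real feature-gate code.

```go
package main

import "fmt"

// defaultFeatureGates sketches the alpha -> beta change: the gate is now
// enabled unless a cluster operator explicitly turns it off.
var defaultFeatureGates = map[string]bool{
	"StreamingProxyRedirects": true, // was off by default while alpha
}

func main() {
	fmt.Println("StreamingProxyRedirects enabled by default:",
		defaultFeatureGates["StreamingProxyRedirects"])
}
```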
2017-01-17 16:18:48 -08:00
Saad Ali
70ed66cdf8 Update OWNERS 2017-01-17 16:14:14 -08:00
Saad Ali
8159e620a1 Update OWNERS 2017-01-17 16:13:17 -08:00
Saad Ali
602b682a2a Update OWNERS 2017-01-17 16:12:45 -08:00
Saad Ali
7ed31e4761 Update OWNERS 2017-01-17 16:08:57 -08:00
Saad Ali
b4ac15ae05 Update OWNERS 2017-01-17 16:05:40 -08:00
Kubernetes Submit Queue
c14fa94a4a Merge pull request #40042 from seh/add-ingress-to-rbac-roles
Automatic merge from submit-queue

Include "ingresses" resource in RBAC bootstrap roles

The bootstrap RBAC roles "admin", "edit", and "view" should all be able to apply their respective access verbs to the "ingresses" resource in order to facilitate both publishing Ingress resources (for service administrators) and consuming them (for ingress controllers).

Note that I alphabetized the resources listed in the role definitions that I changed to make it easier to decide later where to insert new entries. The original order looked like it may have started out alphabetized, but lost its way. If I missed an intended order there, please advise.
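
A minimal sketch of the change in spirit (plain structs, not the actual bootstrap policy code): each role's rule set gains "ingresses" alongside its other namespaced resources, with the list kept alphabetized. The surrounding resource names are illustrative.

```go
package main

import "fmt"

// policyRule mirrors the shape of an RBAC rule for illustration only.
type policyRule struct {
	Verbs     []string
	APIGroups []string
	Resources []string
}

func main() {
	viewRule := policyRule{
		Verbs:     []string{"get", "list", "watch"},
		APIGroups: []string{"extensions"},
		// "ingresses" inserted in alphabetical order next to the existing resources.
		Resources: []string{"daemonsets", "deployments", "ingresses", "replicasets"},
	}
	fmt.Println(viewRule)
}
```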

I am uncertain whether this change deserves mention in a release note, given the RBAC feature's alpha state. Regardless, it's possible that a cluster administrator was happy with the previous set of permissions afforded by these roles and would be surprised to discover that bound subjects can now control _Ingress_ resources. However, to be affected, that administrator would have had to apply these role definitions again, which, if I understand it correctly, would be a deliberate act, as bootstrapping should only occur once in a given cluster.
2017-01-17 15:32:45 -08:00
Kubernetes Submit Queue
627e1e8174 Merge pull request #39709 from smarterclayton/object_meta
Automatic merge from submit-queue

Move ObjectMeta into pkg/apis/meta/v1

Where it belongs
2017-01-17 15:32:33 -08:00
Brian Grant
56b1082deb Merge pull request #39921 from cblecker/fix-docs-39891
Fix broken link to logging documentation
2017-01-17 14:38:08 -08:00
Kubernetes Submit Queue
adcca3e663 Merge pull request #36512 from apelisse/owners-examples-mysql-wordpress-pd
Automatic merge from submit-queue

Curating Owners: examples/mysql-wordpress-pd

cc @jeffmendoza

In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.


If You Care About the Process:
------------------------------

We did this by algorithmically figuring out who’s contributed code to
the project and in which directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.

Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).

At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.

Also, see https://github.com/kubernetes/contrib/issues/1389.

TLDR:
-----

As an owner of a sig/directory and a leader of the project, here’s what
we need from you:

1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.

2. The pull-request is editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who
shouldn't be reviewing code in the future. You probably do NOT need to
modify the **approvers** section. Names are sorted by relevance, using
some secret statistics.

3. Notify me if you want some OWNERS file to be removed.  Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.

4. Please use an ALIAS if you want to use the same list of people over
and over again (don't hesitate to ask me for help, or use the
pull-request above as an example).
2017-01-17 13:18:45 -08:00
Clayton Coleman
bcde05753b Correct import statements 2017-01-17 16:18:18 -05:00