mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-24 20:24:09 +00:00)
draft release notes for kubernetes v1.4.0
/cc @pwittrock @foxish @matchstick @quinton-hoole
commit b5850a66d2 (parent 501f264717): CHANGELOG.md, +133 lines

<!-- NEW RELEASE NOTES ENTRY -->

# v1.4.0 (draft)

[Documentation](http://kubernetes.github.io) & [Examples](http://releases.k8s.io/release-1.4/examples)

## Downloads

binary | sha256 hash
------ | -----------
TODO [kubernetes.tar.gz](TODO) | `TODO`

## Major Themes

- **Simplified User Experience**
  - Easier to get a cluster up and running (eg: `kubeadm`, intra-cluster bootstrapping)
  - Easier to understand a cluster (eg: API audit logs, server-based API defaults)
- **Stateful Application Support**
  - Enhanced persistence capabilities (eg: `StorageClasses`, new volume plugins)
  - New resources and scheduler features (eg: `ScheduledJob` resource, pod/node affinity/anti-affinity)
- **Cluster Federation**
  - Multi-Zone Ingress
  - Expanded support for resources such as Namespaces, Events, ReplicaSets
- **Security**
  - Increased pod-level security granularity (eg: Container Image Policies, AppArmor and `sysctl` support)
  - Increased cluster-level security granularity (eg: Access Review API)

## Features

This is the first release tracked via the [kubernetes/features](https://github.com/kubernetes/features) issues repo. Each Feature issue is owned by a Special Interest Group from [kubernetes/community](https://github.com/kubernetes/community).
- **API Machinery**
  - [alpha] Generate audit logs for every request a user performs against a secured API server endpoint. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/audit/)) ([kubernetes/features#22](https://github.com/kubernetes/features/issues/22))
  - [beta] `kube-apiserver` now publishes a Swagger 2.0 spec in addition to the Swagger 1.2 spec ([kubernetes/features#53](https://github.com/kubernetes/features/issues/53))
- **Apps**
  - [alpha] Introduces `ScheduledJob`s, which run Jobs on a time-based schedule: once at a specified time, or repeatedly at specified intervals. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/scheduled-jobs/)) ([kubernetes/features#19](https://github.com/kubernetes/features/issues/19))
- **Auth**
  - [alpha] Container Image Policy allows an admission controller to determine whether a pod may be scheduled, based on a policy ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/admission-controllers/#imagepolicywebhook)) ([kubernetes/features#59](https://github.com/kubernetes/features/issues/59))
  - [alpha] Access Review APIs expose the authorization engine to external inquiries for delegation, inspection, and debugging ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/authorization/)) ([kubernetes/features#37](https://github.com/kubernetes/features/issues/37))
- **Cluster Lifecycle**
  - [alpha] Ensure critical cluster infrastructure pods (Heapster, DNS, etc.) can schedule, by evicting regular pods when necessary to make room for them. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/rescheduler/#guaranteed-scheduling-of-critical-add-on-pods)) ([kubernetes/features#62](https://github.com/kubernetes/features/issues/62))
  - [alpha] Simplifies bootstrapping of TLS-secured communication between the apiserver and kubelet ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/master-node-communication/#kubelet-tls-bootstrap)) ([kubernetes/features#43](https://github.com/kubernetes/features/issues/43))
  - [alpha] The `kubeadm` tool makes installing a cluster much easier. TODO: https://github.com/kubernetes/kubernetes.github.io/pull/1265#issuecomment-249300887 ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/kubeadm/)) ([kubernetes/features#11](https://github.com/kubernetes/features/issues/11))
- **Federation**
  - [alpha] Creating a `Federated Ingress` is as simple as submitting an `Ingress` config/manifest to the Federation API Server. Federation then creates a single global VIP to load balance incoming L7 traffic across all the registered clusters, regardless of the regions those clusters are in. The GCE L7 LoadBalancer is the only supported implementation in this release. TODO: ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/federation/federated-ingress/)) ([kubernetes/features#82](https://github.com/kubernetes/features/issues/82))
  - [alpha] Creating a `Namespace` in federation causes matching `Namespace`s to be created in all the clusters registered with that federation. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/federation/namespaces)) ([kubernetes/features#69](https://github.com/kubernetes/features/issues/69))
  - [alpha] Ingress has alpha support for a single-master multi-zone cluster ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/ingress.md#failing-across-availability-zones)) ([kubernetes/features#52](https://github.com/kubernetes/features/issues/52))
  - [beta] The Federation API server gained support for events, and many federation controllers now report important events. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/federation/events)) ([kubernetes/features#70](https://github.com/kubernetes/features/issues/70))
  - [beta] `Secret`s created in federation are distributed to all the clusters in that federation. TODO @quinton-hoole @pwittrock ([docs](TODO)) ([kubernetes/features#68](https://github.com/kubernetes/features/issues/68))
  - [beta] Submitting a `ReplicaSet` to the Federation API Server creates matching `ReplicaSet`s in the underlying clusters, with the desired replica count distributed across all the clusters. TODO @quinton-hoole @pwittrock ([docs](TODO)) ([kubernetes/features#46](https://github.com/kubernetes/features/issues/46))
- **Network**
  - [alpha] Service load balancers now have alpha support for preserving the client source IP ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/load-balancer/)) ([kubernetes/features#27](https://github.com/kubernetes/features/issues/27))
- **Node**
  - [alpha] Publish node performance dashboard at http://node-perf-dash.k8s.io/#/builds ([docs](https://github.com/kubernetes/contrib/blob/master/node-perf-dash/README.md)) ([kubernetes/features#83](https://github.com/kubernetes/features/issues/83))
  - [alpha] Pods now have alpha support for setting whitelisted, safe sysctls. Unsafe sysctls can be whitelisted on the kubelet. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/sysctls/)) ([kubernetes/features#34](https://github.com/kubernetes/features/issues/34))
  - [beta] AppArmor profiles can be specified and applied to pod containers ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/apparmor/)) ([kubernetes/features#24](https://github.com/kubernetes/features/issues/24))
  - [beta] Cluster policy to control access to, and defaults of, security-related features ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/load-balancer/)) ([kubernetes/features#5](https://github.com/kubernetes/features/issues/5))
  - [stable] The kubelet can evict pods when it observes disk pressure ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/admin/out-of-resource/)) ([kubernetes/features#39](https://github.com/kubernetes/features/issues/39))
  - [stable] Automated Docker validation results are posted to https://k8s-testgrid.appspot.com/docker ([kubernetes/features#57](https://github.com/kubernetes/features/issues/57))
- **Scheduling**
  - [alpha] Allows pods to require or prohibit (or prefer or prefer not) co-scheduling on the same node (or zone or other topology domain) as another set of pods. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/node-selection/)) ([kubernetes/features#51](https://github.com/kubernetes/features/issues/51))
- **Storage**
  - [beta] Persistent Volume provisioning now supports multiple provisioners using `StorageClass` configuration. ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/persistent-volumes/)) ([kubernetes/features#36](https://github.com/kubernetes/features/issues/36))
  - [stable] New volume plugin for the Quobyte Distributed File System ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/volumes/)) ([kubernetes/features#80](https://github.com/kubernetes/features/issues/80))
  - [stable] New volume plugin for Azure Data Disk ([docs](http://kubernetes-io-vnext-staging.netlify.com/docs/user-guide/volumes)) ([kubernetes/features#79](https://github.com/kubernetes/features/issues/79))
- **UI**
  - [stable] `kubectl` no longer applies defaults before sending objects to the server in create and update requests, allowing the server to apply the defaults. ([kubernetes/features#55](https://github.com/kubernetes/features/issues/55))
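
To illustrate the new alpha `ScheduledJob` resource, a minimal manifest sketch. This assumes the alpha `batch/v2alpha1` API group is enabled on the apiserver (typically via `--runtime-config`); the name, image, and schedule are illustrative, not taken from the release notes:

```
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello                 # illustrative name
spec:
  schedule: "*/5 * * * *"     # standard cron syntax: every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello from a ScheduledJob"]
          restartPolicy: OnFailure
```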
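
The beta `StorageClass`-based dynamic provisioning (Storage) can be sketched as follows; the class name, the GCE PD provisioner, and its parameters are illustrative, and a `PersistentVolumeClaim` requests a class via the beta annotation:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast                       # illustrative name
provisioner: kubernetes.io/gce-pd  # an in-tree provisioner
parameters:
  type: pd-ssd
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim                    # illustrative name
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
```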

## Known Issues

- Completed pods lose logs across node upgrade (#32324)
- Pods are deleted across node upgrade (#32323)
- Secure master -> node communication (#11816)
- Upgrading master doesn't upgrade kubectl (#32538)
- Specific error message on failed rolling update issued by older kubectl against 1.4 master (#32751)
- Bump master cidr range from /30 to /29 (#32886)
- Non-hostNetwork daemonsets will almost always have a pod that fails to schedule (#32900)
- Service loadBalancerSourceRanges doesn't respect updates (#33033)
- Disallow user to update loadBalancerSourceRanges (#33346)

## Notable Changes to Existing Behavior

### Deployments

- ReplicaSets of paused Deployments are now scaled while the Deployment is paused. This applies retroactively to existing Deployments.
- When scaling a Deployment during a rollout, the ReplicaSets of all Deployments are now scaled proportionally, based on the number of replicas each has, instead of only scaling the newest ReplicaSet.

### kubectl rolling-update: < v1.4.0 client vs >= v1.4.0 cluster

An older kubectl's `rolling-update` command is compatible with a Kubernetes 1.4 (or later) cluster only if you specify a new replication controller name. To keep the original name, either upgrade to kubectl 1.4 or higher before running the rolling update, or perform two rolling updates.

If you do run an older kubectl's rolling update against a 1.4 cluster, it will fail, usually with an error message directing you here. If you see that error, don't worry: the operation succeeded, except for the final step in which the new replication controller is renamed back to the old name. You can simply do another rolling update using kubectl 1.4 or higher to change the name back: look for a replication controller whose name is the original name plus a random suffix.

Unfortunately, there is a much rarer second failure mode: the replication controller is renamed to the old name, but a duplicated set of pods is left in the cluster. kubectl will not report an error, since it thinks its job is done.

If this happens to you, you can wait at most 10 minutes for the replication controller to start a resync; the extra pods will then be deleted. Alternatively, you can manually trigger a resync by changing the replicas in the spec of the replication controller.
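
One way to trigger the manual resync described above, sketched with an illustrative replication controller named `my-app` whose desired count is 3 (requires a running cluster; adjust the name and counts to your own objects):

```
kubectl scale rc my-app --replicas=2   # nudge the spec to force an immediate sync
kubectl scale rc my-app --replicas=3   # restore the desired count; duplicate pods are reconciled away
```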

### kubectl delete: < v1.4.0 client vs >= v1.4.0 cluster

If you use an older kubectl to delete a replication controller or ReplicaSet, then after the delete command has returned, the object will continue to exist in the key-value store for a short period of time (< 1s). You probably will not notice any difference if you use kubectl manually, but you might notice it if you are using kubectl in a script.
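
If a script needs to recreate an object immediately after deleting it, one workaround is to poll until the apiserver reports the object gone (the resource name `my-rc` and manifest file are illustrative):

```
kubectl delete rc my-rc
# wait until the object is really gone before recreating it
while kubectl get rc my-rc > /dev/null 2>&1; do
  sleep 1
done
kubectl create -f my-rc.yaml
```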

## Action Required Before Upgrading

- If you are using Kubernetes to manage `docker` containers, please be aware that Kubernetes has been validated to work with docker 1.9.1, docker 1.11.2 (#23397), and docker 1.12.0 (#28698)
- The NamespaceExists and NamespaceAutoProvision admission controllers have been removed; use the NamespaceLifecycle admission controller instead (#31250, @derekwaynecarr)
- If upgrading Cluster Federation components from 1.3.x, note that the `federation-apiserver` and `federation-controller-manager` binaries have been folded into `hyperkube`. Please switch to using that instead. (#29929, @madhusudancs)
- If you are using the PodSecurityPolicy feature (eg: `kubectl get podsecuritypolicy` does not error, and returns one or more objects), be aware that init containers have moved from alpha to beta. If there are any pods with the key `pods.beta.kubernetes.io/init-containers`, those pods may not have been filtered by the PodSecurityPolicy. You should find such pods and either delete them or audit them to ensure they do not use features that you intend to be blocked by PodSecurityPolicy. (#31026, @erictune)
- If upgrading Cluster Federation components from 1.3.x, please ensure your cluster name is a valid DNS label (#30956, @nikhiljindal)
- kubelet's `--config` flag has been deprecated; use `--pod-manifest-path` instead (#29999, @mtaufen)
- If upgrading Cluster Federation components from 1.3.x, be aware that the federation-controller-manager now looks for a different secret name. Run the following to migrate (#28938, @madhusudancs):
```
kubectl --namespace=federation get secret federation-apiserver-secret -o json | sed 's/federation-apiserver-secret/federation-apiserver-kubeconfig/g' | kubectl create -f -
# optionally, remove the old secret
kubectl delete secret --namespace=federation federation-apiserver-secret
```
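
For clarity, the `sed` step in the migration above rewrites the secret's name inside the JSON that `kubectl` dumps, so the final `kubectl create -f -` creates a copy of the secret under the new name. A standalone illustration of just the rename, runnable without a cluster (the JSON here is a simplified stand-in for the real secret object):

```shell
echo '{"metadata":{"name":"federation-apiserver-secret"}}' \
  | sed 's/federation-apiserver-secret/federation-apiserver-kubeconfig/g'
# prints {"metadata":{"name":"federation-apiserver-kubeconfig"}}
```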
- Kubernetes components no longer handle panics, and instead actively crash. All Kubernetes components should be run by something that actively restarts them. This is true of the default setups, but those with custom environments may need to double-check (#28800, @lavalamp)
- kubelet now defaults to `--cloud-provider=auto-detect`; use `--cloud-provider=''` to preserve the previous default of no cloud provider (#28258, @vishh)

## Previous Releases Included in v1.4.0

For a detailed list of all changes that were included in this release, please refer to the following CHANGELOG entries:

- [v1.4.0-beta.10](CHANGELOG.md#v140-beta10)
- [v1.4.0-beta.8](CHANGELOG.md#v140-beta8)
- [v1.4.0-beta.7](CHANGELOG.md#v140-beta7)
- [v1.4.0-beta.6](CHANGELOG.md#v140-beta6)
- [v1.4.0-beta.5](CHANGELOG.md#v140-beta5)
- [v1.4.0-beta.3](CHANGELOG.md#v140-beta3)
- [v1.4.0-beta.2](CHANGELOG.md#v140-beta2)
- [v1.4.0-beta.1](CHANGELOG.md#v140-beta1)
- [v1.4.0-alpha.3](CHANGELOG.md#v140-alpha3)
- [v1.4.0-alpha.2](CHANGELOG.md#v140-alpha2)
- [v1.4.0-alpha.1](CHANGELOG.md#v140-alpha1)
# v1.4.0-beta.10