Fix typos and linted_packages sorting
@@ -113,7 +113,7 @@ Ubuntu system. The profiles can be found at `{securityfs}/apparmor/profiles`
 
 ## API Changes
 
-The intial alpha support of AppArmor will follow the pattern
+The initial alpha support of AppArmor will follow the pattern
 [used by seccomp](https://github.com/kubernetes/kubernetes/pull/25324) and specify profiles through
 annotations. Profiles can be specified per-container through pod annotations. The annotation format
 is a key matching the container, and a profile name value:

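To make the annotation format above concrete, a per-container annotation might look like the following sketch in Go; the key prefix and profile name are illustrative assumptions modelled on the seccomp pattern referenced above, not values taken from the proposal.

```go
package main

import "fmt"

func main() {
	// Hypothetical per-container annotation: the key names the container
	// ("nginx") and the value names the AppArmor profile to apply. The key
	// prefix is an assumption patterned after the seccomp annotations.
	annotations := map[string]string{
		"container.apparmor.security.alpha.kubernetes.io/nginx": "localhost/docker-default",
	}
	for key, profile := range annotations {
		fmt.Printf("%s -> %s\n", key, profile)
	}
}
```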
@@ -56,7 +56,7 @@ ship with all of the requirements for the node specification by default.
 
 **Objective**: Generate security certificates used to configure secure communication between client, master and nodes
 
-TODO: Enumerate ceritificates which have to be generated.
+TODO: Enumerate certificates which have to be generated.
 
 ## Step 3: Deploy master

@@ -245,7 +245,7 @@ discussion and may be achieved alternatively:
 **Imperative pod-level interface**
 The interface contains only CreatePod(), StartPod(), StopPod() and RemovePod().
 This implies that the runtime needs to take over container lifecycle
-manangement (i.e., enforce restart policy), lifecycle hooks, liveness checks,
+management (i.e., enforce restart policy), lifecycle hooks, liveness checks,
 etc. Kubelet will mainly be responsible for interfacing with the apiserver, and
 can potentially become a very thin daemon.
 - Pros: Lower maintenance overhead for the Kubernetes maintainers if `Docker`

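As a reading aid, the imperative pod-level interface described in this fragment could be rendered in Go roughly as follows; only the four method names come from the text, while the signatures and the `Pod` placeholder type are assumptions.

```go
package podruntime

// Pod is a placeholder for whatever pod specification the runtime receives.
type Pod struct {
	ID         string
	Containers []string
}

// PodRuntime is an illustrative rendering of the imperative pod-level
// interface: the runtime itself owns container lifecycle management,
// restart policy, lifecycle hooks, and liveness checks.
type PodRuntime interface {
	CreatePod(pod *Pod) error
	StartPod(podID string) error
	StopPod(podID string) error
	RemovePod(podID string) error
}
```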
@@ -86,7 +86,7 @@ To prevent re-adoption of an object during deletion the `DeletionTimestamp` will
 Necessary related work:
 * `OwnerReferences` are correctly added/deleted,
 * GarbageCollector removes dangling references,
-* Controllers don't take any meaningfull actions when `DeletionTimestamps` is set.
+* Controllers don't take any meaningful actions when `DeletionTimestamps` is set.

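One way to read the last bullet is as an early-return guard in a controller's sync path; a minimal sketch, with the types reduced to stand-ins:

```go
package controller

import "time"

// ObjectMeta is a stand-in for the metadata every API object carries.
type ObjectMeta struct {
	DeletionTimestamp *time.Time
}

// shouldSync reports whether a controller should act on the object at all;
// once a deletion timestamp is set, cleanup is left to the garbage collector.
func shouldSync(meta ObjectMeta) bool {
	return meta.DeletionTimestamp == nil
}
```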
 # Considered alternatives

@@ -37,7 +37,7 @@ the pods matching the service pod-selector.
 
 ## Motivation
 
-The current implemention requires that the cloud loadbalancer balances traffic across all
+The current implementation requires that the cloud loadbalancer balances traffic across all
 Kubernetes worker nodes, and this traffic is then equally distributed to all the backend
 pods for that service.
 Due to the DNAT required to redirect the traffic to its ultimate destination, the return

@@ -16,7 +16,7 @@ federated servers.
 * Unblock new APIs from core kubernetes team review: A lot of new API proposals
 are currently blocked on review from the core kubernetes team. By allowing
 developers to expose their APIs as a separate server and enabling the cluster
-admin to use it without any change to the core kubernetes reporsitory, we
+admin to use it without any change to the core kubernetes repository, we
 unblock these APIs.
 * Place for staging experimental APIs: New APIs can remain in separate
 federated servers until they become stable, at which point, they can be moved

@@ -167,7 +167,7 @@ resource.
 
 This proposal is not enough for hosted cluster users, but allows us to improve
 that in the future.
-On a hosted kubernetes cluster, for eg on GKE - where Google manages the kubernetes
+On a hosted kubernetes cluster, for e.g. on GKE - where Google manages the kubernetes
 API server, users will have to bring up and maintain the proxy and federated servers
 themselves.
 Other system components like the various controllers, will not be aware of the

@@ -102,7 +102,7 @@ The first is accomplished in this PR, while a timeline for 2. and 3. is TDB. To
 - Put: This is a request for a lease. If the nodecontroller is allocating CIDRs we can probably just no-op.
 * `/network/reservations`: TDB, we can probably use this to accommodate node controller allocating CIDR instead of flannel requesting it
 
-The ick-iest part of this implementation is going to the `GET /network/leases`, i.e the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
+The ick-iest part of this implementation is going to the `GET /network/leases`, i.e. the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
 * Watch all nodes, ignore heartbeats
 * On each change, figure out the lease for the node, construct a [lease watch result](https://github.com/coreos/flannel/blob/0bf263826eab1707be5262703a8092c7d15e0be4/subnet/subnet.go#L72), and send it down the watch with the RV from the node
 * Implement a lease list that does a similar translation

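The watch-proxy idea in the bullets above boils down to translating node updates into flannel lease events while dropping heartbeat-only changes; a rough sketch, with every type and field name assumed for illustration:

```go
package watchproxy

// NodeEvent is a stand-in for a node watch update; LeaseEvent is the flannel
// subnet lease event it is translated into.
type NodeEvent struct {
	PodCIDR         string
	ResourceVersion string
	HeartbeatOnly   bool
}

type LeaseEvent struct {
	Subnet          string
	ResourceVersion string
}

// translate forwards one lease event per meaningful node change, ignoring
// heartbeats, and reuses the node's resource version as suggested above.
func translate(in <-chan NodeEvent, out chan<- LeaseEvent) {
	for ev := range in {
		if ev.HeartbeatOnly {
			continue
		}
		out <- LeaseEvent{Subnet: ev.PodCIDR, ResourceVersion: ev.ResourceVersion}
	}
}
```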
@@ -52,7 +52,7 @@ The admission controller code will go in `plugin/pkg/admission/imagepolicy`.
 There will be a cache of decisions in the admission controller.
 
 If the apiserver cannot reach the webhook backend, it will log a warning and either admit or deny the pod.
-A flag will control whether it admits or denys on failure.
+A flag will control whether it admits or denies on failure.
 The rationale for deny is that an attacker could DoS the backend or wait for it to be down, and then sneak a
 bad pod into the system. The rationale for allow here is that, if the cluster admin also does
 after-the-fact auditing of what images were run (which we think will be common), this will catch

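The fail-open versus fail-closed trade-off described here reduces to a single decision; a sketch, where the `failOpen` flag name is invented for illustration:

```go
package imagepolicy

// admitPod decides whether a pod is admitted. backendAllows is the webhook's
// verdict, err is any error reaching the backend, and failOpen mirrors the
// (hypothetically named) admit-on-failure flag discussed above.
func admitPod(backendAllows bool, err error, failOpen bool) bool {
	if err != nil {
		// Backend unreachable: log a warning (omitted here) and fall back to
		// the configured failure policy.
		return failOpen
	}
	return backendAllows
}
```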
@@ -88,7 +88,7 @@ type JobSpec struct {
 }
 ```
 
-`JobStatus` structure is defined to contain informations about pods executing
+`JobStatus` structure is defined to contain information about pods executing
 specified job. The structure holds information about pods currently executing
 the job.

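For orientation, a stripped-down sketch of what a status structure holding this information might contain; the exact field set is an assumption, not the proposal's definition:

```go
package batch

import "time"

// JobStatus (illustrative) summarizes the pods executing a job.
type JobStatus struct {
	StartTime *time.Time // when the controller started acting on the job
	Active    int32      // pods currently running the job
	Succeeded int32      // pods that terminated successfully
	Failed    int32      // pods that terminated in failure
}
```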
@@ -63,10 +63,10 @@ by the latter command.
 When clusters utilize authorization plugins access decisions are based on the
 correct configuration of an auth-N plugin, an auth-Z plugin, and client side
 credentials. Being rejected then begs several questions. Is the user's
-kubeconfig mis-configured? Is the authorization plugin setup wrong? Is the user
+kubeconfig misconfigured? Is the authorization plugin setup wrong? Is the user
 authenticating as a different user than the one they assume?
 
-To help `kubectl login` diagnose mis-configured credentials, responses from the
+To help `kubectl login` diagnose misconfigured credentials, responses from the
 API server to authenticated requests SHOULD include the `Authentication-Info`
 header as defined in [RFC 7615](https://tools.ietf.org/html/rfc7615). The value
 will hold name value pairs for `username` and `uid`. Since usernames and IDs

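A small server-side sketch of such a response header; the header name and the `username`/`uid` pair names come from the text and RFC 7615, while the concrete values and quoting are illustrative assumptions:

```go
package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Echo back who the server believes the caller is, so a client such as
	// `kubectl login` can spot mismatched credentials.
	w.Header().Set("Authentication-Info", `username="jane", uid="1234"`)
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/", handler)
	_ = http.ListenAndServe(":8080", nil)
}
```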
@@ -145,7 +145,7 @@ The following node conditions are defined that correspond to the specified evict
 | Node Condition | Eviction Signal | Description |
 |----------------|------------------|------------------------------------------------------------------|
 | MemoryPressure | memory.available | Available memory on the node has satisfied an eviction threshold |
-| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesytem or image filesystem has satisfied an eviction threshold |
+| DiskPressure | nodefs.available, nodefs.inodesFree, imagefs.available, or imagefs.inodesFree | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold |
 
 The `kubelet` will continue to report node status updates at the frequency specified by
 `--node-status-update-frequency` which defaults to `10s`.

@@ -300,7 +300,7 @@ In the future, if we store logs of dead containers outside of the container itse
 Once the lifetime of containers and logs are split, kubelet can support more user friendly policies
 around log evictions. `kubelet` can delete logs of the oldest containers first.
 Since logs from the first and the most recent incarnation of a container is the most important for most applications,
-kubelet can try to preserve these logs and aggresively delete logs from other container incarnations.
+kubelet can try to preserve these logs and aggressively delete logs from other container incarnations.
 
 Until logs are split from container's lifetime, `kubelet` can delete dead containers to free up disk space.

@@ -46,12 +46,12 @@ For a large enterprise where computing power is the king, one may imagine the fo
 - `linux/ppc64le`: For running highly-optimized software; especially massive compute tasks
 - `windows/amd64`: For running services that are only compatible on windows; e.g. business applications written in C# .NET
 
-For a mid-sized business where efficency is most important, these could be combinations:
+For a mid-sized business where efficiency is most important, these could be combinations:
 - `linux/amd64`: For running most of the general-purpose computing tasks, plus tasks that require very high single-core performance.
 - `linux/arm64`: For running webservices and high-density tasks => the cluster could autoscale in a way that `linux/amd64` machines could hibernate at night in order to minimize power usage.
 
-For a small business or university, arm is often sufficent:
-- `linux/arm`: Draws very little power, and can run web sites and app backends efficently on Scaleway for example.
+For a small business or university, arm is often sufficient:
+- `linux/arm`: Draws very little power, and can run web sites and app backends efficiently on Scaleway for example.
 
 And last but not least; Raspberry Pi's should be used for [education at universities](http://kubecloud.io/) and are great for **demoing Kubernetes' features at conferences.**

@@ -514,14 +514,14 @@ Linux 0a7da80f1665 4.2.0-25-generic #30-Ubuntu SMP Mon Jan 18 12:31:50 UTC 2016
 
 Here a linux module called `binfmt_misc` registered the "magic numbers" in the kernel, so the kernel may detect which architecture a binary is, and prepend the call with `/usr/bin/qemu-(arm|aarch64|ppc64le)-static`. For example, `/usr/bin/qemu-arm-static` is a statically linked `amd64` binary that translates all ARM syscalls to `amd64` syscalls.
 
-The multiarch guys have done a great job here, you may find the source for this and other images at [Github](https://github.com/multiarch)
+The multiarch guys have done a great job here, you may find the source for this and other images at [GitHub](https://github.com/multiarch)
 
 ## Implementation
 
 ## History
 
-32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on Github the 31st of September 2015)
+32-bit ARM (`linux/arm`) was the first platform Kubernetes was ported to, and luxas' project [`Kubernetes on ARM`](https://github.com/luxas/kubernetes-on-arm) (released on GitHub the 31st of September 2015)
 served as a way of running Kubernetes on ARM devices easily.
 The 30th of November 2015, a tracking issue about making Kubernetes run on ARM was opened: [#17981](https://github.com/kubernetes/kubernetes/issues/17981). It later shifted focus to how to make Kubernetes a more platform-independent system.

@@ -18,7 +18,7 @@ chosen networking solution.
 
 ## Implementation
 
-The implmentation in Kubernetes consists of:
+The implementation in Kubernetes consists of:
 - A v1beta1 NetworkPolicy API object
 - A structure on the `Namespace` object to control policy, to be developed as an annotation for now.

@@ -48,7 +48,7 @@ Basic ideas:
 
 ### Logging monitoring
 
-Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @bredanburns, is to compute the rate in which log files
+Log spam is a serious problem and we need to keep it under control. Simplest way to check for regressions, suggested by @brendandburns, is to compute the rate in which log files
 grow in e2e tests.

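The simplest form of that regression check is to sample a log file's size twice and divide by the elapsed time; a self-contained sketch, with the file path and interval as placeholders:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// fileSize returns the size of path in bytes, or 0 if it cannot be read.
func fileSize(path string) int64 {
	info, err := os.Stat(path)
	if err != nil {
		return 0
	}
	return info.Size()
}

func main() {
	const path = "/var/log/kubelet.log" // placeholder path
	const interval = 10 * time.Second   // placeholder sampling window

	before := fileSize(path)
	time.Sleep(interval)
	after := fileSize(path)

	fmt.Printf("log growth: %.1f bytes/sec\n",
		float64(after-before)/interval.Seconds())
}
```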
 Basic ideas:

@@ -70,7 +70,7 @@ Basic ideas:
 Reverse of REST call monitoring done in the API server. We need to know when a given component increases a pressure it puts on the API server. As a proxy for number of
 requests sent we can track how saturated are rate limiters. This has additional advantage of giving us data needed to fine-tune rate limiter constants.
 
-Because we have rate limitting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.
+Because we have rate limiting on both ends (client and API server) we should monitor number of inflight requests in API server and how it relates to `max-requests-inflight`.
 
 Basic ideas:
 - percentage of used non-burst limit,

@@ -383,7 +383,7 @@ The implementation goals of the first milestone are outlined below.
 - [x] Add PodContainerManagerImpl Create and Destroy methods which implements the respective PodContainerManager methods using a cgroupfs driver. #28017
 - [x] Have docker manager create container cgroups under pod level cgroups. Inject creation and deletion of pod cgroups into the pod workers. Add e2e tests to test this behaviour. #29049
 - [x] Add support for updating policy for the pod cgroups. Add e2e tests to test this behaviour. #29087
-- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before eenabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
+- [ ] Enabling 'cgroup-per-qos' flag in Kubelet: The user is expected to drain the node and restart it before enabling this feature, but as a fallback we also want to allow the user to just restart the kubelet with the cgroup-per-qos flag enabled to use this feature. As a part of this we need to figure out a policy for pods having Restart Policy: Never. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29946).
 - [ ] Removing terminated pod's Cgroup : We need to cleanup the pod's cgroup once the pod is terminated. More details in this [issue](https://github.com/kubernetes/kubernetes/issues/29927).
 - [ ] Kubelet needs to ensure that the cgroup settings are what the kubelet expects them to be. If security is not of concern, one can assume that once kubelet applies cgroups setting successfully, the values will never change unless kubelet changes it. If security is of concern, then kubelet will have to ensure that the cgroup values meet its requirements and then continue to watch for updates to cgroups via inotify and re-apply cgroup values if necessary.
 Updating QoS limits needs to happen before pod cgroups values are updated. When pod cgroups are being deleted, QoS limits have to be updated after pod cgroup values have been updated for deletion or pod cgroups have been removed. Given that kubelet doesn't have any checkpoints and updates to QoS and pod cgroups are not atomic, kubelet needs to reconcile cgroups status whenever it restarts to ensure that the cgroups values match kubelet's expectation.

@@ -56,7 +56,7 @@ attributes.
 Some use cases require the containers in a pod to run with different security settings. As an
 example, a user may want to have a pod with two containers, one of which runs as root with the
 privileged setting, and one that runs as a non-root UID. To support use cases like this, it should
-be possible to override appropriate (ie, not intrinsically pod-level) security settings for
+be possible to override appropriate (i.e., not intrinsically pod-level) security settings for
 individual containers.

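The override rule described here is simply "container-level wins over pod-level"; a minimal sketch for one field, with the type reduced to a stand-in:

```go
package securitycontext

// SecurityContext is a stand-in for the pod- and container-level security
// settings; only one representative field is shown.
type SecurityContext struct {
	RunAsUser *int64
}

// effectiveRunAsUser resolves the value for a single container: an explicit
// container-level setting overrides the pod-level one.
func effectiveRunAsUser(pod, container *SecurityContext) *int64 {
	if container != nil && container.RunAsUser != nil {
		return container.RunAsUser
	}
	if pod != nil {
		return pod.RunAsUser
	}
	return nil
}
```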
 ## Proposed Design

@@ -58,7 +58,7 @@ obtained by summing over usage of all nodes in the cluster.
 This feature is not yet specified/implemented although it seems reasonable to provide users information
 about resource usage on pod/node level.
 
-Since this feature has not been fully specified yet it will be not supported initally in the API although
+Since this feature has not been fully specified yet it will be not supported initially in the API although
 it will be probably possible to provide a reasonable implementation of the feature anyway.
 
 #### Kubernetes dashboard

@@ -67,7 +67,7 @@ it will be probably possible to provide a reasonable implementation of the featu
 in timeseries format from relatively long period of time. The aggregations should be also possible on various levels
 including replication controllers, deployments, services, etc.
 
-Since the use case is complicated it will not be supported initally in the API and they will query Heapster
+Since the use case is complicated it will not be supported initially in the API and they will query Heapster
 directly using some custom API there.
 
 ## Proposed API

@@ -303,7 +303,7 @@ in namespace `ns1` might create a job `nightly-earnings-report-3m4d3`, and later
 a job called `nightly-earnings-report-6k7ts`. This is consistent with pods, but
 does not give the user much information.
 
-Alternatively, we can use time as a uniqifier. For example, the same scheduledJob could
+Alternatively, we can use time as a uniquifier. For example, the same scheduledJob could
 create a job called `nightly-earnings-report-2016-May-19`.
 However, for Jobs that run more than once per day, we would need to represent
 time as well as date. Standard date formats (e.g. RFC 3339) use colons for time.

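A colon-free, time-based suffix of the kind discussed above could be produced like this; the layout string is an illustrative choice, not the proposal's:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// RFC 3339 timestamps contain colons, which are awkward in object names,
	// so this sketch formats the time without them.
	suffix := time.Now().UTC().Format("2006-01-02-150405")
	fmt.Println("nightly-earnings-report-" + suffix)
}
```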
@@ -172,7 +172,7 @@ not specify all files in the object.
 The are two downside:
 
 * The files are symlinks pointint to the real file, and the realfile
-permissions are only set. The symlink has the clasic symlink permissions.
+permissions are only set. The symlink has the classic symlink permissions.
 This is something already present in 1.3, and it seems applications like ssh
 work just fine with that. Something worth mentioning, but doesn't seem to be
 an issue.

@@ -10,7 +10,7 @@ There are two main motivators for Template functionality in Kubernetes: Control
 
 Today the replication controller defines a PodTemplate which allows it to instantiate multiple pods with identical characteristics.
 This is useful but limited. Stateful applications have a need to instantiate multiple instances of a more sophisticated topology
-than just a single pod (eg they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
+than just a single pod (e.g. they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
 instances of a given Template definition. This capability would be immediately useful to the [PetSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.
 
 Similarly the [Service Catalog proposal](https://github.com/kubernetes/kubernetes/pull/17543) could leverage template instantiation as a mechanism for claiming service instances.

@@ -22,7 +22,7 @@ Kubernetes gives developers a platform on which to run images and many configura
 constructing a cohesive application made up of images and configuration objects is currently difficult. Applications
 require:
 
-* Information sharing between images (eg one image provides a DB service, another consumes it)
+* Information sharing between images (e.g. one image provides a DB service, another consumes it)
 * Configuration/tuning settings (memory sizes, queue limits)
 * Unique/customizable identifiers (service names, routes)

@@ -30,7 +30,7 @@ Application authors know which values should be tunable and what information mus
 consistent way for an application author to define that set of information so that application consumers can easily deploy
 an application and make appropriate decisions about the tunable parameters the author intended to expose.
 
-Furthermore, even if an application author provides consumers with a set of API object definitions (eg a set of yaml files)
+Furthermore, even if an application author provides consumers with a set of API object definitions (e.g. a set of yaml files)
 it is difficult to build a UI around those objects that would allow the deployer to modify names in one place without
 potentially breaking assumed linkages to other pieces. There is also no prescriptive way to define which configuration
 values are appropriate for a deployer to tune or what the parameters control.

@@ -40,14 +40,14 @@ values are appropriate for a deployer to tune or what the parameters control.
 ### Use cases for templates in general
 
 * Providing a full baked application experience in a single portable object that can be repeatably deployed in different environments.
-* eg Wordpress deployment with separate database pod/replica controller
+* e.g. Wordpress deployment with separate database pod/replica controller
 * Complex service/replication controller/volume topologies
 * Bulk object creation
 * Provide a management mechanism for deleting/uninstalling an entire set of components related to a single deployed application
 * Providing a library of predefined application definitions that users can select from
 * Enabling the creation of user interfaces that can guide an application deployer through the deployment process with descriptive help about the configuration value decisions they are making, and useful default values where appropriate
 * Exporting a set of objects in a namespace as a template so the topology can be inspected/visualized or recreated in another environment
-* Controllers that need to instantiate multiple instances of identical objects (eg PetSets).
+* Controllers that need to instantiate multiple instances of identical objects (e.g. PetSets).
 
 ### Use cases for parameters within templates

@@ -59,9 +59,9 @@ values are appropriate for a deployer to tune or what the parameters control.
 * Allow simple, declarative defaulting of parameter values and expose them to end users in an approachable way - a parameter
 like “MySQL table space” can be parameterized in images as an env var - the template parameters declare the parameter, give
 it a friendly name, give it a reasonable default, and informs the user what tuning options are available.
-* Customization of component names to avoid collisions and ensure matched labeling (eg replica selector value and pod label are
+* Customization of component names to avoid collisions and ensure matched labeling (e.g. replica selector value and pod label are
 user provided and in sync).
-* Customize cross-component references (eg user provides the name of a secret that already exists in their namespace, to use in
+* Customize cross-component references (e.g. user provides the name of a secret that already exists in their namespace, to use in
 a pod as a TLS cert).
 * Provide guidance to users for parameters such as default values, descriptions, and whether or not a particular parameter value
 is required or can be left blank.

@@ -410,7 +410,7 @@ The api endpoint will then:
 returned.
 5. Return the processed template object. (or List, depending on the choice made when this is implemented)
 
-The client can now either return the processed template to the user in a desired form (eg json or yaml), or directly iterate the
+The client can now either return the processed template to the user in a desired form (e.g. json or yaml), or directly iterate the
 api objects within the template, invoking the appropriate object creation api endpoint for each element. (If the api returns
 a List, the client would simply iterate the list to create the objects).

@@ -453,9 +453,9 @@ automatic generation of passwords.
 (mapped to use cases described above)
 
 * [Share passwords](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L146-L152)
-* [Simple deployment-time customization of “app” configuration via environment values](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L108-L126) (eg memory tuning, resource limits, etc)
+* [Simple deployment-time customization of “app” configuration via environment values](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L108-L126) (e.g. memory tuning, resource limits, etc)
 * [Customization of component names with referential integrity](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L199-L207)
-* [Customize cross-component references](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L78-L83) (eg user provides the name of a secret that already exists in their namespace, to use in a pod as a TLS cert)
+* [Customize cross-component references](https://github.com/jboss-openshift/application-templates/blob/master/eap/eap64-mongodb-s2i.json#L78-L83) (e.g. user provides the name of a secret that already exists in their namespace, to use in a pod as a TLS cert)
 
 ## Requirements analysis

@@ -546,7 +546,7 @@ fields to be substituted by a parameter value use the "$(parameter)" syntax whic
 value of `parameter` should be matched to a parameter with that name, and the value of the matched parameter substituted into
 the field value.

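A toy implementation of the "$(parameter)" substitution rule just described might look as follows; the parameter name in the example is made up, and this is only a sketch of the rule, not the proposed implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

var paramRef = regexp.MustCompile(`\$\(([A-Za-z0-9_]+)\)`)

// substitute replaces each "$(NAME)" occurrence with the matching parameter
// value and leaves unknown references untouched.
func substitute(s string, params map[string]string) string {
	return paramRef.ReplaceAllStringFunc(s, func(m string) string {
		name := paramRef.FindStringSubmatch(m)[1]
		if v, ok := params[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	params := map[string]string{"DATABASE_SERVICE_NAME": "mysql"}
	fmt.Println(substitute("host: $(DATABASE_SERVICE_NAME)", params)) // host: mysql
}
```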
-Other suggestions include a path/map approach in which a list of field paths (eg json path expressions) and corresponding
+Other suggestions include a path/map approach in which a list of field paths (e.g. json path expressions) and corresponding
 parameter names are provided. The substitution process would walk the map, replacing fields with the appropriate
 parameter value. This approach makes templates more fragile from the perspective of editing/refactoring as field paths
 may change, thus breaking the map. There is of course also risk of breaking references with the previous scheme, but

@@ -560,7 +560,7 @@ Openshift defines templates as a first class resource so they can be created/ret
 
 Openshift handles template processing via a server endpoint which consumes a template object from the client and returns the list of objects
 produced by processing the template. It is also possible to handle the entire template processing flow via the client, but this was deemed
-undesirable as it would force each client tool to reimplement template processing (eg the standard CLI tool, an eclipse plugin, a plugin for a CI system like Jenkins, etc). The assumption in this proposal is that server side template processing is the preferred implementation approach for
+undesirable as it would force each client tool to reimplement template processing (e.g. the standard CLI tool, an eclipse plugin, a plugin for a CI system like Jenkins, etc). The assumption in this proposal is that server side template processing is the preferred implementation approach for
 this reason.

@@ -140,7 +140,7 @@ We propose that:
 controller attempts to delete the provisioned volume and creates an event
 on the claim
 
-Existing behavior is un-changed for claims that do not specify
+Existing behavior is unchanged for claims that do not specify
 `claim.Spec.Class`.

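Read literally, the unchanged-behaviour rule is a single check on the claim; a trivial sketch, with the claim type flattened for illustration:

```go
package provisioning

// Claim is a stand-in for a PersistentVolumeClaim; only the field relevant to
// the rule above is shown (claim.Spec.Class, flattened here).
type Claim struct {
	Class string
}

// usesClassBasedProvisioning reports whether the new, class-driven path
// applies; claims without a class keep the existing behaviour.
func usesClassBasedProvisioning(c Claim) bool {
	return c.Class != ""
}
```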
 * **Out of tree provisioning**
