Absolutize links that leave the docs/ tree to go anywhere other than to examples/ or back to docs/
David Oppenheimer 2015-07-20 00:25:07 -07:00
parent d414e29643
commit 50e95a031b
27 changed files with 86 additions and 86 deletions


@ -113,7 +113,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`
[Complete file example](../../pkg/auth/authorizer/abac/example_policy_file.jsonl)
[Complete file example](http://releases.k8s.io/HEAD/pkg/auth/authorizer/abac/example_policy_file.jsonl)
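For illustration, here is a minimal Go sketch of how a JSON-lines policy file like the examples above could be evaluated: each line is decoded into a policy object, and a request is allowed if any line matches it. The struct fields and matching rules are simplified assumptions for this sketch, not the actual `pkg/auth/authorizer/abac` implementation.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// policy mirrors the JSON-lines entries shown above; the field names and
// matching rules are a simplified sketch, not the real abac types.
type policy struct {
	User     string `json:"user"`
	Resource string `json:"resource"`
	Readonly bool   `json:"readonly"`
	NS       string `json:"ns"`
}

type request struct {
	User, Resource, NS string
	Readonly           bool
}

// matches treats an empty field as a wildcard; a readonly policy permits
// only read requests.
func (p policy) matches(r request) bool {
	if p.User != "" && p.User != r.User {
		return false
	}
	if p.Resource != "" && p.Resource != r.Resource {
		return false
	}
	if p.NS != "" && p.NS != r.NS {
		return false
	}
	if p.Readonly && !r.Readonly {
		return false
	}
	return true
}

// authorize allows the request if any policy line matches it.
func authorize(policyFile string, r request) bool {
	s := bufio.NewScanner(strings.NewReader(policyFile))
	for s.Scan() {
		var p policy
		if err := json.Unmarshal(s.Bytes(), &p); err != nil {
			continue // skip malformed lines in this sketch
		}
		if p.matches(r) {
			return true
		}
	}
	return false
}

func main() {
	file := `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`
	fmt.Println(authorize(file, request{User: "bob", Resource: "pods", NS: "projectCaribou", Readonly: true}))  // true
	fmt.Println(authorize(file, request{User: "bob", Resource: "pods", NS: "projectCaribou", Readonly: false})) // false: policy is readonly
}
```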
## Plugin Development


@ -97,17 +97,17 @@ selects a node for them to run on.
Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services do run on the master VM. See:
[kube-master-addons](../../cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
[kube-master-addons](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-master-addons/kube-master-addons.sh)
Addon objects are created in the "kube-system" namespace.
Example addons are:
* [DNS](../../cluster/addons/dns/) provides cluster local DNS.
* [kube-ui](../../cluster/addons/kube-ui/) provides a graphical UI for the
* [DNS](http://releases.k8s.io/HEAD/cluster/addons/dns/) provides cluster local DNS.
* [kube-ui](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) provides a graphical UI for the
cluster.
* [fluentd-elasticsearch](../../cluster/addons/fluentd-elasticsearch/) provides
log storage. Also see the [gcp version](../../cluster/addons/fluentd-gcp/).
* [cluster-monitoring](../../cluster/addons/cluster-monitoring/) provides
* [fluentd-elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) provides
log storage. Also see the [gcp version](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/).
* [cluster-monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) provides
monitoring for the cluster.
## Node components


@ -41,7 +41,7 @@ At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](../../cluster/gce/config-default.sh)).
Normally the number of nodes in a cluster is controlled by the value `NUM_MINIONS` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](http://releases.k8s.io/HEAD/cluster/gce/config-default.sh)).
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
@ -82,15 +82,15 @@ These limits, however, are based on data collected from addons running on 4-node
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
* Scale memory and CPU limits for each of the following addons, if used, along with the size of the cluster (there is one replica of each handling the entire cluster, so memory and CPU usage tends to grow proportionally with the size of and load on the cluster):
* Heapster ([GCM/GCL backed](../../cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](../../cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](../../cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](../../cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
* [InfluxDB and Grafana](../../cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](../../cluster/addons/dns/skydns-rc.yaml.in)
* [Kibana](../../cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Heapster ([GCM/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/google/heapster-controller.yaml), [InfluxDB backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/heapster-controller.yaml), [InfluxDB/GCL backed](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml), [standalone](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/standalone/heapster-controller.yaml))
* [InfluxDB and Grafana](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [skydns, kube2sky, and dns etcd](http://releases.k8s.io/HEAD/cluster/addons/dns/skydns-rc.yaml.in)
* [Kibana](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/kibana-controller.yaml)
* Scale the number of replicas for the following addons, if used, along with the size of the cluster (there are multiple replicas of each, so increasing the replica count should help handle increased load; but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
* [elasticsearch](../../cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* [elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/es-controller.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of the cluster (there is one replica per node, but CPU/memory usage increases slightly with cluster load/size as well):
* [FluentD with ElasticSearch Plugin](../../cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
* [FluentD with ElasticSearch Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](../user-guide/compute-resources.md#troubleshooting).
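As a rough illustration of the proportional-scaling guidance above, the Go sketch below scales a singleton addon's limit linearly with node count and gives a per-node addon only a small bump. The base and per-node numbers are invented for illustration; the real defaults live in the addon manifests linked above.

```go
package main

import "fmt"

// suggestedLimitMi sketches the scaling rule described above: singleton
// addons (Heapster, InfluxDB, kube-dns, Kibana) grow roughly linearly with
// the number of nodes, while per-node addons (fluentd) need only a small
// bump. The base and per-node values passed in are hypothetical.
func suggestedLimitMi(baseMi, perNodeMi float64, nodes int, singleton bool) float64 {
	if singleton {
		return baseMi + perNodeMi*float64(nodes)
	}
	// Per-node addon: one replica per node, so only a slight increase with cluster size.
	return baseMi + 0.1*perNodeMi*float64(nodes)
}

func main() {
	for _, n := range []int{4, 50, 100} {
		fmt.Printf("%3d nodes: singleton addon ~%.0fMi, per-node addon ~%.0fMi\n",
			n, suggestedLimitMi(64, 4, n, true), suggestedLimitMi(100, 2, n, false))
	}
}
```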


@ -33,7 +33,7 @@ Documentation for other releases can be found at
# DNS Integration with Kubernetes
As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
As of kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
@ -68,7 +68,7 @@ time.
## For more information
See [the docs for the DNS cluster addon](../../cluster/addons/dns/README.md).
See [the docs for the DNS cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -55,7 +55,7 @@ to reduce downtime in case of corruption.
## Default configuration
The default setup scripts use kubelet's file-based static pods feature to run etcd in a
[pod](../../cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
[pod](http://releases.k8s.io/HEAD/cluster/saltbase/salt/etcd/etcd.manifest). This manifest should only
be run on master VMs. The default location that kubelet scans for manifests is
`/etc/kubernetes/manifests/`.


@ -107,7 +107,7 @@ choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](../../cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
[kubelet init file](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and


@ -129,7 +129,7 @@ We should define a grains.conf key that captures more specifically what network
## Further reading
The [cluster/saltbase](../../cluster/saltbase/) tree has more details on the current SaltStack configuration.
The [cluster/saltbase](http://releases.k8s.io/HEAD/cluster/saltbase/) tree has more details on the current SaltStack configuration.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -48,7 +48,7 @@ Event compression should be best effort (not guaranteed). Meaning, in the worst
## Design
Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields:
Instead of a single Timestamp, each event object [contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following fields:
* `FirstTimestamp util.Time`
* The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
@ -72,7 +72,7 @@ Each binary that generates events:
* `event.Reason`
* `event.Message`
* The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](../../pkg/client/record/event.go)).
* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](http://releases.k8s.io/HEAD/pkg/client/record/event.go)).
* If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
* The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
* The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
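A minimal Go sketch of the aggregation scheme described above: events whose identifying fields all match share one cache entry whose count and last-seen timestamp are bumped, and the cache is capped so old entries are evicted. Type and field names are illustrative, not the actual `pkg/client/record` code, and the toy cache evicts by insertion order rather than true recency of use.

```go
package main

import (
	"fmt"
	"time"
)

// eventKey holds the fields that must all match for two events to be
// considered duplicates, per the description above.
type eventKey struct {
	Source, Kind, Namespace, Name, UID, APIVersion, Reason, Message string
}

type aggregatedEvent struct {
	Count          int
	FirstTimestamp time.Time
	LastTimestamp  time.Time
}

// eventCache is a toy stand-in for the capped cache of previously generated
// events: after maxEntries unique keys, the oldest entry is dropped.
type eventCache struct {
	maxEntries int
	order      []eventKey
	entries    map[eventKey]*aggregatedEvent
}

func newEventCache(max int) *eventCache {
	return &eventCache{maxEntries: max, entries: map[eventKey]*aggregatedEvent{}}
}

// record returns the running count for the event, creating or updating the
// cache entry. In the real component the update path issues a PUT to the
// API server instead of creating a new event object.
func (c *eventCache) record(k eventKey, now time.Time) int {
	if e, ok := c.entries[k]; ok {
		e.Count++
		e.LastTimestamp = now
		return e.Count
	}
	if len(c.order) >= c.maxEntries { // evict the oldest entry
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.entries, oldest)
	}
	c.entries[k] = &aggregatedEvent{Count: 1, FirstTimestamp: now, LastTimestamp: now}
	c.order = append(c.order, k)
	return 1
}

func main() {
	c := newEventCache(4096)
	k := eventKey{Source: "kubelet", Kind: "Pod", Namespace: "default", Name: "web-1",
		Reason: "FailedScheduling", Message: "no nodes available"}
	for i := 0; i < 3; i++ {
		fmt.Println("count:", c.record(k, time.Now()))
	}
}
```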


@ -54,7 +54,7 @@ particular, they may be self-merged by the release branch owner without fanfare,
in the case the release branch owner knows the cherry pick was already
requested - this should not be the norm, but it may happen.
[Contributor License Agreements](../../CONTRIBUTING.md) is considered implicit
[Contributor License Agreements](http://releases.k8s.io/HEAD/CONTRIBUTING.md) is considered implicit
for all code within cherry-pick pull requests, ***unless there is a large
conflict***.


@ -35,7 +35,7 @@ Documentation for other releases can be found at
### Supported
* [Go](../../pkg/client/)
* [Go](http://releases.k8s.io/HEAD/pkg/client/)
### User Contributed


@ -35,7 +35,7 @@ Documentation for other releases can be found at
# Releases and Official Builds
Official releases are built in Docker containers. Details are [here](../../build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below.
Official releases are built in Docker containers. Details are [here](http://releases.k8s.io/HEAD/build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of Docker, please continue below.
## Go development environment
@ -324,7 +324,7 @@ The conformance test runs a subset of the e2e-tests against a manually-created c
require support for up/push/down and other operations. To run a conformance test, you need to know the
IP of the master for your cluster and the authorization arguments to use. The conformance test is
intended to run against a cluster at a specific binary release of Kubernetes.
See [conformance-test.sh](../../hack/conformance-test.sh).
See [conformance-test.sh](http://releases.k8s.io/HEAD/hack/conformance-test.sh).
## Testing out flaky tests


@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Getting Kubernetes Builds
You can use [hack/get-build.sh](../../hack/get-build.sh) to get the most recent builds with curl, or use it as a reference. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build).
You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) to get the most recent builds with curl, or use it as a reference. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our CI and GCE e2e tests (essentially a nightly build).
```console
usage:


@ -53,30 +53,30 @@ divided by the node's capacity).
Finally, the node with the highest priority is chosen
(or, if there are multiple such nodes, then one of them is chosen at random). The code
for this main scheduling loop is in the function `Schedule()` in
[plugin/pkg/scheduler/generic_scheduler.go](../../plugin/pkg/scheduler/generic_scheduler.go)
[plugin/pkg/scheduler/generic_scheduler.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/generic_scheduler.go)
## Scheduler extensibility
The scheduler is extensible: the cluster administrator can choose which of the pre-defined
scheduling policies to apply, and can add new ones. The built-in predicates and priorities are
defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
[plugin/pkg/scheduler/algorithm/priorities/priorities.go](../../plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
defined in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go) and
[plugin/pkg/scheduler/algorithm/priorities/priorities.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/priorities.go), respectively.
The policies that are applied when scheduling can be chosen in one of two ways. Normally,
the policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
[plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
However, the choice of policies
can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON
file specifying which scheduling policies to use. See
[examples/scheduler-policy-config.json](../../examples/scheduler-policy-config.json) for an example
config file. (Note that the config file format is versioned; the API is defined in
[plugin/pkg/scheduler/api](../../plugin/pkg/scheduler/api/)).
[plugin/pkg/scheduler/api](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/api/)).
Thus to add a new scheduling policy, you should modify predicates.go or priorities.go,
and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
## Exploring the code
If you want to get a global picture of how the scheduler works, you can start in
[plugin/cmd/kube-scheduler/app/server.go](../../plugin/cmd/kube-scheduler/app/server.go)
[plugin/cmd/kube-scheduler/app/server.go](http://releases.k8s.io/HEAD/plugin/cmd/kube-scheduler/app/server.go)
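A compact Go sketch of the filter-then-rank flow described above, under the assumption of simplified predicate and priority signatures (the real interfaces live under `plugin/pkg/scheduler`): infeasible nodes are filtered out, the rest are scored by a weighted sum of priority functions, and ties are broken at random.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Simplified stand-ins for the scheduler's types; not the real interfaces.
type node struct{ Name string }
type pod struct{ Name string }

type predicate func(pod, node) bool
type priority struct {
	weight int
	score  func(pod, node) int // 0-10
}

func schedule(p pod, nodes []node, preds []predicate, prios []priority) (node, error) {
	// Step 1: filter out nodes that fail any predicate.
	var feasible []node
	for _, n := range nodes {
		ok := true
		for _, pr := range preds {
			if !pr(p, n) {
				ok = false
				break
			}
		}
		if ok {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		return node{}, fmt.Errorf("no nodes fit pod %q", p.Name)
	}

	// Step 2: rank the remaining nodes by the weighted sum of priority
	// scores, keeping every node that ties for the highest score.
	best, bestScore := []node{}, -1
	for _, n := range feasible {
		total := 0
		for _, pr := range prios {
			total += pr.weight * pr.score(p, n)
		}
		switch {
		case total > bestScore:
			best, bestScore = []node{n}, total
		case total == bestScore:
			best = append(best, n)
		}
	}

	// Step 3: break ties at random, as described above.
	return best[rand.Intn(len(best))], nil
}

func main() {
	nodes := []node{{"node-a"}, {"node-b"}, {"node-c"}}
	preds := []predicate{func(pod, node) bool { return true }}
	prios := []priority{{weight: 1, score: func(_ pod, n node) int { return len(n.Name) % 11 }}}
	chosen, _ := schedule(pod{"nginx"}, nodes, preds, prios)
	fmt.Println("scheduled on", chosen.Name)
}
```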
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -31,40 +31,40 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Scheduler Algorithm in Kubernetes
For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod.
## Filtering the nodes
The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource limits of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase, so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including:
- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted.
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node.
- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([Here](../user-guide/node-selection/) is an example of how to use the `nodeSelector` field).
- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.
The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).
## Ranking the nodes
The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score from 0 to 10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number, and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; then the final score of some NodeA is:
    finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node has the equal highest score, a random one among them is chosen.
Currently, the Kubernetes scheduler provides some practical priority functions, including:
- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of limits of all Pods already on the node - limit of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
- `CalculateNodeLabelPriority`: Prefer nodes that have the specified label.
- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.
- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node.
- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.
The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](../../plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similarly to predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similarly to predicates, you can combine the above priority functions and assign weight factors (positive numbers) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
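To make the arithmetic concrete, here is a small Go sketch of the weighted-score formula above together with a `LeastRequestedPriority`-style free-fraction score; the node capacities, requests, weights, and the second priority's score are hypothetical.

```go
package main

import "fmt"

// leastRequestedScore sketches the LeastRequestedPriority description above:
// score the free fraction of a resource on a 0-10 scale. The real
// implementation lives in plugin/pkg/scheduler/algorithm/priorities.
func leastRequestedScore(capacity, requested int64) int64 {
	if capacity == 0 || requested > capacity {
		return 0
	}
	return (capacity - requested) * 10 / capacity
}

func main() {
	// Hypothetical node with 4000 millicores / 8 GiB, already running pods
	// that request 2500m / 5 GiB; the pod being scheduled asks for 500m / 1 GiB.
	cpuScore := leastRequestedScore(4000, 2500+500)
	memScore := leastRequestedScore(8192, 5120+1024)
	// CPU and memory are equally weighted, per the description above.
	leastRequested := (cpuScore + memScore) / 2

	// Combine priority functions with weights, as in
	// finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2).
	weight1, weight2 := int64(1), int64(1)
	someOtherPriority := int64(7) // hypothetical score from a second priority function
	finalScoreNodeA := weight1*leastRequested + weight2*someOtherPriority
	fmt.Println("LeastRequested:", leastRequested, "final score:", finalScoreNodeA)
}
```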
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -63,16 +63,16 @@ export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
```
NOTE: This script calls [cluster/kube-up.sh](../../cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](../../cluster/aws/util.sh)
using [cluster/aws/config-default.sh](../../cluster/aws/config-default.sh).
NOTE: This script calls [cluster/kube-up.sh](http://releases.k8s.io/HEAD/cluster/kube-up.sh)
which in turn calls [cluster/aws/util.sh](http://releases.k8s.io/HEAD/cluster/aws/util.sh)
using [cluster/aws/config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh).
This process takes about 5 to 10 minutes. Once the cluster is up, the IP addresses of your master and node(s) will be printed,
as well as information about the default services running in the cluster (monitoring, logging, dns). User credentials and security
tokens are written in `~/.kube/kubeconfig`; they will be necessary to use the CLI or HTTP Basic Auth.
By default, the script will provision a new VPC and a 4 node k8s cluster in us-west-2a (Oregon) with `t2.micro` instances running on Ubuntu.
You can override the variables defined in [config-default.sh](../../cluster/aws/config-default.sh) to change this behavior as follows:
You can override the variables defined in [config-default.sh](http://releases.k8s.io/HEAD/cluster/aws/config-default.sh) to change this behavior as follows:
```bash
export KUBE_AWS_ZONE=eu-west-1c


@ -53,7 +53,7 @@ cd kubernetes
make release
```
For more details on the release process see the [`build/` directory](../../build/)
For more details on the release process see the [`build/` directory](http://releases.k8s.io/HEAD/build/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -86,7 +86,7 @@ wget -q -O - https://get.k8s.io | bash
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging.md), while `heapster` provides [monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.


@ -152,7 +152,7 @@ We've lost the log lines from the first invocation of the container in this po
When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.
This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
This log collection pod has a specification which looks something like this [fluentd-gcp.yaml](http://releases.k8s.io/HEAD/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
```yaml
apiVersion: v1
@ -225,7 +225,7 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
...
```
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described in [Collecting log files within containers with Fluentd](../../contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service.
This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files, one can use a sidecar container to gather the required files, as described in [Collecting log files within containers with Fluentd](http://releases.k8s.io/HEAD/contrib/logging/fluentd-sidecar-gcp/README.md), and send them to the Google Cloud Logging service.
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).


@ -835,7 +835,7 @@ At this point you should be able to run through one of the basic examples, such
### Running the Conformance Test
You may want to try to run the [Conformance test](../../hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
You may want to try to run the [Conformance test](http://releases.k8s.io/HEAD/hack/conformance-test.sh). Any failures may give a hint as to areas that need more attention.
### Networking


@ -148,7 +148,7 @@ with future high-availability support.
There are [client libraries](../devel/client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](../../pkg/client/)
[Go](http://releases.k8s.io/HEAD/pkg/client/)
client library can use the same [kubeconfig file](kubeconfig-file.md)
as the kubectl CLI does to locate and authenticate to the apiserver.


@ -145,7 +145,7 @@ To determine if a container cannot be scheduled or is being killed due to resour
The resource usage of a pod is reported as part of the Pod status.
If [optional monitoring](../../cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
If [optional monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
then pod resource usage can be retrieved from the monitoring system.
## Troubleshooting


@ -160,7 +160,7 @@ You should now be able to curl the nginx Service on `10.0.116.146:80` from any n
## Accessing the Service
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](../../cluster/addons/dns/README.md).
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md).
### Environment Variables
@ -199,7 +199,7 @@ kube-dns <none> k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
```
If it isn't running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
If it isn't running, you can [enable it](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml


@ -83,7 +83,7 @@ FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
```
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](../../cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
Services have a dedicated IP address, and are also surfaced to the container via DNS (if the [DNS addon](http://releases.k8s.io/HEAD/cluster/addons/dns/) is enabled). Of course, DNS is still not an enumerable protocol, so we will continue to provide environment variables so that containers can do discovery.
## Container Hooks


@ -111,7 +111,7 @@ describes how to ingest cluster level logs into Elasticsearch and view them usin
## Ingesting Application Log Files
Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files within containers with Fluentd](../../contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
running in containers. The guide [Collecting log files within containers with Fluentd](http://releases.k8s.io/HEAD/contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.
## Known issues


@ -206,7 +206,7 @@ spec:
[Pods](pods.md) support running multiple containers co-located together. They can be used to host vertically integrated application stacks, but their primary motivation is to support auxiliary helper programs that assist the primary application. Typical examples are data pullers, data pushers, and proxies.
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](../../contrib/git-sync/) for new updates:
Such containers typically need to communicate with one another, often through the file system. This can be achieved by mounting the same volume into both containers. An example of this pattern would be a web server with a [program that polls a git repository](http://releases.k8s.io/HEAD/contrib/git-sync/) for new updates:
```yaml
apiVersion: v1


@ -294,7 +294,7 @@ variables and DNS.
When a `Pod` is run on a `Node`, the kubelet adds a set of environment variables
for each active `Service`. It supports both [Docker links
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
[makeLinkVariables](../../pkg/kubelet/envvars/envvars.go#L49))
[makeLinkVariables](http://releases.k8s.io/HEAD/pkg/kubelet/envvars/envvars.go#L49))
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
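A small Go sketch of the naming rule just described (upper-case the Service name, convert dashes to underscores, expose `_SERVICE_HOST`/`_SERVICE_PORT`); it is an illustration of the rule, not the actual `makeLinkVariables` code in `pkg/kubelet/envvars`.

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvVars builds the simpler {SVCNAME}_SERVICE_HOST / _SERVICE_PORT
// variables described above: the Service name is upper-cased and dashes are
// converted to underscores.
func serviceEnvVars(name, clusterIP string, port int) map[string]string {
	key := strings.ToUpper(strings.ReplaceAll(name, "-", "_"))
	return map[string]string{
		key + "_SERVICE_HOST": clusterIP,
		key + "_SERVICE_PORT": fmt.Sprint(port),
	}
}

func main() {
	// e.g. a hypothetical Service "redis-master" with cluster IP 10.0.0.11
	for k, v := range serviceEnvVars("redis-master", "10.0.0.11", 6379) {
		fmt.Printf("%s=%s\n", k, v)
	}
}
```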
@ -319,7 +319,7 @@ variables will not be populated. DNS does not have this restriction.
### DNS
An optional (though strongly recommended) [cluster
add-on](../../cluster/addons/README.md) is a DNS server. The
add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md) is a DNS server. The
DNS server watches the Kubernetes API for new `Services` and creates a set of
DNS records for each. If DNS has been enabled throughout the cluster then all
`Pods` should be able to do name resolution of `Services` automatically.
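For example, from inside a pod in a cluster with the DNS add-on enabled, a Service name resolves through the standard resolver. The Go sketch below assumes a Service named `nginxsvc` exists in the pod's namespace; it will not resolve anything when run outside such a cluster.

```go
package main

import (
	"fmt"
	"net"
)

// A minimal sketch of what "name resolution of Services" means from inside a
// pod: with the DNS add-on running, the Service name resolves to its cluster
// IP through the standard resolver.
func main() {
	ips, err := net.LookupIP("nginxsvc")
	if err != nil {
		fmt.Println("lookup failed (is the DNS add-on running?):", err)
		return
	}
	for _, ip := range ips {
		fmt.Println("nginxsvc resolves to", ip)
	}
}
```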


@ -46,7 +46,7 @@ kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
```
Normally, this should be taken care of automatically by the [`kube-addons.sh`](../../cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
Normally, this should be taken care of automatically by the [`kube-addons.sh`](http://releases.k8s.io/HEAD/cluster/saltbase/salt/kube-addons/kube-addons.sh) script that runs on the master.
## Using the UI
@ -79,7 +79,7 @@ Other views (Pods, Nodes, Replication Controllers, Services, and Events) simply
## More Information
For more information, see the [Kubernetes UI development document](../../www/README.md) in the www directory.
For more information, see the [Kubernetes UI development document](http://releases.k8s.io/HEAD/www/README.md) in the www directory.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->