Automatic merge from submit-queue
Curating Owners: examples/mysql-wordpress-pd
cc @jeffmendoza
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who has contributed code to
the project and in which directories. Unfortunately, that doesn't work
well on its own: people who have made mechanical code changes (e.g.,
changing the copyright header across all directories) end up as
reviewers in lots of places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine-tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request has been made editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people who shouldn't
be reviewing code in the future. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull request
above as an example).
Automatic merge from submit-queue
log cfgzErr if err happened
We need to log the error returned by initConfigz(), no matter what the
result of utilconfig.DefaultFeatureGate.DynamicKubeletConfig() is and
whether s.RunOnce is true or not.
We should log the initKubeletConfigSync() error too.
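For illustration, a minimal sketch of the intended pattern (hypothetical stand-in code, not the kubelet's actual function signatures):
```go
package main

import (
	"errors"
	"log"
)

// initConfigz stands in for the kubelet helper of the same name.
func initConfigz() error {
	return errors.New("unable to register KubeletConfiguration with configz")
}

func main() {
	// Log the error unconditionally, instead of only on certain
	// feature-gate/RunOnce combinations as before.
	if err := initConfigz(); err != nil {
		log.Printf("unable to register configz: %v", err)
	}
}
```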
Automatic merge from submit-queue
Remove packages which are now apimachinery
Removes all the content from the packages that were moved to `apimachinery`. This will force all vendoring projects to figure out what's wrong. I had to leave many empty marker packages behind to have verify-godep succeed on vendoring heapster.
@sttts straight deletes and simple adds
Automatic merge from submit-queue (batch tested with PRs 34763, 38706, 39939, 40020)
Use Statefulset instead in e2e and controller
Quick fix, ref: #35534
We should finish the issue to meet the v1.6 milestone.
Automatic merge from submit-queue (batch tested with PRs 34763, 38706, 39939, 40020)
prevent anonymous auth and allow all
https://github.com/kubernetes/kubernetes/pull/38696 for master
@kubernetes/sig-auth
```release-note
Anonymous authentication is now automatically disabled if the API server is started with the AlwaysAllow authorizer.
```
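In effect, starting the server as below (real upstream flags) now behaves as if `--anonymous-auth=false` had also been passed, since AlwaysAllow would otherwise grant anonymous requests everything:
```
kube-apiserver --authorization-mode=AlwaysAllow
# now equivalent to:
kube-apiserver --authorization-mode=AlwaysAllow --anonymous-auth=false
```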
Automatic merge from submit-queue
log info on invalid --output-version
**Release note**:
``` release-note
release-note-none
```
Object versions default to the current version (v1) when a specified
`--output-version` is invalid. This patch logs a warning when this is
the case. Cases affected are all commands with the `--output-version`
option, and anywhere runtime objects are converted to versioned objects.
**Example**
```
$ kubectl get pod <mypod> -o json --output-version=invalid
W1013 17:24:16.810278 26719 result.go:238] info: the output version specified (invalid) is invalid, defaulting to v1
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "mypod",
        "namespace": "test",
        ...
```
Automatic merge from submit-queue
Move PatchType to apimachinery/pkg/types
Fixes https://github.com/kubernetes/kubernetes/issues/39970
`PatchType` is shared by the client and server; they have to agree, and it's critical for our API to function.
@smarterclayton @kubernetes/sig-api-machinery-misc
Automatic merge from submit-queue (batch tested with PRs 39911, 40002, 39969, 40012, 40009)
kubectl: fix rollback dryrun when version is not specified
@kubernetes/sig-cli-misc
Automatic merge from submit-queue (batch tested with PRs 39911, 40002, 39969, 40012, 40009)
Sync fluentd daemonset liveness probe with static pod liveness probe
Syncing change from https://github.com/kubernetes/kubernetes/pull/39949
Should also be cherry-picked
Automatic merge from submit-queue (batch tested with PRs 39911, 40002, 39969, 40012, 40009)
Fix RBAC role for kube-proxy in Kubemark
Ref #39959
This should ensure that kube-proxy (in Kubemark) has the required role and RBAC binding.
@deads2k PTAL
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 39911, 40002, 39969, 40012, 40009)
kubeadm: upgrade kube-dns to 1.11.0.
**What this PR does / why we need it**: See kubernetes/dns#25
**Which issue this PR fixes**: fixes kubernetes/kubeadm#121
**Special notes for your reviewer**: /cc @luxas
I know this is not the template solution you are looking for, but it seems to me this is important enough to do now because of the issues it fixes.
Tested manually and it works.
`NONE`
Automatic merge from submit-queue
[kubeadm] resetting cluster should check whether docker service is active
Signed-off-by: bruceauyeung <ouyang.qinhua@zte.com.cn>
**What this PR does / why we need it**:
If the docker service is not active, `kubeadm reset` will fail to remove Kubernetes-managed containers.
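A minimal sketch of such a check (hypothetical code; kubeadm's real implementation may query the init system differently):
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerIsActive reports whether the docker systemd unit is running.
// `systemctl is-active` prints "active" and exits 0 when it is.
func dockerIsActive() bool {
	out, err := exec.Command("systemctl", "is-active", "docker.service").Output()
	return err == nil && strings.TrimSpace(string(out)) == "active"
}

func main() {
	if !dockerIsActive() {
		fmt.Println("[reset] WARNING: docker is not active; Kubernetes-managed containers cannot be removed")
		return
	}
	fmt.Println("[reset] docker is active; proceeding to remove containers")
}
```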
Automatic merge from submit-queue
genericapiserver: cut off kube pkg/version dependency
Move the type into k8s.io/apiserver and use a fake version in genericapiserver tests for now.
Automatic merge from submit-queue
Use $HOSTNAME as node.name by default
**What this PR does / why we need it**:
Makes it easier to identify Elasticsearch instances.
Since a pod's $HOSTNAME is unique, this should be no problem.
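(Presumably this is wired up via Elasticsearch's environment-variable substitution, e.g. a line like `node.name: ${HOSTNAME}` in `elasticsearch.yml`; the exact config line is an assumption, not quoted from the PR.)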
Automatic merge from submit-queue
Corrected a typo in scheduler factory.go.
Automatic merge from submit-queue (batch tested with PRs 39945, 39601)
bugfix for PodToleratesNodeTaints
The `PodToleratesNodeTaints` predicate function should return true if the pod has no toleration annotations and the node's taint effect is `PreferNoSchedule`.
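A simplified sketch of the corrected behavior (standalone types for illustration; the real predicate reads taints and tolerations from API object annotations):
```go
package main

import "fmt"

// Simplified stand-ins for the API types used by the predicate.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// podToleratesNodeTaints returns true when every hard taint on the node is
// tolerated by the pod. PreferNoSchedule is only a scheduling preference,
// so it never makes the predicate fail, even for pods with no tolerations.
func podToleratesNodeTaints(tolerations []Toleration, taints []Taint) bool {
	for _, taint := range taints {
		if taint.Effect == "PreferNoSchedule" {
			continue // soft taint: never disqualifies the node
		}
		tolerated := false
		for _, tol := range tolerations {
			if tol.Key == taint.Key && tol.Effect == taint.Effect {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	soft := []Taint{{Key: "dedicated", Value: "infra", Effect: "PreferNoSchedule"}}
	// A pod with no tolerations still fits a node with only soft taints.
	fmt.Println(podToleratesNodeTaints(nil, soft)) // true
}
```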
Automatic merge from submit-queue
genericapiserver: cut off pkg/serviceaccount dependency
**Blocked** on pkg/api/validation/genericvalidation being split up and moved into apimachinery.
Automatic merge from submit-queue (batch tested with PRs 39948, 39997)
Fix ScheduledJob -> CronJob rename leftovers
I found a few leftovers from the rename I did some time ago.
@kubernetes/sig-apps-misc PTAL
Automatic merge from submit-queue
Move pkg/api/rest into genericapiserver
Automatic merge from submit-queue
Do not list CronJob unmet starting times beyond deadline
**What this PR does / why we need it**:
See #36311. `getRecentUnmetScheduleTimes` gives up after 100 unmet times to avoid wasting too much CPU or memory generating all the times, as it generates them sequentially.
When concurrency is forbidden, this is conceptually unnecessary: we only need the last unmet start time. This suggests that when concurrency is forbidden, we could generate times by going backward in time from now. This is not very practical, as CronJob currently relies on a package that only provides `Next` and no `Prev`. Hand-rolling a `Prev` does not seem like a good idea. I could submit a PR to the cron library to add a `Prev` method, and use that when concurrency is forbidden through something like `getLastUnmetScheduleTime`. This would be `O(1)` and there would be no limit involved.
(edit: actually, even for the other concurrency settings, we only start the last of the unmet start times -- there is a `TODO` in the controller to actually start all of them, but that is not implemented at the moment. This means the solution would apply, at least temporarily, to all concurrency settings.)
cc @soltysh what do you think?
In the meantime, I would suggest doing something simple. Currently, the user has no way to configure anything to ensure that their CronJob will not get stuck if one job takes more than 100 unmet times.
`getRecentUnmetScheduleTimes` starts with an initial time corresponding to the last start (or to the creation of the CronJob, if nothing has started yet). However, when `StartingDeadlineSeconds` is set, the controller will not start anything that is older than the deadline, so if the last start is way beyond the deadline, we are generating potentially lots of unmet start times that will not be considered by the scheduler for scheduling anyway.
Consider a job running every minute, where the last instance has taken 120 minutes. This means there are more than 100 unmet times when we start counting from the last start time.
**The PR makes `getRecentUnmetScheduleTimes` only consider times that do not fall beyond the deadline.** Here, the CronJob can be configured with a `StartingDeadlineSeconds` of, say, 10 minutes. After the 120min job has run, `getRecentUnmetScheduleTimes` will only consider the times in the last 10 minutes from now, and will not get stuck.
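A sketch of the resulting logic (hypothetical function and parameter names; the real `getRecentUnmetScheduleTimes` operates on the CronJob API object):
```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

// recentUnmetScheduleTimes lists schedule times between earliestTime and now.
// When a starting deadline is set, times older than the deadline are skipped,
// since the controller would refuse to start them anyway.
func recentUnmetScheduleTimes(schedule string, earliestTime time.Time, startingDeadlineSeconds *int64, now time.Time) ([]time.Time, error) {
	sched, err := cron.ParseStandard(schedule)
	if err != nil {
		return nil, fmt.Errorf("unparseable schedule %q: %v", schedule, err)
	}
	if startingDeadlineSeconds != nil {
		deadline := now.Add(-time.Duration(*startingDeadlineSeconds) * time.Second)
		if deadline.After(earliestTime) {
			earliestTime = deadline
		}
	}
	var starts []time.Time
	for t := sched.Next(earliestTime); !t.After(now); t = sched.Next(t) {
		starts = append(starts, t)
		if len(starts) > 100 {
			return nil, fmt.Errorf("too many missed start times to list")
		}
	}
	return starts, nil
}

func main() {
	now := time.Now()
	lastStart := now.Add(-120 * time.Minute) // the 120-minute job from the example
	deadline := int64(600)                   // StartingDeadlineSeconds: 10 minutes
	starts, err := recentUnmetScheduleTimes("*/1 * * * ?", lastStart, &deadline, now)
	fmt.Println(len(starts), err) // ~10 unmet times instead of giving up at 100
}
```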
As a side note on the maximum number of unmet times to use as a limit, in terms of CPU used by the controller: I ran a quick benchmark on my i7 Mac. Schedules corresponding to "once a week" tend to be more expensive to generate unmet times for. Just FYI.
```
+--------------+---------------+--------------+
| SCHEDULE | MISSED STARTS | TIMING |
+--------------+---------------+--------------+
| */1 * * * ? | 100 | 383.645µs |
| */30 * * * ? | 100 | 354.765µs |
| 30 1 * * ? | 100 | 1.065124ms |
| 30 1 * * 0 | 100 | 1.80034ms |
| */1 * * * ? | 500 | 1.341365ms |
| */30 * * * ? | 500 | 1.814441ms |
| 30 1 * * ? | 500 | 8.475012ms |
| 30 1 * * 0 | 500 | 10.020613ms |
| */1 * * * ? | 1000 | 2.551697ms |
| */30 * * * ? | 1000 | 4.075813ms |
| 30 1 * * ? | 1000 | 17.674945ms |
| 30 1 * * 0 | 1000 | 19.149324ms |
| */1 * * * ? | 10000 | 25.725531ms |
| */30 * * * ? | 10000 | 87.520022ms |
| 30 1 * * ? | 10000 | 174.29216ms |
| 30 1 * * 0 | 10000 | 196.565748ms |
+--------------+---------------+--------------+
```
using
```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"

	"github.com/olekukonko/tablewriter"
	"github.com/robfig/cron"
)

// timeSchedule measures how long it takes to generate `iterations`
// successive schedule times for the given cron expression.
func timeSchedule(schedule string, iterations int) time.Duration {
	sched, err := cron.ParseStandard(schedule)
	if err != nil {
		panic(fmt.Sprintf("Unparseable schedule: %s", err))
	}
	start := time.Now()
	t := time.Now()
	for i := 1; i <= iterations; i++ {
		t = sched.Next(t)
	}
	return time.Since(start)
}

func main() {
	table := tablewriter.NewWriter(os.Stdout)
	table.SetHeader([]string{"Schedule", "Missed starts", "Timing"})
	schedules := []string{"*/1 * * * ?", "*/30 * * * ?", "30 1 * * ?", "30 1 * * 0"}
	iterationNums := []int{100, 500, 1000, 10000}
	for _, iterations := range iterationNums {
		for _, schedule := range schedules {
			table.Append([]string{
				schedule,
				strconv.Itoa(iterations),
				timeSchedule(schedule, iterations).String(),
			})
		}
	}
	table.Render()
}
```
**Which issue this PR fixes**: fixes #36311
Automatic merge from submit-queue (batch tested with PRs 37680, 39968)
Update Owners for Scheduler
Update Owners file for scheduler component to spread the reviews around.
/cc @davidopp per the previous SIG meeting.