Update conformance.yaml to remove comments not part of Description

This commit is contained in:
Jefftree 2020-03-06 11:37:35 -08:00
parent 6da6380d1b
commit db714336a9


@@ -42,9 +42,8 @@
   codename: '[k8s.io] Container Runtime blackbox test on terminated container should
     report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy
     FallbackToLogsOnError is set [NodeConformance] [Conformance]'
-  description: 'Name: Container Runtime, TerminationMessage, from log output of succeeding
-    container Create a pod with an container. Container''s output is recorded in log
-    and container exits successfully without an error. When container is terminated,
+  description: 'Create a pod with an container. Container''s output is recorded in
+    log and container exits successfully without an error. When container is terminated,
     terminationMessage MUST have no content as container succeed. [LinuxOnly]: Cannot
     mount files in Windows Containers.'
   release: v1.15
@@ -53,9 +52,8 @@
   codename: '[k8s.io] Container Runtime blackbox test on terminated container should
     report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy
     FallbackToLogsOnError is set [NodeConformance] [Conformance]'
-  description: 'Name: Container Runtime, TerminationMessage, from file of succeeding
-    container Create a pod with an container. Container''s output is recorded in a
-    file and the container exits successfully without an error. When container is
+  description: 'Create a pod with an container. Container''s output is recorded in
+    a file and the container exits successfully without an error. When container is
     terminated, terminationMessage MUST match with the content from file. [LinuxOnly]:
     Cannot mount files in Windows Containers.'
   release: v1.15
@@ -64,23 +62,21 @@
   codename: '[k8s.io] Container Runtime blackbox test on terminated container should
     report termination message [LinuxOnly] from log output if TerminationMessagePolicy
     FallbackToLogsOnError is set [NodeConformance] [Conformance]'
-  description: 'Name: Container Runtime, TerminationMessage, from container''s log
-    output of failing container Create a pod with an container. Container''s output
-    is recorded in log and container exits with an error. When container is terminated,
-    termination message MUST match the expected output recorded from container''s
-    log. [LinuxOnly]: Cannot mount files in Windows Containers.'
+  description: 'Create a pod with an container. Container''s output is recorded in
+    log and container exits with an error. When container is terminated, termination
+    message MUST match the expected output recorded from container''s log. [LinuxOnly]:
+    Cannot mount files in Windows Containers.'
   release: v1.15
   file: test/e2e/common/runtime.go
 - testname: ""
   codename: '[k8s.io] Container Runtime blackbox test on terminated container should
     report termination message [LinuxOnly] if TerminationMessagePath is set as non-root
     user and at a non-default path [NodeConformance] [Conformance]'
-  description: 'Name: Container Runtime, TerminationMessagePath, non-root user and
-    non-default path Create a pod with a container to run it as a non-root user with
-    a custom TerminationMessagePath set. Pod redirects the output to the provided
-    path successfully. When the container is terminated, the termination message MUST
-    match the expected output logged in the provided custom path. [LinuxOnly]: Tagged
-    LinuxOnly due to use of ''uid'' and unable to mount files in Windows Containers.'
+  description: 'Create a pod with a container to run it as a non-root user with a
+    custom TerminationMessagePath set. Pod redirects the output to the provided path
+    successfully. When the container is terminated, the termination message MUST match
+    the expected output logged in the provided custom path. [LinuxOnly]: Tagged LinuxOnly
+    due to use of ''uid'' and unable to mount files in Windows Containers.'
   release: v1.15
   file: test/e2e/common/runtime.go
 - testname: Container Runtime, Restart Policy, Pod Phases
@@ -104,10 +100,9 @@
 - testname: Docker containers, with command
   codename: '[k8s.io] Docker Containers should be able to override the image''s default
     command (docker entrypoint) [NodeConformance] [Conformance]'
-  description: 'Note: when you override the entrypoint, the image''s arguments (docker
-    cmd) are ignored. Default command from the docker image entrypoint MUST NOT be
-    used when Pod specifies the container command. Command from Pod spec MUST override
-    the command in the image.'
+  description: Default command from the docker image entrypoint MUST NOT be used when
+    Pod specifies the container command. Command from Pod spec MUST override the
+    command in the image.
   release: v1.9
   file: test/e2e/common/docker_containers.go
 - testname: Docker containers, with command and arguments
@@ -793,22 +788,21 @@
 - testname: Garbage Collector, dependency cycle
   codename: '[sig-api-machinery] Garbage collector should not be blocked by dependency
     circle [Conformance]'
-  description: 'TODO: should be an integration test Create three pods, patch them
-    with Owner references such that pod1 has pod3, pod2 has pod1 and pod3 has pod2
-    as owner references respectively. Delete pod1 MUST delete all pods. The dependency
-    cycle MUST not block the garbage collection.'
+  description: Create three pods, patch them with Owner references such that pod1
+    has pod3, pod2 has pod1 and pod3 has pod2 as owner references respectively. Delete
+    pod1 MUST delete all pods. The dependency cycle MUST not block the garbage collection.
   release: v1.9
   file: test/e2e/apimachinery/garbage_collector.go
 - testname: Garbage Collector, multiple owners
   codename: '[sig-api-machinery] Garbage collector should not delete dependents that
     have both valid owner and owner that''s waiting for dependents to be deleted [Conformance]'
-  description: 'TODO: this should be an integration test Create a replication controller
-    RC1, with maximum allocatable Pods between 10 and 100 replicas. Create second
-    replication controller RC2 and set RC2 as owner for half of those replicas. Once
-    RC1 is created and the all Pods are created, delete RC1 with deleteOptions.PropagationPolicy
-    set to Foreground. Half of the Pods that has RC2 as owner MUST not be deleted
-    but have a deletion timestamp. Deleting the Replication Controller MUST not delete
-    Pods that are owned by multiple replication controllers.'
+  description: Create a replication controller RC1, with maximum allocatable Pods
+    between 10 and 100 replicas. Create second replication controller RC2 and set
+    RC2 as owner for half of those replicas. Once RC1 is created and the all Pods
+    are created, delete RC1 with deleteOptions.PropagationPolicy set to Foreground.
+    Half of the Pods that has RC2 as owner MUST not be deleted but have a deletion
+    timestamp. Deleting the Replication Controller MUST not delete Pods that are owned
+    by multiple replication controllers.
   release: v1.9
   file: test/e2e/apimachinery/garbage_collector.go
 - testname: Garbage Collector, delete deployment, propagation policy orphan
@@ -1385,10 +1379,9 @@
 - testname: Kubectl, proxy port zero
   codename: '[sig-cli] Kubectl client Proxy server should support proxy with --port
     0 [Conformance]'
-  description: 'TODO: test proxy options (static, prefix, etc) Start a proxy server
-    on port zero by running ''kubectl proxy'' with --port=0. Call the proxy server
-    by requesting api versions from unix socket. The proxy server MUST provide at
-    least one version string.'
+  description: Start a proxy server on port zero by running 'kubectl proxy' with --port=0.
+    Call the proxy server by requesting api versions from unix socket. The proxy server
+    MUST provide at least one version string.
   release: v1.9
   file: test/e2e/kubectl/kubectl.go
 - testname: Kubectl, replication controller
@@ -1473,13 +1466,11 @@
 - testname: Networking, intra pod http
   codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
     communication: http [NodeConformance] [Conformance]'
-  description: Try to hit all endpoints through a test container, retry 5 times, expect
-    exactly one unique hostname. Each of these endpoints reports its own hostname.
-    Create a hostexec pod that is capable of curl to netcat commands. Create a test
-    Pod that will act as a webserver front end exposing ports 8080 for tcp and 8081
-    for udp. The netserver service proxies are created on specified number of nodes.
-    The kubectl exec on the webserver container MUST reach a http port on the each
-    of service proxy endpoints in the cluster and the request MUST be successful.
+  description: Create a hostexec pod that is capable of curl to netcat commands. Create
+    a test Pod that will act as a webserver front end exposing ports 8080 for tcp
+    and 8081 for udp. The netserver service proxies are created on specified number
+    of nodes. The kubectl exec on the webserver container MUST reach a http port on
+    the each of service proxy endpoints in the cluster and the request MUST be successful.
     Container will execute curl command to reach the service port within specified
     max retry limit and MUST result in reporting unique hostnames.
   release: v1.9, v1.18
@@ -1540,10 +1531,8 @@
   file: test/e2e/network/proxy.go
 - testname: Proxy, logs service endpoint
   codename: '[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]'
-  description: using the porter image to serve content, access the content (of multiple
-    pods?) from multiple (endpoints/services?) Select any node in the cluster to invoke /logs
-    endpoint using the /nodes/proxy subresource from the kubelet port. This endpoint
-    MUST be reachable.
+  description: Select any node in the cluster to invoke /logs endpoint using the
+    /nodes/proxy subresource from the kubelet port. This endpoint MUST be reachable.
   release: v1.9
   file: test/e2e/network/proxy.go
 - testname: Service endpoint latency, thresholds
@@ -1711,16 +1700,7 @@
 - testname: Scheduler, resource limits
   codename: '[sig-scheduling] SchedulerPredicates [Serial] validates resource limits
     of pods that are allowed to run [Conformance]'
-  description: 'This test verifies we don''t allow scheduling of pods in a way that
-    sum of resource requests of pods is greater than machines capacity. It assumes
-    that cluster add-on pods stay stable and cannot be run in parallel with any other
-    test that touches Nodes or Pods. It is so because we need to have precise control
-    on what''s running in the cluster. Test scenario: 1. Find the amount CPU resources
-    on each node. 2. Create one pod with affinity to each node that uses 70% of the
-    node CPU. 3. Wait for the pods to be scheduled. 4. Create another pod with no
-    affinity to any node that need 50% of the largest node CPU. 5. Make sure this
-    additional pod is not scheduled. Scheduling Pods MUST fail if the resource requests
-    exceed Machine capacity.'
+  description: Scheduling Pods MUST fail if the resource requests exceed Machine capacity.
   release: v1.9
   file: test/e2e/scheduling/predicates.go
 - testname: Scheduler, node selector matching
@@ -1734,10 +1714,9 @@
 - testname: Scheduler, node selector not matching
   codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
     is respected if not matching [Conformance]'
-  description: Test Nodes does not have any label, hence it should be impossible to
-    schedule Pod with nonempty Selector set. Create a Pod with a NodeSelector set
-    to a value that does not match a node in the cluster. Since there are no nodes
-    matching the criteria the Pod MUST not be scheduled.
+  description: Create a Pod with a NodeSelector set to a value that does not match
+    a node in the cluster. Since there are no nodes matching the criteria the Pod
+    MUST not be scheduled.
   release: v1.9
   file: test/e2e/scheduling/predicates.go
 - testname: Scheduling, HostPort and Protocol match, HostIPs different but one is
@@ -2127,11 +2106,10 @@
 - testname: Projected Volume, multiple projections
   codename: '[sig-storage] Projected combined should project all components that make
     up the projection API [Projection][NodeConformance] [Conformance]'
-  description: Test multiple projections A Pod is created with a projected volume
-    source for secrets, configMap and downwardAPI with pod name, cpu and memory limits
-    and cpu and memory requests. Pod MUST be able to read the secrets, configMap values
-    and the cpu and memory limits as well as cpu and memory requests from the mounted
-    DownwardAPIVolumeFiles.
+  description: A Pod is created with a projected volume source for secrets, configMap
+    and downwardAPI with pod name, cpu and memory limits and cpu and memory requests.
+    Pod MUST be able to read the secrets, configMap values and the cpu and memory
+    limits as well as cpu and memory requests from the mounted DownwardAPIVolumeFiles.
   release: v1.9
   file: test/e2e/common/projected_combined.go
 - testname: Projected Volume, ConfigMap, create, update and delete