Implement pod resource metrics as described in KEP 1916. The new
`/metrics/resources` endpoint is exposed on the active scheduler
and reports kube_pod_resources* metrics that present the effective
requests and limits for all resources on pods, as calculated by
the scheduler and kubelet. Administrators can read these metrics
to quickly calculate resource consumption, reservation, and
pending utilization.
Because metrics calculation is on-demand, there is no additional
resource consumption incurred by the scheduler unless the endpoint
is scraped.
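For illustration, a minimal sketch of the effective-request
calculation those metrics surface (ignoring details such as pod
Overhead); effectiveRequests is a hypothetical helper, not the
scheduler's actual code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// effectiveRequests computes per-resource pod requests: the sum of all
// regular container requests, raised to each init container's request
// if that is larger (init containers run sequentially).
func effectiveRequests(pod *v1.Pod) v1.ResourceList {
	total := v1.ResourceList{}
	for _, c := range pod.Spec.Containers {
		for name, q := range c.Resources.Requests {
			sum := total[name]
			sum.Add(q)
			total[name] = sum
		}
	}
	for _, c := range pod.Spec.InitContainers {
		for name, q := range c.Resources.Requests {
			if cur, ok := total[name]; !ok || q.Cmp(cur) > 0 {
				total[name] = q.DeepCopy()
			}
		}
	}
	return total
}

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{
		InitContainers: []v1.Container{{Resources: v1.ResourceRequirements{
			Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("1")},
		}}},
		Containers: []v1.Container{{Resources: v1.ResourceRequirements{
			Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
		}}},
	}}
	req := effectiveRequests(pod)
	fmt.Println(req.Cpu().String()) // "1": the init container dominates
}
```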
Allow a fast, approximate conversion to float64 from Quantity
(an arbitrary-precision integral value). The conversion returns
+Inf/-Inf if a float64 cannot represent the current value.
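A short usage sketch, assuming the helper landed as
Quantity.AsApproximateFloat64 in k8s.io/apimachinery/pkg/api/resource:

```go
package main

import (
	"fmt"
	"math"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("250m")
	fmt.Println(q.AsApproximateFloat64()) // 0.25

	// A value far outside float64 range collapses to +Inf rather than erroring.
	huge := resource.MustParse("1" + strings.Repeat("0", 400))
	fmt.Println(math.IsInf(huge.AsApproximateFloat64(), 1)) // true
}
```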
A suite of e2e tests was created for the Topology Manager to
exercise the pod scope alignment feature.
Co-authored-by: Pawel Rapacz <p.rapacz@partner.samsung.com>
Co-authored-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
Signed-off-by: Cezary Zukowski <c.zukowski@samsung.com>
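The kubelet configuration the suite exercises, as a sketch assuming
the v1beta1 KubeletConfiguration fields (pod scope only has an effect
together with a restrictive policy such as single-numa-node):

```go
package main

import (
	"fmt"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TopologyManagerPolicy: "single-numa-node",
		TopologyManagerScope:  "pod", // align all containers of a pod together
	}
	fmt.Printf("policy=%s scope=%s\n", cfg.TopologyManagerPolicy, cfg.TopologyManagerScope)
}
```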
A Pod object is more flexible to use and construct.
* Update TestGetTopologyHints() to work with the new test cases
* Update topologyHintTestCase{} to include the proper field
Signed-off-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
* Extract common test cases that will be used for both GetTopologyHints()
and GetPodTopologyHints()
* Extract machineInfo, as it will be used by both functions as well
Signed-off-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
* Add topologyScopeName parameter to NewManager().
* Add a scope interface and a structure that implements the common
logic (sketched below)
* Add pod & container scopes
* Add pod lifecycle functions
Co-authored-by: sw.han <sw.han@samsung.com>
Signed-off-by: Krzysztof Wiatrzyk <k.wiatrzyk@samsung.com>
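A rough sketch of the scope split; names and signatures are
illustrative, not the exact kubelet API:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// Scope captures the part of the manager that varies with alignment
// granularity.
type Scope interface {
	Name() string
	Admit(pod *v1.Pod) error
}

// containerScope picks and applies a best hint per container.
type containerScope struct{}

func (containerScope) Name() string { return "container" }
func (containerScope) Admit(pod *v1.Pod) error {
	for range pod.Spec.Containers {
		// compute a best hint for this container and apply it
	}
	return nil
}

// podScope merges hints across all containers so the whole pod is
// aligned to the same NUMA node set.
type podScope struct{}

func (podScope) Name() string { return "pod" }
func (podScope) Admit(pod *v1.Pod) error {
	// compute one best hint for the pod, then apply it to every container
	return nil
}

// newScope mirrors the idea of the topologyScopeName parameter added
// to NewManager(): the scope is chosen by name at construction time.
func newScope(name string) Scope {
	if name == "pod" {
		return podScope{}
	}
	return containerScope{}
}

func main() {
	fmt.Println(newScope("pod").Name()) // pod
}
```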
Since we added tests to check connectivity against pods with
hostNetwork: true, there is the possibility that those pods
fail to run because the port is already in use on the host.
The current tests were using ports 8080, 8081, and 8082, which
are commonly used on hosts by other applications.
If the service is not ready after a certain time and we are using
pods with hostNetwork: true, we assume that there is a port
conflict and skip the test (see the sketch below).
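A sketch of the skip logic; waitForEndpoints and skipf are
hypothetical stand-ins for the e2e framework helpers:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForEndpoints simulates waiting for the service endpoints to be
// ready; the real test polls the API server.
func waitForEndpoints(timeout time.Duration) error {
	return errors.New("endpoints not ready after " + timeout.String())
}

func skipf(format string, args ...interface{}) {
	fmt.Printf("SKIP: "+format+"\n", args...)
}

// checkService: a readiness timeout on a hostNetwork pod is treated as
// a probable host port conflict and the test is skipped, not failed.
func checkService(hostNetwork bool) error {
	if err := waitForEndpoints(30 * time.Second); err != nil {
		if hostNetwork {
			skipf("service not ready with hostNetwork: true, assuming port conflict: %v", err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	_ = checkService(true)
}
```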
Dual-stack services can have two ClusterIPs. We already have tests
that exercise connectivity from different scenarios to the first
ClusterIP of the service.
This PR adds new functionality to the e2e network utils to enable
dual-stack services and replicates the same tests against the
secondary ClusterIP, so we cover connectivity to both ClusterIPs.
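A sketch of the approach; checkReachable is a hypothetical stand-in
for the e2e connectivity helper, and the ClusterIPs values are
examples of what the API server assigns on a dual-stack cluster:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// checkReachable stands in for the e2e helper that curls ip:port from
// a client pod.
func checkReachable(ip string, port int) {
	fmt.Printf("checking %s:%d\n", ip, port)
}

func main() {
	policy := v1.IPFamilyPolicyRequireDualStack
	svc := v1.Service{Spec: v1.ServiceSpec{
		IPFamilyPolicy: &policy,
		// assigned by the API server on a dual-stack cluster:
		ClusterIPs: []string{"10.0.0.10", "fd00::10"},
	}}
	// Run the same connectivity check against the primary and the
	// secondary ClusterIP.
	for _, ip := range svc.Spec.ClusterIPs {
		checkReachable(ip, 80)
	}
}
```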