- `cert-dir` can be set to a value other than the default
- we have tests that should execute successfully on a working cluster
Signed-off-by: Dave Chen <dave.chen@arm.com>
Before:
findDisk()
fcPathExp := "^(pci-.*-fc|fc)-0x" + wwn + "-lun-" + lun
After:
findDisk()
fcPathExp := "^(pci-.*-fc|fc)-0x" + wwn + "-lun-" + lun + "$"
fc paths may have the same wwn but different luns. For example:
pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-1
pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-12
Function findDisk() may match the wrong fc path and return the wrong device
with the wrong associated devicemapper parent.
This can cause a disaster where pods attach the wrong disks. It actually
happened in my testing environment.
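For illustration, a standalone snippet (not the volume plugin code itself)
that reproduces the mismatch with the two example paths above:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	wwn, lun := "500a0981891b8dc5", "1"
	paths := []string{
		"pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-1",
		"pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-12",
	}

	// Old expression: not anchored at the end, so the lun-1 pattern
	// also matches the lun-12 path.
	oldExp := regexp.MustCompile("^(pci-.*-fc|fc)-0x" + wwn + "-lun-" + lun)
	// New expression: the trailing "$" anchors the match, so only the
	// exact lun matches.
	newExp := regexp.MustCompile("^(pci-.*-fc|fc)-0x" + wwn + "-lun-" + lun + "$")

	for _, p := range paths {
		fmt.Printf("%s  old=%v  new=%v\n", p, oldExp.MatchString(p), newExp.MatchString(p))
	}
	// Prints:
	//   pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-1  old=true  new=true
	//   pci-0000:41:00.0-fc-0x500a0981891b8dc5-lun-12  old=true  new=false
}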
The ipvs proxier was figuring out LoadBalancerSourceRanges matches in
the nat table and using KUBE-MARK-DROP to mark unmatched packets to be
dropped later. But with ipvs, unlike with iptables, DNAT happens after
the packet is "delivered" to the dummy interface, so the packet is
still unmodified when it first reaches the filter table. There is
therefore no reason to split the work between the nat and filter
tables; we can just do it all from the filter table and call DROP
directly.
Before:
- KUBE-LOAD-BALANCER (in nat) uses kubeLoadBalancerFWSet to match LB
traffic for services using LoadBalancerSourceRanges, and sends it
to KUBE-FIREWALL.
- KUBE-FIREWALL uses kubeLoadBalancerSourceCIDRSet and
kubeLoadBalancerSourceIPSet to match allowed source/dest combos
and calls "-j RETURN".
- All remaining traffic that doesn't escape KUBE-FIREWALL is sent to
KUBE-MARK-DROP.
- Traffic sent to KUBE-MARK-DROP later gets dropped by chains in
filter created by kubelet.
After:
- All INPUT and FORWARD traffic gets routed to KUBE-PROXY-FIREWALL
(in filter). (We don't use "KUBE-FIREWALL" any more because
there's already a chain in filter by that name that belongs to
kubelet.)
- KUBE-PROXY-FIREWALL sends traffic matching kubeLoadbalancerFWSet
to KUBE-SOURCE-RANGES-FIREWALL
- KUBE-SOURCE-RANGES-FIREWALL uses kubeLoadBalancerSourceCIDRSet and
kubeLoadBalancerSourceIPSet to match allowed source/dest combos
and calls "-j RETURN".
- All remaining traffic that doesn't escape
KUBE-SOURCE-RANGES-FIREWALL is dropped (directly via "-j DROP").
- (KUBE-LOAD-BALANCER in nat is now used only to set up masquerading)
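A rough sketch of the verdict the new filter-table chains implement
(illustrative only: the real proxier emits iptables rules that consult
ipsets, it does not evaluate Go code per packet):

package main

import "fmt"

func firewallVerdict(matchesFWSet, allowedSourceCIDR, allowedSourceIP bool) string {
	// KUBE-PROXY-FIREWALL: only LB traffic for services using
	// LoadBalancerSourceRanges is sent to KUBE-SOURCE-RANGES-FIREWALL.
	if !matchesFWSet {
		return "continue (not subject to source-range filtering)"
	}
	// KUBE-SOURCE-RANGES-FIREWALL: allowed source/dest combos RETURN ...
	if allowedSourceCIDR || allowedSourceIP {
		return "RETURN (allowed by LoadBalancerSourceRanges)"
	}
	// ... and everything else is dropped directly, with no KUBE-MARK-DROP.
	return "DROP"
}

func main() {
	fmt.Println(firewallVerdict(true, false, false)) // disallowed source: DROP
	fmt.Println(firewallVerdict(true, true, false))  // allowed source: RETURN
	fmt.Println(firewallVerdict(false, false, false))
}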
When a Pod references a Node that doesn't exist in the local
informer cache, the previous behavior was to return an error, so the
item would be retried later, and to stop processing.
However, this can lead to scenarios where a missing node leaves a
Slice stuck: it can neither reflect other changes nor be created.
Also, this doesn't respect the publishNotReadyAddresses option
on Services, which considers it OK to publish Pod addresses that are
known to not be ready.
The new behavior keeps retrying the problematic Service, but it
keeps processing the updates, reflecting the current state on the
EndpointSlice. If publishNotReadyAddresses is set, a missing
node on a Pod is not treated as an error.
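A minimal sketch of that rule (hypothetical names, not the actual
controller code):

package main

import (
	"errors"
	"fmt"
)

var errNodeNotFound = errors.New("node not found in informer cache")

// lookupNode stands in for the node informer cache lookup.
func lookupNode(nodes map[string]bool, name string) error {
	if !nodes[name] {
		return errNodeNotFound
	}
	return nil
}

// endpointError returns an error only when the Pod's node is missing AND
// the Service does not publish not-ready addresses. Either way the caller
// keeps processing the remaining Pods and writes the current state to the
// EndpointSlice; a returned error only means the Service is queued for
// retry.
func endpointError(nodes map[string]bool, nodeName string, publishNotReadyAddresses bool) error {
	if err := lookupNode(nodes, nodeName); err != nil {
		if publishNotReadyAddresses {
			return nil // tolerated: the address is published anyway
		}
		return err
	}
	return nil
}

func main() {
	nodes := map[string]bool{"node-a": true}
	fmt.Println(endpointError(nodes, "node-b", false)) // error -> retry Service
	fmt.Println(endpointError(nodes, "node-b", true))  // <nil> -> missing node tolerated
}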
There is always a placeholder slice.
The ServicePortCache logic always assumed one endpointSlice
per Endpoint, but if there are multiple empty Endpoints, we use
just one placeholder slice, not multiple placeholder slices.
The following flags, which do not apply to kubectl run,
have been removed:
--cascade
--filename
--force
--grace-period
--kustomize
--recursive
--timeout
--wait
These flags were being added to the run command to support
pod deletion after attach, but they are not used even if set, so
they effectively do nothing.
This PR also displays an error message if the pod fails to be
deleted (when the --rm flag is used). Previously any error
during deletion would be suppressed and the pod would remain.
This PR also adds some unit tests for run and attach with and
without the --rm flag. As such, some minor refactoring of the
run command has been done to support mocking dependencies.