When a user recreates the whole cluster's certificates, the multus
thick plugin's previous certificate is no longer valid. In that case
we must stop using the cert manager's old certs and start over from
the bootstrap kubeconfig. This fix reloads the client config from the
bootstrap kubeconfig if fetching a pod with the cert manager's
certificate fails.
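A minimal sketch of the fallback, assuming hypothetical helper and
parameter names (not the actual multus code):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientWithBootstrapFallback probes the cert-manager-issued
// credentials with a cheap pod Get; if anything fails, it rebuilds the
// client from the bootstrap kubeconfig so certificates can be
// requested again from scratch.
func newClientWithBootstrapFallback(certKubeconfig, bootstrapKubeconfig, ns, podName string) (kubernetes.Interface, error) {
	if cfg, err := clientcmd.BuildConfigFromFlags("", certKubeconfig); err == nil {
		if client, err := kubernetes.NewForConfig(cfg); err == nil {
			if _, err := client.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{}); err == nil {
				return client, nil // existing certs still work
			}
		}
	}
	// Cluster certs were recreated: the old cert is invalid, so start
	// over from the bootstrap kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", bootstrapKubeconfig)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}
```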
As described in #1171, the watch verb is required in the ClusterRole
for the thick Multus version. The ClusterRole was missing the watch
permission on pods, which resulted in Multus logging this error
message every few seconds:
Failed to watch *v1.Pod: unknown (get pods)
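For reference, the rule the ClusterRole must grant, expressed with the
client-go RBAC types (a sketch, not the actual manifest):

```go
package main

import rbacv1 "k8s.io/api/rbac/v1"

// podsRule sketches what the ClusterRole must grant for the thick
// plugin's pod informer; without "watch" it logs the error above.
var podsRule = rbacv1.PolicyRule{
	APIGroups: []string{""}, // core API group
	Resources: []string{"pods"},
	Verbs:     []string{"get", "list", "watch"},
}
```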
This change stops updating the pod network status in CNI's DEL
command, for two reasons (see the sketch after this list):
1. cmd DEL is invoked only at pod deletion, so Kubernetes does not
   guarantee the pod still exists; it may already be deleted and the
   status-update API call may fail.
2. In a StatefulSet's pod-recreation case, updating the status at cmd
   DEL can race with the new pod: a pod with the same name, e.g.
   stateful-0, is deleted and then created again. If the old pod's
   CNI DEL command has not finished before the new pod is created,
   the SetStatus function fails due to a pod UID mismatch.
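An illustrative sketch of the resulting DEL handler shape (the
function name and structure are assumptions, not the actual multus
code):

```go
package main

import "github.com/containernetworking/cni/pkg/skel"

// cmdDel sketches the change: the network-status update that used to
// happen here is gone.
func cmdDel(args *skel.CmdArgs) error {
	// ... tear down the pod's delegate networks as before ...

	// No SetStatus call here: at DEL time the pod may already be gone,
	// and with StatefulSets a new pod with the same name (stateful-0)
	// but a different UID may already exist, so the update could fail
	// or race with the new pod's status.
	return nil
}
```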
This change introduces certDuration as a parameter to customize the
certificate duration. In addition, the environment variable for the
node name is made consistent with its other usages.
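A sketch of the wiring; the flag name, default, and env var name are
all assumptions for illustration:

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"time"
)

func main() {
	// Flag name and default are illustrative, not the real ones.
	certDuration := flag.Duration("cert-duration", 24*time.Hour,
		"requested lifetime of the per-node certificate")
	flag.Parse()

	// Node name read from the same env var as the other code paths
	// (variable name assumed here).
	nodeName := os.Getenv("MULTUS_NODE_NAME")
	fmt.Println(*certDuration, nodeName)
}
```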
We used to run chroot in the multus main process when calling another
CNI plugin binary, with a mutex locking access to pod files. This
caused performance issues under heavy CNI_ADD/CNI_DEL request load.
With this patch, we chroot in the child process instead, so file
operations in the main process are not affected by the chroot. This
change requires the multus thick plugin pod to mount the CNI bin
directory at the same path as on the host.
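A minimal sketch of the approach using Go's SysProcAttr, with
illustrative names:

```go
package main

import (
	"os/exec"
	"syscall"
)

// invokeDelegate sketches the idea: the chroot is applied in the
// forked child just before exec, so the multus main process never
// changes root and needs no mutex around file access. pluginPath must
// resolve to the same file inside hostRoot, which is why the CNI bin
// directory has to be mounted at the same path.
func invokeDelegate(hostRoot, pluginPath string, args []string) error {
	cmd := exec.Command(pluginPath, args...)
	cmd.SysProcAttr = &syscall.SysProcAttr{Chroot: hostRoot}
	return cmd.Run()
}
```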
Signed-off-by: Peng Liu <pliu@redhat.com>
This change introduces per-node certificates for multus pods. When a
multus pod is launched, it uses the specified bootstrap kubeconfig for
initial access and sends a CSR to the kube API to obtain its own
certificates for kube API access. Once the CSR is approved, the
multus pod uses the generated certificates for kube access.
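A minimal sketch of this flow with client-go's certificate manager;
the store paths, subject, and wiring here are assumptions:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"

	certificatesv1 "k8s.io/api/certificates/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/certificate"
)

// startCertManager submits a CSR with the bootstrap credentials and
// keeps the signed per-node cert rotated on disk.
func startCertManager(bootstrapClient kubernetes.Interface, nodeName string) (certificate.Manager, error) {
	store, err := certificate.NewFileStore(
		"multus", "/etc/cni/multus/certs", "/etc/cni/multus/certs", "", "")
	if err != nil {
		return nil, err
	}
	mgr, err := certificate.NewManager(&certificate.Config{
		ClientsetFn: func(_ *tls.Certificate) (kubernetes.Interface, error) {
			return bootstrapClient, nil // bootstrap creds submit the CSR
		},
		Template: &x509.CertificateRequest{
			Subject: pkix.Name{CommonName: "system:multus:" + nodeName},
		},
		SignerName:       certificatesv1.KubeAPIServerClientSignerName,
		Usages:           []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		CertificateStore: store,
	})
	if err != nil {
		return nil, err
	}
	mgr.Start() // request the cert, wait for approval, rotate as needed
	return mgr, nil
}
```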
For whatever reason calling os.Stat() on the readiness indicator file
from CmdAdd()/CmdDel() when multus is running in server mode and is
containerized often returns "file not found", which triggers the
polling behavior of GetReadinessIndicatorFile(). This greatly delays
CNI operations that should be pretty quick. Even if an exponential
backoff is used, os.Stat() can still return "file not found"
multiple times, even though the file clearly exists.
But it turns out we don't need to check the readiness file in server
mode when running with MultusConfigFile == "auto". In this mode the
server starts the ConfigManager, which (a) waits until the file
exists, (b) watches the readiness file via fsnotify, and (c) exits the
daemon immediately if the file is deleted or moved.
This means we can assume that, while the daemon is running and the
server is handling CNI requests, the readiness file exists; otherwise
the daemon would have exited. Thus CmdAdd/CmdDel don't
need to run a lot of possibly failing os.Stat() calls in the CNI
hot paths.
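A sketch of the invariant the ConfigManager provides, using fsnotify
(function and log wording assumed):

```go
package main

import (
	"log"
	"os"

	"github.com/fsnotify/fsnotify"
)

// watchReadiness encodes the guarantee: once the daemon is serving,
// the readiness file is known to exist, and its removal kills the
// daemon, so CmdAdd/CmdDel can skip os.Stat() entirely.
func watchReadiness(path string) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(path); err != nil {
		return err
	}
	for ev := range w.Events {
		if ev.Op&(fsnotify.Remove|fsnotify.Rename) != 0 {
			log.Printf("readiness file %s removed; exiting", path)
			os.Exit(1)
		}
	}
	return nil
}
```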
Signed-off-by: Dan Williams <dcbw@redhat.com>
The test was comparing the same configuration to itself, since
nothing in the changed CNI configuration is used in the written
multus configuration.
Instead make sure the updated CNI config contains something
that will be reflected in the written multus configuration,
and while we're there use a more robust way to wait for the
config to be written via gomega.Eventually().
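A sketch of the Eventually-based wait; the config path and expected
substring are assumptions:

```go
package config_test

import (
	"os"
	"testing"

	"github.com/onsi/gomega"
)

func TestRegeneratedConfig(t *testing.T) {
	g := gomega.NewWithT(t)
	multusConfPath := "/tmp/00-multus.conf" // assumed output path

	// ... change the CNI config in a way that must show up in the
	// regenerated multus config ...

	g.Eventually(func() (string, error) {
		b, err := os.ReadFile(multusConfPath)
		return string(b), err
	}, "5s", "100ms").Should(gomega.ContainSubstring(`"capabilities"`))
}
```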
Signed-off-by: Dan Williams <dcbw@redhat.com>
Simplify setup by moving the post-creation operations like
GenerateConfig() and PersistMultusConfig() into a new Start() function
that also begins watching the configuration directory. This better
encapsulates the manager functionality in the object.
We can also get rid of the done channel passed to the config
manager and just use the existing WaitGroup to determine when to
exit the daemon main().
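A sketch of the resulting shape; signatures are simplified and the
stubbed helpers are assumptions:

```go
package config

import (
	"context"
	"sync"
)

type Manager struct{ /* fields elided */ }

func (m *Manager) GenerateConfig() error              { return nil } // stub
func (m *Manager) PersistMultusConfig() error         { return nil } // stub
func (m *Manager) watchConfigDir(ctx context.Context) {}             // stub

// Start runs the post-creation steps and then begins watching the
// config directory; main() waits on the WaitGroup instead of a done
// channel to know when to exit.
func (m *Manager) Start(ctx context.Context, wg *sync.WaitGroup) error {
	if err := m.GenerateConfig(); err != nil {
		return err
	}
	if err := m.PersistMultusConfig(); err != nil {
		return err
	}
	wg.Add(1)
	go func() {
		defer wg.Done()
		m.watchConfigDir(ctx)
	}()
	return nil
}
```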
Signed-off-by: Dan Williams <dcbw@redhat.com>
A couple of the setup variables for NewManager*() are already in the
multus config that it gets passed, so use those instead of passing
them explicitly.
Signed-off-by: Dan Williams <dcbw@redhat.com>
When running in server mode we can use a shared informer to listen for
Pod events from the apiserver, and grab pod info from that cache rather
than doing direct apiserver requests each time.
This reduces apiserver load and retry latency, since multus can poll
the local cache more frequently than it should do direct apiserver
requests.
Oddly, static pods don't show up in the informer before the timeout
and require a direct apiserver request. Since static pods are not common
and are typically long-running, it should not be a big issue to
fall back to direct apiserver access for them.
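A sketch of the cache-first lookup with an apiserver fallback; the
polling interval, timeout, and helper name are assumptions:

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	listers "k8s.io/client-go/listers/core/v1"
)

// getPod polls the informer's local cache first and only falls back to
// a direct apiserver Get (e.g. for static pods) after the timeout.
func getPod(client kubernetes.Interface, lister listers.PodLister, ns, name string) (*corev1.Pod, error) {
	var pod *corev1.Pod
	err := wait.PollImmediate(50*time.Millisecond, time.Second, func() (bool, error) {
		p, err := lister.Pods(ns).Get(name)
		if err != nil {
			return false, nil // not in the cache yet; keep polling
		}
		pod = p
		return true, nil
	})
	if err == nil {
		return pod, nil
	}
	// Static pods may never appear in the informer in time; ask the
	// apiserver directly as a last resort.
	return client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
}
```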
Signed-off-by: Dan Williams <dcbw@redhat.com>
Move server start code to a common function that both regular
and test code can use. Also shut down the server from the
testcases.
Signed-off-by: Dan Williams <dcbw@redhat.com>
With that done, we can just use the Server struct's kube client and
exec rather than passing them through function parameters.
Signed-off-by: Dan Williams <dcbw@redhat.com>