Merge pull request #27567 from saad-ali/blockKubeletOnAttachController

Automatic merge from submit-queue

Kubelet Volume Manager Wait For Attach Detach Controller and Backoff on Error

* Closes https://github.com/kubernetes/kubernetes/issues/27483
  * Modified the Attach/Detach controller to report successful attaches in `Node.Status.VolumesAttached` (the unique volume name along with the device path).
  * Modified the Kubelet Volume Manager to wait for the Attach/Detach controller to report a successful attach before proceeding.
* Closes https://github.com/kubernetes/kubernetes/issues/27492
  * Implemented an exponential backoff mechanism for the volume manager and Attach/Detach controller to prevent operations (attach/detach/mount/unmount/wait for controller attach/etc.) from executing back to back unchecked.
* Closes https://github.com/kubernetes/kubernetes/issues/26679
  * Modified the volume `Attacher.WaitForAttach()` methods to use the device path reported by the Attach/Detach controller in `Node.Status.VolumesAttached` instead of calling out to cloud providers.
k8s-merge-robot 2016-06-20 20:36:08 -07:00 committed by GitHub
commit ec518005a8
42 changed files with 2782 additions and 628 deletions


@@ -16768,6 +16768,13 @@
"$ref": "v1.UniqueVolumeName"
},
"description": "List of attachable volumes in use (mounted) by the node."
},
"volumesAttached": {
"type": "array",
"items": {
"$ref": "v1.AttachedVolume"
},
"description": "List of volumes that are attached to the node."
}
}
},
@@ -16930,6 +16937,24 @@
"id": "v1.UniqueVolumeName",
"properties": {}
},
"v1.AttachedVolume": {
"id": "v1.AttachedVolume",
"description": "AttachedVolume describes a volume attached to a node",
"required": [
"name",
"devicePath"
],
"properties": {
"name": {
"type": "string",
"description": "Name of the attached volume"
},
"devicePath": {
"type": "string",
"description": "DevicePath represents the device path where the volume should be avilable"
}
}
},
"v1.PersistentVolumeClaimList": { "v1.PersistentVolumeClaimList": {
"id": "v1.PersistentVolumeClaimList", "id": "v1.PersistentVolumeClaimList",
"description": "PersistentVolumeClaimList is a list of PersistentVolumeClaim items.", "description": "PersistentVolumeClaimList is a list of PersistentVolumeClaim items.",


@@ -2265,6 +2265,61 @@ Populated by the system when a graceful deletion is requested. Read-only. More i
<div class="paragraph">
<p>Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.</p>
</div>
</div>
<div class="sect2">
<h3 id="_v1_namespacelist">v1.NamespaceList</h3>
<div class="paragraph">
<p>NamespaceList is a list of Namespaces.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard list metadata. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_listmeta">unversioned.ListMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">items</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Items is the list of Namespace objects in the list. More info: <a href="http://releases.k8s.io/HEAD/docs/user-guide/namespaces.md">http://releases.k8s.io/HEAD/docs/user-guide/namespaces.md</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_namespace">v1.Namespace</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_persistentvolumeclaim">v1.PersistentVolumeClaim</h3>
@@ -2327,61 +2382,6 @@ Populated by the system when a graceful deletion is requested. Read-only. More i
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_namespacelist">v1.NamespaceList</h3>
<div class="paragraph">
<p>NamespaceList is a list of Namespaces.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">metadata</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Standard list metadata. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_unversioned_listmeta">unversioned.ListMeta</a></p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">items</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Items is the list of Namespace objects in the list. More info: <a href="http://releases.k8s.io/HEAD/docs/user-guide/namespaces.md">http://releases.k8s.io/HEAD/docs/user-guide/namespaces.md</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_namespace">v1.Namespace</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_serviceaccount">v1.ServiceAccount</h3>
@@ -4762,6 +4762,13 @@ The resulting set of endpoints can be viewed as:<br>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_uniquevolumename">v1.UniqueVolumeName</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">volumesAttached</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">List of volumes that are attached to the node.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_attachedvolume">v1.AttachedVolume</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
@@ -4813,6 +4820,47 @@ The resulting set of endpoints can be viewed as:<br>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_attachedvolume">v1.AttachedVolume</h3>
<div class="paragraph">
<p>AttachedVolume describes a volume attached to a node</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">name</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Name of the attached volume</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">devicePath</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">DevicePath represents the device path where the volume should be avilable</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_eventsource">v1.EventSource</h3>
@@ -8105,7 +8153,7 @@ The resulting set of endpoints can be viewed as:<br>
</div>
<div id="footer">
<div id="footer-text">
-Last updated 2016-06-08 04:10:38 UTC
+Last updated 2016-06-16 20:52:03 UTC
</div>
</div>
</body>


@@ -35,6 +35,7 @@ func init() {
if err := Scheme.AddGeneratedDeepCopyFuncs(
DeepCopy_api_AWSElasticBlockStoreVolumeSource,
DeepCopy_api_Affinity,
DeepCopy_api_AttachedVolume,
DeepCopy_api_AzureFileVolumeSource,
DeepCopy_api_Binding,
DeepCopy_api_Capabilities,
@@ -228,6 +229,12 @@ func DeepCopy_api_Affinity(in Affinity, out *Affinity, c *conversion.Cloner) error {
return nil
}
func DeepCopy_api_AttachedVolume(in AttachedVolume, out *AttachedVolume, c *conversion.Cloner) error {
out.Name = in.Name
out.DevicePath = in.DevicePath
return nil
}
func DeepCopy_api_AzureFileVolumeSource(in AzureFileVolumeSource, out *AzureFileVolumeSource, c *conversion.Cloner) error {
out.SecretName = in.SecretName
out.ShareName = in.ShareName
@@ -1610,6 +1617,17 @@ func DeepCopy_api_NodeStatus(in NodeStatus, out *NodeStatus, c *conversion.Cloner) error {
} else {
out.VolumesInUse = nil
}
if in.VolumesAttached != nil {
in, out := in.VolumesAttached, &out.VolumesAttached
*out = make([]AttachedVolume, len(in))
for i := range in {
if err := DeepCopy_api_AttachedVolume(in[i], &(*out)[i], c); err != nil {
return err
}
}
} else {
out.VolumesAttached = nil
}
return nil
}


@@ -36525,7 +36525,7 @@ func (x *NodeStatus) CodecEncodeSelf(e *codec1978.Encoder) {
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
-var yyq2 [9]bool
+var yyq2 [10]bool
_, _, _ = yysep2, yyq2, yy2arr2
const yyr2 bool = false
yyq2[0] = len(x.Capacity) != 0
@@ -36537,9 +36537,10 @@
yyq2[6] = true
yyq2[7] = len(x.Images) != 0
yyq2[8] = len(x.VolumesInUse) != 0
yyq2[9] = len(x.VolumesAttached) != 0
var yynn2 int
if yyr2 || yy2arr2 {
-r.EncodeArrayStart(9)
+r.EncodeArrayStart(10)
} else {
yynn2 = 0
for _, b := range yyq2 {
@@ -36777,6 +36778,39 @@ func (x *NodeStatus) CodecEncodeSelf(e *codec1978.Encoder) {
}
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
if yyq2[9] {
if x.VolumesAttached == nil {
r.EncodeNil()
} else {
yym35 := z.EncBinary()
_ = yym35
if false {
} else {
h.encSliceAttachedVolume(([]AttachedVolume)(x.VolumesAttached), e)
}
}
} else {
r.EncodeNil()
}
} else {
if yyq2[9] {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("volumesAttached"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
if x.VolumesAttached == nil {
r.EncodeNil()
} else {
yym36 := z.EncBinary()
_ = yym36
if false {
} else {
h.encSliceAttachedVolume(([]AttachedVolume)(x.VolumesAttached), e)
}
}
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
@@ -36920,6 +36954,18 @@ func (x *NodeStatus) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
h.decSliceUniqueVolumeName((*[]UniqueVolumeName)(yyv15), d)
}
}
case "volumesAttached":
if r.TryDecodeAsNil() {
x.VolumesAttached = nil
} else {
yyv17 := &x.VolumesAttached
yym18 := z.DecBinary()
_ = yym18
if false {
} else {
h.decSliceAttachedVolume((*[]AttachedVolume)(yyv17), d)
}
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
@@ -36931,16 +36977,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
-var yyj17 int
+var yyj19 int
-var yyb17 bool
+var yyb19 bool
-var yyhl17 bool = l >= 0
+var yyhl19 bool = l >= 0
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36948,16 +36994,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Capacity = nil
} else {
-yyv18 := &x.Capacity
+yyv20 := &x.Capacity
-yyv18.CodecDecodeSelf(d)
+yyv20.CodecDecodeSelf(d)
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36965,16 +37011,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Allocatable = nil
} else {
-yyv19 := &x.Allocatable
+yyv21 := &x.Allocatable
-yyv19.CodecDecodeSelf(d)
+yyv21.CodecDecodeSelf(d)
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36984,13 +37030,13 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
} else {
x.Phase = NodePhase(r.DecodeString())
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36998,21 +37044,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Conditions = nil
} else {
-yyv21 := &x.Conditions
+yyv23 := &x.Conditions
-yym22 := z.DecBinary()
+yym24 := z.DecBinary()
-_ = yym22
+_ = yym24
if false {
} else {
-h.decSliceNodeCondition((*[]NodeCondition)(yyv21), d)
+h.decSliceNodeCondition((*[]NodeCondition)(yyv23), d)
}
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -37020,21 +37066,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Addresses = nil
} else {
-yyv23 := &x.Addresses
+yyv25 := &x.Addresses
-yym24 := z.DecBinary()
+yym26 := z.DecBinary()
-_ = yym24
+_ = yym26
if false {
} else {
-h.decSliceNodeAddress((*[]NodeAddress)(yyv23), d)
+h.decSliceNodeAddress((*[]NodeAddress)(yyv25), d)
}
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -37042,16 +37088,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.DaemonEndpoints = NodeDaemonEndpoints{}
} else {
-yyv25 := &x.DaemonEndpoints
+yyv27 := &x.DaemonEndpoints
-yyv25.CodecDecodeSelf(d)
+yyv27.CodecDecodeSelf(d)
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -37059,16 +37105,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.NodeInfo = NodeSystemInfo{}
} else {
-yyv26 := &x.NodeInfo
+yyv28 := &x.NodeInfo
-yyv26.CodecDecodeSelf(d)
+yyv28.CodecDecodeSelf(d)
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -37076,21 +37122,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Images = nil
} else {
-yyv27 := &x.Images
+yyv29 := &x.Images
-yym28 := z.DecBinary()
+yym30 := z.DecBinary()
-_ = yym28
+_ = yym30
if false {
} else {
-h.decSliceContainerImage((*[]ContainerImage)(yyv27), d)
+h.decSliceContainerImage((*[]ContainerImage)(yyv29), d)
}
}
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -37098,26 +37144,48 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.VolumesInUse = nil
} else {
-yyv29 := &x.VolumesInUse
+yyv31 := &x.VolumesInUse
-yym30 := z.DecBinary()
+yym32 := z.DecBinary()
-_ = yym30
+_ = yym32
if false {
} else {
-h.decSliceUniqueVolumeName((*[]UniqueVolumeName)(yyv29), d)
+h.decSliceUniqueVolumeName((*[]UniqueVolumeName)(yyv31), d)
}
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.VolumesAttached = nil
} else {
yyv33 := &x.VolumesAttached
yym34 := z.DecBinary()
_ = yym34
if false {
} else {
h.decSliceAttachedVolume((*[]AttachedVolume)(yyv33), d)
}
}
for {
-yyj17++
+yyj19++
-if yyhl17 {
+if yyhl19 {
-yyb17 = yyj17 > l
+yyb19 = yyj19 > l
} else {
-yyb17 = r.CheckBreak()
+yyb19 = r.CheckBreak()
}
-if yyb17 {
+if yyb19 {
break
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
-z.DecStructFieldNotFound(yyj17-1, "")
+z.DecStructFieldNotFound(yyj19-1, "")
}
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
}
@@ -37148,6 +37216,199 @@ func (x *UniqueVolumeName) CodecDecodeSelf(d *codec1978.Decoder) {
}
}
func (x *AttachedVolume) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yym1 := z.EncBinary()
_ = yym1
if false {
} else if z.HasExtensions() && z.EncExt(x) {
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
var yyq2 [2]bool
_, _, _ = yysep2, yyq2, yy2arr2
const yyr2 bool = false
var yynn2 int
if yyr2 || yy2arr2 {
r.EncodeArrayStart(2)
} else {
yynn2 = 2
for _, b := range yyq2 {
if b {
yynn2++
}
}
r.EncodeMapStart(yynn2)
yynn2 = 0
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
x.Name.CodecEncodeSelf(e)
} else {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("name"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
x.Name.CodecEncodeSelf(e)
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
yym7 := z.EncBinary()
_ = yym7
if false {
} else {
r.EncodeString(codecSelferC_UTF81234, string(x.DevicePath))
}
} else {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("devicePath"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
yym8 := z.EncBinary()
_ = yym8
if false {
} else {
r.EncodeString(codecSelferC_UTF81234, string(x.DevicePath))
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
z.EncSendContainerState(codecSelfer_containerMapEnd1234)
}
}
}
}
func (x *AttachedVolume) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yym1 := z.DecBinary()
_ = yym1
if false {
} else if z.HasExtensions() && z.DecExt(x) {
} else {
yyct2 := r.ContainerType()
if yyct2 == codecSelferValueTypeMap1234 {
yyl2 := r.ReadMapStart()
if yyl2 == 0 {
z.DecSendContainerState(codecSelfer_containerMapEnd1234)
} else {
x.codecDecodeSelfFromMap(yyl2, d)
}
} else if yyct2 == codecSelferValueTypeArray1234 {
yyl2 := r.ReadArrayStart()
if yyl2 == 0 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
x.codecDecodeSelfFromArray(yyl2, d)
}
} else {
panic(codecSelferOnlyMapOrArrayEncodeToStructErr1234)
}
}
}
func (x *AttachedVolume) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yys3Slc = z.DecScratchBuffer() // default slice to decode into
_ = yys3Slc
var yyhl3 bool = l >= 0
for yyj3 := 0; ; yyj3++ {
if yyhl3 {
if yyj3 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
}
z.DecSendContainerState(codecSelfer_containerMapKey1234)
yys3Slc = r.DecodeBytes(yys3Slc, true, true)
yys3 := string(yys3Slc)
z.DecSendContainerState(codecSelfer_containerMapValue1234)
switch yys3 {
case "name":
if r.TryDecodeAsNil() {
x.Name = ""
} else {
x.Name = UniqueVolumeName(r.DecodeString())
}
case "devicePath":
if r.TryDecodeAsNil() {
x.DevicePath = ""
} else {
x.DevicePath = string(r.DecodeString())
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
} // end for yyj3
z.DecSendContainerState(codecSelfer_containerMapEnd1234)
}
func (x *AttachedVolume) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj6 int
var yyb6 bool
var yyhl6 bool = l >= 0
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.Name = ""
} else {
x.Name = UniqueVolumeName(r.DecodeString())
}
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.DevicePath = ""
} else {
x.DevicePath = string(r.DecodeString())
}
for {
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
break
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
z.DecStructFieldNotFound(yyj6-1, "")
}
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
}
func (x *ContainerImage) CodecEncodeSelf(e *codec1978.Encoder) { func (x *ContainerImage) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234 var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e) z, r := codec1978.GenHelperEncoder(e)
@@ -57477,6 +57738,125 @@ func (x codecSelfer1234) decSliceUniqueVolumeName(v *[]UniqueVolumeName, d *codec1978.Decoder) {
}
}
func (x codecSelfer1234) encSliceAttachedVolume(v []AttachedVolume, e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
r.EncodeArrayStart(len(v))
for _, yyv1 := range v {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
yy2 := &yyv1
yy2.CodecEncodeSelf(e)
}
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
}
func (x codecSelfer1234) decSliceAttachedVolume(v *[]AttachedVolume, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yyv1 := *v
yyh1, yyl1 := z.DecSliceHelperStart()
var yyc1 bool
_ = yyc1
if yyl1 == 0 {
if yyv1 == nil {
yyv1 = []AttachedVolume{}
yyc1 = true
} else if len(yyv1) != 0 {
yyv1 = yyv1[:0]
yyc1 = true
}
} else if yyl1 > 0 {
var yyrr1, yyrl1 int
var yyrt1 bool
_, _ = yyrl1, yyrt1
yyrr1 = yyl1 // len(yyv1)
if yyl1 > cap(yyv1) {
yyrg1 := len(yyv1) > 0
yyv21 := yyv1
yyrl1, yyrt1 = z.DecInferLen(yyl1, z.DecBasicHandle().MaxInitLen, 32)
if yyrt1 {
if yyrl1 <= cap(yyv1) {
yyv1 = yyv1[:yyrl1]
} else {
yyv1 = make([]AttachedVolume, yyrl1)
}
} else {
yyv1 = make([]AttachedVolume, yyrl1)
}
yyc1 = true
yyrr1 = len(yyv1)
if yyrg1 {
copy(yyv1, yyv21)
}
} else if yyl1 != len(yyv1) {
yyv1 = yyv1[:yyl1]
yyc1 = true
}
yyj1 := 0
for ; yyj1 < yyrr1; yyj1++ {
yyh1.ElemContainerState(yyj1)
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv2 := &yyv1[yyj1]
yyv2.CodecDecodeSelf(d)
}
}
if yyrt1 {
for ; yyj1 < yyl1; yyj1++ {
yyv1 = append(yyv1, AttachedVolume{})
yyh1.ElemContainerState(yyj1)
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv3 := &yyv1[yyj1]
yyv3.CodecDecodeSelf(d)
}
}
}
} else {
yyj1 := 0
for ; !r.CheckBreak(); yyj1++ {
if yyj1 >= len(yyv1) {
yyv1 = append(yyv1, AttachedVolume{}) // var yyz1 AttachedVolume
yyc1 = true
}
yyh1.ElemContainerState(yyj1)
if yyj1 < len(yyv1) {
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv4 := &yyv1[yyj1]
yyv4.CodecDecodeSelf(d)
}
} else {
z.DecSwallow()
}
}
if yyj1 < len(yyv1) {
yyv1 = yyv1[:yyj1]
yyc1 = true
} else if yyj1 == 0 && yyv1 == nil {
yyv1 = []AttachedVolume{}
yyc1 = true
}
}
yyh1.End()
if yyc1 {
*v = yyv1
}
}
func (x codecSelfer1234) encResourceList(v ResourceList, e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
@@ -57630,7 +58010,7 @@ func (x codecSelfer1234) decSliceNode(v *[]Node, d *codec1978.Decoder) {
yyrg1 := len(yyv1) > 0
yyv21 := yyv1
-yyrl1, yyrt1 = z.DecInferLen(yyl1, z.DecBasicHandle().MaxInitLen, 592)
+yyrl1, yyrt1 = z.DecInferLen(yyl1, z.DecBasicHandle().MaxInitLen, 616)
if yyrt1 {
if yyrl1 <= cap(yyv1) {
yyv1 = yyv1[:yyrl1]

View File

@@ -1989,10 +1989,21 @@ type NodeStatus struct {
Images []ContainerImage `json:"images,omitempty"`
// List of attachable volumes in use (mounted) by the node.
VolumesInUse []UniqueVolumeName `json:"volumesInUse,omitempty"`
// List of volumes that are attached to the node.
VolumesAttached []AttachedVolume `json:"volumesAttached,omitempty"`
}
type UniqueVolumeName string
// AttachedVolume describes a volume attached to a node
type AttachedVolume struct {
// Name of the attached volume
Name UniqueVolumeName `json:"name"`
// DevicePath represents the device path where the volume should be available
DevicePath string `json:"devicePath"`
}
// Describe a container image
type ContainerImage struct {
// Names by which this image is known.
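The new `AttachedVolume` field added above carries the device path from the attach/detach controller to the kubelet. As a rough sketch of how a consumer can use it (the `devicePathFor` helper and the example volume name are invented for illustration, not part of this PR):

```go
package main

import "fmt"

// Minimal mirrors of the api types introduced above.
type UniqueVolumeName string

type AttachedVolume struct {
	Name       UniqueVolumeName `json:"name"`
	DevicePath string           `json:"devicePath"`
}

// devicePathFor is a hypothetical helper: it looks up the device path the
// controller reported for a volume in Node.Status.VolumesAttached, which is
// what the kubelet's WaitForAttach path now relies on instead of querying
// the cloud provider directly.
func devicePathFor(attached []AttachedVolume, name UniqueVolumeName) (string, bool) {
	for _, v := range attached {
		if v.Name == name {
			return v.DevicePath, true
		}
	}
	return "", false
}

func main() {
	attached := []AttachedVolume{
		{Name: "kubernetes.io/aws-ebs/vol-123", DevicePath: "/dev/xvdf"},
	}
	if path, ok := devicePathFor(attached, "kubernetes.io/aws-ebs/vol-123"); ok {
		fmt.Println(path)
	}
}
```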

View File

@@ -33,6 +33,8 @@ func init() {
Convert_api_AWSElasticBlockStoreVolumeSource_To_v1_AWSElasticBlockStoreVolumeSource,
Convert_v1_Affinity_To_api_Affinity,
Convert_api_Affinity_To_v1_Affinity,
Convert_v1_AttachedVolume_To_api_AttachedVolume,
Convert_api_AttachedVolume_To_v1_AttachedVolume,
Convert_v1_AzureFileVolumeSource_To_api_AzureFileVolumeSource,
Convert_api_AzureFileVolumeSource_To_v1_AzureFileVolumeSource,
Convert_v1_Binding_To_api_Binding,
@@ -425,6 +427,26 @@ func Convert_api_Affinity_To_v1_Affinity(in *api.Affinity, out *Affinity, s conv
return autoConvert_api_Affinity_To_v1_Affinity(in, out, s)
}
func autoConvert_v1_AttachedVolume_To_api_AttachedVolume(in *AttachedVolume, out *api.AttachedVolume, s conversion.Scope) error {
out.Name = api.UniqueVolumeName(in.Name)
out.DevicePath = in.DevicePath
return nil
}
func Convert_v1_AttachedVolume_To_api_AttachedVolume(in *AttachedVolume, out *api.AttachedVolume, s conversion.Scope) error {
return autoConvert_v1_AttachedVolume_To_api_AttachedVolume(in, out, s)
}
func autoConvert_api_AttachedVolume_To_v1_AttachedVolume(in *api.AttachedVolume, out *AttachedVolume, s conversion.Scope) error {
out.Name = UniqueVolumeName(in.Name)
out.DevicePath = in.DevicePath
return nil
}
func Convert_api_AttachedVolume_To_v1_AttachedVolume(in *api.AttachedVolume, out *AttachedVolume, s conversion.Scope) error {
return autoConvert_api_AttachedVolume_To_v1_AttachedVolume(in, out, s)
}
func autoConvert_v1_AzureFileVolumeSource_To_api_AzureFileVolumeSource(in *AzureFileVolumeSource, out *api.AzureFileVolumeSource, s conversion.Scope) error {
out.SecretName = in.SecretName
out.ShareName = in.ShareName
@@ -3397,6 +3419,17 @@ func autoConvert_v1_NodeStatus_To_api_NodeStatus(in *NodeStatus, out *api.NodeSt
} else {
out.VolumesInUse = nil
}
if in.VolumesAttached != nil {
in, out := &in.VolumesAttached, &out.VolumesAttached
*out = make([]api.AttachedVolume, len(*in))
for i := range *in {
if err := Convert_v1_AttachedVolume_To_api_AttachedVolume(&(*in)[i], &(*out)[i], s); err != nil {
return err
}
}
} else {
out.VolumesAttached = nil
}
return nil
}
@@ -3480,6 +3513,17 @@ func autoConvert_api_NodeStatus_To_v1_NodeStatus(in *api.NodeStatus, out *NodeSt
} else {
out.VolumesInUse = nil
}
if in.VolumesAttached != nil {
in, out := &in.VolumesAttached, &out.VolumesAttached
*out = make([]AttachedVolume, len(*in))
for i := range *in {
if err := Convert_api_AttachedVolume_To_v1_AttachedVolume(&(*in)[i], &(*out)[i], s); err != nil {
return err
}
}
} else {
out.VolumesAttached = nil
}
return nil
}

View File

@@ -34,6 +34,7 @@ func init() {
if err := api.Scheme.AddGeneratedDeepCopyFuncs(
DeepCopy_v1_AWSElasticBlockStoreVolumeSource,
DeepCopy_v1_Affinity,
DeepCopy_v1_AttachedVolume,
DeepCopy_v1_AzureFileVolumeSource,
DeepCopy_v1_Binding,
DeepCopy_v1_Capabilities,
@@ -225,6 +226,12 @@ func DeepCopy_v1_Affinity(in Affinity, out *Affinity, c *conversion.Cloner) erro
return nil
}
func DeepCopy_v1_AttachedVolume(in AttachedVolume, out *AttachedVolume, c *conversion.Cloner) error {
out.Name = in.Name
out.DevicePath = in.DevicePath
return nil
}
func DeepCopy_v1_AzureFileVolumeSource(in AzureFileVolumeSource, out *AzureFileVolumeSource, c *conversion.Cloner) error {
out.SecretName = in.SecretName
out.ShareName = in.ShareName
@@ -1557,6 +1564,17 @@ func DeepCopy_v1_NodeStatus(in NodeStatus, out *NodeStatus, c *conversion.Cloner
} else {
out.VolumesInUse = nil
}
if in.VolumesAttached != nil {
in, out := in.VolumesAttached, &out.VolumesAttached
*out = make([]AttachedVolume, len(in))
for i := range in {
if err := DeepCopy_v1_AttachedVolume(in[i], &(*out)[i], c); err != nil {
return err
}
}
} else {
out.VolumesAttached = nil
}
return nil
}

View File

@@ -27,6 +27,7 @@ limitations under the License.
It has these top-level messages:
AWSElasticBlockStoreVolumeSource
Affinity
AttachedVolume
AzureFileVolumeSource
Binding
Capabilities
@@ -201,6 +202,10 @@ func (m *Affinity) Reset() { *m = Affinity{} }
func (m *Affinity) String() string { return proto.CompactTextString(m) }
func (*Affinity) ProtoMessage() {}
func (m *AttachedVolume) Reset() { *m = AttachedVolume{} }
func (m *AttachedVolume) String() string { return proto.CompactTextString(m) }
func (*AttachedVolume) ProtoMessage() {}
func (m *AzureFileVolumeSource) Reset() { *m = AzureFileVolumeSource{} }
func (m *AzureFileVolumeSource) String() string { return proto.CompactTextString(m) }
func (*AzureFileVolumeSource) ProtoMessage() {}
@@ -788,6 +793,7 @@ func (*WeightedPodAffinityTerm) ProtoMessage() {}
func init() {
proto.RegisterType((*AWSElasticBlockStoreVolumeSource)(nil), "k8s.io.kubernetes.pkg.api.v1.AWSElasticBlockStoreVolumeSource")
proto.RegisterType((*Affinity)(nil), "k8s.io.kubernetes.pkg.api.v1.Affinity")
proto.RegisterType((*AttachedVolume)(nil), "k8s.io.kubernetes.pkg.api.v1.AttachedVolume")
proto.RegisterType((*AzureFileVolumeSource)(nil), "k8s.io.kubernetes.pkg.api.v1.AzureFileVolumeSource")
proto.RegisterType((*Binding)(nil), "k8s.io.kubernetes.pkg.api.v1.Binding")
proto.RegisterType((*Capabilities)(nil), "k8s.io.kubernetes.pkg.api.v1.Capabilities")
@@ -1020,6 +1026,32 @@ func (m *Affinity) MarshalTo(data []byte) (int, error) {
return i, nil
}
func (m *AttachedVolume) Marshal() (data []byte, err error) {
size := m.Size()
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return data[:n], nil
}
func (m *AttachedVolume) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
data[i] = 0xa
i++
i = encodeVarintGenerated(data, i, uint64(len(m.Name)))
i += copy(data[i:], m.Name)
data[i] = 0x12
i++
i = encodeVarintGenerated(data, i, uint64(len(m.DevicePath)))
i += copy(data[i:], m.DevicePath)
return i, nil
}
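The hand-written `MarshalTo` above emits protobuf field keys as literal bytes (`0xa`, `0x12`; the `NodeStatus` marshaller uses `0x52` for field 10). Each key byte is `(fieldNumber << 3) | wireType`, with wire type 2 meaning length-delimited. A small sketch checking those constants:

```go
package main

import "fmt"

// protoKey computes a protobuf field key for field numbers < 16:
// key = (fieldNumber << 3) | wireType; wire type 2 = length-delimited.
func protoKey(fieldNumber, wireType int) byte {
	return byte(fieldNumber<<3 | wireType)
}

func main() {
	fmt.Printf("%#x\n", protoKey(1, 2))  // AttachedVolume.name
	fmt.Printf("%#x\n", protoKey(2, 2))  // AttachedVolume.devicePath
	fmt.Printf("%#x\n", protoKey(10, 2)) // NodeStatus.volumesAttached
}
```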
func (m *AzureFileVolumeSource) Marshal() (data []byte, err error) {
size := m.Size()
data = make([]byte, size)
@@ -4174,6 +4206,18 @@ func (m *NodeStatus) MarshalTo(data []byte) (int, error) {
i += copy(data[i:], s)
}
}
if len(m.VolumesAttached) > 0 {
for _, msg := range m.VolumesAttached {
data[i] = 0x52
i++
i = encodeVarintGenerated(data, i, uint64(msg.Size()))
n, err := msg.MarshalTo(data[i:])
if err != nil {
return 0, err
}
i += n
}
}
return i, nil
}
@@ -7735,6 +7779,16 @@ func (m *Affinity) Size() (n int) {
return n
}
func (m *AttachedVolume) Size() (n int) {
var l int
_ = l
l = len(m.Name)
n += 1 + l + sovGenerated(uint64(l))
l = len(m.DevicePath)
n += 1 + l + sovGenerated(uint64(l))
return n
}
func (m *AzureFileVolumeSource) Size() (n int) {
var l int
_ = l
@@ -8887,6 +8941,12 @@ func (m *NodeStatus) Size() (n int) {
n += 1 + l + sovGenerated(uint64(l))
}
}
if len(m.VolumesAttached) > 0 {
for _, e := range m.VolumesAttached {
l = e.Size()
n += 1 + l + sovGenerated(uint64(l))
}
}
return n
}
@@ -10492,6 +10552,114 @@ func (m *Affinity) Unmarshal(data []byte) error {
}
return nil
}
func (m *AttachedVolume) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: AttachedVolume: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: AttachedVolume: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGenerated
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = UniqueVolumeName(data[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field DevicePath", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGenerated
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.DevicePath = string(data[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(data[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthGenerated
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
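The shift loops in `Unmarshal` above decode protobuf varints: little-endian base-128, seven payload bits per byte, high bit set on every byte except the last. Extracted as a standalone sketch:

```go
package main

import "fmt"

// decodeVarint mirrors the inline loops in the generated Unmarshal code:
// accumulate 7 bits per byte until a byte with the high bit clear.
// Returns the value and the number of bytes consumed.
func decodeVarint(data []byte) (v uint64, n int) {
	for shift := uint(0); ; shift += 7 {
		b := data[n]
		n++
		v |= (uint64(b) & 0x7F) << shift
		if b < 0x80 {
			return v, n
		}
	}
}

func main() {
	v, n := decodeVarint([]byte{0xAC, 0x02}) // 300 encoded in two bytes
	fmt.Println(v, n)
}
```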
func (m *AzureFileVolumeSource) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
@@ -21635,6 +21803,37 @@ func (m *NodeStatus) Unmarshal(data []byte) error {
}
m.VolumesInUse = append(m.VolumesInUse, UniqueVolumeName(data[iNdEx:postIndex]))
iNdEx = postIndex
case 10:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field VolumesAttached", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthGenerated
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.VolumesAttached = append(m.VolumesAttached, AttachedVolume{})
if err := m.VolumesAttached[len(m.VolumesAttached)-1].Unmarshal(data[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(data[iNdEx:])

View File

@@ -71,6 +71,15 @@ message Affinity {
optional PodAntiAffinity podAntiAffinity = 3;
}
// AttachedVolume describes a volume attached to a node
message AttachedVolume {
// Name of the attached volume
optional string name = 1;
// DevicePath represents the device path where the volume should be available
optional string devicePath = 2;
}
// AzureFile represents an Azure File Service mount on the host and bind mount to the pod.
message AzureFileVolumeSource {
// the name of secret that contains Azure Storage Account Name and Key
@@ -1306,6 +1315,9 @@ message NodeStatus {
// List of attachable volumes in use (mounted) by the node.
repeated string volumesInUse = 9;
// List of volumes that are attached to the node.
repeated AttachedVolume volumesAttached = 10;
}
// NodeSystemInfo is a set of ids/uuids to uniquely identify the node.

View File

@@ -36330,7 +36330,7 @@ func (x *NodeStatus) CodecEncodeSelf(e *codec1978.Encoder) {
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
var yyq2 [10]bool
_, _, _ = yysep2, yyq2, yy2arr2
const yyr2 bool = false
yyq2[0] = len(x.Capacity) != 0
@@ -36342,9 +36342,10 @@ func (x *NodeStatus) CodecEncodeSelf(e *codec1978.Encoder) {
yyq2[6] = true
yyq2[7] = len(x.Images) != 0
yyq2[8] = len(x.VolumesInUse) != 0
yyq2[9] = len(x.VolumesAttached) != 0
var yynn2 int
if yyr2 || yy2arr2 {
r.EncodeArrayStart(10)
} else {
yynn2 = 0
for _, b := range yyq2 {
@@ -36582,6 +36583,39 @@ func (x *NodeStatus) CodecEncodeSelf(e *codec1978.Encoder) {
}
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
if yyq2[9] {
if x.VolumesAttached == nil {
r.EncodeNil()
} else {
yym35 := z.EncBinary()
_ = yym35
if false {
} else {
h.encSliceAttachedVolume(([]AttachedVolume)(x.VolumesAttached), e)
}
}
} else {
r.EncodeNil()
}
} else {
if yyq2[9] {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("volumesAttached"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
if x.VolumesAttached == nil {
r.EncodeNil()
} else {
yym36 := z.EncBinary()
_ = yym36
if false {
} else {
h.encSliceAttachedVolume(([]AttachedVolume)(x.VolumesAttached), e)
}
}
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
@@ -36725,6 +36759,18 @@ func (x *NodeStatus) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
h.decSliceUniqueVolumeName((*[]UniqueVolumeName)(yyv15), d)
}
}
case "volumesAttached":
if r.TryDecodeAsNil() {
x.VolumesAttached = nil
} else {
yyv17 := &x.VolumesAttached
yym18 := z.DecBinary()
_ = yym18
if false {
} else {
h.decSliceAttachedVolume((*[]AttachedVolume)(yyv17), d)
}
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
@@ -36736,16 +36782,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj19 int
var yyb19 bool
var yyhl19 bool = l >= 0
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36753,16 +36799,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Capacity = nil
} else {
yyv20 := &x.Capacity
yyv20.CodecDecodeSelf(d)
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36770,16 +36816,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Allocatable = nil
} else {
yyv21 := &x.Allocatable
yyv21.CodecDecodeSelf(d)
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36789,13 +36835,13 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
} else {
x.Phase = NodePhase(r.DecodeString())
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36803,21 +36849,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Conditions = nil
} else {
yyv23 := &x.Conditions
yym24 := z.DecBinary()
_ = yym24
if false {
} else {
h.decSliceNodeCondition((*[]NodeCondition)(yyv23), d)
}
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36825,21 +36871,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Addresses = nil
} else {
yyv25 := &x.Addresses
yym26 := z.DecBinary()
_ = yym26
if false {
} else {
h.decSliceNodeAddress((*[]NodeAddress)(yyv25), d)
}
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36847,16 +36893,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.DaemonEndpoints = NodeDaemonEndpoints{}
} else {
yyv27 := &x.DaemonEndpoints
yyv27.CodecDecodeSelf(d)
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36864,16 +36910,16 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.NodeInfo = NodeSystemInfo{}
} else {
yyv28 := &x.NodeInfo
yyv28.CodecDecodeSelf(d)
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36881,21 +36927,21 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.Images = nil
} else {
yyv29 := &x.Images
yym30 := z.DecBinary()
_ = yym30
if false {
} else {
h.decSliceContainerImage((*[]ContainerImage)(yyv29), d)
}
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
@@ -36903,26 +36949,48 @@ func (x *NodeStatus) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
if r.TryDecodeAsNil() {
x.VolumesInUse = nil
} else {
yyv31 := &x.VolumesInUse
yym32 := z.DecBinary()
_ = yym32
if false {
} else {
h.decSliceUniqueVolumeName((*[]UniqueVolumeName)(yyv31), d)
}
}
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.VolumesAttached = nil
} else {
yyv33 := &x.VolumesAttached
yym34 := z.DecBinary()
_ = yym34
if false {
} else {
h.decSliceAttachedVolume((*[]AttachedVolume)(yyv33), d)
} }
}
for {
yyj19++
if yyhl19 {
yyb19 = yyj19 > l
} else {
yyb19 = r.CheckBreak()
}
if yyb19 {
break
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
z.DecStructFieldNotFound(yyj19-1, "")
}
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
}
@@ -36953,6 +37021,199 @@ func (x *UniqueVolumeName) CodecDecodeSelf(d *codec1978.Decoder) {
}
}
func (x *AttachedVolume) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yym1 := z.EncBinary()
_ = yym1
if false {
} else if z.HasExtensions() && z.EncExt(x) {
} else {
yysep2 := !z.EncBinary()
yy2arr2 := z.EncBasicHandle().StructToArray
var yyq2 [2]bool
_, _, _ = yysep2, yyq2, yy2arr2
const yyr2 bool = false
var yynn2 int
if yyr2 || yy2arr2 {
r.EncodeArrayStart(2)
} else {
yynn2 = 2
for _, b := range yyq2 {
if b {
yynn2++
}
}
r.EncodeMapStart(yynn2)
yynn2 = 0
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
x.Name.CodecEncodeSelf(e)
} else {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("name"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
x.Name.CodecEncodeSelf(e)
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
yym7 := z.EncBinary()
_ = yym7
if false {
} else {
r.EncodeString(codecSelferC_UTF81234, string(x.DevicePath))
}
} else {
z.EncSendContainerState(codecSelfer_containerMapKey1234)
r.EncodeString(codecSelferC_UTF81234, string("devicePath"))
z.EncSendContainerState(codecSelfer_containerMapValue1234)
yym8 := z.EncBinary()
_ = yym8
if false {
} else {
r.EncodeString(codecSelferC_UTF81234, string(x.DevicePath))
}
}
if yyr2 || yy2arr2 {
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
z.EncSendContainerState(codecSelfer_containerMapEnd1234)
}
}
}
}
func (x *AttachedVolume) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yym1 := z.DecBinary()
_ = yym1
if false {
} else if z.HasExtensions() && z.DecExt(x) {
} else {
yyct2 := r.ContainerType()
if yyct2 == codecSelferValueTypeMap1234 {
yyl2 := r.ReadMapStart()
if yyl2 == 0 {
z.DecSendContainerState(codecSelfer_containerMapEnd1234)
} else {
x.codecDecodeSelfFromMap(yyl2, d)
}
} else if yyct2 == codecSelferValueTypeArray1234 {
yyl2 := r.ReadArrayStart()
if yyl2 == 0 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
} else {
x.codecDecodeSelfFromArray(yyl2, d)
}
} else {
panic(codecSelferOnlyMapOrArrayEncodeToStructErr1234)
}
}
}
func (x *AttachedVolume) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yys3Slc = z.DecScratchBuffer() // default slice to decode into
_ = yys3Slc
var yyhl3 bool = l >= 0
for yyj3 := 0; ; yyj3++ {
if yyhl3 {
if yyj3 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
}
z.DecSendContainerState(codecSelfer_containerMapKey1234)
yys3Slc = r.DecodeBytes(yys3Slc, true, true)
yys3 := string(yys3Slc)
z.DecSendContainerState(codecSelfer_containerMapValue1234)
switch yys3 {
case "name":
if r.TryDecodeAsNil() {
x.Name = ""
} else {
x.Name = UniqueVolumeName(r.DecodeString())
}
case "devicePath":
if r.TryDecodeAsNil() {
x.DevicePath = ""
} else {
x.DevicePath = string(r.DecodeString())
}
default:
z.DecStructFieldNotFound(-1, yys3)
} // end switch yys3
} // end for yyj3
z.DecSendContainerState(codecSelfer_containerMapEnd1234)
}
func (x *AttachedVolume) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj6 int
var yyb6 bool
var yyhl6 bool = l >= 0
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.Name = ""
} else {
x.Name = UniqueVolumeName(r.DecodeString())
}
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
return
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
if r.TryDecodeAsNil() {
x.DevicePath = ""
} else {
x.DevicePath = string(r.DecodeString())
}
for {
yyj6++
if yyhl6 {
yyb6 = yyj6 > l
} else {
yyb6 = r.CheckBreak()
}
if yyb6 {
break
}
z.DecSendContainerState(codecSelfer_containerArrayElem1234)
z.DecStructFieldNotFound(yyj6-1, "")
}
z.DecSendContainerState(codecSelfer_containerArrayEnd1234)
}
func (x *ContainerImage) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
@@ -57530,6 +57791,125 @@ func (x codecSelfer1234) decSliceUniqueVolumeName(v *[]UniqueVolumeName, d *code
}
}
func (x codecSelfer1234) encSliceAttachedVolume(v []AttachedVolume, e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
r.EncodeArrayStart(len(v))
for _, yyv1 := range v {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
yy2 := &yyv1
yy2.CodecEncodeSelf(e)
}
z.EncSendContainerState(codecSelfer_containerArrayEnd1234)
}
func (x codecSelfer1234) decSliceAttachedVolume(v *[]AttachedVolume, d *codec1978.Decoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yyv1 := *v
yyh1, yyl1 := z.DecSliceHelperStart()
var yyc1 bool
_ = yyc1
if yyl1 == 0 {
if yyv1 == nil {
yyv1 = []AttachedVolume{}
yyc1 = true
} else if len(yyv1) != 0 {
yyv1 = yyv1[:0]
yyc1 = true
}
} else if yyl1 > 0 {
var yyrr1, yyrl1 int
var yyrt1 bool
_, _ = yyrl1, yyrt1
yyrr1 = yyl1 // len(yyv1)
if yyl1 > cap(yyv1) {
yyrg1 := len(yyv1) > 0
yyv21 := yyv1
yyrl1, yyrt1 = z.DecInferLen(yyl1, z.DecBasicHandle().MaxInitLen, 32)
if yyrt1 {
if yyrl1 <= cap(yyv1) {
yyv1 = yyv1[:yyrl1]
} else {
yyv1 = make([]AttachedVolume, yyrl1)
}
} else {
yyv1 = make([]AttachedVolume, yyrl1)
}
yyc1 = true
yyrr1 = len(yyv1)
if yyrg1 {
copy(yyv1, yyv21)
}
} else if yyl1 != len(yyv1) {
yyv1 = yyv1[:yyl1]
yyc1 = true
}
yyj1 := 0
for ; yyj1 < yyrr1; yyj1++ {
yyh1.ElemContainerState(yyj1)
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv2 := &yyv1[yyj1]
yyv2.CodecDecodeSelf(d)
}
}
if yyrt1 {
for ; yyj1 < yyl1; yyj1++ {
yyv1 = append(yyv1, AttachedVolume{})
yyh1.ElemContainerState(yyj1)
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv3 := &yyv1[yyj1]
yyv3.CodecDecodeSelf(d)
}
}
}
} else {
yyj1 := 0
for ; !r.CheckBreak(); yyj1++ {
if yyj1 >= len(yyv1) {
yyv1 = append(yyv1, AttachedVolume{}) // var yyz1 AttachedVolume
yyc1 = true
}
yyh1.ElemContainerState(yyj1)
if yyj1 < len(yyv1) {
if r.TryDecodeAsNil() {
yyv1[yyj1] = AttachedVolume{}
} else {
yyv4 := &yyv1[yyj1]
yyv4.CodecDecodeSelf(d)
}
} else {
z.DecSwallow()
}
}
if yyj1 < len(yyv1) {
yyv1 = yyv1[:yyj1]
yyc1 = true
} else if yyj1 == 0 && yyv1 == nil {
yyv1 = []AttachedVolume{}
yyc1 = true
}
}
yyh1.End()
if yyc1 {
*v = yyv1
}
}
func (x codecSelfer1234) encResourceList(v ResourceList, e *codec1978.Encoder) {
var h codecSelfer1234
z, r := codec1978.GenHelperEncoder(e)
@@ -57683,7 +58063,7 @@ func (x codecSelfer1234) decSliceNode(v *[]Node, d *codec1978.Decoder) {
yyrg1 := len(yyv1) > 0
yyv21 := yyv1
yyrl1, yyrt1 = z.DecInferLen(yyl1, z.DecBasicHandle().MaxInitLen, 616)
if yyrt1 {
if yyrl1 <= cap(yyv1) {
yyv1 = yyv1[:yyrl1]


@@ -2388,10 +2388,21 @@ type NodeStatus struct {
 	Images []ContainerImage `json:"images,omitempty" protobuf:"bytes,8,rep,name=images"`
 	// List of attachable volumes in use (mounted) by the node.
 	VolumesInUse []UniqueVolumeName `json:"volumesInUse,omitempty" protobuf:"bytes,9,rep,name=volumesInUse"`
+	// List of volumes that are attached to the node.
+	VolumesAttached []AttachedVolume `json:"volumesAttached,omitempty" protobuf:"bytes,10,rep,name=volumesAttached"`
 }

 type UniqueVolumeName string

+// AttachedVolume describes a volume attached to a node
+type AttachedVolume struct {
+	// Name of the attached volume
+	Name UniqueVolumeName `json:"name" protobuf:"bytes,1,rep,name=name"`
+
+	// DevicePath represents the device path where the volume should be available
+	DevicePath string `json:"devicePath" protobuf:"bytes,2,rep,name=devicePath"`
+}
+
 // Describe a container image
 type ContainerImage struct {
 	// Names by which this image is known.
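The new `AttachedVolume` entries surface through `Node.Status.VolumesAttached`, which is what the kubelet's `WaitForAttach` now consults instead of the cloud provider. A minimal sketch of the lookup a consumer might do; the types here are simplified local stand-ins, not the real `api` package:

```go
package main

import "fmt"

// Local stand-ins for the api types added in this PR (assumption:
// simplified mirrors of api.UniqueVolumeName / api.AttachedVolume).
type UniqueVolumeName string

type AttachedVolume struct {
	Name       UniqueVolumeName
	DevicePath string
}

// devicePathFor scans a node's reported attached volumes for the given
// volume name and returns the device path the controller recorded.
func devicePathFor(attached []AttachedVolume, name UniqueVolumeName) (string, bool) {
	for _, v := range attached {
		if v.Name == name {
			return v.DevicePath, true
		}
	}
	return "", false
}

func main() {
	status := []AttachedVolume{
		{Name: "kubernetes.io/gce-pd/pd-1", DevicePath: "/dev/sdb"},
	}
	path, ok := devicePathFor(status, "kubernetes.io/gce-pd/pd-1")
	fmt.Println(path, ok)
}
```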


@@ -50,6 +50,16 @@ func (Affinity) SwaggerDoc() map[string]string {
 	return map_Affinity
 }

+var map_AttachedVolume = map[string]string{
+	"":           "AttachedVolume describes a volume attached to a node",
+	"name":       "Name of the attached volume",
+	"devicePath": "DevicePath represents the device path where the volume should be available",
+}
+
+func (AttachedVolume) SwaggerDoc() map[string]string {
+	return map_AttachedVolume
+}
+
 var map_AzureFileVolumeSource = map[string]string{
 	"":           "AzureFile represents an Azure File Service mount on the host and bind mount to the pod.",
 	"secretName": "the name of secret that contains Azure Storage Account Name and Key",
@@ -881,6 +891,7 @@ var map_NodeStatus = map[string]string{
 	"nodeInfo":        "Set of ids/uuids to uniquely identify the node. More info: http://releases.k8s.io/HEAD/docs/admin/node.md#node-info",
 	"images":          "List of container images on this node",
 	"volumesInUse":    "List of attachable volumes in use (mounted) by the node.",
+	"volumesAttached": "List of volumes that are attached to the node.",
 }

 func (NodeStatus) SwaggerDoc() map[string]string {


@@ -0,0 +1,32 @@
/*
Copyright 2016 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fake
import (
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/testing/core"
)
func (c *FakeNodes) PatchStatus(nodeName string, data []byte) (*api.Node, error) {
obj, err := c.Fake.Invokes(
core.NewPatchSubresourceAction(nodesResource, "status"), &api.Node{})
if obj == nil {
return nil, err
}
return obj.(*api.Node), err
}


@@ -22,8 +22,6 @@ type EndpointsExpansion interface{}
 type LimitRangeExpansion interface{}

-type NodeExpansion interface{}
-
 type PersistentVolumeExpansion interface{}

 type PersistentVolumeClaimExpansion interface{}


@@ -0,0 +1,40 @@
/*
Copyright 2016 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package unversioned
import "k8s.io/kubernetes/pkg/api"
// The NodeExpansion interface allows manually adding extra methods to the NodeInterface.
type NodeExpansion interface {
// PatchStatus modifies the status of an existing node. It returns the copy
// of the node that the server returns, or an error.
PatchStatus(nodeName string, data []byte) (*api.Node, error)
}
// PatchStatus modifies the status of an existing node. It returns the copy of
// the node that the server returns, or an error.
func (c *nodes) PatchStatus(nodeName string, data []byte) (*api.Node, error) {
result := &api.Node{}
err := c.client.Patch(api.StrategicMergePatchType).
Resource("nodes").
Name(nodeName).
SubResource("status").
Body(data).
Do().
Into(result)
return result, err
}


@@ -137,6 +137,15 @@ func NewPatchAction(resource unversioned.GroupVersionResource, namespace string,
 	return action
 }

+func NewPatchSubresourceAction(resource unversioned.GroupVersionResource, subresource string) PatchActionImpl {
+	action := PatchActionImpl{}
+	action.Verb = "patch"
+	action.Resource = resource
+	action.Subresource = subresource
+	return action
+}
+
 func NewRootUpdateSubresourceAction(resource unversioned.GroupVersionResource, subresource string, object runtime.Object) UpdateActionImpl {
 	action := UpdateActionImpl{}
 	action.Verb = "update"


@@ -157,6 +157,11 @@ func (m *FakeNodeHandler) UpdateStatus(node *api.Node) (*api.Node, error) {
 	return node, nil
 }

+func (m *FakeNodeHandler) PatchStatus(nodeName string, data []byte) (*api.Node, error) {
+	m.RequestCount++
+	return &api.Node{}, nil
+}
+
 func (m *FakeNodeHandler) Watch(opts api.ListOptions) (watch.Interface, error) {
 	return nil, nil
 }


@@ -66,7 +66,7 @@ func NewPersistentVolumeController(
 		claims:        cache.NewStore(framework.DeletionHandlingMetaNamespaceKeyFunc),
 		kubeClient:    kubeClient,
 		eventRecorder: eventRecorder,
-		runningOperations: goroutinemap.NewGoRoutineMap(),
+		runningOperations: goroutinemap.NewGoRoutineMap(false /* exponentialBackOffOnError */),
 		cloud:         cloud,
 		provisioner:   provisioner,
 		enableDynamicProvisioning: enableDynamicProvisioning,


@@ -30,6 +30,7 @@ import (
 	"k8s.io/kubernetes/pkg/controller/framework"
 	"k8s.io/kubernetes/pkg/controller/volume/cache"
 	"k8s.io/kubernetes/pkg/controller/volume/reconciler"
+	"k8s.io/kubernetes/pkg/controller/volume/statusupdater"
 	"k8s.io/kubernetes/pkg/types"
 	"k8s.io/kubernetes/pkg/util/io"
 	"k8s.io/kubernetes/pkg/util/mount"
@@ -105,13 +106,18 @@ func NewAttachDetachController(
 	adc.desiredStateOfWorld = cache.NewDesiredStateOfWorld(&adc.volumePluginMgr)
 	adc.actualStateOfWorld = cache.NewActualStateOfWorld(&adc.volumePluginMgr)
 	adc.attacherDetacher =
-		operationexecutor.NewOperationExecutor(&adc.volumePluginMgr)
+		operationexecutor.NewOperationExecutor(
+			kubeClient,
+			&adc.volumePluginMgr)
+	adc.nodeStatusUpdater = statusupdater.NewNodeStatusUpdater(
+		kubeClient, nodeInformer, adc.actualStateOfWorld)

 	adc.reconciler = reconciler.NewReconciler(
 		reconcilerLoopPeriod,
 		reconcilerMaxWaitForUnmountDuration,
 		adc.desiredStateOfWorld,
 		adc.actualStateOfWorld,
-		adc.attacherDetacher)
+		adc.attacherDetacher,
+		adc.nodeStatusUpdater)

 	return adc, nil
 }
@@ -160,6 +166,10 @@ type attachDetachController struct {
 	// desiredStateOfWorld with the actualStateOfWorld by triggering attach
 	// detach operations using the attacherDetacher.
 	reconciler reconciler.Reconciler
+
+	// nodeStatusUpdater is used to update node status with the list of attached
+	// volumes
+	nodeStatusUpdater statusupdater.NodeStatusUpdater
 }

 func (adc *attachDetachController) Run(stopCh <-chan struct{}) {


@@ -17,21 +17,16 @@ limitations under the License.
 package volume

 import (
-	"fmt"
 	"testing"
 	"time"

-	"k8s.io/kubernetes/pkg/api"
-	"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/fake"
-	"k8s.io/kubernetes/pkg/client/testing/core"
 	"k8s.io/kubernetes/pkg/controller/framework/informers"
-	"k8s.io/kubernetes/pkg/runtime"
-	"k8s.io/kubernetes/pkg/watch"
+	controllervolumetesting "k8s.io/kubernetes/pkg/controller/volume/testing"
 )

 func Test_NewAttachDetachController_Positive(t *testing.T) {
 	// Arrange
-	fakeKubeClient := createTestClient()
+	fakeKubeClient := controllervolumetesting.CreateTestClient()
 	resyncPeriod := 5 * time.Minute
 	podInformer := informers.CreateSharedPodIndexInformer(fakeKubeClient, resyncPeriod)
 	nodeInformer := informers.CreateSharedNodeIndexInformer(fakeKubeClient, resyncPeriod)
@@ -53,62 +48,3 @@ func Test_NewAttachDetachController_Positive(t *testing.T) {
 		t.Fatalf("Run failed with error. Expected: <no error> Actual: <%v>", err)
 	}
 }
-
-func createTestClient() *fake.Clientset {
-	fakeClient := &fake.Clientset{}
-	fakeClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
-		obj := &api.PodList{}
-		podNamePrefix := "mypod"
-		namespace := "mynamespace"
-		for i := 0; i < 5; i++ {
-			podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
-			pod := api.Pod{
-				Status: api.PodStatus{
-					Phase: api.PodRunning,
-				},
-				ObjectMeta: api.ObjectMeta{
-					Name:      podName,
-					Namespace: namespace,
-					Labels: map[string]string{
-						"name": podName,
-					},
-				},
-				Spec: api.PodSpec{
-					Containers: []api.Container{
-						{
-							Name:  "containerName",
-							Image: "containerImage",
-							VolumeMounts: []api.VolumeMount{
-								{
-									Name:      "volumeMountName",
-									ReadOnly:  false,
-									MountPath: "/mnt",
-								},
-							},
-						},
-					},
-					Volumes: []api.Volume{
-						{
-							Name: "volumeName",
-							VolumeSource: api.VolumeSource{
-								GCEPersistentDisk: &api.GCEPersistentDiskVolumeSource{
-									PDName:   "pdName",
-									FSType:   "ext4",
-									ReadOnly: false,
-								},
-							},
-						},
-					},
-				},
-			}
-			obj.Items = append(obj.Items, pod)
-		}
-		return true, obj, nil
-	})
-	fakeWatch := watch.NewFake()
-	fakeClient.AddWatchReactor("*", core.DefaultWatchReactor(fakeWatch, nil))
-	return fakeClient
-}


@@ -55,7 +55,7 @@ type ActualStateOfWorld interface {
 	// added.
 	// If no node with the name nodeName exists in list of attached nodes for
 	// the specified volume, the node is added.
-	AddVolumeNode(volumeSpec *volume.Spec, nodeName string) (api.UniqueVolumeName, error)
+	AddVolumeNode(volumeSpec *volume.Spec, nodeName string, devicePath string) (api.UniqueVolumeName, error)

 	// SetVolumeMountedByNode sets the MountedByNode value for the given volume
 	// and node. When set to true this value indicates the volume is mounted by
@@ -75,6 +75,13 @@ type ActualStateOfWorld interface {
 	// the specified volume, an error is returned.
 	MarkDesireToDetach(volumeName api.UniqueVolumeName, nodeName string) (time.Duration, error)

+	// ResetNodeStatusUpdateNeeded resets statusUpdateNeeded for the specified
+	// node to false, indicating the AttachedVolume field of the Node's Status
+	// object has been updated.
+	// If no node with the name nodeName exists in the list of attached nodes
+	// for the specified volume, an error is returned.
+	ResetNodeStatusUpdateNeeded(nodeName string) error
+
 	// DeleteVolumeNode removes the given volume and node from the underlying
 	// store indicating the specified volume is no longer attached to the
 	// specified node.
@@ -97,6 +104,15 @@ type ActualStateOfWorld interface {
 	// the specified node reflecting which volumes are attached to that node
 	// based on the current actual state of the world.
 	GetAttachedVolumesForNode(nodeName string) []AttachedVolume
+
+	// GetVolumesToReportAttached returns a map containing the set of nodes for
+	// which the VolumesAttached Status field in the Node API object should be
+	// updated. The key in this map is the name of the node to update and the
+	// value is the list of volumes that should be reported as attached (note
+	// that this may differ from the actual list of attached volumes for the
+	// node, since volumes should be removed from this list as soon as a detach
+	// operation is considered, before the detach operation is triggered).
+	GetVolumesToReportAttached() map[string][]api.AttachedVolume
 }

 // AttachedVolume represents a volume that is attached to a node.
@@ -120,6 +136,7 @@ type AttachedVolume struct {
 func NewActualStateOfWorld(volumePluginMgr *volume.VolumePluginMgr) ActualStateOfWorld {
 	return &actualStateOfWorld{
 		attachedVolumes:        make(map[api.UniqueVolumeName]attachedVolume),
+		nodesToUpdateStatusFor: make(map[string]nodeToUpdateStatusFor),
 		volumePluginMgr:        volumePluginMgr,
 	}
 }
@@ -130,9 +147,17 @@ type actualStateOfWorld struct {
 	// managing. The key in this map is the name of the volume and the value is
 	// an object containing more information about the attached volume.
 	attachedVolumes map[api.UniqueVolumeName]attachedVolume

+	// nodesToUpdateStatusFor is a map containing the set of nodes for which to
+	// update the VolumesAttached Status field. The key in this map is the name
+	// of the node and the value is an object containing more information about
+	// the node (including the list of volumes to report attached).
+	nodesToUpdateStatusFor map[string]nodeToUpdateStatusFor
+
 	// volumePluginMgr is the volume plugin manager used to create volume
 	// plugin objects.
 	volumePluginMgr *volume.VolumePluginMgr

 	sync.RWMutex
 }
@@ -152,9 +177,12 @@ type attachedVolume struct {
 	// node and the value is a node object containing more information about
 	// the node.
 	nodesAttachedTo map[string]nodeAttachedTo
+
+	// devicePath contains the path on the node where the volume is attached
+	devicePath string
 }

-// The nodeAttachedTo object represents a node that .
+// The nodeAttachedTo object represents a node that has volumes attached to it.
 type nodeAttachedTo struct {
 	// nodeName contains the name of this node.
 	nodeName string
@@ -173,9 +201,31 @@ type nodeAttachedTo struct {
 	detachRequestedTime time.Time
 }

+// nodeToUpdateStatusFor is an object that reflects a node that has one or more
+// volumes attached. It keeps track of the volumes that should be reported as
+// attached in the Node's Status API object.
+type nodeToUpdateStatusFor struct {
+	// nodeName contains the name of this node.
+	nodeName string
+
+	// statusUpdateNeeded indicates that the value of the VolumesAttached field
+	// in the Node's Status API object should be updated. This should be set to
+	// true whenever a volume is added or deleted from
+	// volumesToReportAsAttached. It should be reset whenever the status is
+	// updated.
+	statusUpdateNeeded bool
+
+	// volumesToReportAsAttached is the list of volumes that should be reported
+	// as attached in the Node's status (note that this may differ from the
+	// actual list of attached volumes, since volumes should be removed from
+	// this list as soon as a detach operation is considered, before the detach
+	// operation is triggered).
+	volumesToReportAsAttached map[api.UniqueVolumeName]api.UniqueVolumeName
+}
+
 func (asw *actualStateOfWorld) MarkVolumeAsAttached(
-	volumeSpec *volume.Spec, nodeName string) error {
-	_, err := asw.AddVolumeNode(volumeSpec, nodeName)
+	volumeSpec *volume.Spec, nodeName string, devicePath string) error {
+	_, err := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	return err
 }
@@ -185,7 +235,7 @@ func (asw *actualStateOfWorld) MarkVolumeAsDetached(
 }

 func (asw *actualStateOfWorld) AddVolumeNode(
-	volumeSpec *volume.Spec, nodeName string) (api.UniqueVolumeName, error) {
+	volumeSpec *volume.Spec, nodeName string, devicePath string) (api.UniqueVolumeName, error) {
 	asw.Lock()
 	defer asw.Unlock()
@@ -212,6 +262,7 @@ func (asw *actualStateOfWorld) AddVolumeNode(
 			volumeName:      volumeName,
 			spec:            volumeSpec,
 			nodesAttachedTo: make(map[string]nodeAttachedTo),
+			devicePath:      devicePath,
 		}
 		asw.attachedVolumes[volumeName] = volumeObj
 	}
@@ -231,6 +282,24 @@ func (asw *actualStateOfWorld) AddVolumeNode(
 		volumeObj.nodesAttachedTo[nodeName] = nodeObj
 	}

+	nodeToUpdate, nodeToUpdateExists := asw.nodesToUpdateStatusFor[nodeName]
+	if !nodeToUpdateExists {
+		// Create object if it doesn't exist
+		nodeToUpdate = nodeToUpdateStatusFor{
+			nodeName:                  nodeName,
+			statusUpdateNeeded:        true,
+			volumesToReportAsAttached: make(map[api.UniqueVolumeName]api.UniqueVolumeName),
+		}
+		asw.nodesToUpdateStatusFor[nodeName] = nodeToUpdate
+	}
+	_, nodeToUpdateVolumeExists :=
+		nodeToUpdate.volumesToReportAsAttached[volumeName]
+	if !nodeToUpdateVolumeExists {
+		nodeToUpdate.statusUpdateNeeded = true
+		nodeToUpdate.volumesToReportAsAttached[volumeName] = volumeName
+		asw.nodesToUpdateStatusFor[nodeName] = nodeToUpdate
+	}
+
 	return volumeName, nil
 }
@@ -298,9 +367,38 @@ func (asw *actualStateOfWorld) MarkDesireToDetach(
 		volumeObj.nodesAttachedTo[nodeName] = nodeObj
 	}

+	// Remove volume from volumes to report as attached
+	nodeToUpdate, nodeToUpdateExists := asw.nodesToUpdateStatusFor[nodeName]
+	if nodeToUpdateExists {
+		_, nodeToUpdateVolumeExists :=
+			nodeToUpdate.volumesToReportAsAttached[volumeName]
+		if nodeToUpdateVolumeExists {
+			nodeToUpdate.statusUpdateNeeded = true
+			delete(nodeToUpdate.volumesToReportAsAttached, volumeName)
+			asw.nodesToUpdateStatusFor[nodeName] = nodeToUpdate
+		}
+	}
+
 	return time.Since(volumeObj.nodesAttachedTo[nodeName].detachRequestedTime), nil
 }

+func (asw *actualStateOfWorld) ResetNodeStatusUpdateNeeded(
+	nodeName string) error {
+	asw.Lock()
+	defer asw.Unlock()
+	nodeToUpdate, nodeToUpdateExists := asw.nodesToUpdateStatusFor[nodeName]
+	if !nodeToUpdateExists {
+		return fmt.Errorf(
+			"failed to ResetNodeStatusUpdateNeeded(nodeName=%q) nodeName does not exist",
+			nodeName)
+	}
+
+	nodeToUpdate.statusUpdateNeeded = false
+	asw.nodesToUpdateStatusFor[nodeName] = nodeToUpdate
+	return nil
+}
+
 func (asw *actualStateOfWorld) DeleteVolumeNode(
 	volumeName api.UniqueVolumeName, nodeName string) {
 	asw.Lock()
@@ -319,6 +417,18 @@ func (asw *actualStateOfWorld) DeleteVolumeNode(
 	if len(volumeObj.nodesAttachedTo) == 0 {
 		delete(asw.attachedVolumes, volumeName)
 	}
+
+	// Remove volume from volumes to report as attached
+	nodeToUpdate, nodeToUpdateExists := asw.nodesToUpdateStatusFor[nodeName]
+	if nodeToUpdateExists {
+		_, nodeToUpdateVolumeExists :=
+			nodeToUpdate.volumesToReportAsAttached[volumeName]
+		if nodeToUpdateVolumeExists {
+			nodeToUpdate.statusUpdateNeeded = true
+			delete(nodeToUpdate.volumesToReportAsAttached, volumeName)
+			asw.nodesToUpdateStatusFor[nodeName] = nodeToUpdate
+		}
+	}
 }

 func (asw *actualStateOfWorld) VolumeNodeExists(
@@ -372,6 +482,31 @@ func (asw *actualStateOfWorld) GetAttachedVolumesForNode(
 	return attachedVolumes
 }

+func (asw *actualStateOfWorld) GetVolumesToReportAttached() map[string][]api.AttachedVolume {
+	asw.RLock()
+	defer asw.RUnlock()
+
+	volumesToReportAttached := make(map[string][]api.AttachedVolume)
+	for _, nodeToUpdateObj := range asw.nodesToUpdateStatusFor {
+		if nodeToUpdateObj.statusUpdateNeeded {
+			attachedVolumes := make(
+				[]api.AttachedVolume,
+				len(nodeToUpdateObj.volumesToReportAsAttached) /* len */)
+			i := 0
+			for _, volume := range nodeToUpdateObj.volumesToReportAsAttached {
+				attachedVolumes[i] = api.AttachedVolume{
+					Name:       volume,
+					DevicePath: asw.attachedVolumes[volume].devicePath,
+				}
+				i++
+			}
+			volumesToReportAttached[nodeToUpdateObj.nodeName] = attachedVolumes
+		}
+	}
+
+	return volumesToReportAttached
+}
+
 func getAttachedVolume(
 	attachedVolume *attachedVolume,
 	nodeAttachedTo *nodeAttachedTo) AttachedVolume {


@ -34,9 +34,10 @@ func Test_AddVolumeNode_Positive_NewVolumeNewNode(t *testing.T) {
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
nodeName := "node-name" nodeName := "node-name"
devicePath := "fake/device/path"
// Act // Act
generatedVolumeName, err := asw.AddVolumeNode(volumeSpec, nodeName) generatedVolumeName, err := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
// Assert // Assert
if err != nil { if err != nil {
@ -66,10 +67,11 @@ func Test_AddVolumeNode_Positive_ExistingVolumeNewNode(t *testing.T) {
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
node1Name := "node1-name" node1Name := "node1-name"
node2Name := "node2-name" node2Name := "node2-name"
devicePath := "fake/device/path"
// Act // Act
generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name) generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name, devicePath)
generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name) generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name, devicePath)
// Assert // Assert
if add1Err != nil { if add1Err != nil {
@ -114,10 +116,11 @@ func Test_AddVolumeNode_Positive_ExistingVolumeExistingNode(t *testing.T) {
volumeName := api.UniqueVolumeName("volume-name") volumeName := api.UniqueVolumeName("volume-name")
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
nodeName := "node-name" nodeName := "node-name"
devicePath := "fake/device/path"
// Act // Act
generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, nodeName) generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, nodeName) generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
// Assert // Assert
if add1Err != nil { if add1Err != nil {
@ -157,7 +160,8 @@ func Test_DeleteVolumeNode_Positive_VolumeExistsNodeExists(t *testing.T) {
volumeName := api.UniqueVolumeName("volume-name") volumeName := api.UniqueVolumeName("volume-name")
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
nodeName := "node-name" nodeName := "node-name"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName) devicePath := "fake/device/path"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
if addErr != nil { if addErr != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
} }
@ -213,11 +217,12 @@ func Test_DeleteVolumeNode_Positive_TwoNodesOneDeleted(t *testing.T) {
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
node1Name := "node1-name" node1Name := "node1-name"
node2Name := "node2-name" node2Name := "node2-name"
generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name) devicePath := "fake/device/path"
generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name, devicePath)
if add1Err != nil { if add1Err != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err)
} }
generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name) generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name, devicePath)
if add2Err != nil { if add2Err != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err)
} }
@ -260,7 +265,8 @@ func Test_VolumeNodeExists_Positive_VolumeExistsNodeExists(t *testing.T) {
volumeName := api.UniqueVolumeName("volume-name") volumeName := api.UniqueVolumeName("volume-name")
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
nodeName := "node-name" nodeName := "node-name"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName) devicePath := "fake/device/path"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
if addErr != nil { if addErr != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
} }
@ -292,7 +298,8 @@ func Test_VolumeNodeExists_Positive_VolumeExistsNodeDoesntExist(t *testing.T) {
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
node1Name := "node1-name" node1Name := "node1-name"
node2Name := "node2-name" node2Name := "node2-name"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, node1Name) devicePath := "fake/device/path"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, node1Name, devicePath)
if addErr != nil { if addErr != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
} }
@ -362,7 +369,8 @@ func Test_GetAttachedVolumes_Positive_OneVolumeOneNode(t *testing.T) {
volumeName := api.UniqueVolumeName("volume-name") volumeName := api.UniqueVolumeName("volume-name")
volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName) volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
nodeName := "node-name" nodeName := "node-name"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName) devicePath := "fake/device/path"
generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
if addErr != nil { if addErr != nil {
t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr) t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
} }
@@ -388,14 +396,15 @@ func Test_GetAttachedVolumes_Positive_TwoVolumeTwoNodes(t *testing.T) {
 	volume1Name := api.UniqueVolumeName("volume1-name")
 	volume1Spec := controllervolumetesting.GetTestVolumeSpec(string(volume1Name), volume1Name)
 	node1Name := "node1-name"
-	generatedVolumeName1, add1Err := asw.AddVolumeNode(volume1Spec, node1Name)
+	devicePath := "fake/device/path"
+	generatedVolumeName1, add1Err := asw.AddVolumeNode(volume1Spec, node1Name, devicePath)
 	if add1Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err)
 	}
 	volume2Name := api.UniqueVolumeName("volume2-name")
 	volume2Spec := controllervolumetesting.GetTestVolumeSpec(string(volume2Name), volume2Name)
 	node2Name := "node2-name"
-	generatedVolumeName2, add2Err := asw.AddVolumeNode(volume2Spec, node2Name)
+	generatedVolumeName2, add2Err := asw.AddVolumeNode(volume2Spec, node2Name, devicePath)
 	if add2Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err)
 	}
@@ -422,12 +431,13 @@ func Test_GetAttachedVolumes_Positive_OneVolumeTwoNodes(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	node1Name := "node1-name"
-	generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name)
+	devicePath := "fake/device/path"
+	generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name, devicePath)
 	if add1Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err)
 	}
 	node2Name := "node2-name"
-	generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name)
+	generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name, devicePath)
 	if add2Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err)
 	}
@@ -460,7 +470,8 @@ func Test_SetVolumeMountedByNode_Positive_Set(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -486,7 +497,8 @@ func Test_SetVolumeMountedByNode_Positive_UnsetWithInitialSet(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -521,7 +533,8 @@ func Test_SetVolumeMountedByNode_Positive_UnsetWithoutInitialSet(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -553,7 +566,8 @@ func Test_SetVolumeMountedByNode_Positive_UnsetWithInitialSetAddVolumeNodeNotRes
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -561,7 +575,7 @@ func Test_SetVolumeMountedByNode_Positive_UnsetWithInitialSetAddVolumeNodeNotRes
 	// Act
 	setVolumeMountedErr1 := asw.SetVolumeMountedByNode(generatedVolumeName, nodeName, true /* mounted */)
 	setVolumeMountedErr2 := asw.SetVolumeMountedByNode(generatedVolumeName, nodeName, false /* mounted */)
-	generatedVolumeName, addErr = asw.AddVolumeNode(volumeSpec, nodeName)
+	generatedVolumeName, addErr = asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	// Assert
 	if setVolumeMountedErr1 != nil {
@@ -593,7 +607,8 @@ func Test_SetVolumeMountedByNode_Positive_UnsetWithInitialSetVerifyDetachRequest
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -633,9 +648,10 @@ func Test_MarkDesireToDetach_Positive_Set(t *testing.T) {
 	volumePluginMgr, _ := volumetesting.GetTestVolumePluginMgr(t)
 	asw := NewActualStateOfWorld(volumePluginMgr)
 	volumeName := api.UniqueVolumeName("volume-name")
+	devicePath := "fake/device/path"
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -661,7 +677,8 @@ func Test_MarkDesireToDetach_Positive_Marked(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -694,14 +711,15 @@ func Test_MarkDesireToDetach_Positive_MarkedAddVolumeNodeReset(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
 	// Act
 	_, markDesireToDetachErr := asw.MarkDesireToDetach(generatedVolumeName, nodeName)
-	generatedVolumeName, addErr = asw.AddVolumeNode(volumeSpec, nodeName)
+	generatedVolumeName, addErr = asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	// Assert
 	if markDesireToDetachErr != nil {
@@ -731,7 +749,8 @@ func Test_MarkDesireToDetach_Positive_UnsetWithInitialSetVolumeMountedByNodePres
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -810,7 +829,8 @@ func Test_GetAttachedVolumesForNode_Positive_OneVolumeOneNode(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	nodeName := "node-name"
-	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName)
+	devicePath := "fake/device/path"
+	generatedVolumeName, addErr := asw.AddVolumeNode(volumeSpec, nodeName, devicePath)
 	if addErr != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", addErr)
 	}
@@ -833,14 +853,15 @@ func Test_GetAttachedVolumesForNode_Positive_TwoVolumeTwoNodes(t *testing.T) {
 	volume1Name := api.UniqueVolumeName("volume1-name")
 	volume1Spec := controllervolumetesting.GetTestVolumeSpec(string(volume1Name), volume1Name)
 	node1Name := "node1-name"
-	_, add1Err := asw.AddVolumeNode(volume1Spec, node1Name)
+	devicePath := "fake/device/path"
+	_, add1Err := asw.AddVolumeNode(volume1Spec, node1Name, devicePath)
 	if add1Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err)
 	}
 	volume2Name := api.UniqueVolumeName("volume2-name")
 	volume2Spec := controllervolumetesting.GetTestVolumeSpec(string(volume2Name), volume2Name)
 	node2Name := "node2-name"
-	generatedVolumeName2, add2Err := asw.AddVolumeNode(volume2Spec, node2Name)
+	generatedVolumeName2, add2Err := asw.AddVolumeNode(volume2Spec, node2Name, devicePath)
 	if add2Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err)
 	}
@@ -863,12 +884,13 @@ func Test_GetAttachedVolumesForNode_Positive_OneVolumeTwoNodes(t *testing.T) {
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
 	node1Name := "node1-name"
-	generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name)
+	devicePath := "fake/device/path"
+	generatedVolumeName1, add1Err := asw.AddVolumeNode(volumeSpec, node1Name, devicePath)
 	if add1Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add1Err)
 	}
 	node2Name := "node2-name"
-	generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name)
+	generatedVolumeName2, add2Err := asw.AddVolumeNode(volumeSpec, node2Name, devicePath)
 	if add2Err != nil {
 		t.Fatalf("AddVolumeNode failed. Expected: <no error> Actual: <%v>", add2Err)
 	}

View File

@@ -24,6 +24,8 @@ import (
 	"github.com/golang/glog"
 	"k8s.io/kubernetes/pkg/controller/volume/cache"
+	"k8s.io/kubernetes/pkg/controller/volume/statusupdater"
+	"k8s.io/kubernetes/pkg/util/goroutinemap"
 	"k8s.io/kubernetes/pkg/util/wait"
 	"k8s.io/kubernetes/pkg/volume/util/operationexecutor"
 )
@@ -55,13 +57,15 @@ func NewReconciler(
 	maxWaitForUnmountDuration time.Duration,
 	desiredStateOfWorld cache.DesiredStateOfWorld,
 	actualStateOfWorld cache.ActualStateOfWorld,
-	attacherDetacher operationexecutor.OperationExecutor) Reconciler {
+	attacherDetacher operationexecutor.OperationExecutor,
+	nodeStatusUpdater statusupdater.NodeStatusUpdater) Reconciler {
 	return &reconciler{
 		loopPeriod:                loopPeriod,
 		maxWaitForUnmountDuration: maxWaitForUnmountDuration,
 		desiredStateOfWorld:       desiredStateOfWorld,
 		actualStateOfWorld:        actualStateOfWorld,
 		attacherDetacher:          attacherDetacher,
+		nodeStatusUpdater:         nodeStatusUpdater,
 	}
 }
@@ -71,6 +75,7 @@ type reconciler struct {
 	desiredStateOfWorld cache.DesiredStateOfWorld
 	actualStateOfWorld  cache.ActualStateOfWorld
 	attacherDetacher    operationexecutor.OperationExecutor
+	nodeStatusUpdater   statusupdater.NodeStatusUpdater
 }
 func (rc *reconciler) Run(stopCh <-chan struct{}) {
@@ -88,10 +93,22 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
 				attachedVolume.VolumeName, attachedVolume.NodeName) {
 				// Volume exists in actual state of world but not desired
 				if !attachedVolume.MountedByNode {
-					glog.V(5).Infof("Attempting to start DetachVolume for volume %q to node %q", attachedVolume.VolumeName, attachedVolume.NodeName)
+					glog.V(5).Infof("Attempting to start DetachVolume for volume %q from node %q", attachedVolume.VolumeName, attachedVolume.NodeName)
 					err := rc.attacherDetacher.DetachVolume(attachedVolume.AttachedVolume, rc.actualStateOfWorld)
 					if err == nil {
-						glog.Infof("Started DetachVolume for volume %q to node %q", attachedVolume.VolumeName, attachedVolume.NodeName)
+						glog.Infof("Started DetachVolume for volume %q from node %q", attachedVolume.VolumeName, attachedVolume.NodeName)
+					}
+					if err != nil &&
+						!goroutinemap.IsAlreadyExists(err) &&
+						!goroutinemap.IsExponentialBackoff(err) {
+						// Ignore goroutinemap.IsAlreadyExists && goroutinemap.IsExponentialBackoff errors, they are expected.
+						// Log all other errors.
+						glog.Errorf(
+							"operationExecutor.DetachVolume failed to start for volume %q (spec.Name: %q) from node %q with err: %v",
+							attachedVolume.VolumeName,
+							attachedVolume.VolumeSpec.Name(),
+							attachedVolume.NodeName,
+							err)
 					}
 				} else {
 					// If volume is not safe to detach (is mounted) wait a max amount of time before detaching any way.
@@ -100,10 +117,22 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
 						glog.Errorf("Unexpected error actualStateOfWorld.MarkDesireToDetach(): %v", err)
 					}
 					if timeElapsed > rc.maxWaitForUnmountDuration {
-						glog.V(5).Infof("Attempting to start DetachVolume for volume %q to node %q. Volume is not safe to detach, but maxWaitForUnmountDuration expired.", attachedVolume.VolumeName, attachedVolume.NodeName)
+						glog.V(5).Infof("Attempting to start DetachVolume for volume %q from node %q. Volume is not safe to detach, but maxWaitForUnmountDuration expired.", attachedVolume.VolumeName, attachedVolume.NodeName)
 						err := rc.attacherDetacher.DetachVolume(attachedVolume.AttachedVolume, rc.actualStateOfWorld)
 						if err == nil {
-							glog.Infof("Started DetachVolume for volume %q to node %q due to maxWaitForUnmountDuration expiry.", attachedVolume.VolumeName, attachedVolume.NodeName)
+							glog.Infof("Started DetachVolume for volume %q from node %q due to maxWaitForUnmountDuration expiry.", attachedVolume.VolumeName, attachedVolume.NodeName)
+						}
+						if err != nil &&
+							!goroutinemap.IsAlreadyExists(err) &&
+							!goroutinemap.IsExponentialBackoff(err) {
+							// Ignore goroutinemap.IsAlreadyExists && goroutinemap.IsExponentialBackoff errors, they are expected.
+							// Log all other errors.
+							glog.Errorf(
+								"operationExecutor.DetachVolume failed to start (maxWaitForUnmountDuration expiry) for volume %q (spec.Name: %q) from node %q with err: %v",
+								attachedVolume.VolumeName,
+								attachedVolume.VolumeSpec.Name(),
+								attachedVolume.NodeName,
+								err)
 						}
 					}
 				}
@@ -117,7 +146,7 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
 			// Volume/Node exists, touch it to reset detachRequestedTime
 			glog.V(12).Infof("Volume %q/Node %q is attached--touching.", volumeToAttach.VolumeName, volumeToAttach.NodeName)
 			_, err := rc.actualStateOfWorld.AddVolumeNode(
-				volumeToAttach.VolumeSpec, volumeToAttach.NodeName)
+				volumeToAttach.VolumeSpec, volumeToAttach.NodeName, "" /* devicePath */)
 			if err != nil {
 				glog.Errorf("Unexpected error on actualStateOfWorld.AddVolumeNode(): %v", err)
 			}
@@ -128,7 +157,25 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
 				if err == nil {
 					glog.Infof("Started AttachVolume for volume %q to node %q", volumeToAttach.VolumeName, volumeToAttach.NodeName)
 				}
+				if err != nil &&
+					!goroutinemap.IsAlreadyExists(err) &&
+					!goroutinemap.IsExponentialBackoff(err) {
+					// Ignore goroutinemap.IsAlreadyExists && goroutinemap.IsExponentialBackoff errors, they are expected.
+					// Log all other errors.
+					glog.Errorf(
+						"operationExecutor.AttachVolume failed to start for volume %q (spec.Name: %q) to node %q with err: %v",
+						volumeToAttach.VolumeName,
+						volumeToAttach.VolumeSpec.Name(),
+						volumeToAttach.NodeName,
+						err)
+				}
 			}
 		}
+
+		// Update Node Status
+		err := rc.nodeStatusUpdater.UpdateNodeStatuses()
+		if err != nil {
+			glog.Infof("UpdateNodeStatuses failed with: %v", err)
+		}
 	}
 }

View File

@@ -21,7 +21,9 @@ import (
 	"time"
 	"k8s.io/kubernetes/pkg/api"
+	"k8s.io/kubernetes/pkg/controller/framework/informers"
 	"k8s.io/kubernetes/pkg/controller/volume/cache"
+	"k8s.io/kubernetes/pkg/controller/volume/statusupdater"
 	controllervolumetesting "k8s.io/kubernetes/pkg/controller/volume/testing"
 	"k8s.io/kubernetes/pkg/util/wait"
 	volumetesting "k8s.io/kubernetes/pkg/volume/testing"
@@ -32,6 +34,7 @@ import (
 const (
 	reconcilerLoopPeriod      time.Duration = 0 * time.Millisecond
 	maxWaitForUnmountDuration time.Duration = 50 * time.Millisecond
+	resyncPeriod              time.Duration = 5 * time.Minute
 )
 // Calls Run()
@@ -41,9 +44,15 @@ func Test_Run_Positive_DoNothing(t *testing.T) {
 	volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
 	dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
 	asw := cache.NewActualStateOfWorld(volumePluginMgr)
-	ad := operationexecutor.NewOperationExecutor(volumePluginMgr)
+	fakeKubeClient := controllervolumetesting.CreateTestClient()
+	ad := operationexecutor.NewOperationExecutor(
+		fakeKubeClient, volumePluginMgr)
+	nodeInformer := informers.CreateSharedNodeIndexInformer(
+		fakeKubeClient, resyncPeriod)
+	nsu := statusupdater.NewNodeStatusUpdater(
+		fakeKubeClient, nodeInformer, asw)
 	reconciler := NewReconciler(
-		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad)
+		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad, nsu)
 	// Act
 	go reconciler.Run(wait.NeverStop)
@@ -64,9 +73,14 @@ func Test_Run_Positive_OneDesiredVolumeAttach(t *testing.T) {
 	volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
 	dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
 	asw := cache.NewActualStateOfWorld(volumePluginMgr)
-	ad := operationexecutor.NewOperationExecutor(volumePluginMgr)
+	fakeKubeClient := controllervolumetesting.CreateTestClient()
+	ad := operationexecutor.NewOperationExecutor(fakeKubeClient, volumePluginMgr)
+	nodeInformer := informers.CreateSharedNodeIndexInformer(
+		fakeKubeClient, resyncPeriod)
+	nsu := statusupdater.NewNodeStatusUpdater(
+		fakeKubeClient, nodeInformer, asw)
 	reconciler := NewReconciler(
-		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad)
+		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad, nsu)
 	podName := types.UniquePodName("pod-uid")
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
@@ -105,9 +119,14 @@ func Test_Run_Positive_OneDesiredVolumeAttachThenDetachWithUnmountedVolume(t *te
 	volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
 	dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
 	asw := cache.NewActualStateOfWorld(volumePluginMgr)
-	ad := operationexecutor.NewOperationExecutor(volumePluginMgr)
+	fakeKubeClient := controllervolumetesting.CreateTestClient()
+	ad := operationexecutor.NewOperationExecutor(fakeKubeClient, volumePluginMgr)
+	nodeInformer := informers.CreateSharedNodeIndexInformer(
+		fakeKubeClient, resyncPeriod)
+	nsu := statusupdater.NewNodeStatusUpdater(
+		fakeKubeClient, nodeInformer, asw)
 	reconciler := NewReconciler(
-		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad)
+		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad, nsu)
 	podName := types.UniquePodName("pod-uid")
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
@@ -167,9 +186,14 @@ func Test_Run_Positive_OneDesiredVolumeAttachThenDetachWithMountedVolume(t *test
 	volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
 	dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
 	asw := cache.NewActualStateOfWorld(volumePluginMgr)
-	ad := operationexecutor.NewOperationExecutor(volumePluginMgr)
+	fakeKubeClient := controllervolumetesting.CreateTestClient()
+	ad := operationexecutor.NewOperationExecutor(fakeKubeClient, volumePluginMgr)
+	nodeInformer := informers.CreateSharedNodeIndexInformer(
+		fakeKubeClient, resyncPeriod)
+	nsu := statusupdater.NewNodeStatusUpdater(
+		fakeKubeClient, nodeInformer, asw)
 	reconciler := NewReconciler(
-		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad)
+		reconcilerLoopPeriod, maxWaitForUnmountDuration, dsw, asw, ad, nsu)
 	podName := types.UniquePodName("pod-uid")
 	volumeName := api.UniqueVolumeName("volume-name")
 	volumeSpec := controllervolumetesting.GetTestVolumeSpec(string(volumeName), volumeName)
@@ -379,6 +403,3 @@ func retryWithExponentialBackOff(initialDuration time.Duration, fn wait.Conditio
 	}
 	return wait.ExponentialBackoff(backoff, fn)
 }
-
-// t.Logf("asw: %v", asw.GetAttachedVolumes())
-// t.Logf("dsw: %v", dsw.GetVolumesToAttach())

View File

@@ -0,0 +1,127 @@
/*
Copyright 2016 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package statusupdater implements interfaces that enable updating the status
// of API objects.
package statusupdater
import (
"encoding/json"
"fmt"
"github.com/golang/glog"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
"k8s.io/kubernetes/pkg/controller/framework"
"k8s.io/kubernetes/pkg/controller/volume/cache"
"k8s.io/kubernetes/pkg/util/strategicpatch"
)
// NodeStatusUpdater defines a set of operations for updating the
// VolumesAttached field in the Node Status.
type NodeStatusUpdater interface {
// Gets a list of node statuses that should be updated from the actual state
// of the world and updates them.
UpdateNodeStatuses() error
}
// NewNodeStatusUpdater returns a new instance of NodeStatusUpdater.
func NewNodeStatusUpdater(
kubeClient internalclientset.Interface,
nodeInformer framework.SharedInformer,
actualStateOfWorld cache.ActualStateOfWorld) NodeStatusUpdater {
return &nodeStatusUpdater{
actualStateOfWorld: actualStateOfWorld,
nodeInformer: nodeInformer,
kubeClient: kubeClient,
}
}
type nodeStatusUpdater struct {
kubeClient internalclientset.Interface
nodeInformer framework.SharedInformer
actualStateOfWorld cache.ActualStateOfWorld
}
func (nsu *nodeStatusUpdater) UpdateNodeStatuses() error {
nodesToUpdate := nsu.actualStateOfWorld.GetVolumesToReportAttached()
for nodeName, attachedVolumes := range nodesToUpdate {
nodeObj, exists, err := nsu.nodeInformer.GetStore().GetByKey(nodeName)
if nodeObj == nil || !exists || err != nil {
return fmt.Errorf(
"failed to find node %q in NodeInformer cache. %v",
nodeName,
err)
}
node, ok := nodeObj.(*api.Node)
if !ok || node == nil {
return fmt.Errorf(
"failed to cast %q object %#v to Node",
nodeName,
nodeObj)
}
oldData, err := json.Marshal(node)
if err != nil {
return fmt.Errorf(
"failed to Marshal oldData for node %q. %v",
nodeName,
err)
}
node.Status.VolumesAttached = attachedVolumes
newData, err := json.Marshal(node)
if err != nil {
return fmt.Errorf(
"failed to Marshal newData for node %q. %v",
nodeName,
err)
}
patchBytes, err :=
strategicpatch.CreateStrategicMergePatch(oldData, newData, node)
if err != nil {
return fmt.Errorf(
"failed to CreateStrategicMergePatch for node %q. %v",
nodeName,
err)
}
_, err = nsu.kubeClient.Core().Nodes().PatchStatus(nodeName, patchBytes)
if err != nil {
return fmt.Errorf(
"failed to kubeClient.Core().Nodes().Patch for node %q. %v",
nodeName,
err)
}
err = nsu.actualStateOfWorld.ResetNodeStatusUpdateNeeded(nodeName)
if err != nil {
return fmt.Errorf(
"failed to ResetNodeStatusUpdateNeeded for node %q. %v",
nodeName,
err)
}
		glog.V(3).Infof(
			"Updating status for node %q succeeded. patchBytes: %q",
			nodeName,
			string(patchBytes))
}
return nil
}
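`UpdateNodeStatuses` above follows a marshal-mutate-marshal-diff flow: serialize the node, set `Status.VolumesAttached`, serialize again, and send only the difference as a patch. A self-contained sketch of that flow, using a hand-built JSON merge patch of the status subtree in place of the real `strategicpatch.CreateStrategicMergePatch` (the types here are simplified stand-ins for `api.Node`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the Node/NodeStatus API types.
type nodeStatus struct {
	VolumesAttached []string `json:"volumesAttached"`
}

type node struct {
	Name   string     `json:"name"`
	Status nodeStatus `json:"status"`
}

// buildStatusPatch mutates a copy of the node's status and emits just
// the changed "status" subtree as a merge-patch body, mirroring the
// old-data/new-data diff the controller computes.
func buildStatusPatch(n node, attached []string) ([]byte, error) {
	n.Status.VolumesAttached = attached
	return json.Marshal(map[string]nodeStatus{"status": n.Status})
}

func main() {
	n := node{Name: "node-1"}
	patch, err := buildStatusPatch(n, []string{"kubernetes.io/gce-pd/disk-1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch)) // {"status":{"volumesAttached":["kubernetes.io/gce-pd/disk-1"]}}
}
```

The real controller uses a strategic merge patch (which understands list merge keys) and posts it via `Nodes().PatchStatus`, but the shape of the request body is the same idea: only the fields being changed.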

View File

@@ -17,8 +17,14 @@ limitations under the License.
 package testing
 import (
+	"fmt"
 	"k8s.io/kubernetes/pkg/api"
+	"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/fake"
+	"k8s.io/kubernetes/pkg/client/testing/core"
+	"k8s.io/kubernetes/pkg/runtime"
 	"k8s.io/kubernetes/pkg/volume"
+	"k8s.io/kubernetes/pkg/watch"
 )
 // GetTestVolumeSpec returns a test volume spec
@@ -36,3 +42,62 @@ func GetTestVolumeSpec(volumeName string, diskName api.UniqueVolumeName) *volume
 		},
 	}
 }
func CreateTestClient() *fake.Clientset {
fakeClient := &fake.Clientset{}
fakeClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
obj := &api.PodList{}
podNamePrefix := "mypod"
namespace := "mynamespace"
for i := 0; i < 5; i++ {
podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
pod := api.Pod{
Status: api.PodStatus{
Phase: api.PodRunning,
},
ObjectMeta: api.ObjectMeta{
Name: podName,
Namespace: namespace,
Labels: map[string]string{
"name": podName,
},
},
Spec: api.PodSpec{
Containers: []api.Container{
{
Name: "containerName",
Image: "containerImage",
VolumeMounts: []api.VolumeMount{
{
Name: "volumeMountName",
ReadOnly: false,
MountPath: "/mnt",
},
},
},
},
Volumes: []api.Volume{
{
Name: "volumeName",
VolumeSource: api.VolumeSource{
GCEPersistentDisk: &api.GCEPersistentDiskVolumeSource{
PDName: "pdName",
FSType: "ext4",
ReadOnly: false,
},
},
},
},
},
}
obj.Items = append(obj.Items, pod)
}
return true, obj, nil
})
fakeWatch := watch.NewFake()
fakeClient.AddWatchReactor("*", core.DefaultWatchReactor(fakeWatch, nil))
return fakeClient
}
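`CreateTestClient` works because the fake clientset dispatches each API action to registered reactors in order, and the first reactor that reports `handled == true` supplies the response; the trailing `"*", "*"` reactor is a catch-all that fails loudly for anything untested. As a rough stdlib-only illustration of that dispatch chain (the `action`, `reactor`, and `fakeClient` types here are invented simplifications, not the real `client/testing/core` API):

```go
package main

import "fmt"

// action describes a fake API call, mirroring the verb/resource
// matching that the testing reactors key off.
type action struct {
	verb, resource string
}

// reactor pairs a verb/resource filter ("*" matches anything) with a
// reaction function returning (handled, result, err).
type reactor struct {
	verb, resource string
	react          func(a action) (bool, string, error)
}

type fakeClient struct{ reactors []reactor }

func (c *fakeClient) addReactor(verb, resource string, fn func(action) (bool, string, error)) {
	c.reactors = append(c.reactors, reactor{verb, resource, fn})
}

// invoke walks the reactor chain in registration order; the first
// matching reactor that handles the action wins.
func (c *fakeClient) invoke(a action) (string, error) {
	for _, r := range c.reactors {
		if (r.verb == "*" || r.verb == a.verb) && (r.resource == "*" || r.resource == a.resource) {
			if handled, out, err := r.react(a); handled {
				return out, err
			}
		}
	}
	return "", fmt.Errorf("no reaction implemented for %v", a)
}

func main() {
	c := &fakeClient{}
	c.addReactor("list", "pods", func(a action) (bool, string, error) {
		return true, "podlist", nil
	})
	// Catch-all: anything else is an unexpected call and should fail the test.
	c.addReactor("*", "*", func(a action) (bool, string, error) {
		return true, "", fmt.Errorf("no reaction implemented for %v", a)
	})
	out, err := c.invoke(action{"list", "pods"})
	fmt.Println(out, err) // podlist <nil>
	_, err = c.invoke(action{"get", "nodes"})
	fmt.Println(err)
}
```

Registration order matters: specific reactors must be added before the catch-all, exactly as the test helpers in this diff do.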

View File

@ -297,7 +297,7 @@ func newTestKubeletWithImageList(
controllerAttachDetachEnabled, controllerAttachDetachEnabled,
kubelet.hostname, kubelet.hostname,
kubelet.podManager, kubelet.podManager,
kubelet.kubeClient, fakeKubeClient,
kubelet.volumePluginMgr) kubelet.volumePluginMgr)
if err != nil { if err != nil {
t.Fatalf("failed to initialize volume manager: %v", err) t.Fatalf("failed to initialize volume manager: %v", err)
@ -617,6 +617,24 @@ func TestVolumeAttachAndMountControllerEnabled(t *testing.T) {
testKubelet := newTestKubelet(t, true /* controllerAttachDetachEnabled */) testKubelet := newTestKubelet(t, true /* controllerAttachDetachEnabled */)
kubelet := testKubelet.kubelet kubelet := testKubelet.kubelet
kubelet.mounter = &mount.FakeMounter{} kubelet.mounter = &mount.FakeMounter{}
kubeClient := testKubelet.fakeKubeClient
kubeClient.AddReactor("get", "nodes",
func(action core.Action) (bool, runtime.Object, error) {
return true, &api.Node{
ObjectMeta: api.ObjectMeta{Name: testKubeletHostname},
Status: api.NodeStatus{
VolumesAttached: []api.AttachedVolume{
{
Name: "fake/vol1",
DevicePath: "fake/path",
},
}},
Spec: api.NodeSpec{ExternalID: testKubeletHostname},
}, nil
})
kubeClient.AddReactor("*", "*", func(action core.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("no reaction implemented for %s", action)
})
pod := podWithUidNameNsSpec("12345678", "foo", "test", api.PodSpec{ pod := podWithUidNameNsSpec("12345678", "foo", "test", api.PodSpec{
Volumes: []api.Volume{ Volumes: []api.Volume{
@ -687,6 +705,24 @@ func TestVolumeUnmountAndDetachControllerEnabled(t *testing.T) {
testKubelet := newTestKubelet(t, true /* controllerAttachDetachEnabled */) testKubelet := newTestKubelet(t, true /* controllerAttachDetachEnabled */)
kubelet := testKubelet.kubelet kubelet := testKubelet.kubelet
kubelet.mounter = &mount.FakeMounter{} kubelet.mounter = &mount.FakeMounter{}
kubeClient := testKubelet.fakeKubeClient
kubeClient.AddReactor("get", "nodes",
func(action core.Action) (bool, runtime.Object, error) {
return true, &api.Node{
ObjectMeta: api.ObjectMeta{Name: testKubeletHostname},
Status: api.NodeStatus{
VolumesAttached: []api.AttachedVolume{
{
Name: "fake/vol1",
DevicePath: "fake/path",
},
}},
Spec: api.NodeSpec{ExternalID: testKubeletHostname},
}, nil
})
kubeClient.AddReactor("*", "*", func(action core.Action) (bool, runtime.Object, error) {
return true, nil, fmt.Errorf("no reaction implemented for %s", action)
})
pod := podWithUidNameNsSpec("12345678", "foo", "test", api.PodSpec{ pod := podWithUidNameNsSpec("12345678", "foo", "test", api.PodSpec{
Volumes: []api.Volume{ Volumes: []api.Volume{

View File

@ -57,7 +57,7 @@ type ActualStateOfWorld interface {
// If a volume with the same generated name already exists, this is a noop. // If a volume with the same generated name already exists, this is a noop.
// If no volume plugin can support the given volumeSpec or more than one // If no volume plugin can support the given volumeSpec or more than one
// plugin can support it, an error is returned. // plugin can support it, an error is returned.
AddVolume(volumeSpec *volume.Spec) (api.UniqueVolumeName, error) AddVolume(volumeSpec *volume.Spec, devicePath string) (api.UniqueVolumeName, error)
// AddPodToVolume adds the given pod to the given volume in the cache // AddPodToVolume adds the given pod to the given volume in the cache
// indicating the specified volume has been successfully mounted to the // indicating the specified volume has been successfully mounted to the
@ -108,14 +108,14 @@ type ActualStateOfWorld interface {
// If a volume with the name volumeName does not exist in the list of // If a volume with the name volumeName does not exist in the list of
// attached volumes, a volumeNotAttachedError is returned indicating the // attached volumes, a volumeNotAttachedError is returned indicating the
// given volume is not yet attached. // given volume is not yet attached.
// If a the given volumeName/podName combo exists but the value of // If the given volumeName/podName combo exists but the value of
// remountRequired is true, a remountRequiredError is returned indicating // remountRequired is true, a remountRequiredError is returned indicating
// the given volume has been successfully mounted to this pod but should be // the given volume has been successfully mounted to this pod but should be
// remounted to reflect changes in the referencing pod. Atomically updated // remounted to reflect changes in the referencing pod. Atomically updated
// volumes depend on this to update the contents of the volume. // volumes depend on this to update the contents of the volume.
// All volume mounting calls should be idempotent so a second mount call for // All volume mounting calls should be idempotent so a second mount call for
// volumes that do not need to update contents should not fail. // volumes that do not need to update contents should not fail.
PodExistsInVolume(podName volumetypes.UniquePodName, volumeName api.UniqueVolumeName) (bool, error) PodExistsInVolume(podName volumetypes.UniquePodName, volumeName api.UniqueVolumeName) (bool, string, error)
// GetMountedVolumes generates and returns a list of volumes and the pods // GetMountedVolumes generates and returns a list of volumes and the pods
// they are successfully attached and mounted for based on the current // they are successfully attached and mounted for based on the current
@ -224,9 +224,13 @@ type attachedVolume struct {
pluginIsAttachable bool pluginIsAttachable bool
// globallyMounted indicates that the volume is mounted to the underlying // globallyMounted indicates that the volume is mounted to the underlying
// device at a global mount point. This global mount point must unmounted // device at a global mount point. This global mount point must be unmounted
// prior to detach. // prior to detach.
globallyMounted bool globallyMounted bool
// devicePath contains the path on the node where the volume is attached for
// attachable volumes
devicePath string
} }
// The mountedPod object represents a pod for which the kubelet volume manager // The mountedPod object represents a pod for which the kubelet volume manager
@ -260,8 +264,8 @@ type mountedPod struct {
} }
func (asw *actualStateOfWorld) MarkVolumeAsAttached( func (asw *actualStateOfWorld) MarkVolumeAsAttached(
volumeSpec *volume.Spec, nodeName string) error { volumeSpec *volume.Spec, nodeName string, devicePath string) error {
_, err := asw.AddVolume(volumeSpec) _, err := asw.AddVolume(volumeSpec, devicePath)
return err return err
} }
@ -302,7 +306,7 @@ func (asw *actualStateOfWorld) MarkDeviceAsUnmounted(
} }
func (asw *actualStateOfWorld) AddVolume( func (asw *actualStateOfWorld) AddVolume(
volumeSpec *volume.Spec) (api.UniqueVolumeName, error) { volumeSpec *volume.Spec, devicePath string) (api.UniqueVolumeName, error) {
asw.Lock() asw.Lock()
defer asw.Unlock() defer asw.Unlock()
@ -338,6 +342,7 @@ func (asw *actualStateOfWorld) AddVolume(
pluginName: volumePlugin.GetPluginName(), pluginName: volumePlugin.GetPluginName(),
pluginIsAttachable: pluginIsAttachable, pluginIsAttachable: pluginIsAttachable,
globallyMounted: false, globallyMounted: false,
devicePath: devicePath,
} }
asw.attachedVolumes[volumeName] = volumeObj asw.attachedVolumes[volumeName] = volumeObj
} }
@ -469,21 +474,22 @@ func (asw *actualStateOfWorld) DeleteVolume(volumeName api.UniqueVolumeName) err
} }
func (asw *actualStateOfWorld) PodExistsInVolume( func (asw *actualStateOfWorld) PodExistsInVolume(
podName volumetypes.UniquePodName, volumeName api.UniqueVolumeName) (bool, error) { podName volumetypes.UniquePodName,
volumeName api.UniqueVolumeName) (bool, string, error) {
asw.RLock() asw.RLock()
defer asw.RUnlock() defer asw.RUnlock()
volumeObj, volumeExists := asw.attachedVolumes[volumeName] volumeObj, volumeExists := asw.attachedVolumes[volumeName]
if !volumeExists { if !volumeExists {
return false, newVolumeNotAttachedError(volumeName) return false, "", newVolumeNotAttachedError(volumeName)
} }
podObj, podExists := volumeObj.mountedPods[podName] podObj, podExists := volumeObj.mountedPods[podName]
if podExists && podObj.remountRequired { if podExists && podObj.remountRequired {
return true, newRemountRequiredError(volumeObj.volumeName, podObj.podName) return true, volumeObj.devicePath, newRemountRequiredError(volumeObj.volumeName, podObj.podName)
} }
return podExists, nil return podExists, volumeObj.devicePath, nil
} }
func (asw *actualStateOfWorld) GetMountedVolumes() []MountedVolume { func (asw *actualStateOfWorld) GetMountedVolumes() []MountedVolume {
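The signature change above threads `devicePath` from `AddVolume` through to `PodExistsInVolume`, so the reconciler can hand the controller-reported device path to `WaitForAttach` instead of rediscovering it. A stripped-down sketch of that cache shape, using only the standard library (the types here are invented and far simpler than the real `actualStateOfWorld`):

```go
package main

import (
	"fmt"
	"sync"
)

// attachedVolume records the device path captured at attach time plus
// the pods mounted to the volume.
type attachedVolume struct {
	devicePath  string
	mountedPods map[string]struct{}
}

type actualState struct {
	sync.RWMutex
	volumes map[string]*attachedVolume
}

func newActualState() *actualState {
	return &actualState{volumes: map[string]*attachedVolume{}}
}

// addVolume is a noop if the volume already exists, matching AddVolume's
// documented behavior.
func (asw *actualState) addVolume(volumeName, devicePath string) {
	asw.Lock()
	defer asw.Unlock()
	if _, exists := asw.volumes[volumeName]; !exists {
		asw.volumes[volumeName] = &attachedVolume{
			devicePath:  devicePath,
			mountedPods: map[string]struct{}{},
		}
	}
}

// addPodToVolume assumes the volume was added first.
func (asw *actualState) addPodToVolume(podName, volumeName string) {
	asw.Lock()
	defer asw.Unlock()
	asw.volumes[volumeName].mountedPods[podName] = struct{}{}
}

// podExistsInVolume mirrors the new three-value signature: whether the
// pod is mounted, plus the device path when the volume is attached.
func (asw *actualState) podExistsInVolume(podName, volumeName string) (bool, string, error) {
	asw.RLock()
	defer asw.RUnlock()
	vol, exists := asw.volumes[volumeName]
	if !exists {
		return false, "", fmt.Errorf("volume %q not attached", volumeName)
	}
	_, podExists := vol.mountedPods[podName]
	return podExists, vol.devicePath, nil
}

func main() {
	asw := newActualState()
	asw.addVolume("fake/vol1", "fake/device/path")
	asw.addPodToVolume("pod1", "fake/vol1")
	ok, devicePath, err := asw.podExistsInVolume("pod1", "fake/vol1")
	fmt.Println(ok, devicePath, err) // true fake/device/path <nil>
}
```

Returning an empty device path together with the not-attached error is what the updated `verifyPodDoesntExistInVolumeAsw` test helper asserts.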

View File

@ -51,9 +51,10 @@ func Test_AddVolume_Positive_NewVolume(t *testing.T) {
}, },
} }
volumeSpec := &volume.Spec{Volume: &pod.Spec.Volumes[0]} volumeSpec := &volume.Spec{Volume: &pod.Spec.Volumes[0]}
devicePath := "fake/device/path"
// Act // Act
generatedVolumeName, err := asw.AddVolume(volumeSpec) generatedVolumeName, err := asw.AddVolume(volumeSpec, devicePath)
// Assert // Assert
if err != nil { if err != nil {
@ -69,6 +70,7 @@ func Test_AddVolume_Positive_NewVolume(t *testing.T) {
func Test_AddVolume_Positive_ExistingVolume(t *testing.T) { func Test_AddVolume_Positive_ExistingVolume(t *testing.T) {
// Arrange // Arrange
volumePluginMgr, _ := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, _ := volumetesting.GetTestVolumePluginMgr(t)
devicePath := "fake/device/path"
asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr) asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr)
pod := &api.Pod{ pod := &api.Pod{
ObjectMeta: api.ObjectMeta{ ObjectMeta: api.ObjectMeta{
@ -90,13 +92,13 @@ func Test_AddVolume_Positive_ExistingVolume(t *testing.T) {
} }
volumeSpec := &volume.Spec{Volume: &pod.Spec.Volumes[0]} volumeSpec := &volume.Spec{Volume: &pod.Spec.Volumes[0]}
generatedVolumeName, err := asw.AddVolume(volumeSpec) generatedVolumeName, err := asw.AddVolume(volumeSpec, devicePath)
if err != nil { if err != nil {
t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err) t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err)
} }
// Act // Act
generatedVolumeName, err = asw.AddVolume(volumeSpec) generatedVolumeName, err = asw.AddVolume(volumeSpec, devicePath)
// Assert // Assert
if err != nil { if err != nil {
@ -113,6 +115,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeNewNode(t *testing.T) {
// Arrange // Arrange
volumePluginMgr, plugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, plugin := volumetesting.GetTestVolumePluginMgr(t)
asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr) asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr)
devicePath := "fake/device/path"
pod := &api.Pod{ pod := &api.Pod{
ObjectMeta: api.ObjectMeta{ ObjectMeta: api.ObjectMeta{
@ -137,7 +140,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeNewNode(t *testing.T) {
volumeName, err := volumehelper.GetUniqueVolumeNameFromSpec( volumeName, err := volumehelper.GetUniqueVolumeNameFromSpec(
plugin, volumeSpec) plugin, volumeSpec)
generatedVolumeName, err := asw.AddVolume(volumeSpec) generatedVolumeName, err := asw.AddVolume(volumeSpec, devicePath)
if err != nil { if err != nil {
t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err) t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err)
} }
@ -158,7 +161,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeNewNode(t *testing.T) {
} }
verifyVolumeExistsInAttachedVolumes(t, generatedVolumeName, asw) verifyVolumeExistsInAttachedVolumes(t, generatedVolumeName, asw)
verifyPodExistsInVolumeAsw(t, podName, generatedVolumeName, asw) verifyPodExistsInVolumeAsw(t, podName, generatedVolumeName, "fake/device/path" /* expectedDevicePath */, asw)
} }
// Populates data struct with a volume // Populates data struct with a volume
@ -169,6 +172,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeExistingNode(t *testing.T) {
// Arrange // Arrange
volumePluginMgr, plugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, plugin := volumetesting.GetTestVolumePluginMgr(t)
asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr) asw := NewActualStateOfWorld("mynode" /* nodeName */, volumePluginMgr)
devicePath := "fake/device/path"
pod := &api.Pod{ pod := &api.Pod{
ObjectMeta: api.ObjectMeta{ ObjectMeta: api.ObjectMeta{
@ -193,7 +197,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeExistingNode(t *testing.T) {
volumeName, err := volumehelper.GetUniqueVolumeNameFromSpec( volumeName, err := volumehelper.GetUniqueVolumeNameFromSpec(
plugin, volumeSpec) plugin, volumeSpec)
generatedVolumeName, err := asw.AddVolume(volumeSpec) generatedVolumeName, err := asw.AddVolume(volumeSpec, devicePath)
if err != nil { if err != nil {
t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err) t.Fatalf("AddVolume failed. Expected: <no error> Actual: <%v>", err)
} }
@ -220,7 +224,7 @@ func Test_AddPodToVolume_Positive_ExistingVolumeExistingNode(t *testing.T) {
} }
verifyVolumeExistsInAttachedVolumes(t, generatedVolumeName, asw) verifyVolumeExistsInAttachedVolumes(t, generatedVolumeName, asw)
verifyPodExistsInVolumeAsw(t, podName, generatedVolumeName, asw) verifyPodExistsInVolumeAsw(t, podName, generatedVolumeName, "fake/device/path" /* expectedDevicePath */, asw)
} }
// Calls AddPodToVolume() to add pod to empty data struct
@ -316,8 +320,9 @@ func verifyPodExistsInVolumeAsw(
t *testing.T, t *testing.T,
expectedPodName volumetypes.UniquePodName, expectedPodName volumetypes.UniquePodName,
expectedVolumeName api.UniqueVolumeName, expectedVolumeName api.UniqueVolumeName,
expectedDevicePath string,
asw ActualStateOfWorld) { asw ActualStateOfWorld) {
podExistsInVolume, err := podExistsInVolume, devicePath, err :=
asw.PodExistsInVolume(expectedPodName, expectedVolumeName) asw.PodExistsInVolume(expectedPodName, expectedVolumeName)
if err != nil { if err != nil {
t.Fatalf( t.Fatalf(
@ -329,6 +334,13 @@ func verifyPodExistsInVolumeAsw(
"ASW PodExistsInVolume result invalid. Expected: <true> Actual: <%v>", "ASW PodExistsInVolume result invalid. Expected: <true> Actual: <%v>",
podExistsInVolume) podExistsInVolume)
} }
if devicePath != expectedDevicePath {
t.Fatalf(
"Invalid devicePath. Expected: <%q> Actual: <%q> ",
expectedDevicePath,
devicePath)
}
} }
func verifyPodDoesntExistInVolumeAsw( func verifyPodDoesntExistInVolumeAsw(
@ -337,7 +349,7 @@ func verifyPodDoesntExistInVolumeAsw(
volumeToCheck api.UniqueVolumeName, volumeToCheck api.UniqueVolumeName,
expectVolumeToExist bool, expectVolumeToExist bool,
asw ActualStateOfWorld) { asw ActualStateOfWorld) {
podExistsInVolume, err := podExistsInVolume, devicePath, err :=
asw.PodExistsInVolume(podToCheck, volumeToCheck) asw.PodExistsInVolume(podToCheck, volumeToCheck)
if !expectVolumeToExist && err == nil { if !expectVolumeToExist && err == nil {
t.Fatalf( t.Fatalf(
@ -354,4 +366,10 @@ func verifyPodDoesntExistInVolumeAsw(
"ASW PodExistsInVolume result invalid. Expected: <false> Actual: <%v>", "ASW PodExistsInVolume result invalid. Expected: <false> Actual: <%v>",
podExistsInVolume) podExistsInVolume)
} }
if devicePath != "" {
t.Fatalf(
"Invalid devicePath. Expected: <\"\"> Actual: <%q> ",
devicePath)
}
} }

View File

@ -84,8 +84,8 @@ type DesiredStateOfWorld interface {
GetVolumesToMount() []VolumeToMount GetVolumesToMount() []VolumeToMount
} }
// VolumeToMount represents a volume that should be attached to this node and // VolumeToMount represents a volume that is attached to this node and needs to
// mounted to the PodName. // be mounted to PodName.
type VolumeToMount struct { type VolumeToMount struct {
operationexecutor.VolumeToMount operationexecutor.VolumeToMount
} }

View File

@ -23,6 +23,7 @@ import (
"time" "time"
"github.com/golang/glog" "github.com/golang/glog"
"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
"k8s.io/kubernetes/pkg/kubelet/volume/cache" "k8s.io/kubernetes/pkg/kubelet/volume/cache"
"k8s.io/kubernetes/pkg/util/goroutinemap" "k8s.io/kubernetes/pkg/util/goroutinemap"
"k8s.io/kubernetes/pkg/util/wait" "k8s.io/kubernetes/pkg/util/wait"
@ -62,6 +63,7 @@ type Reconciler interface {
// safely (prevents more than one operation from being triggered on the same // safely (prevents more than one operation from being triggered on the same
// volume) // volume)
func NewReconciler( func NewReconciler(
kubeClient internalclientset.Interface,
controllerAttachDetachEnabled bool, controllerAttachDetachEnabled bool,
loopSleepDuration time.Duration, loopSleepDuration time.Duration,
waitForAttachTimeout time.Duration, waitForAttachTimeout time.Duration,
@ -70,6 +72,7 @@ func NewReconciler(
actualStateOfWorld cache.ActualStateOfWorld, actualStateOfWorld cache.ActualStateOfWorld,
operationExecutor operationexecutor.OperationExecutor) Reconciler { operationExecutor operationexecutor.OperationExecutor) Reconciler {
return &reconciler{ return &reconciler{
kubeClient: kubeClient,
controllerAttachDetachEnabled: controllerAttachDetachEnabled, controllerAttachDetachEnabled: controllerAttachDetachEnabled,
loopSleepDuration: loopSleepDuration, loopSleepDuration: loopSleepDuration,
waitForAttachTimeout: waitForAttachTimeout, waitForAttachTimeout: waitForAttachTimeout,
@ -81,6 +84,7 @@ func NewReconciler(
} }
type reconciler struct { type reconciler struct {
kubeClient internalclientset.Interface
controllerAttachDetachEnabled bool controllerAttachDetachEnabled bool
loopSleepDuration time.Duration loopSleepDuration time.Duration
waitForAttachTimeout time.Duration waitForAttachTimeout time.Duration
@ -112,8 +116,10 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
mountedVolume.PodUID) mountedVolume.PodUID)
err := rc.operationExecutor.UnmountVolume( err := rc.operationExecutor.UnmountVolume(
mountedVolume.MountedVolume, rc.actualStateOfWorld) mountedVolume.MountedVolume, rc.actualStateOfWorld)
if err != nil && !goroutinemap.IsAlreadyExists(err) { if err != nil &&
// Ignore goroutinemap.IsAlreadyExists errors, they are expected. !goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors. // Log all other errors.
glog.Errorf( glog.Errorf(
"operationExecutor.UnmountVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.UnmountVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v",
@ -136,25 +142,37 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
// Ensure volumes that should be attached/mounted are attached/mounted. // Ensure volumes that should be attached/mounted are attached/mounted.
for _, volumeToMount := range rc.desiredStateOfWorld.GetVolumesToMount() { for _, volumeToMount := range rc.desiredStateOfWorld.GetVolumesToMount() {
volMounted, err := rc.actualStateOfWorld.PodExistsInVolume(volumeToMount.PodName, volumeToMount.VolumeName) volMounted, devicePath, err := rc.actualStateOfWorld.PodExistsInVolume(volumeToMount.PodName, volumeToMount.VolumeName)
volumeToMount.DevicePath = devicePath
if cache.IsVolumeNotAttachedError(err) { if cache.IsVolumeNotAttachedError(err) {
// Volume is not attached, it should be
if rc.controllerAttachDetachEnabled || !volumeToMount.PluginIsAttachable { if rc.controllerAttachDetachEnabled || !volumeToMount.PluginIsAttachable {
// Kubelet not responsible for attaching or this volume has a non-attachable volume plugin, // Volume is not attached (or doesn't implement attacher), kubelet attach is disabled, wait
// so just add it to actualStateOfWorld without attach. // for controller to finish attaching volume.
markVolumeAttachErr := rc.actualStateOfWorld.MarkVolumeAsAttached( glog.V(12).Infof("Attempting to start VerifyControllerAttachedVolume for volume %q (spec.Name: %q) pod %q (UID: %q)",
volumeToMount.VolumeSpec, rc.hostName) volumeToMount.VolumeName,
if markVolumeAttachErr != nil { volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID)
err := rc.operationExecutor.VerifyControllerAttachedVolume(
volumeToMount.VolumeToMount,
rc.hostName,
rc.actualStateOfWorld)
if err != nil &&
!goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors.
glog.Errorf( glog.Errorf(
"actualStateOfWorld.MarkVolumeAsAttached failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.VerifyControllerAttachedVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v",
volumeToMount.VolumeName, volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(), volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName, volumeToMount.PodName,
volumeToMount.Pod.UID, volumeToMount.Pod.UID,
rc.controllerAttachDetachEnabled, rc.controllerAttachDetachEnabled,
markVolumeAttachErr) err)
} else { }
glog.V(12).Infof("actualStateOfWorld.MarkVolumeAsAttached succeeded for volume %q (spec.Name: %q) pod %q (UID: %q)", if err == nil {
glog.Infof("VerifyControllerAttachedVolume operation started for volume %q (spec.Name: %q) pod %q (UID: %q)",
volumeToMount.VolumeName, volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(), volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName, volumeToMount.PodName,
@ -174,8 +192,10 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
volumeToMount.PodName, volumeToMount.PodName,
volumeToMount.Pod.UID) volumeToMount.Pod.UID)
err := rc.operationExecutor.AttachVolume(volumeToAttach, rc.actualStateOfWorld) err := rc.operationExecutor.AttachVolume(volumeToAttach, rc.actualStateOfWorld)
if err != nil && !goroutinemap.IsAlreadyExists(err) { if err != nil &&
// Ignore goroutinemap.IsAlreadyExists errors, they are expected. !goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors. // Log all other errors.
glog.Errorf( glog.Errorf(
"operationExecutor.AttachVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.AttachVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v",
@ -210,8 +230,10 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
rc.waitForAttachTimeout, rc.waitForAttachTimeout,
volumeToMount.VolumeToMount, volumeToMount.VolumeToMount,
rc.actualStateOfWorld) rc.actualStateOfWorld)
if err != nil && !goroutinemap.IsAlreadyExists(err) { if err != nil &&
// Ignore goroutinemap.IsAlreadyExists errors, they are expected. !goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors. // Log all other errors.
glog.Errorf( glog.Errorf(
"operationExecutor.MountVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.MountVolume failed for volume %q (spec.Name: %q) pod %q (UID: %q) controllerAttachDetachEnabled: %v with err: %v",
@ -243,8 +265,10 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
attachedVolume.VolumeSpec.Name()) attachedVolume.VolumeSpec.Name())
err := rc.operationExecutor.UnmountDevice( err := rc.operationExecutor.UnmountDevice(
attachedVolume.AttachedVolume, rc.actualStateOfWorld) attachedVolume.AttachedVolume, rc.actualStateOfWorld)
if err != nil && !goroutinemap.IsAlreadyExists(err) { if err != nil &&
// Ignore goroutinemap.IsAlreadyExists errors, they are expected. !goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors. // Log all other errors.
glog.Errorf( glog.Errorf(
"operationExecutor.UnmountDevice failed for volume %q (spec.Name: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.UnmountDevice failed for volume %q (spec.Name: %q) controllerAttachDetachEnabled: %v with err: %v",
@ -272,8 +296,10 @@ func (rc *reconciler) reconciliationLoopFunc() func() {
attachedVolume.VolumeSpec.Name()) attachedVolume.VolumeSpec.Name())
err := rc.operationExecutor.DetachVolume( err := rc.operationExecutor.DetachVolume(
attachedVolume.AttachedVolume, rc.actualStateOfWorld) attachedVolume.AttachedVolume, rc.actualStateOfWorld)
if err != nil && !goroutinemap.IsAlreadyExists(err) { if err != nil &&
// Ignore goroutinemap.IsAlreadyExists errors, they are expected. !goroutinemap.IsAlreadyExists(err) &&
!goroutinemap.IsExponentialBackoff(err) {
// Ignore goroutinemap.IsAlreadyExists and goroutinemap.IsExponentialBackoff errors, they are expected.
// Log all other errors. // Log all other errors.
glog.Errorf( glog.Errorf(
"operationExecutor.DetachVolume failed for volume %q (spec.Name: %q) controllerAttachDetachEnabled: %v with err: %v", "operationExecutor.DetachVolume failed for volume %q (spec.Name: %q) controllerAttachDetachEnabled: %v with err: %v",

View File

@ -17,12 +17,16 @@ limitations under the License.
package reconciler package reconciler
import ( import (
"fmt"
"testing" "testing"
"time" "time"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"k8s.io/kubernetes/pkg/api" "k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/fake"
"k8s.io/kubernetes/pkg/client/testing/core"
"k8s.io/kubernetes/pkg/kubelet/volume/cache" "k8s.io/kubernetes/pkg/kubelet/volume/cache"
"k8s.io/kubernetes/pkg/runtime"
"k8s.io/kubernetes/pkg/util/wait" "k8s.io/kubernetes/pkg/util/wait"
"k8s.io/kubernetes/pkg/volume" "k8s.io/kubernetes/pkg/volume"
volumetesting "k8s.io/kubernetes/pkg/volume/testing" volumetesting "k8s.io/kubernetes/pkg/volume/testing"
@ -38,18 +42,20 @@ const (
// waitForAttachTimeout is the maximum amount of time a // waitForAttachTimeout is the maximum amount of time a
// operationexecutor.Mount call will wait for a volume to be attached. // operationexecutor.Mount call will wait for a volume to be attached.
waitForAttachTimeout time.Duration = 1 * time.Second waitForAttachTimeout time.Duration = 1 * time.Second
nodeName string = "myhostname"
) )
// Calls Run() // Calls Run()
// Verifies there are no calls to attach, detach, mount, unmount, etc. // Verifies there are no calls to attach, detach, mount, unmount, etc.
func Test_Run_Positive_DoNothing(t *testing.T) { func Test_Run_Positive_DoNothing(t *testing.T) {
// Arrange // Arrange
nodeName := "myhostname"
volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
dsw := cache.NewDesiredStateOfWorld(volumePluginMgr) dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr) asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr)
oex := operationexecutor.NewOperationExecutor(volumePluginMgr) kubeClient := createTestClient()
oex := operationexecutor.NewOperationExecutor(kubeClient, volumePluginMgr)
reconciler := NewReconciler( reconciler := NewReconciler(
kubeClient,
false, /* controllerAttachDetachEnabled */ false, /* controllerAttachDetachEnabled */
reconcilerLoopSleepDuration, reconcilerLoopSleepDuration,
waitForAttachTimeout, waitForAttachTimeout,
@ -75,12 +81,13 @@ func Test_Run_Positive_DoNothing(t *testing.T) {
// Verifies there are attach/mount/etc calls and no detach/unmount calls. // Verifies there are attach/mount/etc calls and no detach/unmount calls.
func Test_Run_Positive_VolumeAttachAndMount(t *testing.T) { func Test_Run_Positive_VolumeAttachAndMount(t *testing.T) {
// Arrange // Arrange
nodeName := "myhostname"
volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
dsw := cache.NewDesiredStateOfWorld(volumePluginMgr) dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr) asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr)
oex := operationexecutor.NewOperationExecutor(volumePluginMgr) kubeClient := createTestClient()
oex := operationexecutor.NewOperationExecutor(kubeClient, volumePluginMgr)
reconciler := NewReconciler( reconciler := NewReconciler(
kubeClient,
false, /* controllerAttachDetachEnabled */ false, /* controllerAttachDetachEnabled */
reconcilerLoopSleepDuration, reconcilerLoopSleepDuration,
waitForAttachTimeout, waitForAttachTimeout,
@ -141,12 +148,13 @@ func Test_Run_Positive_VolumeAttachAndMount(t *testing.T) {
// Verifies there are no attach/detach calls. // Verifies there are no attach/detach calls.
func Test_Run_Positive_VolumeMountControllerAttachEnabled(t *testing.T) { func Test_Run_Positive_VolumeMountControllerAttachEnabled(t *testing.T) {
// Arrange // Arrange
nodeName := "myhostname"
volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
dsw := cache.NewDesiredStateOfWorld(volumePluginMgr) dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr) asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr)
oex := operationexecutor.NewOperationExecutor(volumePluginMgr) kubeClient := createTestClient()
oex := operationexecutor.NewOperationExecutor(kubeClient, volumePluginMgr)
reconciler := NewReconciler( reconciler := NewReconciler(
kubeClient,
true, /* controllerAttachDetachEnabled */ true, /* controllerAttachDetachEnabled */
reconcilerLoopSleepDuration, reconcilerLoopSleepDuration,
waitForAttachTimeout, waitForAttachTimeout,
@ -206,12 +214,13 @@ func Test_Run_Positive_VolumeMountControllerAttachEnabled(t *testing.T) {
// Verifies detach/unmount calls are issued. // Verifies detach/unmount calls are issued.
func Test_Run_Positive_VolumeAttachMountUnmountDetach(t *testing.T) { func Test_Run_Positive_VolumeAttachMountUnmountDetach(t *testing.T) {
// Arrange // Arrange
nodeName := "myhostname"
volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t) volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
dsw := cache.NewDesiredStateOfWorld(volumePluginMgr) dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr) asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr)
oex := operationexecutor.NewOperationExecutor(volumePluginMgr) kubeClient := createTestClient()
oex := operationexecutor.NewOperationExecutor(kubeClient, volumePluginMgr)
reconciler := NewReconciler( reconciler := NewReconciler(
kubeClient,
false, /* controllerAttachDetachEnabled */ false, /* controllerAttachDetachEnabled */
reconcilerLoopSleepDuration, reconcilerLoopSleepDuration,
waitForAttachTimeout, waitForAttachTimeout,
@@ -284,12 +293,13 @@ func Test_Run_Positive_VolumeAttachMountUnmountDetach(t *testing.T) {
 // Verifies there are no attach/detach calls made.
 func Test_Run_Positive_VolumeUnmountControllerAttachEnabled(t *testing.T) {
 	// Arrange
-	nodeName := "myhostname"
 	volumePluginMgr, fakePlugin := volumetesting.GetTestVolumePluginMgr(t)
 	dsw := cache.NewDesiredStateOfWorld(volumePluginMgr)
 	asw := cache.NewActualStateOfWorld(nodeName, volumePluginMgr)
-	oex := operationexecutor.NewOperationExecutor(volumePluginMgr)
+	kubeClient := createTestClient()
+	oex := operationexecutor.NewOperationExecutor(kubeClient, volumePluginMgr)
 	reconciler := NewReconciler(
+		kubeClient,
 		true, /* controllerAttachDetachEnabled */
 		reconcilerLoopSleepDuration,
 		waitForAttachTimeout,
@@ -402,3 +412,25 @@ func retryWithExponentialBackOff(initialDuration time.Duration, fn wait.Conditio
 	}
 	return wait.ExponentialBackoff(backoff, fn)
 }
+
+func createTestClient() *fake.Clientset {
+	fakeClient := &fake.Clientset{}
+	fakeClient.AddReactor("get", "nodes",
+		func(action core.Action) (bool, runtime.Object, error) {
+			return true, &api.Node{
+				ObjectMeta: api.ObjectMeta{Name: nodeName},
+				Status: api.NodeStatus{
+					VolumesAttached: []api.AttachedVolume{
+						{
+							Name:       "fake-plugin/volume-name",
+							DevicePath: "fake/path",
+						},
+					}},
+				Spec: api.NodeSpec{ExternalID: nodeName},
+			}, nil
+		})
+	fakeClient.AddReactor("*", "*", func(action core.Action) (bool, runtime.Object, error) {
+		return true, nil, fmt.Errorf("no reaction implemented for %s", action)
+	})
+	return fakeClient
+}
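The fake client above seeds `Node.Status.VolumesAttached` so the kubelet-side volume manager can read the device path the attach/detach controller reported, rather than querying the cloud provider. A minimal, self-contained sketch of that lookup, using a local stand-in type instead of the real `api`/`v1` packages:

```go
package main

import "fmt"

// AttachedVolume is a local stand-in for the v1.AttachedVolume type added in
// this PR: a unique volume name plus the device path reported on attach.
type AttachedVolume struct {
	Name       string
	DevicePath string
}

// devicePathFor mimics what the volume manager can now do: look up the
// controller-reported device path in Node.Status.VolumesAttached.
func devicePathFor(attached []AttachedVolume, name string) (string, bool) {
	for _, v := range attached {
		if v.Name == name {
			return v.DevicePath, true
		}
	}
	return "", false
}

func main() {
	status := []AttachedVolume{
		{Name: "fake-plugin/volume-name", DevicePath: "fake/path"},
	}
	if p, ok := devicePathFor(status, "fake-plugin/volume-name"); ok {
		fmt.Println(p)
	}
}
```

This mirrors the test fixture: the `WaitForAttach` implementations below fail fast when the looked-up device path is empty.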

View File

@@ -126,10 +126,13 @@ func NewVolumeManager(
 		volumePluginMgr:     volumePluginMgr,
 		desiredStateOfWorld: cache.NewDesiredStateOfWorld(volumePluginMgr),
 		actualStateOfWorld:  cache.NewActualStateOfWorld(hostName, volumePluginMgr),
-		operationExecutor:   operationexecutor.NewOperationExecutor(volumePluginMgr),
+		operationExecutor: operationexecutor.NewOperationExecutor(
+			kubeClient,
+			volumePluginMgr),
 	}
 	vm.reconciler = reconciler.NewReconciler(
+		kubeClient,
 		controllerAttachDetachEnabled,
 		reconcilerLoopSleepPeriod,
 		waitForAttachTimeout,

View File

@@ -23,9 +23,24 @@ package goroutinemap
 
 import (
 	"fmt"
+	"runtime"
 	"sync"
+	"time"
 
-	"k8s.io/kubernetes/pkg/util/runtime"
+	"github.com/golang/glog"
+	k8sRuntime "k8s.io/kubernetes/pkg/util/runtime"
+)
+
+const (
+	// initialDurationBeforeRetry is the amount of time after an error occurs
+	// that GoRoutineMap will refuse to allow another operation to start with
+	// the same operationName (if exponentialBackOffOnError is enabled). Each
+	// successive error doubles the wait before another retry is permitted.
+	initialDurationBeforeRetry time.Duration = 500 * time.Millisecond
+
+	// maxDurationBeforeRetry is the maximum amount of time that
+	// durationBeforeRetry will grow to due to exponential backoff.
+	maxDurationBeforeRetry time.Duration = 2 * time.Minute
 )
 // GoRoutineMap defines the supported set of operations.
@@ -36,7 +51,7 @@ type GoRoutineMap interface {
 	// go routine is terminated and the operationName is removed from the list
 	// of executing operations allowing a new operation to be started with the
 	// same name without error.
-	Run(operationName string, operation func() error) error
+	Run(operationName string, operationFunc func() error) error
 
 	// Wait blocks until all operations are completed. This is typically
 	// necessary during tests - the test should wait until all operations finish
@@ -45,50 +60,127 @@
 	}
 }
 // NewGoRoutineMap returns a new instance of GoRoutineMap.
-func NewGoRoutineMap() GoRoutineMap {
+func NewGoRoutineMap(exponentialBackOffOnError bool) GoRoutineMap {
 	return &goRoutineMap{
-		operations: make(map[string]bool),
+		operations:                make(map[string]operation),
+		exponentialBackOffOnError: exponentialBackOffOnError,
 	}
 }
 
 type goRoutineMap struct {
-	operations map[string]bool
-	sync.Mutex
-	wg sync.WaitGroup
+	operations                map[string]operation
+	exponentialBackOffOnError bool
+	wg                        sync.WaitGroup
+	sync.Mutex
 }
-func (grm *goRoutineMap) Run(operationName string, operation func() error) error {
+type operation struct {
+	operationPending    bool
+	lastError           error
+	lastErrorTime       time.Time
+	durationBeforeRetry time.Duration
+}
+
+func (grm *goRoutineMap) Run(operationName string, operationFunc func() error) error {
 	grm.Lock()
 	defer grm.Unlock()
-	if grm.operations[operationName] {
-		// Operation with name exists
-		return newAlreadyExistsError(operationName)
-	}
-	grm.operations[operationName] = true
+	existingOp, exists := grm.operations[operationName]
+	if exists {
+		// Operation with name exists
+		if existingOp.operationPending {
+			return newAlreadyExistsError(operationName)
+		}
+		if time.Since(existingOp.lastErrorTime) <= existingOp.durationBeforeRetry {
+			return newExponentialBackoffError(operationName, existingOp)
+		}
+	}
+
+	grm.operations[operationName] = operation{
+		operationPending:    true,
+		lastError:           existingOp.lastError,
+		lastErrorTime:       existingOp.lastErrorTime,
+		durationBeforeRetry: existingOp.durationBeforeRetry,
+	}
 	grm.wg.Add(1)
-	go func() {
-		defer grm.operationComplete(operationName)
-		defer runtime.HandleCrash()
-		operation()
+	go func() (err error) {
+		// Handle unhandled panics (very unlikely)
+		defer k8sRuntime.HandleCrash()
+		// Handle completion of and error, if any, from operationFunc()
+		defer grm.operationComplete(operationName, &err)
+		// Handle panic, if any, from operationFunc()
+		defer recoverFromPanic(operationName, &err)
+		return operationFunc()
 	}()
 
 	return nil
 }
-func (grm *goRoutineMap) operationComplete(operationName string) {
+func (grm *goRoutineMap) operationComplete(operationName string, err *error) {
 	defer grm.wg.Done()
 	grm.Lock()
 	defer grm.Unlock()
-	delete(grm.operations, operationName)
+	if *err == nil || !grm.exponentialBackOffOnError {
+		// Operation completed without error, or exponentialBackOffOnError disabled
+		delete(grm.operations, operationName)
+		if *err != nil {
+			// Log error
+			glog.Errorf("operation for %q failed with: %v",
+				operationName,
+				*err)
+		}
+	} else {
+		// Operation completed with error and exponentialBackOffOnError enabled
+		existingOp := grm.operations[operationName]
+		if existingOp.durationBeforeRetry == 0 {
+			existingOp.durationBeforeRetry = initialDurationBeforeRetry
+		} else {
+			existingOp.durationBeforeRetry = 2 * existingOp.durationBeforeRetry
+			if existingOp.durationBeforeRetry > maxDurationBeforeRetry {
+				existingOp.durationBeforeRetry = maxDurationBeforeRetry
+			}
+		}
+		existingOp.lastError = *err
+		existingOp.lastErrorTime = time.Now()
+		existingOp.operationPending = false
+		grm.operations[operationName] = existingOp
+
+		// Log error
+		glog.Errorf("Operation for %q failed. No retries permitted until %v (durationBeforeRetry %v). error: %v",
+			operationName,
+			existingOp.lastErrorTime.Add(existingOp.durationBeforeRetry),
+			existingOp.durationBeforeRetry,
+			*err)
+	}
 }
 func (grm *goRoutineMap) Wait() {
 	grm.wg.Wait()
 }
 
+func recoverFromPanic(operationName string, err *error) {
+	if r := recover(); r != nil {
+		callers := ""
+		for i := 0; true; i++ {
+			_, file, line, ok := runtime.Caller(i)
+			if !ok {
+				break
+			}
+			callers = callers + fmt.Sprintf("%v:%v\n", file, line)
+		}
+		*err = fmt.Errorf(
+			"operation for %q recovered from panic %q. (err=%v) Call stack:\n%v",
+			operationName,
+			r,
+			*err,
+			callers)
+	}
+}
+
-// alreadyExistsError is specific error returned when NewGoRoutine()
-// detects that operation with given name is already running.
+// alreadyExistsError is the error returned when NewGoRoutine() detects that
+// an operation with the given name is already running.
 type alreadyExistsError struct {
 	operationName string
 }
@@ -96,7 +188,7 @@ type alreadyExistsError struct {
 var _ error = alreadyExistsError{}
 
 func (err alreadyExistsError) Error() string {
-	return fmt.Sprintf("Failed to create operation with name %q. An operation with that name already exists", err.operationName)
+	return fmt.Sprintf("Failed to create operation with name %q. An operation with that name is already executing.", err.operationName)
 }
 
 func newAlreadyExistsError(operationName string) error {
@@ -113,3 +205,43 @@ func IsAlreadyExists(err error) bool {
 		return false
 	}
 }
+
+// exponentialBackoffError is the error returned when NewGoRoutine() detects
+// that the previous operation for the given name failed less than
+// durationBeforeRetry ago.
+type exponentialBackoffError struct {
+	operationName string
+	failedOp      operation
+}
+
+var _ error = exponentialBackoffError{}
+
+func (err exponentialBackoffError) Error() string {
+	return fmt.Sprintf(
+		"Failed to create operation with name %q. An operation with that name failed at %v. No retries permitted until %v (%v). Last error: %q.",
+		err.operationName,
+		err.failedOp.lastErrorTime,
+		err.failedOp.lastErrorTime.Add(err.failedOp.durationBeforeRetry),
+		err.failedOp.durationBeforeRetry,
+		err.failedOp.lastError)
+}
+
+func newExponentialBackoffError(
+	operationName string, failedOp operation) error {
+	return exponentialBackoffError{
+		operationName: operationName,
+		failedOp:      failedOp,
+	}
+}
+
+// IsExponentialBackoff returns true if an error returned from NewGoRoutine()
+// indicates that the previous operation for the given name failed less than
+// durationBeforeRetry ago.
+func IsExponentialBackoff(err error) bool {
+	switch err.(type) {
+	case exponentialBackoffError:
+		return true
+	default:
+		return false
+	}
+}

View File

@@ -24,14 +24,41 @@ import (
 	"k8s.io/kubernetes/pkg/util/wait"
 )
 
-// testTimeout is a timeout of goroutines to finish. This _should_ be just a
-// "context switch" and it should take several ms, however, Clayton says "We
-// have had flakes due to tests that assumed that 15s is long enough to sleep")
-const testTimeout = 1 * time.Minute
+const (
+	// testTimeout is a timeout for goroutines to finish. This _should_ be just
+	// a "context switch" and should take several ms, however, Clayton says "We
+	// have had flakes due to tests that assumed that 15s is long enough to sleep".
+	testTimeout time.Duration = 1 * time.Minute
+
+	// initialOperationWaitTimeShort is the initial amount of time the test will
+	// wait for an operation to complete (each successive failure results in
+	// exponential backoff).
+	initialOperationWaitTimeShort time.Duration = 20 * time.Millisecond
+
+	// initialOperationWaitTimeLong is a longer initial wait, used when the
+	// operation under test is expected to back off exponentially before
+	// succeeding.
+	initialOperationWaitTimeLong time.Duration = 500 * time.Millisecond
+)
 func Test_NewGoRoutineMap_Positive_SingleOp(t *testing.T) {
 	// Arrange
-	grm := NewGoRoutineMap()
+	grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
+	operationName := "operation-name"
+	operation := func() error { return nil }
+
+	// Act
+	err := grm.Run(operationName, operation)
+
+	// Assert
+	if err != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err)
+	}
+}
+
+func Test_NewGoRoutineMap_Positive_SingleOpWithExpBackoff(t *testing.T) {
+	// Arrange
+	grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
 	operationName := "operation-name"
 	operation := func() error { return nil }
@@ -46,7 +73,7 @@ func Test_NewGoRoutineMap_Positive_SingleOp(t *testing.T) {
 func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstCompletes(t *testing.T) {
 	// Arrange
-	grm := NewGoRoutineMap()
+	grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
 	operationName := "operation-name"
 	operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
 	operation1 := generateCallbackFunc(operation1DoneCh)
@@ -59,11 +86,43 @@ func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstCompletes(t *testing.T) {
 
 	// Act
 	err2 := retryWithExponentialBackOff(
-		time.Duration(20*time.Millisecond),
+		time.Duration(initialOperationWaitTimeShort),
 		func() (bool, error) {
 			err := grm.Run(operationName, operation2)
 			if err != nil {
-				t.Logf("Warning: NewGoRoutine failed. Expected: <no error> Actual: <%v>. Will retry.", err)
+				t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
+				return false, nil
+			}
+			return true, nil
+		},
+	)
+
+	// Assert
+	if err2 != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err2)
+	}
+}
+
+func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstCompletesWithExpBackoff(t *testing.T) {
+	// Arrange
+	grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
+	operationName := "operation-name"
+	operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
+	operation1 := generateCallbackFunc(operation1DoneCh)
+	err1 := grm.Run(operationName, operation1)
+	if err1 != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err1)
+	}
+	operation2 := generateNoopFunc()
+	<-operation1DoneCh // Force operation1 to complete
+
+	// Act
+	err2 := retryWithExponentialBackOff(
+		time.Duration(initialOperationWaitTimeShort),
+		func() (bool, error) {
+			err := grm.Run(operationName, operation2)
+			if err != nil {
+				t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
 				return false, nil
 			}
 			return true, nil
@@ -78,7 +137,7 @@ func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstCompletes(t *testing.T) {
 func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstPanics(t *testing.T) {
 	// Arrange
-	grm := NewGoRoutineMap()
+	grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
 	operationName := "operation-name"
 	operation1 := generatePanicFunc()
 	err1 := grm.Run(operationName, operation1)
@@ -89,11 +148,41 @@ func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstPanics(t *testing.T) {
 
 	// Act
 	err2 := retryWithExponentialBackOff(
-		time.Duration(20*time.Millisecond),
+		time.Duration(initialOperationWaitTimeShort),
 		func() (bool, error) {
 			err := grm.Run(operationName, operation2)
 			if err != nil {
-				t.Logf("Warning: NewGoRoutine failed. Expected: <no error> Actual: <%v>. Will retry.", err)
+				t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
+				return false, nil
+			}
+			return true, nil
+		},
+	)
+
+	// Assert
+	if err2 != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err2)
+	}
+}
+
+func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstPanicsWithExpBackoff(t *testing.T) {
+	// Arrange
+	grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
+	operationName := "operation-name"
+	operation1 := generatePanicFunc()
+	err1 := grm.Run(operationName, operation1)
+	if err1 != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err1)
+	}
+	operation2 := generateNoopFunc()
+
+	// Act
+	err2 := retryWithExponentialBackOff(
+		time.Duration(initialOperationWaitTimeLong), // Longer duration to accommodate backoff
+		func() (bool, error) {
+			err := grm.Run(operationName, operation2)
+			if err != nil {
+				t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
 				return false, nil
 			}
 			return true, nil
@@ -108,7 +197,31 @@ func Test_NewGoRoutineMap_Positive_SecondOpAfterFirstPanics(t *testing.T) {
 func Test_NewGoRoutineMap_Negative_SecondOpBeforeFirstCompletes(t *testing.T) {
 	// Arrange
-	grm := NewGoRoutineMap()
+	grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
+	operationName := "operation-name"
+	operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
+	operation1 := generateWaitFunc(operation1DoneCh)
+	err1 := grm.Run(operationName, operation1)
+	if err1 != nil {
+		t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err1)
+	}
+	operation2 := generateNoopFunc()
+
+	// Act
+	err2 := grm.Run(operationName, operation2)
+
+	// Assert
+	if err2 == nil {
+		t.Fatalf("NewGoRoutine did not fail. Expected: <Failed to create operation with name \"%s\". An operation with that name already exists.> Actual: <no error>", operationName)
+	}
+	if !IsAlreadyExists(err2) {
+		t.Fatalf("NewGoRoutine did not return alreadyExistsError, got: %v", err2)
+	}
+}
+
+func Test_NewGoRoutineMap_Negative_SecondOpBeforeFirstCompletesWithExpBackoff(t *testing.T) {
+	// Arrange
+	grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
 	operationName := "operation-name"
 	operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
 	operation1 := generateWaitFunc(operation1DoneCh)
@@ -132,7 +245,7 @@ func Test_NewGoRoutineMap_Negative_SecondOpBeforeFirstCompletes(t *testing.T) {
 func Test_NewGoRoutineMap_Positive_ThirdOpAfterFirstCompletes(t *testing.T) {
 	// Arrange
-	grm := NewGoRoutineMap()
+	grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
 	operationName := "operation-name"
 	operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
 	operation1 := generateWaitFunc(operation1DoneCh)
@@ -157,11 +270,11 @@ func Test_NewGoRoutineMap_Positive_ThirdOpAfterFirstCompletes(t *testing.T) {
 
 	// Act
 	operation1DoneCh <- true // Force operation1 to complete
 	err3 := retryWithExponentialBackOff(
-		time.Duration(20*time.Millisecond),
+		time.Duration(initialOperationWaitTimeShort),
 		func() (bool, error) {
 			err := grm.Run(operationName, operation3)
 			if err != nil {
-				t.Logf("Warning: NewGoRoutine failed. Expected: <no error> Actual: <%v>. Will retry.", err)
+				t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
 				return false, nil
 			}
 			return true, nil
@@ -174,6 +287,146 @@ func Test_NewGoRoutineMap_Positive_ThirdOpAfterFirstCompletes(t *testing.T) {
 	}
 }
func Test_NewGoRoutineMap_Positive_ThirdOpAfterFirstCompletesWithExpBackoff(t *testing.T) {
// Arrange
grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
operationName := "operation-name"
operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
operation1 := generateWaitFunc(operation1DoneCh)
err1 := grm.Run(operationName, operation1)
if err1 != nil {
t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err1)
}
operation2 := generateNoopFunc()
operation3 := generateNoopFunc()
// Act
err2 := grm.Run(operationName, operation2)
// Assert
if err2 == nil {
t.Fatalf("NewGoRoutine did not fail. Expected: <Failed to create operation with name \"%s\". An operation with that name already exists.> Actual: <no error>", operationName)
}
if !IsAlreadyExists(err2) {
t.Fatalf("NewGoRoutine did not return alreadyExistsError, got: %v", err2)
}
// Act
operation1DoneCh <- true // Force operation1 to complete
err3 := retryWithExponentialBackOff(
time.Duration(initialOperationWaitTimeShort),
func() (bool, error) {
err := grm.Run(operationName, operation3)
if err != nil {
t.Logf("Warning: NewGoRoutine failed with %v. Will retry.", err)
return false, nil
}
return true, nil
},
)
// Assert
if err3 != nil {
t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err3)
}
}
func Test_NewGoRoutineMap_Positive_WaitEmpty(t *testing.T) {
	// Test that Wait() on an empty GoRoutineMap always succeeds without blocking
// Arrange
grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Assert
err := waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Errorf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
func Test_NewGoRoutineMap_Positive_WaitEmptyWithExpBackoff(t *testing.T) {
	// Test that Wait() on an empty GoRoutineMap always succeeds without blocking
// Arrange
grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Assert
err := waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Errorf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
func Test_NewGoRoutineMap_Positive_Wait(t *testing.T) {
// Test that Wait() really blocks until the last operation succeeds
// Arrange
grm := NewGoRoutineMap(false /* exponentialBackOffOnError */)
operationName := "operation-name"
operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
operation1 := generateWaitFunc(operation1DoneCh)
err := grm.Run(operationName, operation1)
if err != nil {
t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err)
}
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Finish the operation
operation1DoneCh <- true
// Assert
err = waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Fatalf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
func Test_NewGoRoutineMap_Positive_WaitWithExpBackoff(t *testing.T) {
// Test that Wait() really blocks until the last operation succeeds
// Arrange
grm := NewGoRoutineMap(true /* exponentialBackOffOnError */)
operationName := "operation-name"
operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
operation1 := generateWaitFunc(operation1DoneCh)
err := grm.Run(operationName, operation1)
if err != nil {
t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err)
}
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Finish the operation
operation1DoneCh <- true
// Assert
err = waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Fatalf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
 func generateCallbackFunc(done chan<- interface{}) func() error {
 	return func() error {
 		done <- true
@@ -208,54 +461,6 @@ func retryWithExponentialBackOff(initialDuration time.Duration, fn wait.Conditio
 	return wait.ExponentialBackoff(backoff, fn)
 }
func Test_NewGoRoutineMap_Positive_WaitEmpty(t *testing.T) {
// Test than Wait() on empty GoRoutineMap always succeeds without blocking
// Arrange
grm := NewGoRoutineMap()
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Assert
err := waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Errorf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
func Test_NewGoRoutineMap_Positive_Wait(t *testing.T) {
// Test that Wait() really blocks until the last operation succeeds
// Arrange
grm := NewGoRoutineMap()
operationName := "operation-name"
operation1DoneCh := make(chan interface{}, 0 /* bufferSize */)
operation1 := generateWaitFunc(operation1DoneCh)
err := grm.Run(operationName, operation1)
if err != nil {
t.Fatalf("NewGoRoutine failed. Expected: <no error> Actual: <%v>", err)
}
// Act
waitDoneCh := make(chan interface{}, 1)
go func() {
grm.Wait()
waitDoneCh <- true
}()
// Finish the operation
operation1DoneCh <- true
// Assert
err = waitChannelWithTimeout(waitDoneCh, testTimeout)
if err != nil {
t.Fatalf("Error waiting for GoRoutineMap.Wait: %v", err)
}
}
 func waitChannelWithTimeout(ch <-chan interface{}, timeout time.Duration) error {
 	timer := time.NewTimer(timeout)

View File

@@ -41,46 +41,31 @@ func (plugin *awsElasticBlockStorePlugin) NewAttacher() (volume.Attacher, error)
 	return &awsElasticBlockStoreAttacher{host: plugin.host}, nil
 }
 
-func (attacher *awsElasticBlockStoreAttacher) Attach(spec *volume.Spec, hostName string) error {
+func (attacher *awsElasticBlockStoreAttacher) Attach(spec *volume.Spec, hostName string) (string, error) {
 	volumeSource, readOnly, err := getVolumeSource(spec)
 	if err != nil {
-		return err
+		return "", err
 	}
 
 	volumeID := volumeSource.VolumeID
-	awsCloud, err := getCloudProvider(attacher.host.GetCloudProvider())
-	if err != nil {
-		return err
-	}
-
-	attached, err := awsCloud.DiskIsAttached(volumeID, hostName)
-	if err != nil {
-		// Log error and continue with attach
-		glog.Errorf(
-			"Error checking if volume (%q) is already attached to current node (%q). Will continue and try attach anyway. err=%v",
-			volumeID, hostName, err)
-	}
-
-	if err == nil && attached {
-		// Volume is already attached to node.
-		glog.Infof("Attach operation is successful. volume %q is already attached to node %q.", volumeID, hostName)
-		return nil
-	}
-
-	if _, err = awsCloud.AttachDisk(volumeID, hostName, readOnly); err != nil {
-		glog.Errorf("Error attaching volume %q: %+v", volumeID, err)
-		return err
-	}
-	return nil
-}
-
-func (attacher *awsElasticBlockStoreAttacher) WaitForAttach(spec *volume.Spec, timeout time.Duration) (string, error) {
 	awsCloud, err := getCloudProvider(attacher.host.GetCloudProvider())
 	if err != nil {
 		return "", err
 	}
+
+	// awsCloud.AttachDisk checks if disk is already attached to node and
+	// succeeds in that case, so no need to do that separately.
+	devicePath, err := awsCloud.AttachDisk(volumeID, hostName, readOnly)
+	if err != nil {
+		glog.Errorf("Error attaching volume %q: %+v", volumeID, err)
+		return "", err
+	}
+
+	return devicePath, nil
+}
+
+func (attacher *awsElasticBlockStoreAttacher) WaitForAttach(spec *volume.Spec, devicePath string, timeout time.Duration) (string, error) {
 	volumeSource, _, err := getVolumeSource(spec)
 	if err != nil {
 		return "", err
@@ -92,11 +77,8 @@ func (attacher *awsElasticBlockStoreAttacher) WaitForAttach(spec *volume.Spec, t
 		partition = strconv.Itoa(int(volumeSource.Partition))
 	}
 
-	devicePath := ""
-	if d, err := awsCloud.GetDiskPath(volumeID); err == nil {
-		devicePath = d
-	} else {
-		glog.Errorf("GetDiskPath %q gets error %v", volumeID, err)
+	if devicePath == "" {
+		return "", fmt.Errorf("WaitForAttach failed for AWS Volume %q: devicePath is empty.", volumeID)
 	}
 
 	ticker := time.NewTicker(checkSleepDuration)
@@ -108,13 +90,6 @@ func (attacher *awsElasticBlockStoreAttacher) WaitForAttach(spec *volume.Spec, t
 		select {
 		case <-ticker.C:
 			glog.V(5).Infof("Checking AWS Volume %q is attached.", volumeID)
-			if devicePath == "" {
-				if d, err := awsCloud.GetDiskPath(volumeID); err == nil {
-					devicePath = d
-				} else {
-					glog.Errorf("GetDiskPath %q gets error %v", volumeID, err)
-				}
-			}
 			if devicePath != "" {
 				devicePaths := getDiskByIdPaths(partition, devicePath)
 				path, err := verifyDevicePath(devicePaths)

View File

@@ -45,25 +45,25 @@ func (plugin *cinderPlugin) NewAttacher() (volume.Attacher, error) {
 	return &cinderDiskAttacher{host: plugin.host}, nil
 }
 
-func (attacher *cinderDiskAttacher) Attach(spec *volume.Spec, hostName string) error {
+func (attacher *cinderDiskAttacher) Attach(spec *volume.Spec, hostName string) (string, error) {
 	volumeSource, _, err := getVolumeSource(spec)
 	if err != nil {
-		return err
+		return "", err
 	}
 
 	volumeID := volumeSource.VolumeID
 
 	cloud, err := getCloudProvider(attacher.host.GetCloudProvider())
 	if err != nil {
-		return err
+		return "", err
 	}
 	instances, res := cloud.Instances()
 	if !res {
-		return fmt.Errorf("failed to list openstack instances")
+		return "", fmt.Errorf("failed to list openstack instances")
 	}
 	instanceid, err := instances.InstanceID(hostName)
 	if err != nil {
-		return err
+		return "", err
 	}
 	if ind := strings.LastIndex(instanceid, "/"); ind >= 0 {
 		instanceid = instanceid[(ind + 1):]
@@ -71,7 +71,7 @@ func (attacher *cinderDiskAttacher) Attach(spec *volume.Spec, hostName string) e
 	attached, err := cloud.DiskIsAttached(volumeID, instanceid)
 	if err != nil {
 		// Log error and continue with attach
-		glog.Errorf(
+		glog.Warningf(
 			"Error checking if volume (%q) is already attached to current node (%q). Will continue and try attach anyway. err=%v",
 			volumeID, instanceid, err)
 	}
@@ -79,38 +79,35 @@ func (attacher *cinderDiskAttacher) Attach(spec *volume.Spec, hostName string) e
 	if err == nil && attached {
 		// Volume is already attached to node.
 		glog.Infof("Attach operation is successful. volume %q is already attached to node %q.", volumeID, instanceid)
-		return nil
-	}
-
-	_, err = cloud.AttachDisk(instanceid, volumeID)
-	if err != nil {
-		glog.Infof("attach volume %q to instance %q gets %v", volumeID, instanceid, err)
-	}
-	glog.Infof("attached volume %q to instance %q", volumeID, instanceid)
-	return err
-}
-
-func (attacher *cinderDiskAttacher) WaitForAttach(spec *volume.Spec, timeout time.Duration) (string, error) {
-	cloud, err := getCloudProvider(attacher.host.GetCloudProvider())
-	if err != nil {
-		return "", err
-	}
+	} else {
+		_, err = cloud.AttachDisk(instanceid, volumeID)
+		if err == nil {
+			glog.Infof("Attach operation successful: volume %q attached to node %q.", volumeID, instanceid)
+		} else {
+			glog.Infof("Attach volume %q to instance %q failed with %v", volumeID, instanceid, err)
+			return "", err
+		}
+	}
+
+	devicePath, err := cloud.GetAttachmentDiskPath(instanceid, volumeID)
+	if err != nil {
+		glog.Infof("Attach volume %q to instance %q failed with %v", volumeID, instanceid, err)
+		return "", err
+	}
+
+	return devicePath, err
+}
+
+func (attacher *cinderDiskAttacher) WaitForAttach(spec *volume.Spec, devicePath string, timeout time.Duration) (string, error) {
 	volumeSource, _, err := getVolumeSource(spec)
 	if err != nil {
 		return "", err
 	}
 
 	volumeID := volumeSource.VolumeID
 
-	instanceid, err := cloud.InstanceID()
-	if err != nil {
-		return "", err
-	}
-	devicePath := ""
-	if d, err := cloud.GetAttachmentDiskPath(instanceid, volumeID); err == nil {
-		devicePath = d
-	} else {
-		glog.Errorf("%q GetAttachmentDiskPath (%q) gets error %v", instanceid, volumeID, err)
+	if devicePath == "" {
+		return "", fmt.Errorf("WaitForAttach failed for Cinder disk %q: devicePath is empty.", volumeID)
 	}
 
 	ticker := time.NewTicker(checkSleepDuration)
@ -123,16 +120,6 @@ func (attacher *cinderDiskAttacher) WaitForAttach(spec *volume.Spec, timeout tim
select { select {
case <-ticker.C: case <-ticker.C:
glog.V(5).Infof("Checking Cinder disk %q is attached.", volumeID) glog.V(5).Infof("Checking Cinder disk %q is attached.", volumeID)
if devicePath == "" {
if d, err := cloud.GetAttachmentDiskPath(instanceid, volumeID); err == nil {
devicePath = d
} else {
glog.Errorf("%q GetAttachmentDiskPath (%q) gets error %v", instanceid, volumeID, err)
}
}
if devicePath == "" {
glog.V(5).Infof("Cinder disk (%q) is not attached yet", volumeID)
} else {
probeAttachedVolume() probeAttachedVolume()
exists, err := pathExists(devicePath) exists, err := pathExists(devicePath)
if exists && err == nil { if exists && err == nil {
@ -142,7 +129,6 @@ func (attacher *cinderDiskAttacher) WaitForAttach(spec *volume.Spec, timeout tim
//Log error, if any, and continue checking periodically //Log error, if any, and continue checking periodically
glog.Errorf("Error Stat Cinder disk (%q) is attached: %v", volumeID, err) glog.Errorf("Error Stat Cinder disk (%q) is attached: %v", volumeID, err)
} }
}
case <-timer.C: case <-timer.C:
return "", fmt.Errorf("Could not find attached Cinder disk %q. Timeout waiting for mount paths to be created.", volumeID) return "", fmt.Errorf("Could not find attached Cinder disk %q. Timeout waiting for mount paths to be created.", volumeID)
} }

View File

@@ -60,10 +60,10 @@ func (plugin *gcePersistentDiskPlugin) NewAttacher() (volume.Attacher, error) {
 // Callers are responsible for retrying on failure.
 // Callers are responsible for thread safety between concurrent attach and
 // detach operations.
-func (attacher *gcePersistentDiskAttacher) Attach(spec *volume.Spec, hostName string) error {
+func (attacher *gcePersistentDiskAttacher) Attach(spec *volume.Spec, hostName string) (string, error) {
 	volumeSource, readOnly, err := getVolumeSource(spec)
 	if err != nil {
-		return err
+		return "", err
 	}

 	pdName := volumeSource.PDName
@@ -79,18 +79,17 @@ func (attacher *gcePersistentDiskAttacher) Attach(spec *volume.Spec, hostName st
 	if err == nil && attached {
 		// Volume is already attached to node.
 		glog.Infof("Attach operation is successful. PD %q is already attached to node %q.", pdName, hostName)
-		return nil
-	}
-
-	if err = attacher.gceDisks.AttachDisk(pdName, hostName, readOnly); err != nil {
-		glog.Errorf("Error attaching PD %q to node %q: %+v", pdName, hostName, err)
-		return err
+	} else {
+		if err := attacher.gceDisks.AttachDisk(pdName, hostName, readOnly); err != nil {
+			glog.Errorf("Error attaching PD %q to node %q: %+v", pdName, hostName, err)
+			return "", err
+		}
 	}

-	return nil
+	return path.Join(diskByIdPath, diskGooglePrefix+pdName), nil
 }

-func (attacher *gcePersistentDiskAttacher) WaitForAttach(spec *volume.Spec, timeout time.Duration) (string, error) {
+func (attacher *gcePersistentDiskAttacher) WaitForAttach(spec *volume.Spec, devicePath string, timeout time.Duration) (string, error) {
 	ticker := time.NewTicker(checkSleepDuration)
 	defer ticker.Stop()
 	timer := time.NewTimer(timeout)
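Note how the GCE `Attach` now returns a device path computed purely from the PD name (`path.Join(diskByIdPath, diskGooglePrefix+pdName)`), so the attach/detach controller can report it in `Node.Status.AttachedVolumes` without querying the node. A sketch of that derivation, under the assumption that the two constants hold the conventional udev values (`/dev/disk/by-id` and `google-`; the real constants live in the `gce_pd` package):

```go
package main

import (
	"fmt"
	"path"
)

// Assumed values for the diskByIdPath and diskGooglePrefix constants
// referenced in the diff.
const (
	diskByIdPath     = "/dev/disk/by-id"
	diskGooglePrefix = "google-"
)

// devicePathForPD mirrors the return statement added to Attach: the path
// is derived deterministically from the disk name alone.
func devicePathForPD(pdName string) string {
	return path.Join(diskByIdPath, diskGooglePrefix+pdName)
}

func main() {
	fmt.Println(devicePathForPD("disk")) // /dev/disk/by-id/google-disk
}
```

This deterministic path is exactly what the updated tests below assert against.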

View File

@@ -18,6 +18,7 @@ package gce_pd

 import (
 	"errors"
+	"fmt"
 	"testing"

 	"k8s.io/kubernetes/pkg/api"
@@ -86,7 +87,11 @@ func TestAttachDetach(t *testing.T) {
 		attach: attachCall{diskName, instanceID, readOnly, nil},
 		test: func(testcase *testcase) error {
 			attacher := newAttacher(testcase)
-			return attacher.Attach(spec, instanceID)
+			devicePath, err := attacher.Attach(spec, instanceID)
+			if devicePath != "/dev/disk/by-id/google-disk" {
+				return fmt.Errorf("devicePath incorrect. Expected<\"/dev/disk/by-id/google-disk\"> Actual: <%q>", devicePath)
+			}
+			return err
 		},
 	},
@@ -96,7 +101,11 @@ func TestAttachDetach(t *testing.T) {
 		diskIsAttached: diskIsAttachedCall{diskName, instanceID, true, nil},
 		test: func(testcase *testcase) error {
 			attacher := newAttacher(testcase)
-			return attacher.Attach(spec, instanceID)
+			devicePath, err := attacher.Attach(spec, instanceID)
+			if devicePath != "/dev/disk/by-id/google-disk" {
+				return fmt.Errorf("devicePath incorrect. Expected<\"/dev/disk/by-id/google-disk\"> Actual: <%q>", devicePath)
+			}
+			return err
 		},
 	},
@@ -107,7 +116,11 @@ func TestAttachDetach(t *testing.T) {
 		attach: attachCall{diskName, instanceID, readOnly, nil},
 		test: func(testcase *testcase) error {
 			attacher := newAttacher(testcase)
-			return attacher.Attach(spec, instanceID)
+			devicePath, err := attacher.Attach(spec, instanceID)
+			if devicePath != "/dev/disk/by-id/google-disk" {
+				return fmt.Errorf("devicePath incorrect. Expected<\"/dev/disk/by-id/google-disk\"> Actual: <%q>", devicePath)
+			}
+			return err
 		},
 	},
@@ -118,7 +131,11 @@ func TestAttachDetach(t *testing.T) {
 		attach: attachCall{diskName, instanceID, readOnly, attachError},
 		test: func(testcase *testcase) error {
 			attacher := newAttacher(testcase)
-			return attacher.Attach(spec, instanceID)
+			devicePath, err := attacher.Attach(spec, instanceID)
+			if devicePath != "" {
+				return fmt.Errorf("devicePath incorrect. Expected<\"\"> Actual: <%q>", devicePath)
+			}
+			return err
 		},
 		expectedReturn: attachError,
 	},
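The four table-driven test cases above repeat the same devicePath comparison verbatim. The check could be factored into one helper; a hypothetical sketch, not part of the PR:

```go
package main

import "fmt"

// checkDevicePath wraps the assertion repeated in each test case: compare
// the devicePath returned by Attach against the expected value, then
// propagate the Attach error itself.
func checkDevicePath(devicePath, expected string, attachErr error) error {
	if devicePath != expected {
		return fmt.Errorf("devicePath incorrect. Expected<%q> Actual: <%q>", expected, devicePath)
	}
	return attachErr
}

func main() {
	// Matching path, no attach error: the case passes.
	fmt.Println(checkDevicePath("/dev/disk/by-id/google-disk", "/dev/disk/by-id/google-disk", nil))
	// Wrong path: the mismatch error wins.
	fmt.Println(checkDevicePath("", "/dev/disk/by-id/google-disk", nil))
}
```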

View File

@@ -358,11 +358,11 @@ func (fv *FakeVolume) TearDownAt(dir string) error {
 	return os.RemoveAll(dir)
 }

-func (fv *FakeVolume) Attach(spec *Spec, hostName string) error {
+func (fv *FakeVolume) Attach(spec *Spec, hostName string) (string, error) {
 	fv.Lock()
 	defer fv.Unlock()
 	fv.AttachCallCount++
-	return nil
+	return "", nil
 }

 func (fv *FakeVolume) GetAttachCallCount() int {
@@ -371,7 +371,7 @@ func (fv *FakeVolume) GetAttachCallCount() int {
 	return fv.AttachCallCount
 }

-func (fv *FakeVolume) WaitForAttach(spec *Spec, spectimeout time.Duration) (string, error) {
+func (fv *FakeVolume) WaitForAttach(spec *Spec, devicePath string, spectimeout time.Duration) (string, error) {
 	fv.Lock()
 	defer fv.Unlock()
 	fv.WaitForAttachCallCount++

View File

@ -25,6 +25,7 @@ import (
"github.com/golang/glog" "github.com/golang/glog"
"k8s.io/kubernetes/pkg/api" "k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
"k8s.io/kubernetes/pkg/types" "k8s.io/kubernetes/pkg/types"
"k8s.io/kubernetes/pkg/util/goroutinemap" "k8s.io/kubernetes/pkg/util/goroutinemap"
"k8s.io/kubernetes/pkg/volume" "k8s.io/kubernetes/pkg/volume"
@ -49,6 +50,9 @@ import (
// Once the operation is started, since it is executed asynchronously, // Once the operation is started, since it is executed asynchronously,
// errors are simply logged and the goroutine is terminated without updating // errors are simply logged and the goroutine is terminated without updating
// actualStateOfWorld (callers are responsible for retrying as needed). // actualStateOfWorld (callers are responsible for retrying as needed).
//
// Some of these operations may result in calls to the API server; callers are
// responsible for rate limiting on errors.
type OperationExecutor interface { type OperationExecutor interface {
// AttachVolume attaches the volume to the node specified in volumeToAttach. // AttachVolume attaches the volume to the node specified in volumeToAttach.
// It then updates the actual state of the world to reflect that. // It then updates the actual state of the world to reflect that.
@ -78,14 +82,29 @@ type OperationExecutor interface {
// attachable volumes only, freeing it for detach. It then updates the // attachable volumes only, freeing it for detach. It then updates the
// actual state of the world to reflect that. // actual state of the world to reflect that.
UnmountDevice(deviceToDetach AttachedVolume, actualStateOfWorld ActualStateOfWorldMounterUpdater) error UnmountDevice(deviceToDetach AttachedVolume, actualStateOfWorld ActualStateOfWorldMounterUpdater) error
// VerifyControllerAttachedVolume checks if the specified volume is present
// in the specified nodes AttachedVolumes Status field. It uses kubeClient
// to fetch the node object.
// If the volume is found, the actual state of the world is updated to mark
// the volume as attached.
// If the volume does not implement the attacher interface, it is assumed to
// be attached and the the actual state of the world is updated accordingly.
// If the volume is not found or there is an error (fetching the node
// object, for example) then an error is returned which triggers exponential
// back off on retries.
VerifyControllerAttachedVolume(volumeToMount VolumeToMount, nodeName string, actualStateOfWorld ActualStateOfWorldAttacherUpdater) error
} }
// NewOperationExecutor returns a new instance of OperationExecutor. // NewOperationExecutor returns a new instance of OperationExecutor.
func NewOperationExecutor( func NewOperationExecutor(
kubeClient internalclientset.Interface,
volumePluginMgr *volume.VolumePluginMgr) OperationExecutor { volumePluginMgr *volume.VolumePluginMgr) OperationExecutor {
return &operationExecutor{ return &operationExecutor{
kubeClient: kubeClient,
volumePluginMgr: volumePluginMgr, volumePluginMgr: volumePluginMgr,
pendingOperations: goroutinemap.NewGoRoutineMap(), pendingOperations: goroutinemap.NewGoRoutineMap(
true /* exponentialBackOffOnError */),
} }
} }
@@ -109,7 +128,7 @@ type ActualStateOfWorldMounterUpdater interface {
 // actual state of the world cache after successful attach/detach/mount/unmount.
 type ActualStateOfWorldAttacherUpdater interface {
 	// Marks the specified volume as attached to the specified node
-	MarkVolumeAsAttached(volumeSpec *volume.Spec, nodeName string) error
+	MarkVolumeAsAttached(volumeSpec *volume.Spec, nodeName string, devicePath string) error

 	// Marks the specified volume as detached from the specified node
 	MarkVolumeAsDetached(volumeName api.UniqueVolumeName, nodeName string)
@@ -160,6 +179,10 @@ type VolumeToMount struct {
 	// VolumeGidValue contains the value of the GID annotation, if present.
 	VolumeGidValue string
+
+	// DevicePath contains the path on the node where the volume is attached.
+	// For non-attachable volumes this is empty.
+	DevicePath string
 }

 // AttachedVolume represents a volume that is attached to a node.
@@ -284,9 +307,14 @@ type MountedVolume struct {
 }

 type operationExecutor struct {
+	// Used to fetch objects from the API server like Node in the
+	// VerifyControllerAttachedVolume operation.
+	kubeClient internalclientset.Interface
+
 	// volumePluginMgr is the volume plugin manager used to create volume
 	// plugin objects.
 	volumePluginMgr *volume.VolumePluginMgr
+
 	// pendingOperations keeps track of pending attach and detach operations so
 	// multiple operations are not started on the same volume
 	pendingOperations goroutinemap.GoRoutineMap
@@ -358,6 +386,20 @@ func (oe *operationExecutor) UnmountDevice(
 		string(deviceToDetach.VolumeName), unmountDeviceFunc)
 }

+func (oe *operationExecutor) VerifyControllerAttachedVolume(
+	volumeToMount VolumeToMount,
+	nodeName string,
+	actualStateOfWorld ActualStateOfWorldAttacherUpdater) error {
+	verifyControllerAttachedVolumeFunc, err :=
+		oe.generateVerifyControllerAttachedVolumeFunc(volumeToMount, nodeName, actualStateOfWorld)
+	if err != nil {
+		return err
+	}
+
+	return oe.pendingOperations.Run(
+		string(volumeToMount.VolumeName), verifyControllerAttachedVolumeFunc)
+}
 func (oe *operationExecutor) generateAttachVolumeFunc(
 	volumeToAttach VolumeToAttach,
 	actualStateOfWorld ActualStateOfWorldAttacherUpdater) (func() error, error) {
@@ -385,18 +427,17 @@ func (oe *operationExecutor) generateAttachVolumeFunc(
 	return func() error {
 		// Execute attach
-		attachErr := volumeAttacher.Attach(
+		devicePath, attachErr := volumeAttacher.Attach(
 			volumeToAttach.VolumeSpec, volumeToAttach.NodeName)
 		if attachErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"AttachVolume.Attach failed for volume %q (spec.Name: %q) from node %q with: %v",
 				volumeToAttach.VolumeName,
 				volumeToAttach.VolumeSpec.Name(),
 				volumeToAttach.NodeName,
 				attachErr)
-			return attachErr
 		}

 		glog.Infof(
@@ -407,16 +448,15 @@ func (oe *operationExecutor) generateAttachVolumeFunc(
 		// Update actual state of world
 		addVolumeNodeErr := actualStateOfWorld.MarkVolumeAsAttached(
-			volumeToAttach.VolumeSpec, volumeToAttach.NodeName)
+			volumeToAttach.VolumeSpec, volumeToAttach.NodeName, devicePath)
 		if addVolumeNodeErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"AttachVolume.MarkVolumeAsAttached failed for volume %q (spec.Name: %q) from node %q with: %v.",
 				volumeToAttach.VolumeName,
 				volumeToAttach.VolumeSpec.Name(),
 				volumeToAttach.NodeName,
 				addVolumeNodeErr)
-			return addVolumeNodeErr
 		}

 		return nil
@@ -463,14 +503,13 @@ func (oe *operationExecutor) generateDetachVolumeFunc(
 		// Execute detach
 		detachErr := volumeDetacher.Detach(volumeName, volumeToDetach.NodeName)
 		if detachErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"DetachVolume.Detach failed for volume %q (spec.Name: %q) from node %q with: %v",
 				volumeToDetach.VolumeName,
 				volumeToDetach.VolumeSpec.Name(),
 				volumeToDetach.NodeName,
 				detachErr)
-			return detachErr
 		}

 		glog.Infof(
@@ -543,16 +582,16 @@ func (oe *operationExecutor) generateMountVolumeFunc(
 				volumeToMount.Pod.UID)

 			devicePath, err := volumeAttacher.WaitForAttach(
-				volumeToMount.VolumeSpec, waitForAttachTimeout)
+				volumeToMount.VolumeSpec, volumeToMount.DevicePath, waitForAttachTimeout)
 			if err != nil {
-				glog.Errorf(
+				// On failure, return error. Caller will log and retry.
+				return fmt.Errorf(
 					"MountVolume.WaitForAttach failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 					volumeToMount.VolumeName,
 					volumeToMount.VolumeSpec.Name(),
 					volumeToMount.PodName,
 					volumeToMount.Pod.UID,
 					err)
-				return err
 			}

 			glog.Infof(
@@ -565,14 +604,14 @@ func (oe *operationExecutor) generateMountVolumeFunc(
 			deviceMountPath, err :=
 				volumeAttacher.GetDeviceMountPath(volumeToMount.VolumeSpec)
 			if err != nil {
-				glog.Errorf(
+				// On failure, return error. Caller will log and retry.
+				return fmt.Errorf(
 					"MountVolume.GetDeviceMountPath failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 					volumeToMount.VolumeName,
 					volumeToMount.VolumeSpec.Name(),
 					volumeToMount.PodName,
 					volumeToMount.Pod.UID,
 					err)
-				return err
 			}

 			// Mount device to global mount path
@@ -581,14 +620,14 @@ func (oe *operationExecutor) generateMountVolumeFunc(
 				devicePath,
 				deviceMountPath)
 			if err != nil {
-				glog.Errorf(
+				// On failure, return error. Caller will log and retry.
+				return fmt.Errorf(
 					"MountVolume.MountDevice failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 					volumeToMount.VolumeName,
 					volumeToMount.VolumeSpec.Name(),
 					volumeToMount.PodName,
 					volumeToMount.Pod.UID,
 					err)
-				return err
 			}

 			glog.Infof(
@@ -602,30 +641,28 @@ func (oe *operationExecutor) generateMountVolumeFunc(
 			markDeviceMountedErr := actualStateOfWorld.MarkDeviceAsMounted(
 				volumeToMount.VolumeName)
 			if markDeviceMountedErr != nil {
-				// On failure, just log and exit. The controller will retry
-				glog.Errorf(
+				// On failure, return error. Caller will log and retry.
+				return fmt.Errorf(
 					"MountVolume.MarkDeviceAsMounted failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 					volumeToMount.VolumeName,
 					volumeToMount.VolumeSpec.Name(),
 					volumeToMount.PodName,
 					volumeToMount.Pod.UID,
 					markDeviceMountedErr)
-				return markDeviceMountedErr
 			}
 		}

 		// Execute mount
 		mountErr := volumeMounter.SetUp(fsGroup)
 		if mountErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"MountVolume.SetUp failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 				volumeToMount.VolumeName,
 				volumeToMount.VolumeSpec.Name(),
 				volumeToMount.PodName,
 				volumeToMount.Pod.UID,
 				mountErr)
-			return mountErr
 		}

 		glog.Infof(
@@ -644,15 +681,14 @@ func (oe *operationExecutor) generateMountVolumeFunc(
 			volumeToMount.OuterVolumeSpecName,
 			volumeToMount.VolumeGidValue)
 		if markVolMountedErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"MountVolume.MarkVolumeAsMounted failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v",
 				volumeToMount.VolumeName,
 				volumeToMount.VolumeSpec.Name(),
 				volumeToMount.PodName,
 				volumeToMount.Pod.UID,
 				markVolMountedErr)
-			return markVolMountedErr
 		}

 		return nil
@@ -691,15 +727,14 @@ func (oe *operationExecutor) generateUnmountVolumeFunc(
 		// Execute unmount
 		unmountErr := volumeUnmounter.TearDown()
 		if unmountErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"UnmountVolume.TearDown failed for volume %q (volume.spec.Name: %q) pod %q (UID: %q) with: %v",
 				volumeToUnmount.VolumeName,
 				volumeToUnmount.OuterVolumeSpecName,
 				volumeToUnmount.PodName,
 				volumeToUnmount.PodUID,
 				unmountErr)
-			return unmountErr
 		}

 		glog.Infof(
@@ -763,25 +798,23 @@ func (oe *operationExecutor) generateUnmountDeviceFunc(
 		deviceMountPath, err :=
 			volumeAttacher.GetDeviceMountPath(deviceToDetach.VolumeSpec)
 		if err != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"GetDeviceMountPath failed for volume %q (spec.Name: %q) with: %v",
 				deviceToDetach.VolumeName,
 				deviceToDetach.VolumeSpec.Name(),
 				err)
-			return err
 		}

 		// Execute unmount
 		unmountDeviceErr := volumeDetacher.UnmountDevice(deviceMountPath)
 		if unmountDeviceErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"UnmountDevice failed for volume %q (spec.Name: %q) with: %v",
 				deviceToDetach.VolumeName,
 				deviceToDetach.VolumeSpec.Name(),
 				unmountDeviceErr)
-			return unmountDeviceErr
 		}

 		glog.Infof(
@@ -793,15 +826,95 @@ func (oe *operationExecutor) generateUnmountDeviceFunc(
 		markDeviceUnmountedErr := actualStateOfWorld.MarkDeviceAsUnmounted(
 			deviceToDetach.VolumeName)
 		if markDeviceUnmountedErr != nil {
-			// On failure, just log and exit. The controller will retry
-			glog.Errorf(
+			// On failure, return error. Caller will log and retry.
+			return fmt.Errorf(
 				"MarkDeviceAsUnmounted failed for device %q (spec.Name: %q) with: %v",
 				deviceToDetach.VolumeName,
 				deviceToDetach.VolumeSpec.Name(),
 				markDeviceUnmountedErr)
-			return markDeviceUnmountedErr
 		}

 		return nil
 	}, nil
 }
func (oe *operationExecutor) generateVerifyControllerAttachedVolumeFunc(
volumeToMount VolumeToMount,
nodeName string,
actualStateOfWorld ActualStateOfWorldAttacherUpdater) (func() error, error) {
return func() error {
if !volumeToMount.PluginIsAttachable {
// If the volume does not implement the attacher interface, it is
// assumed to be attached and the actual state of the world is
// updated accordingly.
addVolumeNodeErr := actualStateOfWorld.MarkVolumeAsAttached(
volumeToMount.VolumeSpec, nodeName, volumeToMount.DevicePath)
if addVolumeNodeErr != nil {
// On failure, return error. Caller will log and retry.
return fmt.Errorf(
"VerifyControllerAttachedVolume.MarkVolumeAsAttached failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v.",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID,
addVolumeNodeErr)
}
return nil
}
// Fetch current node object
node, fetchErr := oe.kubeClient.Core().Nodes().Get(nodeName)
if fetchErr != nil {
// On failure, return error. Caller will log and retry.
return fmt.Errorf(
"VerifyControllerAttachedVolume failed fetching node from API server. Volume %q (spec.Name: %q) pod %q (UID: %q). Error: %v.",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID,
fetchErr)
}
if node == nil {
// On failure, return error. Caller will log and retry.
return fmt.Errorf(
"VerifyControllerAttachedVolume failed. Volume %q (spec.Name: %q) pod %q (UID: %q). Error: node object retrieved from API server is nil.",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID)
}
for _, attachedVolume := range node.Status.VolumesAttached {
if attachedVolume.Name == volumeToMount.VolumeName {
addVolumeNodeErr := actualStateOfWorld.MarkVolumeAsAttached(
volumeToMount.VolumeSpec, nodeName, volumeToMount.DevicePath)
glog.Infof("Controller successfully attached volume %q (spec.Name: %q) pod %q (UID: %q)",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID)
if addVolumeNodeErr != nil {
// On failure, return error. Caller will log and retry.
return fmt.Errorf(
"VerifyControllerAttachedVolume.MarkVolumeAsAttached failed for volume %q (spec.Name: %q) pod %q (UID: %q) with: %v.",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID,
addVolumeNodeErr)
}
return nil
}
}
// Volume not attached, return error. Caller will log and retry.
return fmt.Errorf("Volume %q (spec.Name: %q) pod %q (UID: %q) is not yet attached according to node status.",
volumeToMount.VolumeName,
volumeToMount.VolumeSpec.Name(),
volumeToMount.PodName,
volumeToMount.Pod.UID)
}, nil
}
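The loop over `node.Status.VolumesAttached` above is the crux of the kubelet-waits-for-controller handshake: the kubelet proceeds only once the controller has published the volume in the node's status, and otherwise returns an error so the caller retries with exponential backoff. Stripped of the actual-state-of-world updates, the lookup reduces to a scan for a matching unique volume name. A sketch with simplified local types standing in for the `api` ones:

```go
package main

import "fmt"

// AttachedVolume mirrors the new v1.AttachedVolume API type: a unique
// volume name plus the device path the controller observed.
type AttachedVolume struct {
	Name       string
	DevicePath string
}

// findAttached scans a node's reported VolumesAttached list for the given
// volume, returning its device path if the controller has attached it.
func findAttached(volumesAttached []AttachedVolume, volumeName string) (string, bool) {
	for _, av := range volumesAttached {
		if av.Name == volumeName {
			return av.DevicePath, true
		}
	}
	return "", false
}

func main() {
	status := []AttachedVolume{
		{Name: "kubernetes.io/gce-pd/disk", DevicePath: "/dev/disk/by-id/google-disk"},
	}
	if dp, ok := findAttached(status, "kubernetes.io/gce-pd/disk"); ok {
		fmt.Println("attached at", dp)
	}
	_, ok := findAttached(status, "kubernetes.io/gce-pd/other")
	fmt.Println(ok) // false: the verify operation errors and is retried with backoff
}
```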

View File

@@ -134,14 +134,16 @@ type Deleter interface {
 // Attacher can attach a volume to a node.
 type Attacher interface {
-	// Attach the volume specified by the given spec to the given host
-	Attach(spec *Spec, hostName string) error
+	// Attaches the volume specified by the given spec to the given host.
+	// On success, returns the device path where the device was attached on
+	// the node.
+	Attach(spec *Spec, hostName string) (string, error)

 	// WaitForAttach blocks until the device is attached to this
 	// node. If it successfully attaches, the path to the device
 	// is returned. Otherwise, if the device does not attach after
 	// the given timeout period, an error will be returned.
-	WaitForAttach(spec *Spec, timeout time.Duration) (string, error)
+	WaitForAttach(spec *Spec, devicePath string, timeout time.Duration) (string, error)

 	// GetDeviceMountPath returns a path where the device should
 	// be mounted after it is attached. This is a global mount