mirror of https://github.com/rancher/types.git

Bump vendor

Darren Shepherd 2017-12-16 01:49:21 -07:00
parent 3e9e7cb7d6
commit 0ec61b78a9
33 changed files with 4859 additions and 70 deletions


@ -3,5 +3,5 @@ github.com/rancher/types
k8s.io/kubernetes v1.8.3 transitive=true,staging=true
bitbucket.org/ww/goautoneg a547fc61f48d567d5b4ec6f8aee5573d8efce11d https://github.com/rancher/goautoneg.git
github.com/rancher/norman 8c0d4bfe2e63a801e4e21906d6b37a5173dadcbb
github.com/rancher/norman 550db511ba72a75c4a921cf4b1a94d2075fdaaf1
golang.org/x/sync fd80eb99c8f653c847d294a001bdf2a3a6f768f5

1 vendor/github.com/golang/groupcache/.gitignore generated vendored Normal file

@ -0,0 +1 @@
*~

191 vendor/github.com/golang/groupcache/LICENSE generated vendored Normal file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

73 vendor/github.com/golang/groupcache/README.md generated vendored Normal file

@ -0,0 +1,73 @@
# groupcache
## Summary
groupcache is a caching and cache-filling library, intended as a
replacement for memcached in many cases.
For API docs and examples, see http://godoc.org/github.com/golang/groupcache
## Comparison to memcached
### **Like memcached**, groupcache:
* shards by key to select which peer is responsible for that key
### **Unlike memcached**, groupcache:
* does not require running a separate set of servers, thus massively
reducing deployment/configuration pain. groupcache is a client
library as well as a server. It connects to its own peers.
* comes with a cache filling mechanism. Whereas memcached just says
"Sorry, cache miss", often resulting in a thundering herd of
database (or whatever) loads from an unbounded number of clients
(which has resulted in several fun outages), groupcache coordinates
cache fills such that only one load in one process of an entire
replicated set of processes populates the cache, then multiplexes
the loaded value to all callers.
* does not support versioned values. If key "foo" is value "bar",
key "foo" must always be "bar". There are neither cache expiration
times, nor explicit cache evictions. Thus there is also no CAS,
nor Increment/Decrement. This also means that groupcache....
* ... supports automatic mirroring of super-hot items to multiple
processes. This prevents memcached hot spotting where a machine's
CPU and/or NIC are overloaded by very popular keys/values.
* is currently only available for Go. It's very unlikely that I
(bradfitz@) will port the code to any other language.
## Loading process
In a nutshell, a groupcache lookup of **Get("foo")** looks like:
(On machine #5 of a set of N machines running the same code)
1. Is the value of "foo" in local memory because it's super hot? If so, use it.
2. Is the value of "foo" in local memory because peer #5 (the current
peer) is the owner of it? If so, use it.
3. Amongst all the peers in my set of N, am I the owner of the key
"foo"? (e.g. does it consistent hash to 5?) If so, load it. If
other callers come in, via the same process or via RPC requests
from peers, they block waiting for the load to finish and get the
same answer. If not, RPC to the peer that's the owner and get
the answer. If the RPC fails, just load it locally (still with
local dup suppression).
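A minimal sketch of that flow from the caller's side (the group name, cache size, and getter body are invented for illustration): the Getter runs only on a miss, and with peers configured the lookup steps above are coordinated across the set.

package main

import (
	"fmt"
	"strings"

	"github.com/golang/groupcache"
)

func main() {
	// Hypothetical group: 64 MB cache, filled on miss by the getter.
	g := groupcache.NewGroup("thumbnails", 64<<20, groupcache.GetterFunc(
		func(ctx groupcache.Context, key string, dest groupcache.Sink) error {
			// The expensive load (database, disk, ...) happens at most
			// once per key across the replicated set of processes.
			return dest.SetString(strings.ToUpper(key))
		}))

	var val string
	if err := g.Get(nil, "foo", groupcache.StringSink(&val)); err != nil {
		panic(err)
	}
	fmt.Println(val) // FOO
}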
## Users
groupcache is in production use by dl.google.com (its original user),
parts of Blogger, parts of Google Code, parts of Google Fiber, parts
of Google production monitoring systems, etc.
## Presentations
See http://talks.golang.org/2013/oscon-dl.slide
## Help
Use the golang-nuts mailing list for any discussion or questions.

121 vendor/github.com/golang/groupcache/lru/lru.go generated vendored Normal file

@ -0,0 +1,121 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package lru implements an LRU cache.
package lru
import "container/list"
// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
// MaxEntries is the maximum number of cache entries before
// an item is evicted. Zero means no limit.
MaxEntries int
// OnEvicted optionally specifies a callback function to be
// executed when an entry is purged from the cache.
OnEvicted func(key Key, value interface{})
ll *list.List
cache map[interface{}]*list.Element
}
// A Key may be any value that is comparable. See http://golang.org/ref/spec#Comparison_operators
type Key interface{}
type entry struct {
key Key
value interface{}
}
// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
return &Cache{
MaxEntries: maxEntries,
ll: list.New(),
cache: make(map[interface{}]*list.Element),
}
}
// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
if c.cache == nil {
c.cache = make(map[interface{}]*list.Element)
c.ll = list.New()
}
if ee, ok := c.cache[key]; ok {
c.ll.MoveToFront(ee)
ee.Value.(*entry).value = value
return
}
ele := c.ll.PushFront(&entry{key, value})
c.cache[key] = ele
if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
c.RemoveOldest()
}
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.ll.MoveToFront(ele)
return ele.Value.(*entry).value, true
}
return
}
// Remove removes the provided key from the cache.
func (c *Cache) Remove(key Key) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.removeElement(ele)
}
}
// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
if c.cache == nil {
return
}
ele := c.ll.Back()
if ele != nil {
c.removeElement(ele)
}
}
func (c *Cache) removeElement(e *list.Element) {
c.ll.Remove(e)
kv := e.Value.(*entry)
delete(c.cache, kv.key)
if c.OnEvicted != nil {
c.OnEvicted(kv.key, kv.value)
}
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
if c.cache == nil {
return 0
}
return c.ll.Len()
}
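A brief usage sketch for this package (keys and values invented). Note the caveats encoded above: eviction applies only when MaxEntries is non-zero, and the cache is not safe for concurrent access.

package main

import (
	"fmt"

	"github.com/golang/groupcache/lru"
)

func main() {
	c := lru.New(2) // at most 2 entries; the oldest is evicted on overflow
	c.OnEvicted = func(key lru.Key, value interface{}) {
		fmt.Println("evicted:", key)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // evicts "a"
	if v, ok := c.Get("b"); ok {
		fmt.Println("b =", v) // b = 2
	}
	fmt.Println("len =", c.Len()) // len = 2
}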


@ -6,6 +6,7 @@ import (
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/watch"
@ -18,6 +19,17 @@ type ObjectFactory interface {
List() runtime.Object
}
type UnstructuredObjectFactory struct {
}
func (u *UnstructuredObjectFactory) Object() runtime.Object {
return &unstructured.Unstructured{}
}
func (u *UnstructuredObjectFactory) List() runtime.Object {
return &unstructured.UnstructuredList{}
}
type ObjectClient struct {
restClient rest.Interface
resource *metav1.APIResource
@ -36,6 +48,16 @@ func NewObjectClient(namespace string, restClient rest.Interface, apiResource *m
}
}
func (p *ObjectClient) UnstructuredClient() *ObjectClient {
return &ObjectClient{
restClient: p.restClient,
resource: p.resource,
gvk: p.gvk,
ns: p.ns,
Factory: &UnstructuredObjectFactory{},
}
}
func (p *ObjectClient) getAPIPrefix() string {
if p.gvk.Group == "" {
return "api"
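The UnstructuredObjectFactory added above produces map-backed objects rather than typed structs. A small, hedged illustration of what such objects hold (the apiVersion, kind, and name are invented):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	u := &unstructured.Unstructured{Object: map[string]interface{}{}}
	u.SetAPIVersion("management.cattle.io/v3") // hypothetical group/version
	u.SetKind("Cluster")
	u.SetName("c-demo")
	fmt.Println(u.GetAPIVersion(), u.GetKind(), u.GetName())
}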

149 vendor/github.com/rancher/norman/condition/condition.go generated vendored Normal file

@ -0,0 +1,149 @@
package condition
import (
"reflect"
"time"
"github.com/pkg/errors"
"github.com/rancher/norman/controller"
"k8s.io/apimachinery/pkg/runtime"
)
type Cond string
func (c Cond) True(obj runtime.Object) {
setStatus(obj, string(c), "True")
}
func (c Cond) IsTrue(obj runtime.Object) bool {
return getStatus(obj, string(c)) == "True"
}
func (c Cond) False(obj runtime.Object) {
setStatus(obj, string(c), "False")
}
func (c Cond) IsFalse(obj runtime.Object) bool {
return getStatus(obj, string(c)) == "False"
}
func (c Cond) Unknown(obj runtime.Object) {
setStatus(obj, string(c), "Unknown")
}
func (c Cond) IsUnknown(obj runtime.Object) bool {
return getStatus(obj, string(c)) == "Unknown"
}
func (c Cond) Reason(obj runtime.Object, reason string) {
cond := findOrCreateCond(obj, string(c))
getFieldValue(cond, "Reason").SetString(reason)
touchTS(cond)
}
func (c Cond) GetReason(obj runtime.Object) string {
cond := findOrCreateCond(obj, string(c))
return getFieldValue(cond, "Reason").String()
}
func (c Cond) Once(obj runtime.Object, f func() (runtime.Object, error)) (runtime.Object, error) {
if c.IsFalse(obj) {
return obj, &controller.ForgetError{
Err: errors.New(c.GetReason(obj)),
}
}
if c.IsTrue(obj) {
return obj, nil
}
c.Unknown(obj)
newObj, err := f()
if newObj != nil {
obj = newObj
}
if err != nil {
c.False(obj)
c.Reason(obj, err.Error())
return obj, err
}
c.True(obj)
return obj, nil
}
func (c Cond) Do(obj runtime.Object, f func() error) error {
c.Unknown(obj)
if err := f(); err != nil {
c.False(obj)
c.Reason(obj, err.Error())
return err
}
c.True(obj)
return nil
}
func touchTS(value reflect.Value) {
now := time.Now().Format(time.RFC3339)
getFieldValue(value, "LastUpdateTime").SetString(now)
}
func getStatus(obj interface{}, condName string) string {
cond := findOrCreateCond(obj, condName)
return getFieldValue(cond, "Status").String()
}
func setStatus(obj interface{}, condName, status string) {
cond := findOrCreateCond(obj, condName)
getFieldValue(cond, "Status").SetString(status)
touchTS(cond)
}
func findOrCreateCond(obj interface{}, condName string) reflect.Value {
condSlice := getValue(obj, "Status", "Conditions")
cond := findCond(condSlice, condName)
if cond != nil {
return *cond
}
newCond := reflect.New(condSlice.Type().Elem()).Elem()
newCond.FieldByName("Type").SetString(condName)
newCond.FieldByName("Status").SetString("Unknown")
condSlice.Set(reflect.Append(condSlice, newCond))
return newCond
}
func findCond(val reflect.Value, name string) *reflect.Value {
for i := 0; i < val.Len(); i++ {
cond := val.Index(i)
typeVal := getFieldValue(cond, "Type")
if typeVal.String() == name {
return &cond
}
}
return nil
}
func getValue(obj interface{}, name ...string) reflect.Value {
v := reflect.ValueOf(obj)
t := v.Type()
if t.Kind() == reflect.Ptr {
v = v.Elem()
t = v.Type()
}
field := v.FieldByName(name[0])
if len(name) == 1 {
return field
}
return getFieldValue(field, name[1:]...)
}
func getFieldValue(v reflect.Value, name ...string) reflect.Value {
field := v.FieldByName(name[0])
if len(name) == 1 {
return field
}
return getFieldValue(field, name[1:]...)
}
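Since the helpers operate via reflection, any struct whose Status.Conditions entries carry Type, Status, Reason, and LastUpdateTime string fields will work. A hedged, compilable sketch with an invented Widget type (the runtime.Object methods are stubbed only so the example builds):

package main

import (
	"fmt"

	"github.com/rancher/norman/condition"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

type WidgetCondition struct {
	Type           string
	Status         string
	Reason         string
	LastUpdateTime string
}

type WidgetStatus struct {
	Conditions []WidgetCondition
}

type Widget struct {
	Status WidgetStatus
}

// Stub runtime.Object so the sketch compiles.
func (w *Widget) GetObjectKind() schema.ObjectKind { return schema.EmptyObjectKind }
func (w *Widget) DeepCopyObject() runtime.Object   { c := *w; return &c }

var Provisioned = condition.Cond("Provisioned")

func main() {
	w := &Widget{}
	_ = Provisioned.Do(w, func() error { return nil }) // Unknown -> work -> True
	fmt.Println(Provisioned.IsTrue(w))                 // true
}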

9 vendor/github.com/rancher/norman/controller/error.go generated vendored Normal file

@ -0,0 +1,9 @@
package controller
type ForgetError struct {
Err error
}
func (f *ForgetError) Error() string {
return f.Err.Error()
}
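The generic controller change below keys off this type: when a sync handler returns a *ForgetError, the key is forgotten rather than requeued. A hedged sketch (syncWidget and its failure condition are invented):

package main

import (
	"errors"
	"fmt"

	"github.com/rancher/norman/controller"
)

// Hypothetical sync handler: permanent failures are wrapped in a
// *controller.ForgetError so the controller stops retrying the key.
func syncWidget(key string) error {
	if key == "" {
		return &controller.ForgetError{Err: errors.New("empty key, giving up")}
	}
	return nil
}

func main() {
	err := syncWidget("")
	if _, ok := err.(*controller.ForgetError); ok {
		fmt.Println("would Forget key:", err) // mirrors the queue check below
	}
}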


@ -162,7 +162,7 @@ func (g *genericController) processNextWorkItem() bool {
// do your work on the key. This method will contains your "do stuff" logic
err := g.syncHandler(key.(string))
if err == nil {
if _, ok := err.(*ForgetError); err == nil || ok {
g.queue.Forget(key)
return true
}

39 vendor/github.com/rancher/norman/event/logger.go generated vendored Normal file

@ -0,0 +1,39 @@
package event
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
)
type Logger interface {
Info(obj runtime.Object, message string)
Infof(obj runtime.Object, messagefmt string, args ...interface{})
Error(obj runtime.Object, message string)
Errorf(obj runtime.Object, messagefmt string, args ...interface{})
}
type logger struct {
recorder record.EventRecorder
}
func (l *logger) Info(obj runtime.Object, message string) {
l.recorder.Event(obj, "Normal", "Message", message)
}
func (l *logger) Infof(obj runtime.Object, messagefmt string, args ...interface{}) {
l.recorder.Eventf(obj, "Normal", "Message", messagefmt, args...)
}
func (l *logger) Error(obj runtime.Object, message string) {
l.recorder.Event(obj, "Warning", "Message", message)
}
func (l *logger) Errorf(obj runtime.Object, messagefmt string, args ...interface{}) {
l.recorder.Eventf(obj, "Warning", "Message", messagefmt, args...)
}
func NewLogger(recorder record.EventRecorder) Logger {
return &logger{
recorder: recorder,
}
}
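A hedged wiring sketch (the component name and pod are invented): the Logger simply maps Info*/Error* onto Normal/Warning events on the wrapped recorder.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/record"

	"github.com/rancher/norman/event"
)

func main() {
	// No sink is started here, so the events go nowhere; a real controller
	// would call broadcaster.StartRecordingToSink first.
	broadcaster := record.NewBroadcaster()
	recorder := broadcaster.NewRecorder(scheme.Scheme,
		corev1.EventSource{Component: "demo-controller"}) // invented component

	logger := event.NewLogger(recorder)
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "web-0", Namespace: "default"}}
	logger.Infof(pod, "synced pod %s", pod.Name) // Normal event
	logger.Error(pod, "image pull failed")       // Warning event
}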


@ -19,8 +19,8 @@ import (
var (
{{.schema.CodeName}}GroupVersionKind = schema.GroupVersionKind{
Version: "{{.schema.Version.Version}}",
Group: "{{.schema.Version.Group}}",
Version: Version,
Group: GroupName,
Kind: "{{.schema.CodeName}}",
}
{{.schema.CodeName}}Resource = metav1.APIResource{
@ -67,6 +67,8 @@ type {{.schema.CodeName}}Interface interface {
Watch(opts metav1.ListOptions) (watch.Interface, error)
DeleteCollection(deleteOpts *metav1.DeleteOptions, listOpts metav1.ListOptions) error
Controller() {{.schema.CodeName}}Controller
AddSyncHandler(sync {{.schema.CodeName}}HandlerFunc)
AddLifecycle(name string, lifecycle {{.schema.CodeName}}Lifecycle)
}
type {{.schema.ID}}Lister struct {
@ -199,4 +201,13 @@ func (s *{{.schema.ID}}Client) Watch(opts metav1.ListOptions) (watch.Interface,
func (s *{{.schema.ID}}Client) DeleteCollection(deleteOpts *metav1.DeleteOptions, listOpts metav1.ListOptions) error {
return s.objectClient.DeleteCollection(deleteOpts, listOpts)
}
func (s *{{.schema.ID}}Client) AddSyncHandler(sync {{.schema.CodeName}}HandlerFunc) {
s.Controller().AddHandler(sync)
}
func (s *{{.schema.ID}}Client) AddLifecycle(name string, lifecycle {{.schema.CodeName}}Lifecycle) {
sync := New{{.schema.CodeName}}LifecycleAdapter(name, s, lifecycle)
s.AddSyncHandler(sync)
}
`


@ -195,6 +195,38 @@ func generateController(external bool, outputDir string, schema *types.Schema, s
})
}
func generateScheme(external bool, outputDir string, version *types.APIVersion, schemas []*types.Schema) error {
filePath := strings.ToLower("zz_generated_scheme.go")
output, err := os.Create(path.Join(outputDir, filePath))
if err != nil {
return err
}
defer output.Close()
typeTemplate, err := template.New("scheme.template").
Funcs(funcs()).
Parse(strings.Replace(schemeTemplate, "%BACK%", "`", -1))
if err != nil {
return err
}
names := []string{}
for _, schema := range schemas {
if !external {
names = append(names, schema.CodeName)
}
if schema.CanList() {
names = append(names, schema.CodeName+"List")
}
}
return typeTemplate.Execute(output, map[string]interface{}{
"version": version,
"schemas": schemas,
"names": names,
})
}
func generateK8sClient(outputDir string, version *types.APIVersion, schemas []*types.Schema) error {
filePath := strings.ToLower("zz_generated_k8s_client.go")
output, err := os.Create(path.Join(outputDir, filePath))
@ -270,6 +302,10 @@ func GenerateControllerForTypes(version *types.APIVersion, k8sOutputPackage stri
return err
}
if err := generateScheme(true, k8sDir, version, controllers); err != nil {
return err
}
return gofmt(baseDir, k8sOutputPackage)
}
@ -318,7 +354,13 @@ func Generate(schemas *types.Schemas, cattleOutputPackage, k8sOutputPackage stri
return err
}
generateK8sClient(k8sDir, &controllers[0].Version, controllers)
if err := generateK8sClient(k8sDir, &controllers[0].Version, controllers); err != nil {
return err
}
if err := generateScheme(false, k8sDir, &controllers[0].Version, controllers); err != nil {
return err
}
}
if err := gofmt(baseDir, k8sOutputPackage); err != nil {
@ -382,9 +424,9 @@ func deepCopyGen(workDir, pkg string) error {
HeaderText: []byte{},
GeneratorFunc: func(c *generator.Context) []generator.Generator {
return []generator.Generator{
&noInitGenerator{
generators.NewGenDeepCopy(arguments.OutputFileBaseName, pkg, nil, true, true),
},
//&noInitGenerator{
generators.NewGenDeepCopy(arguments.OutputFileBaseName, pkg, nil, true, true),
//},
}
},
FilterFunc: func(c *generator.Context, t *gengotypes.Type) bool {


@ -0,0 +1,42 @@
package generator
var schemeTemplate = `package {{.version.Version}}
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
GroupName = "{{.version.Group}}"
Version = "{{.version.Version}}"
)
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: Version}
// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()
}
// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
var (
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
AddToScheme = SchemeBuilder.AddToScheme
)
// Adds the list of known types to api.Scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
// TODO this gets cleaned up when the types are fixed
scheme.AddKnownTypes(SchemeGroupVersion,
{{range .names}}
&{{.}}{},{{end}}
)
return nil
}
`
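A hedged sketch of how a file generated from this template is consumed (the group, version, and kind are invented, and the generated vars are inlined here instead of imported): the SchemeBuilder/AddToScheme pair lets callers compose registration of many API groups into one runtime.Scheme.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// Mirrors what the template expands to for a made-up group/version.
var SchemeGroupVersion = schema.GroupVersion{Group: "example.cattle.io", Version: "v3"}

var (
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
	AddToScheme   = SchemeBuilder.AddToScheme
)

func addKnownTypes(scheme *runtime.Scheme) error {
	// A generated file registers &Foo{}, &FooList{}, ... here.
	return nil
}

func main() {
	s := runtime.NewScheme()
	if err := AddToScheme(s); err != nil {
		panic(err)
	}
	fmt.Println(SchemeGroupVersion.WithKind("Cluster")) // example.cattle.io/v3, Kind=Cluster
}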


@ -51,7 +51,11 @@ func (o *objectLifecycleAdapter) sync(key string, obj runtime.Object) error {
return err
}
obj = obj.DeepCopyObject()
if newObj, err := o.lifecycle.Updated(obj); err != nil {
if newObj != nil {
o.objectClient.Update(metadata.GetName(), newObj)
}
return err
} else if newObj != nil {
_, err = o.objectClient.Update(metadata.GetName(), newObj)
@ -72,29 +76,39 @@ func (o *objectLifecycleAdapter) finalize(metadata metav1.Object, obj runtime.Ob
}
obj = obj.DeepCopyObject()
if newObj, err := o.lifecycle.Finalize(obj); err != nil {
if newObj != nil {
o.objectClient.Update(metadata.GetName(), newObj)
}
return false, err
} else if newObj != nil {
obj = newObj
}
if err := removeFinalizer(o.name, obj); err != nil {
return false, err
}
_, err := o.objectClient.Update(metadata.GetName(), obj)
return false, err
}
func removeFinalizer(name string, obj runtime.Object) error {
metadata, err := meta.Accessor(obj)
if err != nil {
return false, err
return err
}
var finalizers []string
for _, finalizer := range metadata.GetFinalizers() {
if finalizer == o.name {
if finalizer == name {
continue
}
finalizers = append(finalizers, finalizer)
}
metadata.SetFinalizers(finalizers)
if newObj, err := o.lifecycle.Finalize(obj); err != nil {
return false, err
} else if newObj != nil {
_, err = o.objectClient.Update(metadata.GetName(), newObj)
} else {
_, err = o.objectClient.Update(metadata.GetName(), obj)
}
return false, err
return nil
}
func (o *objectLifecycleAdapter) createKey() string {
@ -104,28 +118,31 @@ func (o *objectLifecycleAdapter) createKey() string {
func (o *objectLifecycleAdapter) create(metadata metav1.Object, obj runtime.Object) (bool, error) {
initialized := o.createKey()
if metadata.GetLabels()[initialized] == "true" {
if metadata.GetAnnotations()[initialized] == "true" {
return true, nil
}
obj = obj.DeepCopyObject()
if newObj, err := o.lifecycle.Create(obj); err != nil {
if newObj != nil {
o.objectClient.Update(metadata.GetName(), newObj)
}
return false, err
} else if newObj != nil {
obj = newObj
}
metadata, err := meta.Accessor(obj)
if err != nil {
return false, err
}
if metadata.GetLabels() == nil {
metadata.SetLabels(map[string]string{})
if metadata.GetAnnotations() == nil {
metadata.SetAnnotations(map[string]string{})
}
metadata.SetFinalizers(append(metadata.GetFinalizers(), o.name))
metadata.GetLabels()[initialized] = "true"
if newObj, err := o.lifecycle.Create(obj); err != nil {
return false, err
} else if newObj != nil {
_, err = o.objectClient.Update(metadata.GetName(), newObj)
return false, err
}
metadata.GetAnnotations()[initialized] = "true"
_, err = o.objectClient.Update(metadata.GetName(), obj)
return false, err


@ -5,6 +5,8 @@ import (
"fmt"
"strings"
"sync"
"github.com/rancher/norman/name"
"github.com/rancher/norman/types/convert"
"github.com/rancher/norman/types/definition"
@ -14,7 +16,9 @@ type SchemaCollection struct {
Data []Schema
}
type SchemaInitFunc func(*Schemas) *Schemas
type SchemasInitFunc func(*Schemas) *Schemas
type SchemaHook func(*Schema)
type MappersFactory func() []Mapper
@ -24,14 +28,17 @@ type BackReference struct {
}
type Schemas struct {
sync.Mutex
schemasByPath map[string]map[string]*Schema
schemasBySubContext map[string]*Schema
mappers map[string]map[string][]Mapper
references map[string][]BackReference
embedded map[string]*Schema
DefaultMappers MappersFactory
DefaultPostMappers MappersFactory
versions []APIVersion
schemas []*Schema
AddHook SchemaHook
errors []error
}
@ -41,10 +48,11 @@ func NewSchemas() *Schemas {
schemasBySubContext: map[string]*Schema{},
mappers: map[string]map[string][]Mapper{},
references: map[string][]BackReference{},
embedded: map[string]*Schema{},
}
}
func (s *Schemas) Init(initFunc SchemaInitFunc) *Schemas {
func (s *Schemas) Init(initFunc SchemasInitFunc) *Schemas {
return initFunc(s)
}
@ -53,13 +61,11 @@ func (s *Schemas) Err() error {
}
func (s *Schemas) SubContext(subContext string) *Schema {
s.Lock()
defer s.Unlock()
return s.schemasBySubContext[subContext]
}
func (s *Schemas) SubContextSchemas() map[string]*Schema {
return s.schemasBySubContext
}
func (s *Schemas) AddSchemas(schema *Schemas) *Schemas {
for _, schema := range schema.Schemas() {
s.AddSchema(*schema)
@ -67,27 +73,58 @@ func (s *Schemas) AddSchemas(schema *Schemas) *Schemas {
return s
}
func (s *Schemas) RemoveSchema(schema Schema) *Schemas {
s.Lock()
defer s.Unlock()
return s.doRemoveSchema(schema)
}
func (s *Schemas) doRemoveSchema(schema Schema) *Schemas {
delete(s.schemasByPath[schema.Version.Path], schema.ID)
s.removeReferences(&schema)
delete(s.schemasBySubContext, schema.SubContext)
if schema.Embed {
s.removeEmbed(&schema)
}
return s
}
func (s *Schemas) removeReferences(schema *Schema) {
fullType := convert.ToFullReference(schema.Version.Path, schema.ID)
delete(s.references, fullType)
for name, values := range s.references {
changed := false
var modified []BackReference
for _, value := range values {
if value.Schema.ID == schema.ID && value.Schema.Version.Path == schema.Version.Path {
changed = true
continue
}
modified = append(modified, value)
}
if changed {
s.references[name] = modified
}
}
}
func (s *Schemas) AddSchema(schema Schema) *Schemas {
schema.Type = "/meta/schemas/schema"
if schema.ID == "" {
s.errors = append(s.errors, fmt.Errorf("ID is not set on schema: %v", schema))
return s
}
if schema.Version.Path == "" || schema.Version.Version == "" {
s.errors = append(s.errors, fmt.Errorf("version is not set on schema: %s", schema.ID))
return s
}
if schema.PluralName == "" {
schema.PluralName = name.GuessPluralName(schema.ID)
}
if schema.CodeName == "" {
schema.CodeName = convert.Capitalize(schema.ID)
}
if schema.CodeNamePlural == "" {
schema.CodeNamePlural = name.GuessPluralName(schema.CodeName)
}
if schema.BaseType == "" {
schema.BaseType = schema.ID
s.Lock()
defer s.Unlock()
return s.doAddSchema(schema)
}
func (s *Schemas) doAddSchema(schema Schema) *Schemas {
s.setupDefaults(&schema)
if s.AddHook != nil {
s.AddHook(&schema)
}
schemas, ok := s.schemasByPath[schema.Version.Path]
@ -101,20 +138,8 @@ func (s *Schemas) AddSchema(schema Schema) *Schemas {
schemas[schema.ID] = &schema
s.schemas = append(s.schemas, &schema)
for name, field := range schema.ResourceFields {
if !definition.IsReferenceType(field.Type) {
continue
}
refType := definition.SubType(field.Type)
if !strings.HasPrefix(refType, "/") {
refType = convert.ToFullReference(schema.Version.Path, refType)
}
s.references[refType] = append(s.references[refType], BackReference{
FieldName: name,
Schema: &schema,
})
if !schema.Embed {
s.addReferences(&schema)
}
}
@ -122,11 +147,100 @@ func (s *Schemas) AddSchema(schema Schema) *Schemas {
s.schemasBySubContext[schema.SubContext] = &schema
}
if schema.Embed {
s.embed(&schema)
}
return s
}
func (s *Schemas) removeEmbed(schema *Schema) {
target := s.doSchema(&schema.Version, schema.EmbedType, false)
if target == nil {
return
}
newSchema := *target
newSchema.ResourceFields = map[string]Field{}
for k, v := range target.ResourceFields {
newSchema.ResourceFields[k] = v
}
for k := range schema.ResourceFields {
delete(newSchema.ResourceFields, k)
}
s.doRemoveSchema(*target)
s.doAddSchema(newSchema)
}
func (s *Schemas) embed(schema *Schema) {
target := s.doSchema(&schema.Version, schema.EmbedType, false)
if target == nil {
return
}
newSchema := *target
newSchema.ResourceFields = map[string]Field{}
for k, v := range target.ResourceFields {
newSchema.ResourceFields[k] = v
}
for k, v := range schema.ResourceFields {
newSchema.ResourceFields[k] = v
}
s.doRemoveSchema(*target)
s.doAddSchema(newSchema)
}
func (s *Schemas) addReferences(schema *Schema) {
for name, field := range schema.ResourceFields {
if !definition.IsReferenceType(field.Type) {
continue
}
refType := definition.SubType(field.Type)
if !strings.HasPrefix(refType, "/") {
refType = convert.ToFullReference(schema.Version.Path, refType)
}
s.references[refType] = append(s.references[refType], BackReference{
FieldName: name,
Schema: schema,
})
}
}
func (s *Schemas) setupDefaults(schema *Schema) {
schema.Type = "/meta/schemas/schema"
if schema.ID == "" {
s.errors = append(s.errors, fmt.Errorf("ID is not set on schema: %v", schema))
return
}
if schema.Version.Path == "" || schema.Version.Version == "" {
s.errors = append(s.errors, fmt.Errorf("version is not set on schema: %s", schema.ID))
return
}
if schema.PluralName == "" {
schema.PluralName = name.GuessPluralName(schema.ID)
}
if schema.CodeName == "" {
schema.CodeName = convert.Capitalize(schema.ID)
}
if schema.CodeNamePlural == "" {
schema.CodeNamePlural = name.GuessPluralName(schema.CodeName)
}
if schema.BaseType == "" {
schema.BaseType = schema.ID
}
}
func (s *Schemas) References(schema *Schema) []BackReference {
refType := convert.ToFullReference(schema.Version.Path, schema.ID)
s.Lock()
defer s.Unlock()
return s.references[refType]
}
@ -142,6 +256,8 @@ func (s *Schemas) AddMapper(version *APIVersion, schemaID string, mapper Mapper)
}
func (s *Schemas) SchemasForVersion(version APIVersion) map[string]*Schema {
s.Lock()
defer s.Unlock()
return s.schemasByPath[version.Path]
}
@ -182,6 +298,10 @@ func (s *Schemas) mapper(version *APIVersion, name string) []Mapper {
}
func (s *Schemas) Schema(version *APIVersion, name string) *Schema {
return s.doSchema(version, name, true)
}
func (s *Schemas) doSchema(version *APIVersion, name string, lock bool) *Schema {
var (
path string
)
@ -196,7 +316,13 @@ func (s *Schemas) Schema(version *APIVersion, name string) *Schema {
path = "core"
}
if lock {
s.Lock()
}
schemas, ok := s.schemasByPath[path]
if lock {
s.Unlock()
}
if !ok {
return nil
}
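The Embed handling added above folds an embedded schema's ResourceFields into its EmbedType target (and RemoveSchema undoes the fold). A hedged sketch of the effect; the IDs, fields, and version are invented, and the Field usage assumes norman's types.Field with a Type string:

package main

import (
	"fmt"

	"github.com/rancher/norman/types"
)

func main() {
	version := types.APIVersion{Version: "v3", Group: "example.cattle.io", Path: "/v3"}

	s := types.NewSchemas().
		AddSchema(types.Schema{
			ID:      "cluster",
			Version: version,
			ResourceFields: map[string]types.Field{
				"name": {Type: "string"},
			},
		}).
		AddSchema(types.Schema{
			ID:        "clusterStatus",
			Version:   version,
			Embed:     true,
			EmbedType: "cluster",
			ResourceFields: map[string]types.Field{
				"phase": {Type: "string"},
			},
		})

	cluster := s.Schema(&version, "cluster")
	fmt.Println(len(cluster.ResourceFields)) // 2: "name" plus the embedded "phase"
}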


@ -148,6 +148,7 @@ type URLBuilder interface {
Sort(field string) string
SetSubContext(subContext string)
FilterLink(schema *Schema, fieldName string, value string) string
Action(action string, resource *RawResource) string
}
type Store interface {
@ -155,6 +156,6 @@ type Store interface {
List(apiContext *APIContext, schema *Schema, opt QueryOptions) ([]map[string]interface{}, error)
Create(apiContext *APIContext, schema *Schema, data map[string]interface{}) (map[string]interface{}, error)
Update(apiContext *APIContext, schema *Schema, data map[string]interface{}, id string) (map[string]interface{}, error)
Delete(apiContext *APIContext, schema *Schema, id string) error
Delete(apiContext *APIContext, schema *Schema, id string) (map[string]interface{}, error)
Watch(apiContext *APIContext, schema *Schema, opt QueryOptions) (chan map[string]interface{}, error)
}


@ -79,6 +79,8 @@ type TypeScope string
type Schema struct {
ID string `json:"id,omitempty"`
Embed bool `json:"embed,omitempty"`
EmbedType string `json:"embedType,omitempty"`
CodeName string `json:"-"`
CodeNamePlural string `json:"-"`
PkgName string `json:"-"`

38 vendor/k8s.io/apimachinery/pkg/util/mergepatch/BUILD generated vendored Normal file

@ -0,0 +1,38 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
"go_test",
)
go_test(
name = "go_default_test",
srcs = ["util_test.go"],
library = ":go_default_library",
)
go_library(
name = "go_default_library",
srcs = [
"errors.go",
"util.go",
],
deps = [
"//vendor/github.com/davecgh/go-spew/spew:go_default_library",
"//vendor/github.com/ghodss/yaml:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)


@ -0,0 +1,5 @@
approvers:
- pwittrock
reviewers:
- mengqiy
- apelisse


@ -0,0 +1,101 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package mergepatch
import (
"errors"
"fmt"
"reflect"
)
var (
ErrBadJSONDoc = errors.New("invalid JSON document")
ErrNoListOfLists = errors.New("lists of lists are not supported")
ErrBadPatchFormatForPrimitiveList = errors.New("invalid patch format of primitive list")
ErrBadPatchFormatForRetainKeys = errors.New("invalid patch format of retainKeys")
ErrBadPatchFormatForSetElementOrderList = errors.New("invalid patch format of setElementOrder list")
ErrPatchContentNotMatchRetainKeys = errors.New("patch content doesn't match retainKeys list")
)
func ErrNoMergeKey(m map[string]interface{}, k string) error {
return fmt.Errorf("map: %v does not contain declared merge key: %s", m, k)
}
func ErrBadArgType(expected, actual interface{}) error {
return fmt.Errorf("expected a %s, but received a %s",
reflect.TypeOf(expected),
reflect.TypeOf(actual))
}
func ErrBadArgKind(expected, actual interface{}) error {
var expectedKindString, actualKindString string
if expected == nil {
expectedKindString = "nil"
} else {
expectedKindString = reflect.TypeOf(expected).Kind().String()
}
if actual == nil {
actualKindString = "nil"
} else {
actualKindString = reflect.TypeOf(actual).Kind().String()
}
return fmt.Errorf("expected a %s, but received a %s", expectedKindString, actualKindString)
}
func ErrBadPatchType(t interface{}, m map[string]interface{}) error {
return fmt.Errorf("unknown patch type: %s in map: %v", t, m)
}
// IsPreconditionFailed returns true if the provided error indicates
// a precondition failed.
func IsPreconditionFailed(err error) bool {
_, ok := err.(ErrPreconditionFailed)
return ok
}
type ErrPreconditionFailed struct {
message string
}
func NewErrPreconditionFailed(target map[string]interface{}) ErrPreconditionFailed {
s := fmt.Sprintf("precondition failed for: %v", target)
return ErrPreconditionFailed{s}
}
func (err ErrPreconditionFailed) Error() string {
return err.message
}
type ErrConflict struct {
message string
}
func NewErrConflict(patch, current string) ErrConflict {
s := fmt.Sprintf("patch:\n%s\nconflicts with changes made from original to current:\n%s\n", patch, current)
return ErrConflict{s}
}
func (err ErrConflict) Error() string {
return err.message
}
// IsConflict returns true if the provided error indicates
// a conflict between the patch and the current configuration.
func IsConflict(err error) bool {
_, ok := err.(ErrConflict)
return ok
}

133 vendor/k8s.io/apimachinery/pkg/util/mergepatch/util.go generated vendored Normal file

@ -0,0 +1,133 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package mergepatch
import (
"fmt"
"reflect"
"github.com/davecgh/go-spew/spew"
"github.com/ghodss/yaml"
)
// PreconditionFunc asserts that an incompatible change is not present within a patch.
type PreconditionFunc func(interface{}) bool
// RequireKeyUnchanged returns a precondition function that fails if the provided key
// is present in the patch (indicating that its value has changed).
func RequireKeyUnchanged(key string) PreconditionFunc {
return func(patch interface{}) bool {
patchMap, ok := patch.(map[string]interface{})
if !ok {
return true
}
// The presence of key means that its value has been changed, so the test fails.
_, ok = patchMap[key]
return !ok
}
}
// RequireMetadataKeyUnchanged creates a precondition function that fails
// if the metadata.key is present in the patch (indicating its value
// has changed).
func RequireMetadataKeyUnchanged(key string) PreconditionFunc {
return func(patch interface{}) bool {
patchMap, ok := patch.(map[string]interface{})
if !ok {
return true
}
patchMap1, ok := patchMap["metadata"]
if !ok {
return true
}
patchMap2, ok := patchMap1.(map[string]interface{})
if !ok {
return true
}
_, ok = patchMap2[key]
return !ok
}
}
func ToYAMLOrError(v interface{}) string {
y, err := toYAML(v)
if err != nil {
return err.Error()
}
return y
}
func toYAML(v interface{}) (string, error) {
y, err := yaml.Marshal(v)
if err != nil {
return "", fmt.Errorf("yaml marshal failed:%v\n%v\n", err, spew.Sdump(v))
}
return string(y), nil
}
// HasConflicts returns true if the left and right JSON interface objects overlap with
// different values in any key. All keys are required to be strings. Since patches of the
// same Type have congruent keys, this is valid for multiple patch types. This method
// supports JSON merge patch semantics.
//
// NOTE: Numbers with different types (e.g. int(0) vs int64(0)) will be detected as conflicts.
// Make sure the unmarshaling of left and right are consistent (e.g. use the same library).
func HasConflicts(left, right interface{}) (bool, error) {
switch typedLeft := left.(type) {
case map[string]interface{}:
switch typedRight := right.(type) {
case map[string]interface{}:
for key, leftValue := range typedLeft {
rightValue, ok := typedRight[key]
if !ok {
continue
}
if conflict, err := HasConflicts(leftValue, rightValue); err != nil || conflict {
return conflict, err
}
}
return false, nil
default:
return true, nil
}
case []interface{}:
switch typedRight := right.(type) {
case []interface{}:
if len(typedLeft) != len(typedRight) {
return true, nil
}
for i := range typedLeft {
if conflict, err := HasConflicts(typedLeft[i], typedRight[i]); err != nil || conflict {
return conflict, err
}
}
return false, nil
default:
return true, nil
}
case string, float64, bool, int, int64, nil:
return !reflect.DeepEqual(left, right), nil
default:
return true, fmt.Errorf("unknown type: %v", reflect.TypeOf(left))
}
}
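A short, hedged demonstration of both helpers (all patch content invented): RequireMetadataKeyUnchanged rejects a patch that touches metadata.name, and HasConflicts reports a key whose value differs between two documents, including the numeric-type caveat from the note above.

package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/util/mergepatch"
)

func main() {
	// Precondition: fail any patch that changes metadata.name.
	pre := mergepatch.RequireMetadataKeyUnchanged("name")
	var patch map[string]interface{}
	_ = json.Unmarshal([]byte(`{"metadata":{"name":"renamed"}}`), &patch)
	fmt.Println(pre(patch)) // false: the patch touches metadata.name

	// Conflict detection: same key, different value.
	left := map[string]interface{}{"replicas": float64(2), "paused": false}
	right := map[string]interface{}{"replicas": float64(3)}
	conflict, err := mergepatch.HasConflicts(left, right)
	fmt.Println(conflict, err) // true <nil>

	// Keys present on only one side never conflict.
	conflict, _ = mergepatch.HasConflicts(
		map[string]interface{}{"a": "x"},
		map[string]interface{}{"b": "y"})
	fmt.Println(conflict) // false
}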


@ -0,0 +1,43 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
"go_test",
)
go_test(
name = "go_default_test",
srcs = ["patch_test.go"],
library = ":go_default_library",
deps = [
"//vendor/github.com/davecgh/go-spew/spew:go_default_library",
"//vendor/github.com/ghodss/yaml:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/runtime:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/mergepatch:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/sets:go_default_library",
],
)
go_library(
name = "go_default_library",
srcs = ["patch.go"],
deps = [
"//vendor/k8s.io/apimachinery/pkg/util/json:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/mergepatch:go_default_library",
"//vendor/k8s.io/apimachinery/third_party/forked/golang/json:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)


@ -0,0 +1,5 @@
approvers:
- pwittrock
reviewers:
- mengqiy
- apelisse

File diff suppressed because it is too large


@ -0,0 +1,31 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
"go_test",
)
go_library(
name = "go_default_library",
srcs = ["fields.go"],
)
go_test(
name = "go_default_test",
srcs = ["fields_test.go"],
library = ":go_default_library",
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)


@ -0,0 +1,5 @@
approvers:
- pwittrock
reviewers:
- mengqiy
- apelisse


@ -0,0 +1,516 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package json is forked from the Go standard library to enable us to find the
// field of a struct that a given JSON key maps to.
package json
import (
"bytes"
"fmt"
"reflect"
"sort"
"strings"
"sync"
"unicode"
"unicode/utf8"
)
const (
patchStrategyTagKey = "patchStrategy"
patchMergeKeyTagKey = "patchMergeKey"
)
// Finds the patchStrategy and patchMergeKey struct tag fields on a given
// struct field given the struct type and the JSON name of the field.
// It returns field type, a slice of patch strategies, merge key and error.
// TODO: fix the returned errors to be introspectable.
func LookupPatchMetadata(t reflect.Type, jsonField string) (
elemType reflect.Type, patchStrategies []string, patchMergeKey string, e error) {
if t.Kind() == reflect.Ptr {
t = t.Elem()
}
if t.Kind() == reflect.Map {
elemType = t.Elem()
return
}
if t.Kind() != reflect.Struct {
e = fmt.Errorf("merging an object in json but data type is not map or struct, instead is: %s",
t.Kind().String())
return
}
jf := []byte(jsonField)
// Find the field that the JSON library would use.
var f *field
fields := cachedTypeFields(t)
for i := range fields {
ff := &fields[i]
if bytes.Equal(ff.nameBytes, jf) {
f = ff
break
}
// Do case-insensitive comparison.
if f == nil && ff.equalFold(ff.nameBytes, jf) {
f = ff
}
}
if f != nil {
// Find the reflect.Value of the most preferential struct field.
tjf := t.Field(f.index[0])
// we must navigate down all the anonymously included structs in the chain
for i := 1; i < len(f.index); i++ {
tjf = tjf.Type.Field(f.index[i])
}
patchStrategy := tjf.Tag.Get(patchStrategyTagKey)
patchMergeKey = tjf.Tag.Get(patchMergeKeyTagKey)
patchStrategies = strings.Split(patchStrategy, ",")
elemType = tjf.Type
return
}
e = fmt.Errorf("unable to find api field in struct %s for the json field %q", t.Name(), jsonField)
return
}
// A field represents a single field found in a struct.
type field struct {
name string
nameBytes []byte // []byte(name)
equalFold func(s, t []byte) bool // bytes.EqualFold or equivalent
tag bool
// index is the sequence of indexes from the containing type fields to this field.
// it is a slice because anonymous structs will need multiple navigation steps to correctly
// resolve the proper fields
index []int
typ reflect.Type
omitEmpty bool
quoted bool
}
func (f field) String() string {
return fmt.Sprintf("{name: %s, type: %v, tag: %v, index: %v, omitEmpty: %v, quoted: %v}", f.name, f.typ, f.tag, f.index, f.omitEmpty, f.quoted)
}
func fillField(f field) field {
f.nameBytes = []byte(f.name)
f.equalFold = foldFunc(f.nameBytes)
return f
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from json tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that JSON should recognize for the given type.
// The algorithm is breadth-first search over the set of structs to include - the top struct
// and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" { // unexported
continue
}
tag := sf.Tag.Get("json")
if tag == "-" {
continue
}
name, opts := parseTag(tag)
if !isValidTag(name) {
name = ""
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := name != ""
if name == "" {
name = sf.Name
}
fields = append(fields, fillField(field{
name: name,
tag: tagged,
index: index,
typ: ft,
omitEmpty: opts.Contains("omitempty"),
quoted: opts.Contains("string"),
}))
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
next = append(next, fillField(field{name: ft.Name(), index: index, typ: ft}))
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with JSON tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// JSON tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
func isValidTag(s string) bool {
if s == "" {
return false
}
for _, c := range s {
switch {
case strings.ContainsRune("!#$%&()*+-./:<=>?@[]^_{|}~ ", c):
// Backslash and quote chars are reserved, but
// otherwise any punctuation chars are allowed
// in a tag name.
default:
if !unicode.IsLetter(c) && !unicode.IsDigit(c) {
return false
}
}
}
return true
}
const (
caseMask = ^byte(0x20) // Mask to ignore case in ASCII.
kelvin = '\u212a'
smallLongEss = '\u017f'
)
// foldFunc returns one of four different case folding equivalence
// functions, from most general (and slow) to fastest:
//
// 1) bytes.EqualFold, if the key s contains any non-ASCII UTF-8
// 2) equalFoldRight, if s contains special folding ASCII ('k', 'K', 's', 'S')
// 3) asciiEqualFold, no special, but includes non-letters (including _)
// 4) simpleLetterEqualFold, no specials, no non-letters.
//
// The letters S and K are special because they map to 3 runes, not just 2:
// * S maps to s and to U+017F 'ſ' Latin small letter long s
* k maps to K and to U+212A 'K' Kelvin sign
// See http://play.golang.org/p/tTxjOc0OGo
//
// The returned function is specialized for matching against s and
// should only be given s. It's not curried for performance reasons.
func foldFunc(s []byte) func(s, t []byte) bool {
nonLetter := false
special := false // special letter
for _, b := range s {
if b >= utf8.RuneSelf {
return bytes.EqualFold
}
upper := b & caseMask
if upper < 'A' || upper > 'Z' {
nonLetter = true
} else if upper == 'K' || upper == 'S' {
// See above for why these letters are special.
special = true
}
}
if special {
return equalFoldRight
}
if nonLetter {
return asciiEqualFold
}
return simpleLetterEqualFold
}
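// foldFuncExample is an illustrative sketch (not part of the vendored source)
// of the specialization: "kind" contains the special letter 'k', so foldFunc
// returns equalFoldRight, and the key matches both a plain ASCII case variant
// and the Kelvin sign K (U+212A).
func foldFuncExample() bool {
	key := []byte("kind")
	fold := foldFunc(key)
	return fold(key, []byte("KIND")) && fold(key, []byte("\u212aind"))
}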
// equalFoldRight is a specialization of bytes.EqualFold when s is
// known to be all ASCII (including punctuation), but contains an 's',
// 'S', 'k', or 'K', requiring a Unicode fold on the bytes in t.
// See comments on foldFunc.
func equalFoldRight(s, t []byte) bool {
for _, sb := range s {
if len(t) == 0 {
return false
}
tb := t[0]
if tb < utf8.RuneSelf {
if sb != tb {
sbUpper := sb & caseMask
if 'A' <= sbUpper && sbUpper <= 'Z' {
if sbUpper != tb&caseMask {
return false
}
} else {
return false
}
}
t = t[1:]
continue
}
// sb is ASCII and t is not. t must be either kelvin
// sign or long s; sb must be s, S, k, or K.
tr, size := utf8.DecodeRune(t)
switch sb {
case 's', 'S':
if tr != smallLongEss {
return false
}
case 'k', 'K':
if tr != kelvin {
return false
}
default:
return false
}
t = t[size:]
}
if len(t) > 0 {
return false
}
return true
}
// asciiEqualFold is a specialization of bytes.EqualFold for use when
// s is all ASCII (but may contain non-letters) and contains no
// special-folding letters.
// See comments on foldFunc.
func asciiEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, sb := range s {
tb := t[i]
if sb == tb {
continue
}
if ('a' <= sb && sb <= 'z') || ('A' <= sb && sb <= 'Z') {
if sb&caseMask != tb&caseMask {
return false
}
} else {
return false
}
}
return true
}
// simpleLetterEqualFold is a specialization of bytes.EqualFold for
// use when s is all ASCII letters (no underscores, etc) and also
// doesn't contain 'k', 'K', 's', or 'S'.
// See comments on foldFunc.
func simpleLetterEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, b := range s {
if b&caseMask != t[i]&caseMask {
return false
}
}
return true
}
// tagOptions is the string following a comma in a struct field's "json"
// tag, or the empty string. It does not include the leading comma.
type tagOptions string
// parseTag splits a struct field's json tag into its name and
// comma-separated options.
func parseTag(tag string) (string, tagOptions) {
if idx := strings.Index(tag, ","); idx != -1 {
return tag[:idx], tagOptions(tag[idx+1:])
}
return tag, tagOptions("")
}
// Contains reports whether a comma-separated list of options
// contains a particular optionName flag. optionName must be surrounded
// by a string boundary or commas.
func (o tagOptions) Contains(optionName string) bool {
if len(o) == 0 {
return false
}
s := string(o)
for s != "" {
var next string
i := strings.Index(s, ",")
if i >= 0 {
s, next = s[:i], s[i+1:]
}
if s == optionName {
return true
}
s = next
}
return false
}
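// parseTagExample is an illustrative sketch (not part of the vendored source)
// of the two helpers above: the tag value splits into a name and trailing
// options, and Contains answers flag queries against those options.
func parseTagExample() (string, bool) {
	name, opts := parseTag("price,omitempty,string")
	return name, opts.Contains("omitempty") // "price", true
}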

67
vendor/k8s.io/client-go/tools/record/BUILD generated vendored Normal file

@ -0,0 +1,67 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
"go_test",
)
go_test(
name = "go_default_test",
srcs = [
"event_test.go",
"events_cache_test.go",
],
library = ":go_default_library",
deps = [
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/api/errors:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/runtime:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/clock:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/diff:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/strategicpatch:go_default_library",
"//vendor/k8s.io/client-go/kubernetes/scheme:go_default_library",
"//vendor/k8s.io/client-go/rest:go_default_library",
"//vendor/k8s.io/client-go/tools/reference:go_default_library",
],
)
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"event.go",
"events_cache.go",
"fake.go",
],
deps = [
"//vendor/github.com/golang/glog:go_default_library",
"//vendor/github.com/golang/groupcache/lru:go_default_library",
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/api/errors:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/runtime:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/clock:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/runtime:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/sets:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/strategicpatch:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/watch:go_default_library",
"//vendor/k8s.io/client-go/rest:go_default_library",
"//vendor/k8s.io/client-go/tools/reference:go_default_library",
"//vendor/k8s.io/client-go/util/flowcontrol:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

27
vendor/k8s.io/client-go/tools/record/OWNERS generated vendored Executable file

@ -0,0 +1,27 @@
reviewers:
- lavalamp
- smarterclayton
- wojtek-t
- deads2k
- derekwaynecarr
- caesarxuchao
- vishh
- mikedanese
- liggitt
- nikhiljindal
- erictune
- pmorie
- dchen1107
- saad-ali
- luxas
- yifan-gu
- eparis
- mwielgus
- timothysc
- jsafrane
- dims
- krousey
- a-robinson
- aveshagarwal
- resouer
- cjcullen

18
vendor/k8s.io/client-go/tools/record/doc.go generated vendored Normal file

@ -0,0 +1,18 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package record has all client logic for recording and reporting events.
package record // import "k8s.io/client-go/tools/record"

318
vendor/k8s.io/client-go/tools/record/event.go generated vendored Normal file

@ -0,0 +1,318 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package record
import (
"fmt"
"math/rand"
"time"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/watch"
restclient "k8s.io/client-go/rest"
ref "k8s.io/client-go/tools/reference"
"net/http"
"github.com/golang/glog"
)
const maxTriesPerEvent = 12
var defaultSleepDuration = 10 * time.Second
const maxQueuedEvents = 1000
// EventSink knows how to store events (client.Client implements it.)
// EventSink must respect the namespace that will be embedded in 'event'.
// It is assumed that EventSink will return the same sorts of errors as
// pkg/client's REST client.
type EventSink interface {
Create(event *v1.Event) (*v1.Event, error)
Update(event *v1.Event) (*v1.Event, error)
Patch(oldEvent *v1.Event, data []byte) (*v1.Event, error)
}
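// memorySink is an illustrative sketch (not part of the vendored source) of a
// trivial EventSink: it stores events in memory, which can stand in for the
// real API-server-backed sink in tests. Create, Update, and Patch mirror the
// REST client's verbs.
type memorySink struct {
	events []*v1.Event
}

func (s *memorySink) Create(event *v1.Event) (*v1.Event, error) {
	s.events = append(s.events, event)
	return event, nil
}

func (s *memorySink) Update(event *v1.Event) (*v1.Event, error) {
	return event, nil
}

func (s *memorySink) Patch(oldEvent *v1.Event, data []byte) (*v1.Event, error) {
	return oldEvent, nil
}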
// EventRecorder knows how to record events on behalf of an EventSource.
type EventRecorder interface {
// Event constructs an event from the given information and puts it in the queue for sending.
// 'object' is the object this event is about. Event will make a reference to it, or you may
// also pass a reference to the object directly.
// 'eventtype' is the type of this event, and can be one of Normal or Warning. New types may
// be added in the future.
// 'reason' is the reason this event is generated. 'reason' should be short and unique; it
// should be in UpperCamelCase format (starting with a capital letter). "reason" will be used
// to automate handling of events, so imagine people writing switch statements to handle them.
// You want to make that easy.
// 'message' is intended to be human readable.
//
// The resulting event will be created in the same namespace as the reference object.
Event(object runtime.Object, eventtype, reason, message string)
// Eventf is just like Event, but with Sprintf for the message field.
Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{})
// PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{})
}
// EventBroadcaster knows how to receive events and send them to any EventSink, watcher, or log.
type EventBroadcaster interface {
// StartEventWatcher starts sending events received from this EventBroadcaster to the given
// event handler function. The return value can be ignored or used to stop recording, if
// desired.
StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface
// StartRecordingToSink starts sending events received from this EventBroadcaster to the given
// sink. The return value can be ignored or used to stop recording, if desired.
StartRecordingToSink(sink EventSink) watch.Interface
// StartLogging starts sending events received from this EventBroadcaster to the given logging
// function. The return value can be ignored or used to stop recording, if desired.
StartLogging(logf func(format string, args ...interface{})) watch.Interface
// NewRecorder returns an EventRecorder that can be used to send events to this EventBroadcaster
// with the event source set to the given event source.
NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder
}
// Creates a new event broadcaster.
func NewBroadcaster() EventBroadcaster {
return &eventBroadcasterImpl{watch.NewBroadcaster(maxQueuedEvents, watch.DropIfChannelFull), defaultSleepDuration}
}
func NewBroadcasterForTests(sleepDuration time.Duration) EventBroadcaster {
return &eventBroadcasterImpl{watch.NewBroadcaster(maxQueuedEvents, watch.DropIfChannelFull), sleepDuration}
}
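// wireExample is an illustrative sketch (not part of the vendored source) of
// the typical wiring: one broadcaster fans events out to a log and a sink,
// and a recorder stamped with an event source feeds it. The scheme and sink
// are assumed to be supplied by the caller.
func wireExample(scheme *runtime.Scheme, sink EventSink) (EventRecorder, watch.Interface) {
	broadcaster := NewBroadcaster()
	broadcaster.StartLogging(glog.Infof)
	stop := broadcaster.StartRecordingToSink(sink)
	recorder := broadcaster.NewRecorder(scheme, v1.EventSource{Component: "example-controller"})
	return recorder, stop
}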
type eventBroadcasterImpl struct {
*watch.Broadcaster
sleepDuration time.Duration
}
// StartRecordingToSink starts sending events received from the specified eventBroadcaster to the given sink.
// The return value can be ignored or used to stop recording, if desired.
// TODO: make me an object with parameterizable queue length and retry interval
func (eventBroadcaster *eventBroadcasterImpl) StartRecordingToSink(sink EventSink) watch.Interface {
// The default math/rand package functions aren't thread safe, so create a
// new Rand object for each StartRecording call.
randGen := rand.New(rand.NewSource(time.Now().UnixNano()))
eventCorrelator := NewEventCorrelator(clock.RealClock{})
return eventBroadcaster.StartEventWatcher(
func(event *v1.Event) {
recordToSink(sink, event, eventCorrelator, randGen, eventBroadcaster.sleepDuration)
})
}
func recordToSink(sink EventSink, event *v1.Event, eventCorrelator *EventCorrelator, randGen *rand.Rand, sleepDuration time.Duration) {
// Make a copy before modification, because there could be multiple listeners.
// Events are safe to copy like this.
eventCopy := *event
event = &eventCopy
result, err := eventCorrelator.EventCorrelate(event)
if err != nil {
utilruntime.HandleError(err)
}
if result.Skip {
return
}
tries := 0
for {
if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) {
break
}
tries++
if tries >= maxTriesPerEvent {
glog.Errorf("Unable to write event '%#v' (retry limit exceeded!)", event)
break
}
// Randomize the first sleep so that various clients won't all be
// synced up if the master goes down.
if tries == 1 {
time.Sleep(time.Duration(float64(sleepDuration) * randGen.Float64()))
} else {
time.Sleep(sleepDuration)
}
}
}
func isKeyNotFoundError(err error) bool {
statusErr, _ := err.(*errors.StatusError)
if statusErr != nil && statusErr.Status().Code == http.StatusNotFound {
return true
}
return false
}
// recordEvent attempts to write event to a sink. It returns true if the event
// was successfully recorded or discarded, false if it should be retried.
// If updateExistingEvent is false, it creates a new event, otherwise it updates
// existing event.
func recordEvent(sink EventSink, event *v1.Event, patch []byte, updateExistingEvent bool, eventCorrelator *EventCorrelator) bool {
var newEvent *v1.Event
var err error
if updateExistingEvent {
newEvent, err = sink.Patch(event, patch)
}
// Update can fail because the event may have been removed and it no longer exists.
if !updateExistingEvent || (updateExistingEvent && isKeyNotFoundError(err)) {
// Making sure that ResourceVersion is empty on creation
event.ResourceVersion = ""
newEvent, err = sink.Create(event)
}
if err == nil {
// we need to update our event correlator with the server returned state to handle name/resourceversion
eventCorrelator.UpdateState(newEvent)
return true
}
// If we can't contact the server, then hold everything while we keep trying.
// Otherwise, something about the event is malformed and we should abandon it.
switch err.(type) {
case *restclient.RequestConstructionError:
// We will construct the request the same next time, so don't keep trying.
glog.Errorf("Unable to construct event '%#v': '%v' (will not retry!)", event, err)
return true
case *errors.StatusError:
if errors.IsAlreadyExists(err) {
glog.V(5).Infof("Server rejected event '%#v': '%v' (will not retry!)", event, err)
} else {
glog.Errorf("Server rejected event '%#v': '%v' (will not retry!)", event, err)
}
return true
case *errors.UnexpectedObjectError:
// We don't expect this; it implies the server's response didn't match a
// known pattern. Go ahead and retry.
default:
// This case includes actual http transport errors. Go ahead and retry.
}
glog.Errorf("Unable to write event: '%v' (may retry after sleeping)", err)
return false
}
// StartLogging starts sending events received from this EventBroadcaster to the given logging function.
// The return value can be ignored or used to stop recording, if desired.
func (eventBroadcaster *eventBroadcasterImpl) StartLogging(logf func(format string, args ...interface{})) watch.Interface {
return eventBroadcaster.StartEventWatcher(
func(e *v1.Event) {
logf("Event(%#v): type: '%v' reason: '%v' %v", e.InvolvedObject, e.Type, e.Reason, e.Message)
})
}
// StartEventWatcher starts sending events received from this EventBroadcaster to the given event handler function.
// The return value can be ignored or used to stop recording, if desired.
func (eventBroadcaster *eventBroadcasterImpl) StartEventWatcher(eventHandler func(*v1.Event)) watch.Interface {
watcher := eventBroadcaster.Watch()
go func() {
defer utilruntime.HandleCrash()
for {
watchEvent, open := <-watcher.ResultChan()
if !open {
return
}
event, ok := watchEvent.Object.(*v1.Event)
if !ok {
// This is all local, so there's no reason this should
// ever happen.
continue
}
eventHandler(event)
}
}()
return watcher
}
// NewRecorder returns an EventRecorder that records events with the given event source.
func (eventBroadcaster *eventBroadcasterImpl) NewRecorder(scheme *runtime.Scheme, source v1.EventSource) EventRecorder {
return &recorderImpl{scheme, source, eventBroadcaster.Broadcaster, clock.RealClock{}}
}
type recorderImpl struct {
scheme *runtime.Scheme
source v1.EventSource
*watch.Broadcaster
clock clock.Clock
}
func (recorder *recorderImpl) generateEvent(object runtime.Object, timestamp metav1.Time, eventtype, reason, message string) {
ref, err := ref.GetReference(recorder.scheme, object)
if err != nil {
glog.Errorf("Could not construct reference to: '%#v' due to: '%v'. Will not report event: '%v' '%v' '%v'", object, err, eventtype, reason, message)
return
}
if !validateEventType(eventtype) {
glog.Errorf("Unsupported event type: '%v'", eventtype)
return
}
event := recorder.makeEvent(ref, eventtype, reason, message)
event.Source = recorder.source
go func() {
// NOTE: events should be a non-blocking operation
defer utilruntime.HandleCrash()
recorder.Action(watch.Added, event)
}()
}
func validateEventType(eventtype string) bool {
switch eventtype {
case v1.EventTypeNormal, v1.EventTypeWarning:
return true
}
return false
}
func (recorder *recorderImpl) Event(object runtime.Object, eventtype, reason, message string) {
recorder.generateEvent(object, metav1.Now(), eventtype, reason, message)
}
func (recorder *recorderImpl) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{}) {
recorder.Event(object, eventtype, reason, fmt.Sprintf(messageFmt, args...))
}
func (recorder *recorderImpl) PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{}) {
recorder.generateEvent(object, timestamp, eventtype, reason, fmt.Sprintf(messageFmt, args...))
}
func (recorder *recorderImpl) makeEvent(ref *v1.ObjectReference, eventtype, reason, message string) *v1.Event {
t := metav1.Time{Time: recorder.clock.Now()}
namespace := ref.Namespace
if namespace == "" {
namespace = metav1.NamespaceDefault
}
return &v1.Event{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%v.%x", ref.Name, t.UnixNano()),
Namespace: namespace,
},
InvolvedObject: *ref,
Reason: reason,
Message: message,
FirstTimestamp: t,
LastTimestamp: t,
Count: 1,
Type: eventtype,
}
}

467
vendor/k8s.io/client-go/tools/record/events_cache.go generated vendored Normal file

@ -0,0 +1,467 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package record
import (
"encoding/json"
"fmt"
"strings"
"sync"
"time"
"github.com/golang/groupcache/lru"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/clock"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/strategicpatch"
"k8s.io/client-go/util/flowcontrol"
)
const (
maxLruCacheEntries = 4096
// if we see the same event that varies only by message
// more than 10 times in a 10 minute period, aggregate the event
defaultAggregateMaxEvents = 10
defaultAggregateIntervalInSeconds = 600
// by default, allow a source to send 25 events about an object
// but control the refill rate to 1 new event every 5 minutes
// this helps control the long-tail of events for things that are always
// unhealthy
defaultSpamBurst = 25
defaultSpamQPS = 1. / 300.
)
// getEventKey builds unique event key based on source, involvedObject, reason, message
func getEventKey(event *v1.Event) string {
return strings.Join([]string{
event.Source.Component,
event.Source.Host,
event.InvolvedObject.Kind,
event.InvolvedObject.Namespace,
event.InvolvedObject.Name,
event.InvolvedObject.FieldPath,
string(event.InvolvedObject.UID),
event.InvolvedObject.APIVersion,
event.Type,
event.Reason,
event.Message,
},
"")
}
// getSpamKey builds unique event key based on source, involvedObject
func getSpamKey(event *v1.Event) string {
return strings.Join([]string{
event.Source.Component,
event.Source.Host,
event.InvolvedObject.Kind,
event.InvolvedObject.Namespace,
event.InvolvedObject.Name,
string(event.InvolvedObject.UID),
event.InvolvedObject.APIVersion,
},
"")
}
// EventFilterFunc is a function that returns true if the event should be skipped
type EventFilterFunc func(event *v1.Event) bool
// DefaultEventFilterFunc returns false for all incoming events
func DefaultEventFilterFunc(event *v1.Event) bool {
return false
}
// EventSourceObjectSpamFilter is responsible for throttling
// the amount of events a source and object can produce.
type EventSourceObjectSpamFilter struct {
sync.RWMutex
// the cache that manages last synced state
cache *lru.Cache
// burst is the amount of events we allow per source + object
burst int
// qps is the refill rate of the token bucket in queries per second
qps float32
// clock is used to allow for testing over a time interval
clock clock.Clock
}
// NewEventSourceObjectSpamFilter allows burst events from a source about an object with the specified qps refill.
func NewEventSourceObjectSpamFilter(lruCacheSize, burst int, qps float32, clock clock.Clock) *EventSourceObjectSpamFilter {
return &EventSourceObjectSpamFilter{
cache: lru.New(lruCacheSize),
burst: burst,
qps: qps,
clock: clock,
}
}
// spamRecord holds data used to perform spam filtering decisions.
type spamRecord struct {
// rateLimiter controls the rate of events about this object
rateLimiter flowcontrol.RateLimiter
}
// Filter checks that a given source+object pair is not exceeding the allowed
// event rate, and returns true when the event should be filtered out.
func (f *EventSourceObjectSpamFilter) Filter(event *v1.Event) bool {
var record spamRecord
// controls our cached information about this event (source+object)
eventKey := getSpamKey(event)
// do we have a record of similar events in our cache?
f.Lock()
defer f.Unlock()
value, found := f.cache.Get(eventKey)
if found {
record = value.(spamRecord)
}
// verify we have a rate limiter for this record
if record.rateLimiter == nil {
record.rateLimiter = flowcontrol.NewTokenBucketRateLimiterWithClock(f.qps, f.burst, f.clock)
}
// ensure we have available rate
filter := !record.rateLimiter.TryAccept()
// update the cache
f.cache.Add(eventKey, record)
return filter
}
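// spamFilterExample is an illustrative sketch (not part of the vendored
// source): with a burst of 2, the default trickle refill, and the token
// bucket starting full, the third similar event about the same object is
// filtered.
func spamFilterExample(event *v1.Event) bool {
	f := NewEventSourceObjectSpamFilter(maxLruCacheEntries, 2, defaultSpamQPS, clock.RealClock{})
	f.Filter(event)         // consumes the first token
	f.Filter(event)         // consumes the second token
	return f.Filter(event) // true: no tokens left, so the event is filtered
}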
// EventAggregatorKeyFunc is responsible for grouping events for aggregation
// It returns a tuple of the following:
// aggregateKey - key that identifies the aggregate group to bucket this event
// localKey - key that identifies this event within the aggregate group
type EventAggregatorKeyFunc func(event *v1.Event) (aggregateKey string, localKey string)
// EventAggregatorByReasonFunc aggregates events by exact match on event.Source, event.InvolvedObject, event.Type and event.Reason
func EventAggregatorByReasonFunc(event *v1.Event) (string, string) {
return strings.Join([]string{
event.Source.Component,
event.Source.Host,
event.InvolvedObject.Kind,
event.InvolvedObject.Namespace,
event.InvolvedObject.Name,
string(event.InvolvedObject.UID),
event.InvolvedObject.APIVersion,
event.Type,
event.Reason,
},
""), event.Message
}
// EventAggregatorMessageFunc is responsible for producing an aggregation message
type EventAggregatorMessageFunc func(event *v1.Event) string
// EventAggregatorByReasonMessageFunc returns an aggregate message by prefixing the incoming message
func EventAggregatorByReasonMessageFunc(event *v1.Event) string {
return "(combined from similar events): " + event.Message
}
// EventAggregator identifies similar events and aggregates them into a single event
type EventAggregator struct {
sync.RWMutex
// The cache that manages aggregation state
cache *lru.Cache
// The function that groups events for aggregation
keyFunc EventAggregatorKeyFunc
// The function that generates a message for an aggregate event
messageFunc EventAggregatorMessageFunc
// The maximum number of events in the specified interval before aggregation occurs
maxEvents uint
// The amount of time in seconds that must transpire since the last occurrence of a similar event before it's considered new
maxIntervalInSeconds uint
// clock is used to allow for testing over a time interval
clock clock.Clock
}
// NewEventAggregator returns a new instance of an EventAggregator
func NewEventAggregator(lruCacheSize int, keyFunc EventAggregatorKeyFunc, messageFunc EventAggregatorMessageFunc,
maxEvents int, maxIntervalInSeconds int, clock clock.Clock) *EventAggregator {
return &EventAggregator{
cache: lru.New(lruCacheSize),
keyFunc: keyFunc,
messageFunc: messageFunc,
maxEvents: uint(maxEvents),
maxIntervalInSeconds: uint(maxIntervalInSeconds),
clock: clock,
}
}
// aggregateRecord holds data used to perform aggregation decisions
type aggregateRecord struct {
// we track the number of unique local keys we have seen in the aggregate set to know when to actually aggregate
// if the size of this set exceeds the max, we know we need to aggregate
localKeys sets.String
// The last time at which the aggregate was recorded
lastTimestamp metav1.Time
}
// EventAggregate checks if a similar event has been seen according to the
// aggregation configuration (max events, max interval, etc) and returns:
//
// - The (potentially modified) event that should be created
// - The cache key for the event, for correlation purposes. This will be set to
// the full key for normal events, and to the aggregate key produced by
// EventAggregatorKeyFunc for aggregate events.
func (e *EventAggregator) EventAggregate(newEvent *v1.Event) (*v1.Event, string) {
now := metav1.NewTime(e.clock.Now())
var record aggregateRecord
// eventKey is the full cache key for this event
eventKey := getEventKey(newEvent)
// aggregateKey is for the aggregate event, if one is needed.
aggregateKey, localKey := e.keyFunc(newEvent)
// Do we have a record of similar events in our cache?
e.Lock()
defer e.Unlock()
value, found := e.cache.Get(aggregateKey)
if found {
record = value.(aggregateRecord)
}
// Is the previous record too old? If so, make a fresh one. Note: if we didn't
// find a similar record, its lastTimestamp will be the zero value, so we
// create a new one in that case.
maxInterval := time.Duration(e.maxIntervalInSeconds) * time.Second
interval := now.Time.Sub(record.lastTimestamp.Time)
if interval > maxInterval {
record = aggregateRecord{localKeys: sets.NewString()}
}
// Write the new event into the aggregation record and put it on the cache
record.localKeys.Insert(localKey)
record.lastTimestamp = now
e.cache.Add(aggregateKey, record)
// If we are not yet over the threshold for unique events, don't correlate them
if uint(record.localKeys.Len()) < e.maxEvents {
return newEvent, eventKey
}
// do not grow our local key set any larger than max
record.localKeys.PopAny()
// create a new aggregate event, and return the aggregateKey as the cache key
// (so that it can be overwritten.)
eventCopy := &v1.Event{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%v.%x", newEvent.InvolvedObject.Name, now.UnixNano()),
Namespace: newEvent.Namespace,
},
Count: 1,
FirstTimestamp: now,
InvolvedObject: newEvent.InvolvedObject,
LastTimestamp: now,
Message: e.messageFunc(newEvent),
Type: newEvent.Type,
Reason: newEvent.Reason,
Source: newEvent.Source,
}
return eventCopy, aggregateKey
}
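// aggregateExample is an illustrative sketch (not part of the vendored
// source): once more than defaultAggregateMaxEvents similar events (same
// reason, differing messages) arrive within the interval, EventAggregate
// starts returning a combined event keyed by the aggregate key.
func aggregateExample(events []*v1.Event) *v1.Event {
	agg := NewEventAggregator(maxLruCacheEntries, EventAggregatorByReasonFunc,
		EventAggregatorByReasonMessageFunc, defaultAggregateMaxEvents,
		defaultAggregateIntervalInSeconds, clock.RealClock{})
	var last *v1.Event
	for _, ev := range events {
		last, _ = agg.EventAggregate(ev)
	}
	return last
}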
// eventLog records data about when an event was observed
type eventLog struct {
// The number of times the event has occurred since first occurrence.
count uint
// The time at which the event was first recorded.
firstTimestamp metav1.Time
// The unique name of the first occurrence of this event
name string
// Resource version returned from previous interaction with server
resourceVersion string
}
// eventLogger logs occurrences of an event
type eventLogger struct {
sync.RWMutex
cache *lru.Cache
clock clock.Clock
}
// newEventLogger observes events and counts their frequencies
func newEventLogger(lruCacheEntries int, clock clock.Clock) *eventLogger {
return &eventLogger{cache: lru.New(lruCacheEntries), clock: clock}
}
// eventObserve records an event, or updates an existing one if key is a cache hit
func (e *eventLogger) eventObserve(newEvent *v1.Event, key string) (*v1.Event, []byte, error) {
var (
patch []byte
err error
)
eventCopy := *newEvent
event := &eventCopy
e.Lock()
defer e.Unlock()
// Check if there is an existing event we should update
lastObservation := e.lastEventObservationFromCache(key)
// If we found a result, prepare a patch
if lastObservation.count > 0 {
// update the event based on the last observation so patch will work as desired
event.Name = lastObservation.name
event.ResourceVersion = lastObservation.resourceVersion
event.FirstTimestamp = lastObservation.firstTimestamp
event.Count = int32(lastObservation.count) + 1
eventCopy2 := *event
eventCopy2.Count = 0
eventCopy2.LastTimestamp = metav1.NewTime(time.Unix(0, 0))
eventCopy2.Message = ""
newData, _ := json.Marshal(event)
oldData, _ := json.Marshal(eventCopy2)
patch, err = strategicpatch.CreateTwoWayMergePatch(oldData, newData, event)
}
// record our new observation
e.cache.Add(
key,
eventLog{
count: uint(event.Count),
firstTimestamp: event.FirstTimestamp,
name: event.Name,
resourceVersion: event.ResourceVersion,
},
)
return event, patch, err
}
// updateState updates its internal tracking information based on latest server state
func (e *eventLogger) updateState(event *v1.Event) {
key := getEventKey(event)
e.Lock()
defer e.Unlock()
// record our new observation
e.cache.Add(
key,
eventLog{
count: uint(event.Count),
firstTimestamp: event.FirstTimestamp,
name: event.Name,
resourceVersion: event.ResourceVersion,
},
)
}
// lastEventObservationFromCache returns the event from the cache, reads must be protected via external lock
func (e *eventLogger) lastEventObservationFromCache(key string) eventLog {
value, ok := e.cache.Get(key)
if ok {
observationValue, ok := value.(eventLog)
if ok {
return observationValue
}
}
return eventLog{}
}
// EventCorrelator processes all incoming events and performs analysis to avoid overwhelming the system. It can filter all
// incoming events to see if the event should be filtered from further processing. It can aggregate similar events that occur
// frequently to protect the system from spamming events that are difficult for users to distinguish. It performs de-duplication
// to ensure events that are observed multiple times are compacted into a single event with increasing counts.
type EventCorrelator struct {
// the function to filter the event
filterFunc EventFilterFunc
// the object that performs event aggregation
aggregator *EventAggregator
// the object that observes events as they come through
logger *eventLogger
}
// EventCorrelateResult is the result of a Correlate
type EventCorrelateResult struct {
// the event after correlation
Event *v1.Event
// if provided, perform a strategic patch when updating the record on the server
Patch []byte
// if true, do no further processing of the event
Skip bool
}
// NewEventCorrelator returns an EventCorrelator configured with default values.
//
// The EventCorrelator is responsible for event filtering, aggregating, and counting
// prior to interacting with the API server to record the event.
//
// The default behavior is as follows:
// * Aggregation is performed if a similar event is recorded 10 times in a
//   10 minute rolling interval. A similar event is an event that varies only by
// the Event.Message field. Rather than recording the precise event, aggregation
// will create a new event whose message reports that it has combined events with
// the same reason.
// * Events are incrementally counted if the exact same event is encountered multiple
// times.
// * A source may burst 25 events about an object, but has a refill rate budget
// per object of 1 event every 5 minutes to control long-tail of spam.
func NewEventCorrelator(clock clock.Clock) *EventCorrelator {
cacheSize := maxLruCacheEntries
spamFilter := NewEventSourceObjectSpamFilter(cacheSize, defaultSpamBurst, defaultSpamQPS, clock)
return &EventCorrelator{
filterFunc: spamFilter.Filter,
aggregator: NewEventAggregator(
cacheSize,
EventAggregatorByReasonFunc,
EventAggregatorByReasonMessageFunc,
defaultAggregateMaxEvents,
defaultAggregateIntervalInSeconds,
clock),
logger: newEventLogger(cacheSize, clock),
}
}
// EventCorrelate filters, aggregates, counts, and de-duplicates all incoming events
func (c *EventCorrelator) EventCorrelate(newEvent *v1.Event) (*EventCorrelateResult, error) {
if newEvent == nil {
return nil, fmt.Errorf("event is nil")
}
aggregateEvent, ckey := c.aggregator.EventAggregate(newEvent)
observedEvent, patch, err := c.logger.eventObserve(aggregateEvent, ckey)
if c.filterFunc(observedEvent) {
return &EventCorrelateResult{Skip: true}, nil
}
return &EventCorrelateResult{Event: observedEvent, Patch: patch}, err
}
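// correlateExample is an illustrative sketch (not part of the vendored
// source) of consuming a result: skip filtered events, and use the presence
// of a patch to decide between creating a new event and patching the
// existing one, much as recordToSink does.
func correlateExample(c *EventCorrelator, event *v1.Event) (create bool, patch []byte, err error) {
	result, err := c.EventCorrelate(event)
	if err != nil || result.Skip {
		return false, nil, err
	}
	// A nil patch means this is the first observation, so it should be created.
	return result.Patch == nil, result.Patch, nil
}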
// UpdateState based on the latest observed state from server
func (c *EventCorrelator) UpdateState(event *v1.Event) {
c.logger.updateState(event)
}

54
vendor/k8s.io/client-go/tools/record/fake.go generated vendored Normal file

@ -0,0 +1,54 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package record
import (
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// FakeRecorder is used as a fake during tests. It is thread safe. It is usable
// when created manually rather than by NewFakeRecorder; in that case, however,
// all events may be thrown away.
type FakeRecorder struct {
Events chan string
}
func (f *FakeRecorder) Event(object runtime.Object, eventtype, reason, message string) {
if f.Events != nil {
f.Events <- fmt.Sprintf("%s %s %s", eventtype, reason, message)
}
}
func (f *FakeRecorder) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{}) {
if f.Events != nil {
f.Events <- fmt.Sprintf(eventtype+" "+reason+" "+messageFmt, args...)
}
}
func (f *FakeRecorder) PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{}) {
}
// NewFakeRecorder creates new fake event recorder with event channel with
// buffer of given size.
func NewFakeRecorder(bufferSize int) *FakeRecorder {
return &FakeRecorder{
Events: make(chan string, bufferSize),
}
}
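// fakeRecorderExample is an illustrative sketch (not part of the vendored
// source) of the intended test usage: hand the recorder to the code under
// test, then assert on the formatted strings arriving on Events.
func fakeRecorderExample() string {
	recorder := NewFakeRecorder(10)
	recorder.Eventf(nil, "Normal", "Created", "created pod %s", "web-0")
	return <-recorder.Events // "Normal Created created pod web-0"
}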