Merge pull request #62736 from cblecker/remove-unneeded-deps

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove unneeded deps from vendor

**What this PR does / why we need it**:
#58784 removed the initresource admission plugin, but didn't remove its dependencies.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #62696.

**Release note**:
```release-note
NONE
```
Kubernetes Submit Queue 2018-04-17 15:14:28 -07:00 committed by GitHub
commit cdba0ce3e3
22 changed files with 2 additions and 5981 deletions

Godeps/Godeps.json generated (20 changed lines)

@@ -1,6 +1,6 @@
{
"ImportPath": "k8s.io/kubernetes",
"GoVersion": "go1.9",
"GoVersion": "go1.10",
"GodepVersion": "v80",
"Packages": [
"github.com/onsi/ginkgo/ginkgo",
@@ -1022,10 +1022,6 @@
"ImportPath": "github.com/daviddengcn/go-colortext",
"Rev": "511bcaf42ccd42c38aba7427b6673277bf19e2a1"
},
{
"ImportPath": "github.com/dchest/safefile",
"Rev": "855e8d98f1852d48dde521e0522408d1fe7e836a"
},
{
"ImportPath": "github.com/dgrijalva/jwt-go",
"Comment": "v3.0.0-4-g01aeca5",
@@ -1987,11 +1983,6 @@
"ImportPath": "github.com/hashicorp/hcl/json/token",
"Rev": "d8c773c4cba11b11539e3d45f93daeaa5dcf1fa1"
},
{
"ImportPath": "github.com/hawkular/hawkular-client-go/metrics",
"Comment": "v0.5.1-1-g1d46ce7",
"Rev": "1d46ce7e1eca635f372357a8ccbf1fa7cc28b7d2"
},
{
"ImportPath": "github.com/heketi/heketi/client/api/go-client",
"Comment": "v4.0.0-95-gaaf4061",
@@ -2017,11 +2008,6 @@
"Comment": "v1.0",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/influxdata/influxdb/client",
"Comment": "v1.1.1",
"Rev": "e47cf1f2e83a02443d7115c54f838be8ee959644"
},
{
"ImportPath": "github.com/influxdata/influxdb/client/v2",
"Comment": "v1.1.1",
@@ -3083,10 +3069,6 @@
"ImportPath": "golang.org/x/tools/imports",
"Rev": "2382e3994d48b1d22acc2c86bcad0a2aff028e32"
},
{
"ImportPath": "google.golang.org/api/cloudmonitoring/v2beta2",
"Rev": "7f657476956314fee258816aaf81c0ff65cf8bee"
},
{
"ImportPath": "google.golang.org/api/compute/v0.alpha",
"Rev": "7f657476956314fee258816aaf81c0ff65cf8bee"

Godeps/LICENSES generated (307 changed lines)

@@ -35515,40 +35515,6 @@ SOFTWARE.
================================================================================
================================================================================
= vendor/github.com/dchest/safefile licensed under: =
Copyright (c) 2013 Dmitry Chestnykh <dmitry@codingrobots.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
= vendor/github.com/dchest/safefile/LICENSE 499ffd499356286ac590c21f7b8bd677
================================================================================
================================================================================
= vendor/github.com/dgrijalva/jwt-go licensed under: =
@@ -70397,216 +70363,6 @@ Exhibit B - “Incompatible With Secondary Licenses” Notice
================================================================================
================================================================================
= vendor/github.com/hawkular/hawkular-client-go/metrics licensed under: =
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= vendor/github.com/hawkular/hawkular-client-go/LICENSE fa818a259cbed7ce8bc2a22d35a464fc
================================================================================
================================================================================
= vendor/github.com/heketi/heketi/client/api/go-client licensed under: =
@@ -70709,34 +70465,6 @@ limitations under the License.
================================================================================
================================================================================
= vendor/github.com/influxdata/influxdb/client licensed under: =
The MIT License (MIT)
Copyright (c) 2013-2016 Errplane Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
= vendor/github.com/influxdata/influxdb/LICENSE ba8146ad9cc2a128209983265136e06a
================================================================================
================================================================================
= vendor/github.com/influxdata/influxdb/client/v2 licensed under: =
@@ -92082,41 +91810,6 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================================================
================================================================================
= vendor/google.golang.org/api/cloudmonitoring/v2beta2 licensed under: =
Copyright (c) 2011 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
= vendor/google.golang.org/api/LICENSE a651bb3d8b1c412632e28823bb432b40
================================================================================
================================================================================
= vendor/google.golang.org/api/compute/v0.alpha licensed under: =

vendor/BUILD vendored (5 changed lines)

@@ -147,7 +147,6 @@ filegroup(
"//vendor/github.com/d2g/dhcp4client:all-srcs",
"//vendor/github.com/davecgh/go-spew/spew:all-srcs",
"//vendor/github.com/daviddengcn/go-colortext:all-srcs",
"//vendor/github.com/dchest/safefile:all-srcs",
"//vendor/github.com/dgrijalva/jwt-go:all-srcs",
"//vendor/github.com/docker/distribution/digestset:all-srcs",
"//vendor/github.com/docker/distribution/reference:all-srcs",
@@ -267,13 +266,12 @@ filegroup(
"//vendor/github.com/grpc-ecosystem/grpc-gateway/utilities:all-srcs",
"//vendor/github.com/hashicorp/golang-lru:all-srcs",
"//vendor/github.com/hashicorp/hcl:all-srcs",
"//vendor/github.com/hawkular/hawkular-client-go/metrics:all-srcs",
"//vendor/github.com/heketi/heketi/client/api/go-client:all-srcs",
"//vendor/github.com/heketi/heketi/pkg/glusterfs/api:all-srcs",
"//vendor/github.com/heketi/heketi/pkg/utils:all-srcs",
"//vendor/github.com/imdario/mergo:all-srcs",
"//vendor/github.com/inconshreveable/mousetrap:all-srcs",
"//vendor/github.com/influxdata/influxdb/client:all-srcs",
"//vendor/github.com/influxdata/influxdb/client/v2:all-srcs",
"//vendor/github.com/influxdata/influxdb/models:all-srcs",
"//vendor/github.com/influxdata/influxdb/pkg/escape:all-srcs",
"//vendor/github.com/jmespath/go-jmespath:all-srcs",
@@ -400,7 +398,6 @@ filegroup(
"//vendor/golang.org/x/tools/go/ast/astutil:all-srcs",
"//vendor/golang.org/x/tools/go/vcs:all-srcs",
"//vendor/golang.org/x/tools/imports:all-srcs",
"//vendor/google.golang.org/api/cloudmonitoring/v2beta2:all-srcs",
"//vendor/google.golang.org/api/compute/v0.alpha:all-srcs",
"//vendor/google.golang.org/api/compute/v0.beta:all-srcs",
"//vendor/google.golang.org/api/compute/v1:all-srcs",

vendor/github.com/dchest/safefile/.travis.yml vendored

@@ -1,8 +0,0 @@
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- tip

vendor/github.com/dchest/safefile/BUILD vendored

@@ -1,60 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"safefile.go",
] + select({
"@io_bazel_rules_go//go/platform:android": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:darwin": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:dragonfly": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:freebsd": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:linux": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:nacl": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:netbsd": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:openbsd": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:plan9": [
"rename_nonatomic.go",
],
"@io_bazel_rules_go//go/platform:solaris": [
"rename.go",
],
"@io_bazel_rules_go//go/platform:windows": [
"rename.go",
"rename_nonatomic.go",
],
"//conditions:default": [],
}),
importpath = "github.com/dchest/safefile",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/github.com/dchest/safefile/LICENSE vendored

@@ -1,26 +0,0 @@
Copyright (c) 2013 Dmitry Chestnykh <dmitry@codingrobots.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/dchest/safefile/README.md vendored

@@ -1,44 +0,0 @@
# safefile
[![Build Status](https://travis-ci.org/dchest/safefile.svg)](https://travis-ci.org/dchest/safefile) [![Windows Build status](https://ci.appveyor.com/api/projects/status/owlifxeekg75t2ho?svg=true)](https://ci.appveyor.com/project/dchest/safefile)
Go package safefile implements safe "atomic" saving of files.
Instead of truncating and overwriting the destination file, it creates a
temporary file in the same directory, writes to it, and then renames the
temporary file to the original name when calling Commit.
### Installation
```
$ go get github.com/dchest/safefile
```
### Documentation
<https://godoc.org/github.com/dchest/safefile>
### Example
```go
f, err := safefile.Create("/home/ken/report.txt", 0644)
if err != nil {
// ...
}
// Created temporary file /home/ken/sf-ppcyksu5hyw2mfec.tmp
defer f.Close()
_, err = io.WriteString(f, "Hello world")
if err != nil {
// ...
}
// Wrote "Hello world" to /home/ken/sf-ppcyksu5hyw2mfec.tmp
err = f.Commit()
if err != nil {
// ...
}
// Renamed /home/ken/sf-ppcyksu5hyw2mfec.tmp to /home/ken/report.txt
```

vendor/github.com/dchest/safefile/appveyor.yml vendored

@@ -1,24 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\projects\src\github.com\dchest\safefile
environment:
PATH: c:\projects\bin;%PATH%
GOPATH: c:\projects
NOTIFY_TIMEOUT: 5s
install:
- go version
- go get golang.org/x/tools/cmd/vet
- go get -v -t ./...
build_script:
- go tool vet -all .
- go build ./...
- go test -v -race ./...
test: off
deploy: off

vendor/github.com/dchest/safefile/rename.go vendored

@@ -1,9 +0,0 @@
// +build !plan9,!windows windows,go1.5
package safefile
import "os"
func rename(oldname, newname string) error {
return os.Rename(oldname, newname)
}

vendor/github.com/dchest/safefile/rename_nonatomic.go vendored

@@ -1,51 +0,0 @@
// +build plan9 windows,!go1.5
// os.Rename on Windows before Go 1.5 and Plan 9 will not overwrite existing
// files, thus we cannot guarantee atomic saving of file by doing rename.
// We will have to do some voodoo to minimize data loss on those systems.
package safefile
import (
"os"
"path/filepath"
)
func rename(oldname, newname string) error {
err := os.Rename(oldname, newname)
if err != nil {
// If newname exists ("original"), we will try renaming it to a
// new temporary name, then renaming oldname to the newname,
// and deleting the renamed original. If system crashes between
// renaming and deleting, the original file will still be available
// under the temporary name, so users can manually recover data.
// (No automatic recovery is possible because after crash the
// temporary name is not known.)
var origtmp string
for {
origtmp, err = makeTempName(newname, filepath.Base(newname))
if err != nil {
return err
}
_, err = os.Stat(origtmp)
if err == nil {
continue // most likely will never happen
}
break
}
err = os.Rename(newname, origtmp)
if err != nil {
return err
}
err = os.Rename(oldname, newname)
if err != nil {
// Rename still fails, try to revert original rename,
// ignoring errors.
os.Rename(origtmp, newname)
return err
}
// Rename succeeded, now delete original file.
os.Remove(origtmp)
}
return nil
}

vendor/github.com/dchest/safefile/safefile.go vendored

@@ -1,197 +0,0 @@
// Copyright 2013 Dmitry Chestnykh. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package safefile implements safe "atomic" saving of files.
//
// Instead of truncating and overwriting the destination file, it creates a
// temporary file in the same directory, writes to it, and then renames the
// temporary file to the original name when calling Commit.
//
// Example:
//
// f, err := safefile.Create("/home/ken/report.txt", 0644)
// if err != nil {
// // ...
// }
// // Created temporary file /home/ken/sf-ppcyksu5hyw2mfec.tmp
//
// defer f.Close()
//
// _, err = io.WriteString(f, "Hello world")
// if err != nil {
// // ...
// }
// // Wrote "Hello world" to /home/ken/sf-ppcyksu5hyw2mfec.tmp
//
// err = f.Commit()
// if err != nil {
// // ...
// }
// // Renamed /home/ken/sf-ppcyksu5hyw2mfec.tmp to /home/ken/report.txt
//
package safefile
import (
"crypto/rand"
"encoding/base32"
"errors"
"io"
"os"
"path/filepath"
"strings"
)
// ErrAlreadyCommitted error is returned when calling Commit on a file that
// has been already successfully committed.
var ErrAlreadyCommitted = errors.New("file already committed")
type File struct {
*os.File
origName string
closeFunc func(*File) error
isClosed bool // if true, temporary file has been closed, but not renamed
isCommitted bool // if true, the file has been successfully committed
}
func makeTempName(origname, prefix string) (tempname string, err error) {
origname = filepath.Clean(origname)
if len(origname) == 0 || origname[len(origname)-1] == filepath.Separator {
return "", os.ErrInvalid
}
// Generate 10 random bytes.
// This gives 80 bits of entropy, good enough
// for making temporary file name unpredictable.
var rnd [10]byte
if _, err := rand.Read(rnd[:]); err != nil {
return "", err
}
name := prefix + "-" + strings.ToLower(base32.StdEncoding.EncodeToString(rnd[:])) + ".tmp"
return filepath.Join(filepath.Dir(origname), name), nil
}
// Create creates a temporary file in the same directory as filename,
// which will be renamed to the given filename when calling Commit.
func Create(filename string, perm os.FileMode) (*File, error) {
for {
tempname, err := makeTempName(filename, "sf")
if err != nil {
return nil, err
}
f, err := os.OpenFile(tempname, os.O_RDWR|os.O_CREATE|os.O_EXCL, perm)
if err != nil {
if os.IsExist(err) {
continue
}
return nil, err
}
return &File{
File: f,
origName: filename,
closeFunc: closeUncommitted,
}, nil
}
}
// OrigName returns the original filename given to Create.
func (f *File) OrigName() string {
return f.origName
}
// Close closes temporary file and removes it.
// If the file has been committed, Close is no-op.
func (f *File) Close() error {
return f.closeFunc(f)
}
func closeUncommitted(f *File) error {
err0 := f.File.Close()
err1 := os.Remove(f.Name())
f.closeFunc = closeAgainError
if err0 != nil {
return err0
}
return err1
}
func closeAfterFailedRename(f *File) error {
// Remove temporary file.
//
// The note from Commit function applies here too, as we may be
// removing a different file. However, since we rely on our temporary
// names being unpredictable, this should not be a concern.
f.closeFunc = closeAgainError
return os.Remove(f.Name())
}
func closeCommitted(f *File) error {
// noop
return nil
}
func closeAgainError(f *File) error {
return os.ErrInvalid
}
// Commit safely commits data into the original file by syncing temporary
// file to disk, closing it and renaming to the original file name.
//
// In case of success, the temporary file is closed and no longer exists
// on disk. It is safe to call Close after Commit: the operation will do
// nothing.
//
// In case of error, the temporary file is still opened and exists on disk;
// it must be closed by callers by calling Close or by trying to commit again.
// Note that when trying to Commit again after a failed Commit when the file
// has been closed, but not renamed to its original name (the new commit will
// try again to rename it), safefile cannot guarantee that the temporary file
// has not been changed, or that it is the same temporary file we were dealing
// with. However, since the temporary name is unpredictable, it is unlikely
// that this happened accidentally. If complete atomicity is needed, do not
// Commit again after error, write the file again.
func (f *File) Commit() error {
if f.isCommitted {
return ErrAlreadyCommitted
}
if !f.isClosed {
// Sync to disk.
err := f.Sync()
if err != nil {
return err
}
// Close underlying os.File.
err = f.File.Close()
if err != nil {
return err
}
f.isClosed = true
}
// Rename.
err := rename(f.Name(), f.origName)
if err != nil {
f.closeFunc = closeAfterFailedRename
return err
}
f.closeFunc = closeCommitted
f.isCommitted = true
return nil
}
// WriteFile is a safe analog of ioutil.WriteFile.
func WriteFile(filename string, data []byte, perm os.FileMode) error {
f, err := Create(filename, perm)
if err != nil {
return err
}
defer f.Close()
n, err := f.Write(data)
if err != nil {
return err
}
if err == nil && n < len(data) {
err = io.ErrShortWrite
return err
}
return f.Commit()
}
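For reference, a minimal sketch of how a caller might use the WriteFile helper defined above; the path and contents are placeholders taken from this package's own docs:

```go
package main

import (
	"log"

	"github.com/dchest/safefile"
)

func main() {
	// WriteFile creates a temporary file, writes the data, and commits it,
	// so an existing report.txt is never left half-written on error.
	if err := safefile.WriteFile("/home/ken/report.txt", []byte("Hello world"), 0644); err != nil {
		log.Fatal(err)
	}
}
```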

vendor/github.com/hawkular/hawkular-client-go/LICENSE vendored

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/hawkular/hawkular-client-go/metrics/BUILD vendored

@@ -1,26 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"client.go",
"helpers.go",
"types.go",
],
importpath = "github.com/hawkular/hawkular-client-go/metrics",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/github.com/hawkular/hawkular-client-go/metrics/client.go vendored

@@ -1,583 +0,0 @@
package metrics
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
)
// TODO Instrumentation? To get statistics?
// More detailed error
type HawkularClientError struct {
msg string
Code int
}
func (self *HawkularClientError) Error() string {
return fmt.Sprintf("Hawkular returned status code %d, error message: %s", self.Code, self.msg)
}
// Client creation and instance config
const (
base_url string = "hawkular/metrics"
timeout time.Duration = time.Duration(30 * time.Second)
)
type Parameters struct {
Tenant string // Technically optional, but requires setting Tenant() option everytime
Url string
TLSConfig *tls.Config
Token string
}
type Client struct {
Tenant string
url *url.URL
client *http.Client
Token string
}
type HawkularClient interface {
Send(*http.Request) (*http.Response, error)
}
// Modifiers
type Modifier func(*http.Request) error
// Override function to replace the Tenant (defaults to Client default)
func Tenant(tenant string) Modifier {
return func(r *http.Request) error {
r.Header.Set("Hawkular-Tenant", tenant)
return nil
}
}
// Add payload to the request
func Data(data interface{}) Modifier {
return func(r *http.Request) error {
jsonb, err := json.Marshal(data)
if err != nil {
return err
}
b := bytes.NewBuffer(jsonb)
rc := ioutil.NopCloser(b)
r.Body = rc
// fmt.Printf("Sending: %s\n", string(jsonb))
if b != nil {
r.ContentLength = int64(b.Len())
}
return nil
}
}
func (self *Client) Url(method string, e ...Endpoint) Modifier {
// TODO Create composite URLs? Add().Add().. etc? Easier to modify on the fly..
return func(r *http.Request) error {
u := self.createUrl(e...)
r.URL = u
r.Method = method
return nil
}
}
// Filters for querying
type Filter func(r *http.Request)
func Filters(f ...Filter) Modifier {
return func(r *http.Request) error {
for _, filter := range f {
filter(r)
}
return nil // Or should filter return err?
}
}
// Add query parameters
func Param(k string, v string) Filter {
return func(r *http.Request) {
q := r.URL.Query()
q.Set(k, v)
r.URL.RawQuery = q.Encode()
}
}
func TypeFilter(t MetricType) Filter {
return Param("type", t.shortForm())
}
func TagsFilter(t map[string]string) Filter {
j := tagsEncoder(t)
return Param("tags", j)
}
// Requires HWKMETRICS-233
func IdFilter(regexp string) Filter {
return Param("id", regexp)
}
func StartTimeFilter(startTime time.Time) Filter {
return Param("start", strconv.Itoa(int(startTime.Unix())))
}
func EndTimeFilter(endTime time.Time) Filter {
return Param("end", strconv.Itoa(int(endTime.Unix())))
}
func BucketsFilter(buckets int) Filter {
return Param("buckets", strconv.Itoa(buckets))
}
func PercentilesFilter(percentiles []float64) Filter {
s := make([]string, 0, len(percentiles))
for _, v := range percentiles {
s = append(s, fmt.Sprintf("%v", v))
}
j := strings.Join(s, ",")
return Param("percentiles", j)
}
// The SEND method..
func (self *Client) createRequest() *http.Request {
req := &http.Request{
Proto: "HTTP/1.1",
ProtoMajor: 1,
ProtoMinor: 1,
Header: make(http.Header),
Host: self.url.Host,
}
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Hawkular-Tenant", self.Tenant)
if len(self.Token) > 0 {
req.Header.Add("Authorization", fmt.Sprintf("Bearer %s", self.Token))
}
return req
}
func (self *Client) Send(o ...Modifier) (*http.Response, error) {
// Initialize
r := self.createRequest()
// Run all the modifiers
for _, f := range o {
err := f(r)
if err != nil {
return nil, err
}
}
return self.client.Do(r)
}
// Commands
func prepend(slice []Modifier, a ...Modifier) []Modifier {
p := make([]Modifier, 0, len(slice)+len(a))
p = append(p, a...)
p = append(p, slice...)
return p
}
// Create new Definition
func (self *Client) Create(md MetricDefinition, o ...Modifier) (bool, error) {
// Keep the order, add custom prepend
o = prepend(o, self.Url("POST", TypeEndpoint(md.Type)), Data(md))
r, err := self.Send(o...)
if err != nil {
return false, err
}
defer r.Body.Close()
if r.StatusCode > 399 {
err = self.parseErrorResponse(r)
if err, ok := err.(*HawkularClientError); ok {
if err.Code != http.StatusConflict {
return false, err
} else {
return false, nil
}
}
return false, err
}
return true, nil
}
// Fetch definitions
func (self *Client) Definitions(o ...Modifier) ([]*MetricDefinition, error) {
o = prepend(o, self.Url("GET", TypeEndpoint(Generic)))
r, err := self.Send(o...)
if err != nil {
return nil, err
}
defer r.Body.Close()
if r.StatusCode == http.StatusOK {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return nil, err
}
md := []*MetricDefinition{}
if b != nil {
if err = json.Unmarshal(b, &md); err != nil {
return nil, err
}
}
return md, err
} else if r.StatusCode > 399 {
return nil, self.parseErrorResponse(r)
}
return nil, nil
}
// Return a single definition
func (self *Client) Definition(t MetricType, id string, o ...Modifier) (*MetricDefinition, error) {
o = prepend(o, self.Url("GET", TypeEndpoint(t), SingleMetricEndpoint(id)))
r, err := self.Send(o...)
if err != nil {
return nil, err
}
defer r.Body.Close()
if r.StatusCode == http.StatusOK {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return nil, err
}
md := &MetricDefinition{}
if b != nil {
if err = json.Unmarshal(b, md); err != nil {
return nil, err
}
}
return md, err
} else if r.StatusCode > 399 {
return nil, self.parseErrorResponse(r)
}
return nil, nil
}
// Update tags
func (self *Client) UpdateTags(t MetricType, id string, tags map[string]string, o ...Modifier) error {
o = prepend(o, self.Url("PUT", TypeEndpoint(t), SingleMetricEndpoint(id), TagEndpoint()), Data(tags))
r, err := self.Send(o...)
if err != nil {
return err
}
defer r.Body.Close()
if r.StatusCode > 399 {
return self.parseErrorResponse(r)
}
return nil
}
// Delete given tags from the definition
func (self *Client) DeleteTags(t MetricType, id string, tags map[string]string, o ...Modifier) error {
o = prepend(o, self.Url("DELETE", TypeEndpoint(t), SingleMetricEndpoint(id), TagEndpoint(), TagsEndpoint(tags)))
r, err := self.Send(o...)
if err != nil {
return err
}
defer r.Body.Close()
if r.StatusCode > 399 {
return self.parseErrorResponse(r)
}
return nil
}
// Fetch metric definition tags
func (self *Client) Tags(t MetricType, id string, o ...Modifier) (map[string]string, error) {
o = prepend(o, self.Url("GET", TypeEndpoint(t), SingleMetricEndpoint(id), TagEndpoint()))
r, err := self.Send(o...)
if err != nil {
return nil, err
}
defer r.Body.Close()
if r.StatusCode == http.StatusOK {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return nil, err
}
tags := make(map[string]string)
if b != nil {
if err = json.Unmarshal(b, &tags); err != nil {
return nil, err
}
}
return tags, nil
} else if r.StatusCode > 399 {
return nil, self.parseErrorResponse(r)
}
return nil, nil
}
// Write datapoints to the server
func (self *Client) Write(metrics []MetricHeader, o ...Modifier) error {
if len(metrics) > 0 {
mHs := make(map[MetricType][]MetricHeader)
for _, m := range metrics {
if _, found := mHs[m.Type]; !found {
mHs[m.Type] = make([]MetricHeader, 0, 1)
}
mHs[m.Type] = append(mHs[m.Type], m)
}
wg := &sync.WaitGroup{}
errorsChan := make(chan error, len(mHs))
for k, v := range mHs {
wg.Add(1)
go func(k MetricType, v []MetricHeader) {
defer wg.Done()
// Should be sorted and splitted by type & tenant..
on := o
on = prepend(on, self.Url("POST", TypeEndpoint(k), DataEndpoint()), Data(v))
r, err := self.Send(on...)
if err != nil {
errorsChan <- err
return
}
defer r.Body.Close()
if r.StatusCode > 399 {
errorsChan <- self.parseErrorResponse(r)
}
}(k, v)
}
wg.Wait()
select {
case err, ok := <-errorsChan:
if ok {
return err
}
// If channel is closed, we're done
default:
// Nothing to do
}
}
return nil
}
// Read data from the server
func (self *Client) ReadMetric(t MetricType, id string, o ...Modifier) ([]*Datapoint, error) {
o = prepend(o, self.Url("GET", TypeEndpoint(t), SingleMetricEndpoint(id), DataEndpoint()))
r, err := self.Send(o...)
if err != nil {
return nil, err
}
defer r.Body.Close()
if r.StatusCode == http.StatusOK {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return nil, err
}
// Check for GaugeBucketpoint and so on for the rest.. uh
dp := []*Datapoint{}
if b != nil {
if err = json.Unmarshal(b, &dp); err != nil {
return nil, err
}
}
return dp, nil
} else if r.StatusCode > 399 {
return nil, self.parseErrorResponse(r)
}
return nil, nil
}
// TODO ReadMetrics should be equal also, to read new tagsFilter aggregation..
func (self *Client) ReadBuckets(t MetricType, o ...Modifier) ([]*Bucketpoint, error) {
o = prepend(o, self.Url("GET", TypeEndpoint(t), DataEndpoint()))
r, err := self.Send(o...)
if err != nil {
return nil, err
}
defer r.Body.Close()
if r.StatusCode == http.StatusOK {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return nil, err
}
// Check for GaugeBucketpoint and so on for the rest.. uh
bp := []*Bucketpoint{}
if b != nil {
if err = json.Unmarshal(b, &bp); err != nil {
return nil, err
}
}
return bp, nil
} else if r.StatusCode > 399 {
return nil, self.parseErrorResponse(r)
}
return nil, nil
}
// Initialization
func NewHawkularClient(p Parameters) (*Client, error) {
uri, err := url.Parse(p.Url)
if err != nil {
return nil, err
}
if uri.Path == "" {
uri.Path = base_url
}
u := &url.URL{
Host: uri.Host,
Path: uri.Path,
Scheme: uri.Scheme,
Opaque: fmt.Sprintf("//%s/%s", uri.Host, uri.Path),
}
c := &http.Client{
Timeout: timeout,
}
if p.TLSConfig != nil {
transport := &http.Transport{TLSClientConfig: p.TLSConfig}
c.Transport = transport
}
return &Client{
url: u,
Tenant: p.Tenant,
Token: p.Token,
client: c,
}, nil
}
// HTTP Helper functions
func cleanId(id string) string {
return url.QueryEscape(id)
}
func (self *Client) parseErrorResponse(resp *http.Response) error {
// Parse error messages here correctly..
reply, err := ioutil.ReadAll(resp.Body)
if err != nil {
return &HawkularClientError{Code: resp.StatusCode,
msg: fmt.Sprintf("Reply could not be read: %s", err.Error()),
}
}
details := &HawkularError{}
err = json.Unmarshal(reply, details)
if err != nil {
return &HawkularClientError{Code: resp.StatusCode,
msg: fmt.Sprintf("Reply could not be parsed: %s", err.Error()),
}
}
return &HawkularClientError{Code: resp.StatusCode,
msg: details.ErrorMsg,
}
}
// URL functions (...)
type Endpoint func(u *url.URL)
func (self *Client) createUrl(e ...Endpoint) *url.URL {
mu := *self.url
for _, f := range e {
f(&mu)
}
return &mu
}
func TypeEndpoint(t MetricType) Endpoint {
return func(u *url.URL) {
addToUrl(u, t.String())
}
}
func SingleMetricEndpoint(id string) Endpoint {
return func(u *url.URL) {
addToUrl(u, url.QueryEscape(id))
}
}
func TagEndpoint() Endpoint {
return func(u *url.URL) {
addToUrl(u, "tags")
}
}
func TagsEndpoint(tags map[string]string) Endpoint {
return func(u *url.URL) {
addToUrl(u, tagsEncoder(tags))
}
}
func DataEndpoint() Endpoint {
return func(u *url.URL) {
addToUrl(u, "data")
}
}
func addToUrl(u *url.URL, s string) *url.URL {
u.Opaque = fmt.Sprintf("%s/%s", u.Opaque, s)
return u
}
func tagsEncoder(t map[string]string) string {
tags := make([]string, 0, len(t))
for k, v := range t {
tags = append(tags, fmt.Sprintf("%s:%s", k, v))
}
j := strings.Join(tags, ",")
return j
}

vendor/github.com/hawkular/hawkular-client-go/metrics/helpers.go vendored

@@ -1,50 +0,0 @@
package metrics
import (
"fmt"
"math"
"strconv"
"time"
)
func ConvertToFloat64(v interface{}) (float64, error) {
switch i := v.(type) {
case float64:
return float64(i), nil
case float32:
return float64(i), nil
case int64:
return float64(i), nil
case int32:
return float64(i), nil
case int16:
return float64(i), nil
case int8:
return float64(i), nil
case uint64:
return float64(i), nil
case uint32:
return float64(i), nil
case uint16:
return float64(i), nil
case uint8:
return float64(i), nil
case int:
return float64(i), nil
case uint:
return float64(i), nil
case string:
f, err := strconv.ParseFloat(i, 64)
if err != nil {
return math.NaN(), err
}
return f, err
default:
return math.NaN(), fmt.Errorf("Cannot convert %s to float64", i)
}
}
// Returns milliseconds since epoch
func UnixMilli(t time.Time) int64 {
return t.UnixNano() / 1e6
}

vendor/github.com/hawkular/hawkular-client-go/metrics/types.go vendored

@@ -1,129 +0,0 @@
package metrics
import (
"encoding/json"
"fmt"
// "time"
)
// MetricType restrictions
type MetricType int
const (
Gauge = iota
Availability
Counter
Generic
)
var longForm = []string{
"gauges",
"availability",
"counters",
"metrics",
}
var shortForm = []string{
"gauge",
"availability",
"counter",
"metrics",
}
func (self MetricType) validate() error {
if int(self) > len(longForm) && int(self) > len(shortForm) {
return fmt.Errorf("Given MetricType value %d is not valid", self)
}
return nil
}
func (self MetricType) String() string {
if err := self.validate(); err != nil {
return "unknown"
}
return longForm[self]
}
func (self MetricType) shortForm() string {
if err := self.validate(); err != nil {
return "unknown"
}
return shortForm[self]
}
// Custom unmarshaller
func (self *MetricType) UnmarshalJSON(b []byte) error {
var f interface{}
err := json.Unmarshal(b, &f)
if err != nil {
return err
}
if str, ok := f.(string); ok {
for i, v := range shortForm {
if str == v {
*self = MetricType(i)
break
}
}
}
return nil
}
func (self MetricType) MarshalJSON() ([]byte, error) {
return json.Marshal(self.String())
}
type SortKey struct {
Tenant string
Type MetricType
}
// Hawkular-Metrics external structs
// Do I need external.. hmph.
type MetricHeader struct {
Tenant string `json:"-"`
Type MetricType `json:"-"`
Id string `json:"id"`
Data []Datapoint `json:"data"`
}
// Value should be convertible to float64 for numeric values
// Timestamp is milliseconds since epoch
type Datapoint struct {
Timestamp int64 `json:"timestamp"`
Value interface{} `json:"value"`
Tags map[string]string `json:"tags,omitempty"`
}
type HawkularError struct {
ErrorMsg string `json:"errorMsg"`
}
type MetricDefinition struct {
Tenant string `json:"-"`
Type MetricType `json:"type,omitempty"`
Id string `json:"id"`
Tags map[string]string `json:"tags,omitempty"`
RetentionTime int `json:"dataRetention,omitempty"`
}
// TODO Fix the Start & End to return a time.Time
type Bucketpoint struct {
Start int64 `json:"start"`
End int64 `json:"end"`
Min float64 `json:"min"`
Max float64 `json:"max"`
Avg float64 `json:"avg"`
Median float64 `json:"median"`
Empty bool `json:"empty"`
Samples int64 `json:"samples"`
Percentiles []Percentile `json:"percentiles"`
}
type Percentile struct {
Quantile float64 `json:"quantile"`
Value float64 `json:"value"`
}

vendor/github.com/influxdata/influxdb/client/BUILD vendored

@@ -1,26 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["influxdb.go"],
importpath = "github.com/influxdata/influxdb/client",
visibility = ["//visibility:public"],
deps = ["//vendor/github.com/influxdata/influxdb/models:go_default_library"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//vendor/github.com/influxdata/influxdb/client/v2:all-srcs",
],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)


@ -1,289 +0,0 @@
# InfluxDB Client
[![GoDoc](https://godoc.org/github.com/influxdata/influxdb?status.svg)](http://godoc.org/github.com/influxdata/influxdb/client/v2)
## Description
**NOTE:** The Go client library now has a "v2" version, with the old version
being deprecated. The new version can be imported at
`import "github.com/influxdata/influxdb/client/v2"`. It is not backwards-compatible.
A Go client library written and maintained by the **InfluxDB** team.
This package provides convenience functions to read and write time series data.
It uses the HTTP protocol to communicate with your **InfluxDB** cluster.
## Getting Started
### Connecting To Your Database
Connecting to an **InfluxDB** database is straightforward. You will need a host
name, a port and the cluster user credentials if applicable. The default port is
8086. You can customize these settings to your specific installation via the
**InfluxDB** configuration file.
Though not necessary for experimentation, you may want to create a new user
and authenticate the connection to your database.
For more information please check out the
[Admin Docs](https://docs.influxdata.com/influxdb/latest/administration/).
For the impatient, you can create a new admin user _bubba_ by firing off the
[InfluxDB CLI](https://github.com/influxdata/influxdb/blob/master/cmd/influx/main.go).
```shell
influx
> create user bubba with password 'bumblebeetuna'
> grant all privileges to bubba
```
And now, for good measure, set the credentials in your shell environment.
In the example below we will use $INFLUX_USER and $INFLUX_PWD.
Now with the administrivia out of the way, let's connect to our database.
NOTE: If you've opted out of creating a user, you can omit Username and Password in
the configuration below.
```go
package main
import (
"log"
"time"
"github.com/influxdata/influxdb/client/v2"
)
const (
MyDB = "square_holes"
username = "bubba"
password = "bumblebeetuna"
)
func main() {
// Make client
c, err := client.NewHTTPClient(client.HTTPConfig{
Addr: "http://localhost:8086",
Username: username,
Password: password,
})
if err != nil {
log.Fatalln("Error: ", err)
}
// Create a new point batch
bp, err := client.NewBatchPoints(client.BatchPointsConfig{
Database: MyDB,
Precision: "s",
})
if err != nil {
log.Fatalln("Error: ", err)
}
// Create a point and add to batch
tags := map[string]string{"cpu": "cpu-total"}
fields := map[string]interface{}{
"idle": 10.1,
"system": 53.3,
"user": 46.6,
}
pt, err := client.NewPoint("cpu_usage", tags, fields, time.Now())
if err != nil {
log.Fatalln("Error: ", err)
}
bp.AddPoint(pt)
// Write the batch
if err := c.Write(bp); err != nil {
log.Fatal(err)
}
}
```
### Inserting Data
Time series data aka *points* are written to the database using batch inserts.
The mechanism is to create one or more points and then create a batch aka
*batch points* and write these to a given database and series. A series is a
combination of a measurement (time/values) and a set of tags.
In this sample we will create a batch of 1,000 points. Each point has a time and
two field values as well as tags identifying the CPU, host, and region. We write
these points to a database called _systemstats_ using a measurement named _cpu_usage_.
NOTE: You can specify a RetentionPolicy as part of the batch points. If one is not
provided, InfluxDB will use the database's _default_ retention policy.
```go
func writePoints(clnt client.Client) {
sampleSize := 1000
rand.Seed(42)
bp, _ := client.NewBatchPoints(client.BatchPointsConfig{
Database: "systemstats",
Precision: "us",
})
for i := 0; i < sampleSize; i++ {
regions := []string{"us-west1", "us-west2", "us-west3", "us-east1"}
tags := map[string]string{
"cpu": "cpu-total",
"host": fmt.Sprintf("host%d", rand.Intn(1000)),
"region": regions[rand.Intn(len(regions))],
}
idle := rand.Float64() * 100.0
fields := map[string]interface{}{
"idle": idle,
"busy": 100.0 - idle,
}
pt, err := client.NewPoint(
"cpu_usage",
tags,
fields,
time.Now(),
)
if err != nil {
log.Fatal(err)
}
bp.AddPoint(pt)
}
err := clnt.Write(bp)
if err != nil {
log.Fatal(err)
}
}
```
### Querying Data
One nice advantage of using **InfluxDB** is the ability to query your data using familiar
SQL constructs. In this example we create a convenience function to query the database
as follows:
```go
// queryDB convenience function to query the database
func queryDB(clnt client.Client, cmd string) (res []client.Result, err error) {
q := client.Query{
Command: cmd,
Database: MyDB,
}
if response, err := clnt.Query(q); err == nil {
if response.Error() != nil {
return res, response.Error()
}
res = response.Results
} else {
return res, err
}
return res, nil
}
```
#### Creating a Database
```go
_, err := queryDB(clnt, fmt.Sprintf("CREATE DATABASE %s", MyDB))
if err != nil {
log.Fatal(err)
}
```
#### Count Records
```go
q := fmt.Sprintf("SELECT count(%s) FROM %s", "value", MyMeasurement)
res, err := queryDB(clnt, q)
if err != nil {
log.Fatal(err)
}
count := res[0].Series[0].Values[0][1]
log.Printf("Found a total of %v records\n", count)
```
#### Find the last 10 _shapes_ records
```go
q := fmt.Sprintf("SELECT * FROM %s LIMIT %d", MyMeasurement, 20)
res, err = queryDB(clnt, q)
if err != nil {
log.Fatal(err)
}
for i, row := range res[0].Series[0].Values {
t, err := time.Parse(time.RFC3339, row[0].(string))
if err != nil {
log.Fatal(err)
}
val := row[1].(string)
log.Printf("[%2d] %s: %s\n", i, t.Format(time.Stamp), val)
}
```
### Using the UDP Client
The **InfluxDB** client also supports writing over UDP.
```go
func WriteUDP() {
// Make client
c := client.NewUDPClient("localhost:8089")
// Create a new point batch
bp, _ := client.NewBatchPoints(client.BatchPointsConfig{
Precision: "s",
})
// Create a point and add to batch
tags := map[string]string{"cpu": "cpu-total"}
fields := map[string]interface{}{
"idle": 10.1,
"system": 53.3,
"user": 46.6,
}
pt, err := client.NewPoint("cpu_usage", tags, fields, time.Now())
if err != nil {
panic(err.Error())
}
bp.AddPoint(pt)
// Write the batch
if err := c.Write(bp); err != nil {
panic(err.Error())
}
}
```
### Point Splitting
The UDP client now supports splitting single points that exceed the configured
payload size. The logic for processing each point is listed here, starting with
an empty payload.
1. If adding the point to the current (non-empty) payload would exceed the
configured size, send the current payload. Otherwise, add it to the current
payload.
1. If the point is smaller than the configured size, add it to the payload.
1. If the point has no timestamp, just try to send the entire point as a single
UDP payload, and process the next point.
1. Since the point has a timestamp, re-use the existing measurement name,
tagset, and timestamp and create multiple new points by splitting up the
fields. The per-point length will be kept close to the configured size,
staying under it if possible. This does mean that one large field, maybe a
long string, could be sent as a larger-than-configured payload.
The above logic attempts to respect configured payload sizes without sacrificing
data integrity. Points without a timestamp can't be split, as that may cause
fields to have differing timestamps when processed by the server. A hedged
sketch of these rules follows.
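A hypothetical Go sketch of the four rules above; the function name and signature are illustrative, not the client's actual internals:

```go
// addPoint applies the splitting rules to one encoded point.
func addPoint(payload, pt []byte, max int, hasTimestamp bool, send func([]byte)) []byte {
	if len(payload) > 0 && len(payload)+len(pt) > max {
		send(payload) // rule 1: flush the current payload before adding more
		payload = payload[:0]
	}
	if len(pt) <= max {
		return append(payload, pt...) // rule 2: the point fits as-is
	}
	if !hasTimestamp {
		send(pt) // rule 3: oversized but unsplittable without a timestamp
		return payload
	}
	// rule 4: split the fields into multiple points that reuse the measurement,
	// tagset, and timestamp (field-splitting elided here); one very large field
	// may still produce a larger-than-configured payload.
	return payload
}
```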
## Go Docs
Please refer to
[http://godoc.org/github.com/influxdata/influxdb/client/v2](http://godoc.org/github.com/influxdata/influxdb/client/v2)
for documentation.
## See Also
You can also examine how the client library is used by the
[InfluxDB CLI](https://github.com/influxdata/influxdb/blob/master/cmd/influx/main.go).


@ -1,813 +0,0 @@
package client
import (
"bytes"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/influxdata/influxdb/models"
)
const (
// DefaultHost is the default host used to connect to an InfluxDB instance
DefaultHost = "localhost"
// DefaultPort is the default port used to connect to an InfluxDB instance
DefaultPort = 8086
// DefaultTimeout is the default connection timeout used to connect to an InfluxDB instance
DefaultTimeout = 0
)
// Query is used to send a command to the server. Both Command and Database are required.
type Query struct {
Command string
Database string
// Chunked tells the server to send back chunked responses. This places
// less load on the server by sending back chunks of the response rather
// than waiting for the entire response all at once.
Chunked bool
// ChunkSize sets the maximum number of rows that will be returned per
// chunk. Chunks are either divided based on their series or if they hit
// the chunk size limit.
//
// Chunked must be set to true for this option to be used.
ChunkSize int
}
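// Illustrative usage (not part of the original file): request chunked
// responses of at most 1,000 rows per chunk:
//
//	q := Query{Command: `SELECT * FROM "cpu"`, Database: "mydb", Chunked: true, ChunkSize: 1000}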
// ParseConnectionString will parse a string to create a valid connection URL
func ParseConnectionString(path string, ssl bool) (url.URL, error) {
var host string
var port int
h, p, err := net.SplitHostPort(path)
if err != nil {
if path == "" {
host = DefaultHost
} else {
host = path
}
// If they didn't specify a port, always use the default port
port = DefaultPort
} else {
host = h
port, err = strconv.Atoi(p)
if err != nil {
return url.URL{}, fmt.Errorf("invalid port number %q: %s", path, err)
}
}
u := url.URL{
Scheme: "http",
}
if ssl {
u.Scheme = "https"
}
u.Host = net.JoinHostPort(host, strconv.Itoa(port))
return u, nil
}
// Config is used to specify what server to connect to.
// URL: The URL of the server to connect to.
// Username/Password are optional. They will be passed via basic auth if provided.
// UserAgent: If not provided, will default to "InfluxDBClient".
// Timeout: If not provided, will default to 0 (no timeout).
type Config struct {
URL url.URL
Username string
Password string
UserAgent string
Timeout time.Duration
Precision string
UnsafeSsl bool
}
// NewConfig will create a config to be used in connecting to the client
func NewConfig() Config {
return Config{
Timeout: DefaultTimeout,
}
}
// Client is used to make calls to the server.
type Client struct {
url url.URL
username string
password string
httpClient *http.Client
userAgent string
precision string
}
const (
// ConsistencyOne requires at least one data node acknowledged a write.
ConsistencyOne = "one"
// ConsistencyAll requires all data nodes to acknowledge a write.
ConsistencyAll = "all"
// ConsistencyQuorum requires a quorum of data nodes to acknowledge a write.
ConsistencyQuorum = "quorum"
// ConsistencyAny allows for hinted hand off, potentially no write happened yet.
ConsistencyAny = "any"
)
// NewClient will instantiate and return a connected client to issue commands to the server.
func NewClient(c Config) (*Client, error) {
tlsConfig := &tls.Config{
InsecureSkipVerify: c.UnsafeSsl,
}
tr := &http.Transport{
TLSClientConfig: tlsConfig,
}
client := Client{
url: c.URL,
username: c.Username,
password: c.Password,
httpClient: &http.Client{Timeout: c.Timeout, Transport: tr},
userAgent: c.UserAgent,
precision: c.Precision,
}
if client.userAgent == "" {
client.userAgent = "InfluxDBClient"
}
return &client, nil
}
// SetAuth will update the username and passwords
func (c *Client) SetAuth(u, p string) {
c.username = u
c.password = p
}
// SetPrecision will update the precision
func (c *Client) SetPrecision(precision string) {
c.precision = precision
}
// Query sends a command to the server and returns the Response
func (c *Client) Query(q Query) (*Response, error) {
u := c.url
u.Path = "query"
values := u.Query()
values.Set("q", q.Command)
values.Set("db", q.Database)
if q.Chunked {
values.Set("chunked", "true")
if q.ChunkSize > 0 {
values.Set("chunk_size", strconv.Itoa(q.ChunkSize))
}
}
if c.precision != "" {
values.Set("epoch", c.precision)
}
u.RawQuery = values.Encode()
req, err := http.NewRequest("POST", u.String(), nil)
if err != nil {
return nil, err
}
req.Header.Set("User-Agent", c.userAgent)
if c.username != "" {
req.SetBasicAuth(c.username, c.password)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var response Response
if q.Chunked {
cr := NewChunkedResponse(resp.Body)
for {
r, err := cr.NextResponse()
if err != nil {
// If we got an error while decoding the response, send that back.
return nil, err
}
if r == nil {
break
}
response.Results = append(response.Results, r.Results...)
if r.Err != nil {
response.Err = r.Err
break
}
}
} else {
dec := json.NewDecoder(resp.Body)
dec.UseNumber()
if err := dec.Decode(&response); err != nil {
// Ignore EOF errors if we got an invalid status code.
if !(err == io.EOF && resp.StatusCode != http.StatusOK) {
return nil, err
}
}
}
// If we don't have an error in our json response, and didn't get StatusOK,
// then send back an error.
if resp.StatusCode != http.StatusOK && response.Error() == nil {
return &response, fmt.Errorf("received status code %d from server", resp.StatusCode)
}
return &response, nil
}
// Write takes BatchPoints and allows for writing of multiple points with defaults
// If successful, error is nil and Response is nil
// If an error occurs, Response may contain additional information if populated.
func (c *Client) Write(bp BatchPoints) (*Response, error) {
u := c.url
u.Path = "write"
var b bytes.Buffer
for _, p := range bp.Points {
err := checkPointTypes(p)
if err != nil {
return nil, err
}
if p.Raw != "" {
if _, err := b.WriteString(p.Raw); err != nil {
return nil, err
}
} else {
for k, v := range bp.Tags {
if p.Tags == nil {
p.Tags = make(map[string]string, len(bp.Tags))
}
p.Tags[k] = v
}
if _, err := b.WriteString(p.MarshalString()); err != nil {
return nil, err
}
}
if err := b.WriteByte('\n'); err != nil {
return nil, err
}
}
req, err := http.NewRequest("POST", u.String(), &b)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "")
req.Header.Set("User-Agent", c.userAgent)
if c.username != "" {
req.SetBasicAuth(c.username, c.password)
}
precision := bp.Precision
if precision == "" {
precision = c.precision
}
params := req.URL.Query()
params.Set("db", bp.Database)
params.Set("rp", bp.RetentionPolicy)
params.Set("precision", precision)
params.Set("consistency", bp.WriteConsistency)
req.URL.RawQuery = params.Encode()
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var response Response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
err := errors.New(string(body))
response.Err = err
return &response, err
}
return nil, nil
}
// WriteLineProtocol takes a string with line returns to delimit each write
// If successful, error is nil and Response is nil
// If an error occurs, Response may contain additional information if populated.
func (c *Client) WriteLineProtocol(data, database, retentionPolicy, precision, writeConsistency string) (*Response, error) {
u := c.url
u.Path = "write"
r := strings.NewReader(data)
req, err := http.NewRequest("POST", u.String(), r)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "")
req.Header.Set("User-Agent", c.userAgent)
if c.username != "" {
req.SetBasicAuth(c.username, c.password)
}
params := req.URL.Query()
params.Set("db", database)
params.Set("rp", retentionPolicy)
params.Set("precision", precision)
params.Set("consistency", writeConsistency)
req.URL.RawQuery = params.Encode()
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var response Response
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
err := errors.New(string(body))
response.Err = err
return &response, err
}
return nil, nil
}
// Ping will check to see if the server is up
// Ping returns how long the request took, the version of the server it connected to, and an error if one occurred.
func (c *Client) Ping() (time.Duration, string, error) {
now := time.Now()
u := c.url
u.Path = "ping"
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return 0, "", err
}
req.Header.Set("User-Agent", c.userAgent)
if c.username != "" {
req.SetBasicAuth(c.username, c.password)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return 0, "", err
}
defer resp.Body.Close()
version := resp.Header.Get("X-Influxdb-Version")
return time.Since(now), version, nil
}
// Structs
// Message represents a user message.
type Message struct {
Level string `json:"level,omitempty"`
Text string `json:"text,omitempty"`
}
// Result represents a resultset returned from a single statement.
type Result struct {
Series []models.Row
Messages []*Message
Err error
}
// MarshalJSON encodes the result into JSON.
func (r *Result) MarshalJSON() ([]byte, error) {
// Define a struct that outputs "error" as a string.
var o struct {
Series []models.Row `json:"series,omitempty"`
Messages []*Message `json:"messages,omitempty"`
Err string `json:"error,omitempty"`
}
// Copy fields to output struct.
o.Series = r.Series
o.Messages = r.Messages
if r.Err != nil {
o.Err = r.Err.Error()
}
return json.Marshal(&o)
}
// UnmarshalJSON decodes the data into the Result struct
func (r *Result) UnmarshalJSON(b []byte) error {
var o struct {
Series []models.Row `json:"series,omitempty"`
Messages []*Message `json:"messages,omitempty"`
Err string `json:"error,omitempty"`
}
dec := json.NewDecoder(bytes.NewBuffer(b))
dec.UseNumber()
err := dec.Decode(&o)
if err != nil {
return err
}
r.Series = o.Series
r.Messages = o.Messages
if o.Err != "" {
r.Err = errors.New(o.Err)
}
return nil
}
// Response represents a list of statement results.
type Response struct {
Results []Result
Err error
}
// MarshalJSON encodes the response into JSON.
func (r *Response) MarshalJSON() ([]byte, error) {
// Define a struct that outputs "error" as a string.
var o struct {
Results []Result `json:"results,omitempty"`
Err string `json:"error,omitempty"`
}
// Copy fields to output struct.
o.Results = r.Results
if r.Err != nil {
o.Err = r.Err.Error()
}
return json.Marshal(&o)
}
// UnmarshalJSON decodes the data into the Response struct
func (r *Response) UnmarshalJSON(b []byte) error {
var o struct {
Results []Result `json:"results,omitempty"`
Err string `json:"error,omitempty"`
}
dec := json.NewDecoder(bytes.NewBuffer(b))
dec.UseNumber()
err := dec.Decode(&o)
if err != nil {
return err
}
r.Results = o.Results
if o.Err != "" {
r.Err = errors.New(o.Err)
}
return nil
}
// Error returns the first error from any statement.
// Returns nil if no errors occurred on any statements.
func (r *Response) Error() error {
if r.Err != nil {
return r.Err
}
for _, result := range r.Results {
if result.Err != nil {
return result.Err
}
}
return nil
}
// duplexReader reads responses and writes it to another writer while
// satisfying the reader interface.
type duplexReader struct {
r io.Reader
w io.Writer
}
func (r *duplexReader) Read(p []byte) (n int, err error) {
n, err = r.r.Read(p)
if err == nil {
r.w.Write(p[:n])
}
return n, err
}
// ChunkedResponse represents a response from the server that
// uses chunking to stream the output.
type ChunkedResponse struct {
dec *json.Decoder
duplex *duplexReader
buf bytes.Buffer
}
// NewChunkedResponse reads a stream and produces responses from the stream.
func NewChunkedResponse(r io.Reader) *ChunkedResponse {
resp := &ChunkedResponse{}
resp.duplex = &duplexReader{r: r, w: &resp.buf}
resp.dec = json.NewDecoder(resp.duplex)
resp.dec.UseNumber()
return resp
}
// NextResponse reads the next line of the stream and returns a response.
func (r *ChunkedResponse) NextResponse() (*Response, error) {
var response Response
if err := r.dec.Decode(&response); err != nil {
if err == io.EOF {
return nil, nil
}
// A decoding error happened. This probably means the server crashed
// and sent a last-ditch error message to us. Ensure we have read the
// entirety of the connection to get any remaining error text.
io.Copy(ioutil.Discard, r.duplex)
return nil, errors.New(strings.TrimSpace(r.buf.String()))
}
r.buf.Reset()
return &response, nil
}
// Point defines the fields that will be written to the database
// Measurement, Time, and Fields are required
// Precision can be specified if the time is in epoch format (integer).
// Valid values for Precision are n, u, ms, s, m, and h
type Point struct {
Measurement string
Tags map[string]string
Time time.Time
Fields map[string]interface{}
Precision string
Raw string
}
// MarshalJSON will format the time in RFC3339Nano
// Precision is also ignored as it is only used for writing, not reading
// Or another way to say it is we always send back in nanosecond precision
func (p *Point) MarshalJSON() ([]byte, error) {
point := struct {
Measurement string `json:"measurement,omitempty"`
Tags map[string]string `json:"tags,omitempty"`
Time string `json:"time,omitempty"`
Fields map[string]interface{} `json:"fields,omitempty"`
Precision string `json:"precision,omitempty"`
}{
Measurement: p.Measurement,
Tags: p.Tags,
Fields: p.Fields,
Precision: p.Precision,
}
// Let it omit empty if it's really zero
if !p.Time.IsZero() {
point.Time = p.Time.UTC().Format(time.RFC3339Nano)
}
return json.Marshal(&point)
}
// MarshalString renders string representation of a Point with specified
// precision. The default precision is nanoseconds.
func (p *Point) MarshalString() string {
pt, err := models.NewPoint(p.Measurement, models.NewTags(p.Tags), p.Fields, p.Time)
if err != nil {
return "# ERROR: " + err.Error() + " " + p.Measurement
}
if p.Precision == "" || p.Precision == "ns" || p.Precision == "n" {
return pt.String()
}
return pt.PrecisionString(p.Precision)
}
// UnmarshalJSON decodes the data into the Point struct
func (p *Point) UnmarshalJSON(b []byte) error {
var normal struct {
Measurement string `json:"measurement"`
Tags map[string]string `json:"tags"`
Time time.Time `json:"time"`
Precision string `json:"precision"`
Fields map[string]interface{} `json:"fields"`
}
var epoch struct {
Measurement string `json:"measurement"`
Tags map[string]string `json:"tags"`
Time *int64 `json:"time"`
Precision string `json:"precision"`
Fields map[string]interface{} `json:"fields"`
}
if err := func() error {
var err error
dec := json.NewDecoder(bytes.NewBuffer(b))
dec.UseNumber()
if err = dec.Decode(&epoch); err != nil {
return err
}
// Convert from epoch to time.Time, but only if Time
// was actually set.
var ts time.Time
if epoch.Time != nil {
ts, err = EpochToTime(*epoch.Time, epoch.Precision)
if err != nil {
return err
}
}
p.Measurement = epoch.Measurement
p.Tags = epoch.Tags
p.Time = ts
p.Precision = epoch.Precision
p.Fields = normalizeFields(epoch.Fields)
return nil
}(); err == nil {
return nil
}
dec := json.NewDecoder(bytes.NewBuffer(b))
dec.UseNumber()
if err := dec.Decode(&normal); err != nil {
return err
}
normal.Time = SetPrecision(normal.Time, normal.Precision)
p.Measurement = normal.Measurement
p.Tags = normal.Tags
p.Time = normal.Time
p.Precision = normal.Precision
p.Fields = normalizeFields(normal.Fields)
return nil
}
// Remove any notion of json.Number
func normalizeFields(fields map[string]interface{}) map[string]interface{} {
newFields := map[string]interface{}{}
for k, v := range fields {
switch v := v.(type) {
case json.Number:
jv, e := v.Float64()
if e != nil {
panic(fmt.Sprintf("unable to convert json.Number to float64: %s", e))
}
newFields[k] = jv
default:
newFields[k] = v
}
}
return newFields
}
// BatchPoints is used to send batched data in a single write.
// Database and Points are required
// If no retention policy is specified, it will use the databases default retention policy.
// If tags are specified, they will be "merged" with all points. If a point already has that tag, it will be ignored.
// If time is specified, it will be applied to any point with an empty time.
// Precision can be specified if the time is in epoch format (integer).
// Valid values for Precision are n, u, ms, s, m, and h
type BatchPoints struct {
Points []Point `json:"points,omitempty"`
Database string `json:"database,omitempty"`
RetentionPolicy string `json:"retentionPolicy,omitempty"`
Tags map[string]string `json:"tags,omitempty"`
Time time.Time `json:"time,omitempty"`
Precision string `json:"precision,omitempty"`
WriteConsistency string `json:"-"`
}
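// Illustrative (not in the original source): a minimal batch that relies on
// the database's default retention policy:
//
//	bp := BatchPoints{
//		Database: "mydb",
//		Points: []Point{{
//			Measurement: "cpu",
//			Fields:      map[string]interface{}{"idle": 10.1},
//		}},
//	}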
// UnmarshalJSON decodes the data into the BatchPoints struct
func (bp *BatchPoints) UnmarshalJSON(b []byte) error {
var normal struct {
Points []Point `json:"points"`
Database string `json:"database"`
RetentionPolicy string `json:"retentionPolicy"`
Tags map[string]string `json:"tags"`
Time time.Time `json:"time"`
Precision string `json:"precision"`
}
var epoch struct {
Points []Point `json:"points"`
Database string `json:"database"`
RetentionPolicy string `json:"retentionPolicy"`
Tags map[string]string `json:"tags"`
Time *int64 `json:"time"`
Precision string `json:"precision"`
}
if err := func() error {
var err error
if err = json.Unmarshal(b, &epoch); err != nil {
return err
}
// Convert from epoch to time.Time
var ts time.Time
if epoch.Time != nil {
ts, err = EpochToTime(*epoch.Time, epoch.Precision)
if err != nil {
return err
}
}
bp.Points = epoch.Points
bp.Database = epoch.Database
bp.RetentionPolicy = epoch.RetentionPolicy
bp.Tags = epoch.Tags
bp.Time = ts
bp.Precision = epoch.Precision
return nil
}(); err == nil {
return nil
}
if err := json.Unmarshal(b, &normal); err != nil {
return err
}
normal.Time = SetPrecision(normal.Time, normal.Precision)
bp.Points = normal.Points
bp.Database = normal.Database
bp.RetentionPolicy = normal.RetentionPolicy
bp.Tags = normal.Tags
bp.Time = normal.Time
bp.Precision = normal.Precision
return nil
}
// utility functions
// Addr provides the current url as a string of the server the client is connected to.
func (c *Client) Addr() string {
return c.url.String()
}
// checkPointTypes ensures no unsupported types are submitted to influxdb, returning error if they are found.
func checkPointTypes(p Point) error {
for _, v := range p.Fields {
switch v.(type) {
case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, float32, float64, bool, string, nil:
return nil
default:
return fmt.Errorf("unsupported point type: %T", v)
}
}
return nil
}
// helper functions
// EpochToTime takes a unix epoch time and uses precision to return back a time.Time
func EpochToTime(epoch int64, precision string) (time.Time, error) {
if precision == "" {
precision = "s"
}
var t time.Time
switch precision {
case "h":
t = time.Unix(0, epoch*int64(time.Hour))
case "m":
t = time.Unix(0, epoch*int64(time.Minute))
case "s":
t = time.Unix(0, epoch*int64(time.Second))
case "ms":
t = time.Unix(0, epoch*int64(time.Millisecond))
case "u":
t = time.Unix(0, epoch*int64(time.Microsecond))
case "n":
t = time.Unix(0, epoch)
default:
return time.Time{}, fmt.Errorf("unknown precision %q", precision)
}
return t, nil
}
// SetPrecision will round a time to the specified precision
func SetPrecision(t time.Time, precision string) time.Time {
switch precision {
case "n":
case "u":
return t.Round(time.Microsecond)
case "ms":
return t.Round(time.Millisecond)
case "s":
return t.Round(time.Second)
case "m":
return t.Round(time.Minute)
case "h":
return t.Round(time.Hour)
}
return t
}
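A short, hedged usage sketch of the two helpers above; it assumes this package is imported as `client`, and the constants are chosen arbitrarily:

```go
t, err := client.EpochToTime(1523980468, "s") // seconds since the epoch -> time.Time
if err != nil {
	log.Fatal(err) // assumes "log" is imported
}
rounded := client.SetPrecision(t, "m") // round to the nearest minute
_ = rounded
```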


@ -1,28 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["cloudmonitoring-gen.go"],
importpath = "google.golang.org/api/cloudmonitoring/v2beta2",
visibility = ["//visibility:public"],
deps = [
"//vendor/golang.org/x/net/context:go_default_library",
"//vendor/golang.org/x/net/context/ctxhttp:go_default_library",
"//vendor/google.golang.org/api/gensupport:go_default_library",
"//vendor/google.golang.org/api/googleapi:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)


@ -1,839 +0,0 @@
{
"kind": "discovery#restDescription",
"etag": "\"YWOzh2SDasdU84ArJnpYek-OMdg/NOsFjtPY9zRzc3K9F8mQGuDeSj0\"",
"discoveryVersion": "v1",
"id": "cloudmonitoring:v2beta2",
"name": "cloudmonitoring",
"canonicalName": "Cloud Monitoring",
"version": "v2beta2",
"revision": "20170501",
"title": "Cloud Monitoring API",
"description": "Accesses Google Cloud Monitoring data.",
"ownerDomain": "google.com",
"ownerName": "Google",
"icons": {
"x16": "https://www.gstatic.com/images/branding/product/1x/googleg_16dp.png",
"x32": "https://www.gstatic.com/images/branding/product/1x/googleg_32dp.png"
},
"documentationLink": "https://cloud.google.com/monitoring/v2beta2/",
"protocol": "rest",
"baseUrl": "https://www.googleapis.com/cloudmonitoring/v2beta2/projects/",
"basePath": "/cloudmonitoring/v2beta2/projects/",
"rootUrl": "https://www.googleapis.com/",
"servicePath": "cloudmonitoring/v2beta2/projects/",
"batchPath": "batch/cloudmonitoring/v2beta2",
"parameters": {
"alt": {
"type": "string",
"description": "Data format for the response.",
"default": "json",
"enum": [
"json"
],
"enumDescriptions": [
"Responses with Content-Type of application/json"
],
"location": "query"
},
"fields": {
"type": "string",
"description": "Selector specifying which fields to include in a partial response.",
"location": "query"
},
"key": {
"type": "string",
"description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.",
"location": "query"
},
"oauth_token": {
"type": "string",
"description": "OAuth 2.0 token for the current user.",
"location": "query"
},
"prettyPrint": {
"type": "boolean",
"description": "Returns response with indentations and line breaks.",
"default": "true",
"location": "query"
},
"quotaUser": {
"type": "string",
"description": "Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Overrides userIp if both are provided.",
"location": "query"
},
"userIp": {
"type": "string",
"description": "IP address of the site where the request originates. Use this if you want to enforce per-user limits.",
"location": "query"
}
},
"auth": {
"oauth2": {
"scopes": {
"https://www.googleapis.com/auth/cloud-platform": {
"description": "View and manage your data across Google Cloud Platform services"
},
"https://www.googleapis.com/auth/monitoring": {
"description": "View and write monitoring data for all of your Google and third-party Cloud and API projects"
}
}
}
},
"schemas": {
"DeleteMetricDescriptorResponse": {
"id": "DeleteMetricDescriptorResponse",
"type": "object",
"description": "The response of cloudmonitoring.metricDescriptors.delete.",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#deleteMetricDescriptorResponse\".",
"default": "cloudmonitoring#deleteMetricDescriptorResponse"
}
}
},
"ListMetricDescriptorsRequest": {
"id": "ListMetricDescriptorsRequest",
"type": "object",
"description": "The request of cloudmonitoring.metricDescriptors.list.",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listMetricDescriptorsRequest\".",
"default": "cloudmonitoring#listMetricDescriptorsRequest"
}
}
},
"ListMetricDescriptorsResponse": {
"id": "ListMetricDescriptorsResponse",
"type": "object",
"description": "The response of cloudmonitoring.metricDescriptors.list.",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listMetricDescriptorsResponse\".",
"default": "cloudmonitoring#listMetricDescriptorsResponse"
},
"metrics": {
"type": "array",
"description": "The returned metric descriptors.",
"items": {
"$ref": "MetricDescriptor"
}
},
"nextPageToken": {
"type": "string",
"description": "Pagination token. If present, indicates that additional results are available for retrieval. To access the results past the pagination limit, pass this value to the pageToken query parameter."
}
}
},
"ListTimeseriesDescriptorsRequest": {
"id": "ListTimeseriesDescriptorsRequest",
"type": "object",
"description": "The request of cloudmonitoring.timeseriesDescriptors.list",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listTimeseriesDescriptorsRequest\".",
"default": "cloudmonitoring#listTimeseriesDescriptorsRequest"
}
}
},
"ListTimeseriesDescriptorsResponse": {
"id": "ListTimeseriesDescriptorsResponse",
"type": "object",
"description": "The response of cloudmonitoring.timeseriesDescriptors.list",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listTimeseriesDescriptorsResponse\".",
"default": "cloudmonitoring#listTimeseriesDescriptorsResponse"
},
"nextPageToken": {
"type": "string",
"description": "Pagination token. If present, indicates that additional results are available for retrieval. To access the results past the pagination limit, set this value to the pageToken query parameter."
},
"oldest": {
"type": "string",
"description": "The oldest timestamp of the interval of this query, as an RFC 3339 string.",
"format": "date-time"
},
"timeseries": {
"type": "array",
"description": "The returned time series descriptors.",
"items": {
"$ref": "TimeseriesDescriptor"
}
},
"youngest": {
"type": "string",
"description": "The youngest timestamp of the interval of this query, as an RFC 3339 string.",
"format": "date-time"
}
}
},
"ListTimeseriesRequest": {
"id": "ListTimeseriesRequest",
"type": "object",
"description": "The request of cloudmonitoring.timeseries.list",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listTimeseriesRequest\".",
"default": "cloudmonitoring#listTimeseriesRequest"
}
}
},
"ListTimeseriesResponse": {
"id": "ListTimeseriesResponse",
"type": "object",
"description": "The response of cloudmonitoring.timeseries.list",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#listTimeseriesResponse\".",
"default": "cloudmonitoring#listTimeseriesResponse"
},
"nextPageToken": {
"type": "string",
"description": "Pagination token. If present, indicates that additional results are available for retrieval. To access the results past the pagination limit, set the pageToken query parameter to this value. All of the points of a time series will be returned before returning any point of the subsequent time series."
},
"oldest": {
"type": "string",
"description": "The oldest timestamp of the interval of this query as an RFC 3339 string.",
"format": "date-time"
},
"timeseries": {
"type": "array",
"description": "The returned time series.",
"items": {
"$ref": "Timeseries"
}
},
"youngest": {
"type": "string",
"description": "The youngest timestamp of the interval of this query as an RFC 3339 string.",
"format": "date-time"
}
}
},
"MetricDescriptor": {
"id": "MetricDescriptor",
"type": "object",
"description": "A metricDescriptor defines the name, label keys, and data type of a particular metric.",
"properties": {
"description": {
"type": "string",
"description": "Description of this metric."
},
"labels": {
"type": "array",
"description": "Labels defined for this metric.",
"items": {
"$ref": "MetricDescriptorLabelDescriptor"
}
},
"name": {
"type": "string",
"description": "The name of this metric."
},
"project": {
"type": "string",
"description": "The project ID to which the metric belongs."
},
"typeDescriptor": {
"$ref": "MetricDescriptorTypeDescriptor",
"description": "Type description for this metric."
}
}
},
"MetricDescriptorLabelDescriptor": {
"id": "MetricDescriptorLabelDescriptor",
"type": "object",
"description": "A label in a metric is a description of this metric, including the key of this description (what the description is), and the value for this description.",
"properties": {
"description": {
"type": "string",
"description": "Label description."
},
"key": {
"type": "string",
"description": "Label key."
}
}
},
"MetricDescriptorTypeDescriptor": {
"id": "MetricDescriptorTypeDescriptor",
"type": "object",
"description": "A type in a metric contains information about how the metric is collected and what its data points look like.",
"properties": {
"metricType": {
"type": "string",
"description": "The method of collecting data for the metric. See Metric types."
},
"valueType": {
"type": "string",
"description": "The data type of of individual points in the metric's time series. See Metric value types."
}
}
},
"Point": {
"id": "Point",
"type": "object",
"description": "Point is a single point in a time series. It consists of a start time, an end time, and a value.",
"properties": {
"boolValue": {
"type": "boolean",
"description": "The value of this data point. Either \"true\" or \"false\"."
},
"distributionValue": {
"$ref": "PointDistribution",
"description": "The value of this data point as a distribution. A distribution value can contain a list of buckets and/or an underflowBucket and an overflowBucket. The values of these points can be used to create a histogram."
},
"doubleValue": {
"type": "number",
"description": "The value of this data point as a double-precision floating-point number.",
"format": "double"
},
"end": {
"type": "string",
"description": "The interval [start, end] is the time period to which the point's value applies. For gauge metrics, whose values are instantaneous measurements, this interval should be empty (start should equal end). For cumulative metrics (of which deltas and rates are special cases), the interval should be non-empty. Both start and end are RFC 3339 strings.",
"format": "date-time"
},
"int64Value": {
"type": "string",
"description": "The value of this data point as a 64-bit integer.",
"format": "int64"
},
"start": {
"type": "string",
"description": "The interval [start, end] is the time period to which the point's value applies. For gauge metrics, whose values are instantaneous measurements, this interval should be empty (start should equal end). For cumulative metrics (of which deltas and rates are special cases), the interval should be non-empty. Both start and end are RFC 3339 strings.",
"format": "date-time"
},
"stringValue": {
"type": "string",
"description": "The value of this data point in string format."
}
}
},
"PointDistribution": {
"id": "PointDistribution",
"type": "object",
"description": "Distribution data point value type. When writing distribution points, try to be consistent with the boundaries of your buckets. If you must modify the bucket boundaries, then do so by merging, partitioning, or appending rather than skewing them.",
"properties": {
"buckets": {
"type": "array",
"description": "The finite buckets.",
"items": {
"$ref": "PointDistributionBucket"
}
},
"overflowBucket": {
"$ref": "PointDistributionOverflowBucket",
"description": "The overflow bucket."
},
"underflowBucket": {
"$ref": "PointDistributionUnderflowBucket",
"description": "The underflow bucket."
}
}
},
"PointDistributionBucket": {
"id": "PointDistributionBucket",
"type": "object",
"description": "The histogram's bucket. Buckets that form the histogram of a distribution value. If the upper bound of a bucket, say U1, does not equal the lower bound of the next bucket, say L2, this means that there is no event in [U1, L2).",
"properties": {
"count": {
"type": "string",
"description": "The number of events whose values are in the interval defined by this bucket.",
"format": "int64"
},
"lowerBound": {
"type": "number",
"description": "The lower bound of the value interval of this bucket (inclusive).",
"format": "double"
},
"upperBound": {
"type": "number",
"description": "The upper bound of the value interval of this bucket (exclusive).",
"format": "double"
}
}
},
"PointDistributionOverflowBucket": {
"id": "PointDistributionOverflowBucket",
"type": "object",
"description": "The overflow bucket is a special bucket that does not have the upperBound field; it includes all of the events that are no less than its lower bound.",
"properties": {
"count": {
"type": "string",
"description": "The number of events whose values are in the interval defined by this bucket.",
"format": "int64"
},
"lowerBound": {
"type": "number",
"description": "The lower bound of the value interval of this bucket (inclusive).",
"format": "double"
}
}
},
"PointDistributionUnderflowBucket": {
"id": "PointDistributionUnderflowBucket",
"type": "object",
"description": "The underflow bucket is a special bucket that does not have the lowerBound field; it includes all of the events that are less than its upper bound.",
"properties": {
"count": {
"type": "string",
"description": "The number of events whose values are in the interval defined by this bucket.",
"format": "int64"
},
"upperBound": {
"type": "number",
"description": "The upper bound of the value interval of this bucket (exclusive).",
"format": "double"
}
}
},
"Timeseries": {
"id": "Timeseries",
"type": "object",
"description": "The monitoring data is organized as metrics and stored as data points that are recorded over time. Each data point represents information like the CPU utilization of your virtual machine. A historical record of these data points is called a time series.",
"properties": {
"points": {
"type": "array",
"description": "The data points of this time series. The points are listed in order of their end timestamp, from younger to older.",
"items": {
"$ref": "Point"
}
},
"timeseriesDesc": {
"$ref": "TimeseriesDescriptor",
"description": "The descriptor of this time series."
}
}
},
"TimeseriesDescriptor": {
"id": "TimeseriesDescriptor",
"type": "object",
"description": "TimeseriesDescriptor identifies a single time series.",
"properties": {
"labels": {
"type": "object",
"description": "The label's name.",
"additionalProperties": {
"type": "string",
"description": "The label's name."
}
},
"metric": {
"type": "string",
"description": "The name of the metric."
},
"project": {
"type": "string",
"description": "The Developers Console project number to which this time series belongs."
}
}
},
"TimeseriesDescriptorLabel": {
"id": "TimeseriesDescriptorLabel",
"type": "object",
"properties": {
"key": {
"type": "string",
"description": "The label's name."
},
"value": {
"type": "string",
"description": "The label's value."
}
}
},
"TimeseriesPoint": {
"id": "TimeseriesPoint",
"type": "object",
"description": "When writing time series, TimeseriesPoint should be used instead of Timeseries, to enforce single point for each time series in the timeseries.write request.",
"properties": {
"point": {
"$ref": "Point",
"description": "The data point in this time series snapshot."
},
"timeseriesDesc": {
"$ref": "TimeseriesDescriptor",
"description": "The descriptor of this time series."
}
}
},
"WriteTimeseriesRequest": {
"id": "WriteTimeseriesRequest",
"type": "object",
"description": "The request of cloudmonitoring.timeseries.write",
"properties": {
"commonLabels": {
"type": "object",
"description": "The label's name.",
"additionalProperties": {
"type": "string",
"description": "The label's name."
}
},
"timeseries": {
"type": "array",
"description": "Provide time series specific labels and the data points for each time series. The labels in timeseries and the common_labels should form a complete list of labels that required by the metric.",
"items": {
"$ref": "TimeseriesPoint"
}
}
}
},
"WriteTimeseriesResponse": {
"id": "WriteTimeseriesResponse",
"type": "object",
"description": "The response of cloudmonitoring.timeseries.write",
"properties": {
"kind": {
"type": "string",
"description": "Identifies what kind of resource this is. Value: the fixed string \"cloudmonitoring#writeTimeseriesResponse\".",
"default": "cloudmonitoring#writeTimeseriesResponse"
}
}
}
},
"resources": {
"metricDescriptors": {
"methods": {
"create": {
"id": "cloudmonitoring.metricDescriptors.create",
"path": "{project}/metricDescriptors",
"httpMethod": "POST",
"description": "Create a new metric.",
"parameters": {
"project": {
"type": "string",
"description": "The project id. The value can be the numeric project ID or string-based project name.",
"required": true,
"location": "path"
}
},
"parameterOrder": [
"project"
],
"request": {
"$ref": "MetricDescriptor"
},
"response": {
"$ref": "MetricDescriptor"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
},
"delete": {
"id": "cloudmonitoring.metricDescriptors.delete",
"path": "{project}/metricDescriptors/{metric}",
"httpMethod": "DELETE",
"description": "Delete an existing metric.",
"parameters": {
"metric": {
"type": "string",
"description": "Name of the metric.",
"required": true,
"location": "path"
},
"project": {
"type": "string",
"description": "The project ID to which the metric belongs.",
"required": true,
"location": "path"
}
},
"parameterOrder": [
"project",
"metric"
],
"response": {
"$ref": "DeleteMetricDescriptorResponse"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
},
"list": {
"id": "cloudmonitoring.metricDescriptors.list",
"path": "{project}/metricDescriptors",
"httpMethod": "GET",
"description": "List metric descriptors that match the query. If the query is not set, then all of the metric descriptors will be returned. Large responses will be paginated, use the nextPageToken returned in the response to request subsequent pages of results by setting the pageToken query parameter to the value of the nextPageToken.",
"parameters": {
"count": {
"type": "integer",
"description": "Maximum number of metric descriptors per page. Used for pagination. If not specified, count = 100.",
"default": "100",
"format": "int32",
"minimum": "1",
"maximum": "1000",
"location": "query"
},
"pageToken": {
"type": "string",
"description": "The pagination token, which is used to page through large result sets. Set this value to the value of the nextPageToken to retrieve the next page of results.",
"location": "query"
},
"project": {
"type": "string",
"description": "The project id. The value can be the numeric project ID or string-based project name.",
"required": true,
"location": "path"
},
"query": {
"type": "string",
"description": "The query used to search against existing metrics. Separate keywords with a space; the service joins all keywords with AND, meaning that all keywords must match for a metric to be returned. If this field is omitted, all metrics are returned. If an empty string is passed with this field, no metrics are returned.",
"location": "query"
}
},
"parameterOrder": [
"project"
],
"request": {
"$ref": "ListMetricDescriptorsRequest"
},
"response": {
"$ref": "ListMetricDescriptorsResponse"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
}
}
},
"timeseries": {
"methods": {
"list": {
"id": "cloudmonitoring.timeseries.list",
"path": "{project}/timeseries/{metric}",
"httpMethod": "GET",
"description": "List the data points of the time series that match the metric and labels values and that have data points in the interval. Large responses are paginated; use the nextPageToken returned in the response to request subsequent pages of results by setting the pageToken query parameter to the value of the nextPageToken.",
"parameters": {
"aggregator": {
"type": "string",
"description": "The aggregation function that will reduce the data points in each window to a single point. This parameter is only valid for non-cumulative metrics with a value type of INT64 or DOUBLE.",
"enum": [
"max",
"mean",
"min",
"sum"
],
"enumDescriptions": [
"",
"",
"",
""
],
"location": "query"
},
"count": {
"type": "integer",
"description": "Maximum number of data points per page, which is used for pagination of results.",
"default": "6000",
"format": "int32",
"minimum": "1",
"maximum": "12000",
"location": "query"
},
"labels": {
"type": "string",
"description": "A collection of labels for the matching time series, which are represented as: \n- key==value: key equals the value \n- key=~value: key regex matches the value \n- key!=value: key does not equal the value \n- key!~value: key regex does not match the value For example, to list all of the time series descriptors for the region us-central1, you could specify:\nlabel=cloud.googleapis.com%2Flocation=~us-central1.*",
"pattern": "(.+?)(==|=~|!=|!~)(.+)",
"repeated": true,
"location": "query"
},
"metric": {
"type": "string",
"description": "Metric names are protocol-free URLs as listed in the Supported Metrics page. For example, compute.googleapis.com/instance/disk/read_ops_count.",
"required": true,
"location": "path"
},
"oldest": {
"type": "string",
"description": "Start of the time interval (exclusive), which is expressed as an RFC 3339 timestamp. If neither oldest nor timespan is specified, the default time interval will be (youngest - 4 hours, youngest]",
"location": "query"
},
"pageToken": {
"type": "string",
"description": "The pagination token, which is used to page through large result sets. Set this value to the value of the nextPageToken to retrieve the next page of results.",
"location": "query"
},
"project": {
"type": "string",
"description": "The project ID to which this time series belongs. The value can be the numeric project ID or string-based project name.",
"required": true,
"location": "path"
},
"timespan": {
"type": "string",
"description": "Length of the time interval to query, which is an alternative way to declare the interval: (youngest - timespan, youngest]. The timespan and oldest parameters should not be used together. Units: \n- s: second \n- m: minute \n- h: hour \n- d: day \n- w: week Examples: 2s, 3m, 4w. Only one unit is allowed, for example: 2w3d is not allowed; you should use 17d instead.\n\nIf neither oldest nor timespan is specified, the default time interval will be (youngest - 4 hours, youngest].",
"pattern": "[0-9]+[smhdw]?",
"location": "query"
},
"window": {
"type": "string",
"description": "The sampling window. At most one data point will be returned for each window in the requested time interval. This parameter is only valid for non-cumulative metric types. Units: \n- m: minute \n- h: hour \n- d: day \n- w: week Examples: 3m, 4w. Only one unit is allowed, for example: 2w3d is not allowed; you should use 17d instead.",
"pattern": "[0-9]+[mhdw]?",
"location": "query"
},
"youngest": {
"type": "string",
"description": "End of the time interval (inclusive), which is expressed as an RFC 3339 timestamp.",
"required": true,
"location": "query"
}
},
"parameterOrder": [
"project",
"metric",
"youngest"
],
"request": {
"$ref": "ListTimeseriesRequest"
},
"response": {
"$ref": "ListTimeseriesResponse"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
},
"write": {
"id": "cloudmonitoring.timeseries.write",
"path": "{project}/timeseries:write",
"httpMethod": "POST",
"description": "Put data points to one or more time series for one or more metrics. If a time series does not exist, a new time series will be created. It is not allowed to write a time series point that is older than the existing youngest point of that time series. Points that are older than the existing youngest point of that time series will be discarded silently. Therefore, users should make sure that points of a time series are written sequentially in the order of their end time.",
"parameters": {
"project": {
"type": "string",
"description": "The project ID. The value can be the numeric project ID or string-based project name.",
"required": true,
"location": "path"
}
},
"parameterOrder": [
"project"
],
"request": {
"$ref": "WriteTimeseriesRequest"
},
"response": {
"$ref": "WriteTimeseriesResponse"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
}
}
},
"timeseriesDescriptors": {
"methods": {
"list": {
"id": "cloudmonitoring.timeseriesDescriptors.list",
"path": "{project}/timeseriesDescriptors/{metric}",
"httpMethod": "GET",
"description": "List the descriptors of the time series that match the metric and labels values and that have data points in the interval. Large responses are paginated; use the nextPageToken returned in the response to request subsequent pages of results by setting the pageToken query parameter to the value of the nextPageToken.",
"parameters": {
"aggregator": {
"type": "string",
"description": "The aggregation function that will reduce the data points in each window to a single point. This parameter is only valid for non-cumulative metrics with a value type of INT64 or DOUBLE.",
"enum": [
"max",
"mean",
"min",
"sum"
],
"enumDescriptions": [
"",
"",
"",
""
],
"location": "query"
},
"count": {
"type": "integer",
"description": "Maximum number of time series descriptors per page. Used for pagination. If not specified, count = 100.",
"default": "100",
"format": "int32",
"minimum": "1",
"maximum": "1000",
"location": "query"
},
"labels": {
"type": "string",
"description": "A collection of labels for the matching time series, which are represented as: \n- key==value: key equals the value \n- key=~value: key regex matches the value \n- key!=value: key does not equal the value \n- key!~value: key regex does not match the value For example, to list all of the time series descriptors for the region us-central1, you could specify:\nlabel=cloud.googleapis.com%2Flocation=~us-central1.*",
"pattern": "(.+?)(==|=~|!=|!~)(.+)",
"repeated": true,
"location": "query"
},
"metric": {
"type": "string",
"description": "Metric names are protocol-free URLs as listed in the Supported Metrics page. For example, compute.googleapis.com/instance/disk/read_ops_count.",
"required": true,
"location": "path"
},
"oldest": {
"type": "string",
"description": "Start of the time interval (exclusive), which is expressed as an RFC 3339 timestamp. If neither oldest nor timespan is specified, the default time interval will be (youngest - 4 hours, youngest]",
"location": "query"
},
"pageToken": {
"type": "string",
"description": "The pagination token, which is used to page through large result sets. Set this value to the value of the nextPageToken to retrieve the next page of results.",
"location": "query"
},
"project": {
"type": "string",
"description": "The project ID to which this time series belongs. The value can be the numeric project ID or string-based project name.",
"required": true,
"location": "path"
},
"timespan": {
"type": "string",
"description": "Length of the time interval to query, which is an alternative way to declare the interval: (youngest - timespan, youngest]. The timespan and oldest parameters should not be used together. Units: \n- s: second \n- m: minute \n- h: hour \n- d: day \n- w: week Examples: 2s, 3m, 4w. Only one unit is allowed, for example: 2w3d is not allowed; you should use 17d instead.\n\nIf neither oldest nor timespan is specified, the default time interval will be (youngest - 4 hours, youngest].",
"pattern": "[0-9]+[smhdw]?",
"location": "query"
},
"window": {
"type": "string",
"description": "The sampling window. At most one data point will be returned for each window in the requested time interval. This parameter is only valid for non-cumulative metric types. Units: \n- m: minute \n- h: hour \n- d: day \n- w: week Examples: 3m, 4w. Only one unit is allowed, for example: 2w3d is not allowed; you should use 17d instead.",
"pattern": "[0-9]+[mhdw]?",
"location": "query"
},
"youngest": {
"type": "string",
"description": "End of the time interval (inclusive), which is expressed as an RFC 3339 timestamp.",
"required": true,
"location": "query"
}
},
"parameterOrder": [
"project",
"metric",
"youngest"
],
"request": {
"$ref": "ListTimeseriesDescriptorsRequest"
},
"response": {
"$ref": "ListTimeseriesDescriptorsResponse"
},
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/monitoring"
]
}
}
}
}
}
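For orientation, the discovery document above describes plain REST calls. A hedged sketch of `cloudmonitoring.timeseries.list` over raw HTTP, with the URL assembled from the document's `baseUrl`, `path`, and required `youngest` parameter (the project, metric, timestamp, and OAuth token below are placeholder assumptions):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// baseUrl + "{project}/timeseries/{metric}" from the discovery document.
	project := "my-project"
	metric := url.PathEscape("compute.googleapis.com/instance/uptime")
	u := fmt.Sprintf(
		"https://www.googleapis.com/cloudmonitoring/v2beta2/projects/%s/timeseries/%s?youngest=%s",
		project, metric, url.QueryEscape("2018-04-17T00:00:00Z"),
	)
	req, _ := http.NewRequest("GET", u, nil)
	req.Header.Set("Authorization", "Bearer YOUR_OAUTH_TOKEN") // token acquisition elided
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // a ListTimeseriesResponse JSON body on success
}
```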

File diff suppressed because it is too large