Compare commits


28 Commits

Author SHA1 Message Date
Antonio Murdaca
e802625b7c bump to v0.1.20
- support image-spec v1.0.0-rc5

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 12:37:07 +02:00
Miloslav Trmač
105be6a0ab Merge pull request #337 from mtrmac/docker-push-to-tag
Update name/tag embedded into schema1 manifests
2017-05-10 12:33:18 +02:00
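For context, a Docker schema1 manifest embeds the repository name and tag as top-level fields, which is why pushing to a different name:tag requires rewriting the manifest. An illustrative sketch (layer digests, history entries, and signature payloads omitted):

{
  "schemaVersion": 1,
  "name": "myns/unsigned",
  "tag": "unsigned",
  "fsLayers": [],
  "history": [],
  "signatures": []
}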
Miloslav Trmač
8ec2a142c9 Enable tests for schema2→schema1 conversion with docker/distribution registries
Now that we can update the embedded name:tag, the test no longer fails
on a schema1→schema1 copy with the old schema1 server, which verifies
the name:tag value.
2017-05-10 12:13:39 +02:00
Miloslav Trmač
c4ec970bb2 Uncomment TestCopyCompression test cases
We have been able to push to Docker tags for a long time, and recently
implemented s2→s1 autodetection.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
cf7e58a297 Test that names are updated as expected when pushing to schema1
Before the update, we loosened the equality check to ignore the
name/tag; now that we generate them correctly, test for the
expected values.
2017-05-10 12:13:38 +02:00
Miloslav Trmač
03233a5ca7 Vendor after merging mtrmac/image:docker-push-to-tag 2017-05-10 12:13:24 +02:00
Miloslav Trmač
9bc847e656 Use a schema2 server in TestCopySignatures
TestCopySignatures, among other things, tests copying a correctly
signed image to a different name without breaking the signature, which
will be impossible with schema1 after we start updating the names
embedded in the schema1 manifest. So, use the schema2 server binary,
and docker://busybox image versions which use schema2.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
22965c443f Allow updated names when comparing schema1 images
The new version of containers/image will update the name and tag fields
when pushing to schema1; accept that behavior even before we update the
vendored copy, so that tests keep working.

For now, just ignore the name/tag fields, so that both the current and
updated versions of containers/image are acceptable; we will tighten
that after the update.
2017-05-10 12:01:09 +02:00
Miloslav Trmač
6f23c88e84 Make schema1 dir: comparisons nondestructive
Use (diff -x manifest.json) instead of removing the manifest.json files.
Also rename the helper from destructiveCheckDirImagesAreEqual to
assertDirImagesAreEqual.
2017-05-10 12:01:09 +02:00
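For reference, the -x option of diff excludes files whose names match the given pattern, so the helper can compare everything except the manifests in a single invocation:

diff -urN -x manifest.json dir1 dir2

(-u unified output, -r recursive, -N treat absent files as empty; dir1 and dir2 are the two dir: images.)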
Miloslav Trmač
4afafe9538 Log output of docker/distribution registries instead of sending it to /dev/null
This is useful for diagnosing failures which are logged locally but not
reported to the clients.
2017-05-10 12:01:09 +02:00
Antonio Murdaca
50dda3492c Merge pull request #313 from runcom/update-imgspec-rc5
oci: update to image-spec v1 RC5
2017-05-10 11:57:20 +02:00
Antonio Murdaca
f4a44f00b8 integration: disable check with image-tools for image-spec RC5
We need https://github.com/opencontainers/image-tools/pull/144 first

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:19 +02:00
Antonio Murdaca
dd13a0d60b oci: update to image-spec v1 RC5
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-10 11:35:16 +02:00
Antonio Murdaca
80b751a225 Merge pull request #332 from mtrmac/schema12
Run-time manifest type support (e.g. schema1/schema2) autodetection re-vendor + integration tests
2017-05-10 11:04:28 +02:00
Miloslav Trmač
8556dd1aa1 Add tests for automatic schema conversion depending on registry
In addition to the default registry in the OpenShift cluster, start two
more (one known to support s1 only, one known to support s1+s2), and
also a docker/distribution s1-only registry.

Then test that copying images around works as expected.

NOTE: The docker/distribution s1-only tests currently fail and are
disabled.  See the added comment for details.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
c43e4cffaf Collect "processes" to kill in openshiftCluster instead of named members
We don’t really need to differentiate between the master and the
registry; we just want to terminate them, ideally in the right order.
So, collect them in an array instead of using separate members.

This will make it easier to have more registry instances in the near
future.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2017-05-09 15:08:11 +02:00
Miloslav Trmač
44ee5be6db Do not store a *check.C in openshiftCluster
The *check.C object cannot be reused across tests, so storing it in
openshiftCluster is incorrect (and leads to weird behavior like
assertion failures being silently ignored). So far this hasn't really
been an issue, because we have been using the *check.C only in
SetUpSuite and TearDownSuite, and the changes to those have turned out
to be unnecessary after all, but this is still the right thing to do.

This is more or less
> s/c\./cluster\./g; s/cluster\.c/c/g
(paying more attention to the syntax) and corresponding modifications
to the method declarations.

Does not change behavior, apart from using the correct *check.C in
CopySuite.TearDownSuite.
2017-05-09 15:08:11 +02:00
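A minimal Go sketch of the pattern change, with simplified method bodies and hypothetical type names:

package integration

import "github.com/go-check/check"

// Before: the suite-owned struct stored the per-test *check.C. The struct
// outlives any single test, but each *check.C is valid for one test only,
// so assertions could be reported against an already-finished test and
// silently ignored.
type clusterBefore struct{ c *check.C }

func (cl *clusterBefore) startMaster() { cl.c.Logf("starting master") }

// After: no stored *check.C; every method receives the current test's
// *check.C explicitly from its caller.
type clusterAfter struct{}

func (cl *clusterAfter) startMaster(c *check.C) { c.Logf("starting master") }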
Miloslav Trmač
cfd1cf6def Abort in fileFromFixture if a replacement template is not found
This makes the fixture editing more robust against typos or unexpected
changes (if the “fixture” comes from third parties, like the OpenShift
registry configuration file).
2017-05-09 15:08:11 +02:00
Miloslav Trmač
15ce6488dd Move fileFromFixture from copy_test.go to utils.go
… to make it possible to call it from openshift.go.

Does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
4f199f86f7 Split a startRegistryProcess from openshiftCluster.startRegistry
The helper will be reused in the future.  For now, this does not change
behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1f1f6801bb Split openshiftCluster.startRegistry into prepareRegistryConfig and startRegistry
This separates creation of the account and configuration, which can be
shared across service instances, from actually starting the registry; we
will soon start several of them.

Only splits a function, does not change behavior.
2017-05-09 15:08:11 +02:00
Miloslav Trmač
1ee776b09b Vendor after merging mtrmac/image:schema12 2017-05-09 15:07:49 +02:00
Antonio Murdaca
8bb9d5f0f2 Merge pull request #335 from runcom/fix-crio-openshift
Fix docker-daemon to containers-storage copy
2017-05-06 12:29:16 +02:00
Antonio Murdaca
88ea901938 vendor c/image to fix docker-daemon -> containers-storage copy
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-05-06 12:09:29 +02:00
Antonio Murdaca
fd41f20bb8 Merge pull request #314 from giuseppe/ostree
skopeo: support copy to OSTree storage
2017-04-24 22:58:51 +02:00
Giuseppe Scrivano
1d5c681f0f skopeo: support copy to OSTree storage
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2017-04-24 20:23:45 +02:00
Antonio Murdaca
c467afa37c Merge pull request #328 from runcom/new-release
New release: v0.1.19
2017-04-14 12:45:56 +02:00
Antonio Murdaca
53a90e51d4 bump again to v0.1.20-dev
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-04-14 12:21:28 +02:00
89 changed files with 1384 additions and 8760 deletions

View File

@@ -105,5 +105,10 @@ var copyCmd = cli.Command{
Name: "dest-tls-verify",
Usage: "require HTTPS and verify certificates when talking to the docker destination registry (defaults to true)",
},
cli.StringFlag{
Name: "dest-ostree-tmp-dir",
Value: "",
Usage: "`DIRECTORY` to use for OSTree temporary files",
},
},
}
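For illustration, combined with the ostree: destination transport added elsewhere in this release (the exact reference syntax is an assumption here, not shown in this hunk), the new flag would be passed on a copy along these lines:

skopeo copy --dest-ostree-tmp-dir /var/tmp/skopeo-ostree docker://busybox:latest ostree:busybox:latest

The directory is only used for temporary files while layers are imported into the OSTree repository.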

View File

@@ -16,6 +16,7 @@ func contextFromGlobalOptions(c *cli.Context, flagPrefix string) (*types.SystemC
// DEPRECATED: keep this here for backward compatibility, but override
// them if per subcommand flags are provided (see below).
DockerInsecureSkipTLSVerify: !c.GlobalBoolT("tls-verify"),
OSTreeTmpDirPath: c.String(flagPrefix + "ostree-tmp-dir"),
}
if c.IsSet(flagPrefix + "tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT(flagPrefix + "tls-verify")

View File

@@ -27,6 +27,7 @@ _skopeo_copy() {
--src-tls-verify
--dest-creds --dcreds
--dest-cert-dir
--dest-ostree-tmp-dir
--dest-tls-verify
"
local boolean_options="

View File

@@ -74,6 +74,8 @@ Uses the system's trust policy to validate images, rejects images not trusted by
**--dest-cert-dir** _path_ Use certificates at _path_ (*.crt, *.cert, *.key) to connect to the destination registry
**--dest-ostree-tmp-dir** _path_ Directory to use for OSTree temporary files.
**--dest-tls-verify** _bool-value_ Require HTTPS and verify certificates when talking to docker destination registry (defaults to true)
Existing signatures, if any, are preserved as well.

View File

@@ -1,10 +1,9 @@
package main
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/http/httptest"
"os"
@@ -15,19 +14,22 @@ import (
"github.com/containers/image/signature"
"github.com/go-check/check"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-tools/image"
)
func init() {
check.Suite(&CopySuite{})
}
const v2DockerRegistryURL = "localhost:5555" // Update also policy.json
const (
v2DockerRegistryURL = "localhost:5555" // Update also policy.json
v2s1DockerRegistryURL = "localhost:5556"
)
type CopySuite struct {
cluster *openshiftCluster
registry *testRegistryV2
gpgHome string
cluster *openshiftCluster
registry *testRegistryV2
s1Registry *testRegistryV2
gpgHome string
}
func (s *CopySuite) SetUpSuite(c *check.C) {
@@ -37,7 +39,7 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
s.cluster = startOpenshiftCluster(c) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression"} {
for _, stream := range []string{"unsigned", "personal", "official", "naming", "cosigned", "compression", "schema1", "schema2"} {
isJSON := fmt.Sprintf(`{
"kind": "ImageStream",
"apiVersion": "v1",
@@ -49,7 +51,9 @@ func (s *CopySuite) SetUpSuite(c *check.C) {
runCommandWithInput(c, isJSON, "oc", "create", "-f", "-")
}
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false) // FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
// FIXME: Set up TLS for the docker registry port instead of using "--tls-verify=false" all over the place.
s.registry = setupRegistryV2At(c, v2DockerRegistryURL, false, false)
s.s1Registry = setupRegistryV2At(c, v2s1DockerRegistryURL, false, true)
gpgHome, err := ioutil.TempDir("", "skopeo-gpg")
c.Assert(err, check.IsNil)
@@ -75,31 +79,14 @@ func (s *CopySuite) TearDownSuite(c *check.C) {
if s.registry != nil {
s.registry.Close()
}
if s.s1Registry != nil {
s.s1Registry.Close()
}
if s.cluster != nil {
s.cluster.tearDown()
s.cluster.tearDown(c)
}
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
contents = bytes.Replace(contents, []byte(template), []byte(value), -1)
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}
func (s *CopySuite) TestCopyFailsWithManifestList(c *check.C) {
assertSkopeoFails(c, ".*can not copy docker://estesp/busybox:latest: manifest contains multiple images.*", "copy", "docker://estesp/busybox:latest", "dir:somedir")
}
@@ -117,9 +104,9 @@ func (s *CopySuite) TestCopySimpleAtomicRegistry(c *check.C) {
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
// "push": dir: → atomic:
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--debug", "copy", "dir:"+dir1, "atomic:localhost:5000/myns/unsigned:unsigned")
// The result of pushing and pulling is an unmodified image.
// The result of pushing and pulling is an equivalent image, except for schema1 embedded names.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:unsigned", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:unsigned")
}
// The most basic (skopeo copy) use:
@@ -153,11 +140,10 @@ func (s *CopySuite) TestCopySimple(c *check.C) {
c.Assert(err, check.IsNil)
}
// Check whether dir: images in dir1 and dir2 are equal.
// WARNING: This modifies the contents of dir1 and dir2!
func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// Check whether dir: images in dir1 and dir2 are equal, ignoring schema1 signatures.
func assertDirImagesAreEqual(c *check.C, dir1, dir2 string) {
// The manifests may have different JWS signatures; so, compare the manifests by digests, which
// strips the signatures, and remove them, comparing the rest file by file.
// strips the signatures.
digests := []digest.Digest{}
for _, dir := range []string{dir1, dir2} {
manifestPath := filepath.Join(dir, "manifest.json")
@@ -166,12 +152,37 @@ func destructiveCheckDirImagesAreEqual(c *check.C, dir1, dir2 string) {
digest, err := manifest.Digest(m)
c.Assert(err, check.IsNil)
digests = append(digests, digest)
err = os.Remove(manifestPath)
c.Assert(err, check.IsNil)
c.Logf("Manifest file %s (digest %s) removed", manifestPath, digest)
}
c.Assert(digests[0], check.Equals, digests[1])
out := combinedOutputOfCommand(c, "diff", "-urN", dir1, dir2)
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
// Check whether schema1 dir: images in dir1 and dir2 are equal, ignoring schema1 signatures and the embedded path/tag values, which should have the expected values.
func assertSchema1DirImagesAreEqualExceptNames(c *check.C, dir1, ref1, dir2, ref2 string) {
// The manifests may have different JWS signatures and names; so, unmarshal and delete these elements.
manifests := []map[string]interface{}{}
for dir, ref := range map[string]string{dir1: ref1, dir2: ref2} {
manifestPath := filepath.Join(dir, "manifest.json")
m, err := ioutil.ReadFile(manifestPath)
c.Assert(err, check.IsNil)
data := map[string]interface{}{}
err = json.Unmarshal(m, &data)
c.Assert(err, check.IsNil)
c.Assert(data["schemaVersion"], check.Equals, float64(1))
colon := strings.LastIndex(ref, ":")
c.Assert(colon, check.Not(check.Equals), -1)
c.Assert(data["name"], check.Equals, ref[:colon])
c.Assert(data["tag"], check.Equals, ref[colon+1:])
for _, key := range []string{"signatures", "name", "tag"} {
delete(data, key)
}
manifests = append(manifests, data)
}
c.Assert(manifests[0], check.DeepEquals, manifests[1])
// Then compare the rest file by file.
out := combinedOutputOfCommand(c, "diff", "-urN", "-x", "manifest.json", dir1, dir2)
c.Assert(out, check.Equals, "")
}
@@ -190,7 +201,7 @@ func (s *CopySuite) TestCopyStreaming(c *check.C) {
// Compare (copies of) the original and the copy:
assertSkopeoSucceeds(c, "", "copy", "docker://estesp/busybox:amd64", "dir:"+dir1)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/unsigned:streaming", "dir:"+dir2)
destructiveCheckDirImagesAreEqual(c, dir1, dir2)
assertSchema1DirImagesAreEqualExceptNames(c, dir1, "estesp/busybox:amd64", dir2, "myns/unsigned:streaming")
// FIXME: Also check pushing to docker://
}
@@ -225,13 +236,13 @@ func (s *CopySuite) TestCopyOCIRoundTrip(c *check.C) {
c.Assert(out, check.Equals, "")
// For some silly reason we pass a logger to the OCI library here...
logger := log.New(os.Stderr, "", 0)
//logger := log.New(os.Stderr, "", 0)
// TODO: Verify using the upstream OCI image validator.
err = image.ValidateLayout(oci1, nil, logger)
c.Assert(err, check.IsNil)
err = image.ValidateLayout(oci2, nil, logger)
c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci1, nil, logger)
//c.Assert(err, check.IsNil)
//err = image.ValidateLayout(oci2, nil, logger)
//c.Assert(err, check.IsNil)
// Now verify that everything is identical. Currently this is true, but
// because we recompute the manifests on-the-fly this doesn't necessarily
@@ -267,34 +278,34 @@ func (s *CopySuite) TestCopySignatures(c *check.C) {
// type: signedBy
// Sign the images
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.23", "atomic:localhost:5000/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.23.2", "atomic:localhost:5000/myns/official:official")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "docker://busybox:1.26", "atomic:localhost:5006/myns/personal:personal")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "official@example.com", "docker://busybox:1.26.1", "atomic:localhost:5006/myns/official:official")
// Verify that we can pull them
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:official", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:personal", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:official", dirDest)
// Verify that mis-signed images are rejected
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/personal:personal", "atomic:localhost:5000/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/personal:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/personal:personal", "atomic:localhost:5006/myns/official:attack")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/personal:attack")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/personal:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/personal:attack", dirDest)
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/official:attack", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/official:attack", dirDest)
// Verify that signed identity is verified.
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5000/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:test1", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:test1")
assertSkopeoFails(c, ".*Source image rejected: Signature for identity localhost:5006/myns/official:official is not accepted.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:test1", dirDest)
// signedIdentity works
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/naming:naming", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/naming:naming")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/naming:naming", dirDest)
// Verify that cosigning requirements are enforced
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoFails(c, ".*Source image rejected: Invalid GPG signature.*",
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
"--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5000/myns/official:official", "atomic:localhost:5000/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5000/myns/cosigned:cosigned", dirDest)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "copy", "--sign-by", "personal@example.com", "atomic:localhost:5006/myns/official:official", "atomic:localhost:5006/myns/cosigned:cosigned")
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "copy", "atomic:localhost:5006/myns/cosigned:cosigned", dirDest)
}
// --policy copy for dir: sources
@@ -356,10 +367,10 @@ func (s *CopySuite) TestCopyCompression(c *check.C) {
defer os.RemoveAll(topDir)
for i, t := range []struct{ fixture, remote string }{
//{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"}, // FIXME: depends on push to tag working
//{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"}, // FIXME: depends on push to tag working
{"uncompressed-image-s1", "docker://" + v2DockerRegistryURL + "/compression/compression:s1"},
{"uncompressed-image-s2", "docker://" + v2DockerRegistryURL + "/compression/compression:s2"},
{"uncompressed-image-s1", "atomic:localhost:5000/myns/compression:s1"},
//{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"}, // FIXME: The unresolved "MANIFEST_UNKNOWN"/"unexpected end of JSON input" failure
{"uncompressed-image-s2", "atomic:localhost:5000/myns/compression:s2"},
} {
dir := filepath.Join(topDir, fmt.Sprintf("case%d", i))
err := os.MkdirAll(dir, 0755)
@@ -515,7 +526,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:atomic", dirDest+"/dirAD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirAA"), filepath.Join(topDir, "dirAD"))
// Get another image (different so that they don't share signatures, and sign it using docker://)
assertSkopeoSucceeds(c, "", "--tls-verify=false", "--registries.d", registriesDir,
@@ -528,7 +539,7 @@ func (s *CopySuite) TestCopyAtomicExtension(c *check.C) {
assertSkopeoSucceeds(c, "", "--debug", "--tls-verify=false", "--policy", policy, "--registries.d", registriesDir,
"copy", "docker://localhost:5000/myns/extension:extension", dirDest+"/dirDD")
// Both access methods result in the same data.
destructiveCheckDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
assertDirImagesAreEqual(c, filepath.Join(topDir, "dirDA"), filepath.Join(topDir, "dirDD"))
}
func (s *SkopeoSuite) TestCopySrcWithAuth(c *check.C) {
@@ -556,3 +567,52 @@ func (s *CopySuite) TestCopyNoPanicOnHTTPResponseWOTLSVerifyFalse(c *check.C) {
assertSkopeoFails(c, ".*server gave HTTP response to HTTPS client.*",
"copy", ourRegistry+"foobar", "dir:test")
}
func (s *CopySuite) TestCopySchemaConversion(c *check.C) {
// Test conversion / schema autodetection both for the OpenShift embedded registry…
s.testCopySchemaConversionRegistries(c, "docker://localhost:5005/myns/schema1", "docker://localhost:5006/myns/schema2")
// … and for various docker/distribution registry versions.
s.testCopySchemaConversionRegistries(c, "docker://"+v2s1DockerRegistryURL+"/schema1", "docker://"+v2DockerRegistryURL+"/schema2")
}
func (s *CopySuite) testCopySchemaConversionRegistries(c *check.C, schema1Registry, schema2Registry string) {
topDir, err := ioutil.TempDir("", "schema-conversion")
c.Assert(err, check.IsNil)
defer os.RemoveAll(topDir)
for _, subdir := range []string{"input1", "input2", "dest2"} {
err := os.MkdirAll(filepath.Join(topDir, subdir), 0755)
c.Assert(err, check.IsNil)
}
input1Dir := filepath.Join(topDir, "input1")
input2Dir := filepath.Join(topDir, "input2")
destDir := filepath.Join(topDir, "dest2")
// Ensure we are working with a schema2 image.
// dir: accepts any manifest format, i.e. this makes …/input2 a schema2 source which cannot be asked to produce schema1 like ordinary docker: registries can.
assertSkopeoSucceeds(c, "", "copy", "docker://busybox", "dir:"+input2Dir)
verifyManifestMIMEType(c, input2Dir, manifest.DockerV2Schema2MediaType)
// 2→2 (the "f2t2" in tag means "from 2 to 2")
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema2Registry+":f2t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f2t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema2MediaType)
// 2→1; we will use the result as a schema1 image for further tests.
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input2Dir, schema1Registry+":f2t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f2t1", "dir:"+input1Dir)
verifyManifestMIMEType(c, input1Dir, manifest.DockerV2Schema1SignedMediaType)
// 1→1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema1Registry+":f1t1")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema1Registry+":f1t1", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
// 1→2: image stays unmodified schema1
assertSkopeoSucceeds(c, "", "copy", "--dest-tls-verify=false", "dir:"+input1Dir, schema2Registry+":f1t2")
assertSkopeoSucceeds(c, "", "copy", "--src-tls-verify=false", schema2Registry+":f1t2", "dir:"+destDir)
verifyManifestMIMEType(c, destDir, manifest.DockerV2Schema1SignedMediaType)
}
// Verify manifest in a dir: image at dir is expectedMIMEType.
func verifyManifestMIMEType(c *check.C, dir string, expectedMIMEType string) {
manifestBlob, err := ioutil.ReadFile(filepath.Join(dir, "manifest.json"))
c.Assert(err, check.IsNil)
mimeType := manifest.GuessMIMEType(manifestBlob)
c.Assert(mimeType, check.Equals, expectedMIMEType)
}
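In summary, testCopySchemaConversionRegistries exercises this matrix (s2 = schema2, s1 = signed schema1):

f2t2: s2 source → schema2-capable registry → pulled back as s2 (unmodified)
f2t1: s2 source → schema1-only registry → pulled back as s1 (converted on push)
f1t1: s1 source → schema1-only registry → pulled back as s1 (unmodified)
f1t2: s1 source → schema2-capable registry → pulled back as s1 (schema1 remains accepted; no up-conversion is attempted)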

View File

@@ -45,46 +45,46 @@
]
},
"atomic": {
"localhost:5000/myns/personal": [
"localhost:5006/myns/personal": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/personal-pubkey.gpg"
}
],
"localhost:5000/myns/official": [
"localhost:5006/myns/official": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:test1": [
"localhost:5006/myns/naming:test1": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg"
}
],
"localhost:5000/myns/naming:naming": [
"localhost:5006/myns/naming:naming": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
}
],
"localhost:5000/myns/cosigned:cosigned": [
"localhost:5006/myns/cosigned:cosigned": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "@keydir@/official-pubkey.gpg",
"signedIdentity": {
"type": "exactRepository",
"dockerRepository": "localhost:5000/myns/official"
"dockerRepository": "localhost:5006/myns/official"
}
},
{

View File

@@ -21,35 +21,34 @@ var adminKUBECONFIG = map[string]string{
// openshiftCluster is an OpenShift API master and integrated registry
// running on localhost.
type openshiftCluster struct {
c *check.C
workingDir string
master *exec.Cmd
registry *exec.Cmd
processes []*exec.Cmd // Processes to terminate on teardown; append to the end, terminate from end to the start.
}
// startOpenshiftCluster creates a new openshiftCluster.
// WARNING: This affects state in users' home directory! Only run
// in isolated test environment.
func startOpenshiftCluster(c *check.C) *openshiftCluster {
cluster := &openshiftCluster{c: c}
cluster := &openshiftCluster{}
dir, err := ioutil.TempDir("", "openshift-cluster")
cluster.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
cluster.workingDir = dir
cluster.startMaster()
cluster.startRegistry()
cluster.ocLoginToProject()
cluster.dockerLogin()
cluster.relaxImageSignerPermissions()
cluster.startMaster(c)
cluster.prepareRegistryConfig(c)
cluster.startRegistry(c)
cluster.ocLoginToProject(c)
cluster.dockerLogin(c)
cluster.relaxImageSignerPermissions(c)
return cluster
}
// clusterCmd creates an exec.Cmd in c.workingDir with current environment modified by environment
func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
// clusterCmd creates an exec.Cmd in cluster.workingDir with current environment modified by environment
func (cluster *openshiftCluster) clusterCmd(env map[string]string, name string, args ...string) *exec.Cmd {
cmd := exec.Command(name, args...)
cmd.Dir = c.workingDir
cmd.Dir = cluster.workingDir
cmd.Env = os.Environ()
for key, value := range env {
cmd.Env = modifyEnviron(cmd.Env, key, value)
@@ -58,19 +57,20 @@ func (c *openshiftCluster) clusterCmd(env map[string]string, name string, args .
}
// startMaster starts the OpenShift master (etcd+API server) and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startMaster() {
c.master = c.clusterCmd(nil, "openshift", "start", "master")
stdout, err := c.master.StdoutPipe()
func (cluster *openshiftCluster) startMaster(c *check.C) {
cmd := cluster.clusterCmd(nil, "openshift", "start", "master")
cluster.processes = append(cluster.processes, cmd)
stdout, err := cmd.StdoutPipe()
// Send both to the same pipe. This might cause the two streams to be mixed up,
// but logging actually goes only to stderr - this primarily ensure we log any
// unexpected output to stdout.
c.master.Stderr = c.master.Stdout
err = c.master.Start()
c.c.Assert(err, check.IsNil)
cmd.Stderr = cmd.Stdout
err = cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 8443)
portOpen, terminatePortCheck := newPortChecker(c, 8443)
defer func() {
c.c.Logf("Terminating port check")
c.Logf("Terminating port check")
terminatePortCheck <- true
}()
@@ -78,12 +78,12 @@ func (c *openshiftCluster) startMaster() {
logCheckFound := make(chan bool)
go func() {
defer func() {
c.c.Logf("Log checker exiting")
c.Logf("Log checker exiting")
}()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
c.c.Logf("Log line: %s", line)
c.Logf("Log line: %s", line)
if strings.Contains(line, "Started Origin Controllers") {
logCheckFound <- true
return
@@ -92,7 +92,7 @@ func (c *openshiftCluster) startMaster() {
// Note: we can block before we get here.
select {
case <-terminateLogCheck:
c.c.Logf("terminated")
c.Logf("terminated")
return
default:
// Do not block here and read the next line.
@@ -101,31 +101,31 @@ func (c *openshiftCluster) startMaster() {
logCheckFound <- false
}()
defer func() {
c.c.Logf("Terminating log check")
c.Logf("Terminating log check")
terminateLogCheck <- true
}()
gotPortCheck := false
gotLogCheck := false
for !gotPortCheck || !gotLogCheck {
c.c.Logf("Waiting for master")
c.Logf("Waiting for master")
select {
case <-portOpen:
c.c.Logf("port check done")
c.Logf("port check done")
gotPortCheck = true
case found := <-logCheckFound:
c.c.Logf("log check done, found: %t", found)
c.Logf("log check done, found: %t", found)
if !found {
c.c.Fatal("log check done, success message not found")
c.Fatal("log check done, success message not found")
}
gotLogCheck = true
}
}
c.c.Logf("OK, master started!")
c.Logf("OK, master started!")
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (c *openshiftCluster) startRegistry() {
// prepareRegistryConfig creates a registry service account and a related k8s client configuration in ${cluster.workingDir}/openshift.local.registry.
func (cluster *openshiftCluster) prepareRegistryConfig(c *check.C) {
// This partially mimics the objects created by (oadm registry), except that we run the
// server directly as an ordinary process instead of a pod with an implicitly attached service account.
saJSON := `{
@@ -135,90 +135,118 @@ func (c *openshiftCluster) startRegistry() {
"name": "registry"
}
}`
cmd := c.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c.c, cmd, saJSON)
cmd := cluster.clusterCmd(adminKUBECONFIG, "oc", "create", "-f", "-")
runExecCmdWithInput(c, cmd, saJSON)
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-user", "system:registry", "-z", "registry")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:registry\" added: \"registry\"\n")
cmd = c.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
cmd = cluster.clusterCmd(adminKUBECONFIG, "oadm", "create-api-client-config", "--client-dir=openshift.local.registry", "--basename=openshift-registry", "--user=system:serviceaccount:default:registry")
out, err = cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "")
}
//KUBECONFIG=openshift.local.registry/openshift-registry.kubeconfig DOCKER_REGISTRY_URL=127.0.0.1:5000
c.registry = c.clusterCmd(map[string]string{
// startRegistryProcess starts the OpenShift registry with configPath on port, waits for it to be ready, and returns the process object, or terminates on failure.
func (cluster *openshiftCluster) startRegistryProcess(c *check.C, port int, configPath string) *exec.Cmd {
cmd := cluster.clusterCmd(map[string]string{
"KUBECONFIG": "openshift.local.registry/openshift-registry.kubeconfig",
"DOCKER_REGISTRY_URL": "127.0.0.1:5000",
}, "dockerregistry", "/atomic-registry-config.yml")
consumeAndLogOutputs(c.c, "registry", c.registry)
err = c.registry.Start()
c.c.Assert(err, check.IsNil)
"DOCKER_REGISTRY_URL": fmt.Sprintf("127.0.0.1:%d", port),
}, "dockerregistry", configPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%d", port), cmd)
err := cmd.Start()
c.Assert(err, check.IsNil)
portOpen, terminatePortCheck := newPortChecker(c.c, 5000)
portOpen, terminatePortCheck := newPortChecker(c, port)
defer func() {
terminatePortCheck <- true
}()
c.c.Logf("Waiting for registry to start")
c.Logf("Waiting for registry to start")
<-portOpen
c.c.Logf("OK, Registry port open")
c.Logf("OK, Registry port open")
return cmd
}
// startRegistry starts the OpenShift registry and waits for it to be ready, or terminates on failure.
func (cluster *openshiftCluster) startRegistry(c *check.C) {
// Our “primary” registry
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5000, "/atomic-registry-config.yml"))
// A registry configured with acceptschema2:false
schema1Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5005",
"rootdirectory: /registry": "rootdirectory: /registry-schema1",
// The default configuration currently already contains acceptschema2: false
})
// Make sure the configuration contains "acceptschema2: false", because eventually it will be enabled upstream and this function will need to be updated.
configContents, err := ioutil.ReadFile(schema1Config)
c.Assert(err, check.IsNil)
c.Assert(string(configContents), check.Matches, "(?s).*acceptschema2: false.*")
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5005, schema1Config))
// A registry configured with acceptschema2:true
schema2Config := fileFromFixture(c, "/atomic-registry-config.yml", map[string]string{
"addr: :5000": "addr: :5006",
"rootdirectory: /registry": "rootdirectory: /registry-schema2",
"acceptschema2: false": "acceptschema2: true",
})
cluster.processes = append(cluster.processes, cluster.startRegistryProcess(c, 5006, schema2Config))
}
// ocLogin runs (oc login) and (oc new-project) on the cluster, or terminates on failure.
func (c *openshiftCluster) ocLoginToProject() {
c.c.Logf("oc login")
cmd := c.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
func (cluster *openshiftCluster) ocLoginToProject(c *check.C) {
c.Logf("oc login")
cmd := cluster.clusterCmd(nil, "oc", "login", "--certificate-authority=openshift.local.config/master/ca.crt", "-u", "myuser", "-p", "mypw", "https://localhost:8443")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
c.Assert(err, check.IsNil, check.Commentf("%s", out))
c.Assert(string(out), check.Matches, "(?s).*Login successful.*") // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c.c, "oc", "new-project", "myns")
c.c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
outString := combinedOutputOfCommand(c, "oc", "new-project", "myns")
c.Assert(outString, check.Matches, `(?s).*Now using project "myns".*`) // (?s) : '.' will also match newlines
}
// dockerLogin simulates (docker login) to the cluster, or terminates on failure.
// We do not run (docker login) directly, because that requires a running daemon and a docker package.
func (c *openshiftCluster) dockerLogin() {
func (cluster *openshiftCluster) dockerLogin(c *check.C) {
dockerDir := filepath.Join(homedir.Get(), ".docker")
err := os.Mkdir(dockerDir, 0700)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
out := combinedOutputOfCommand(c.c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.c.Logf("oc config value: %s", out)
configJSON := fmt.Sprintf(`{
"auths": {
"localhost:5000": {
out := combinedOutputOfCommand(c, "oc", "config", "view", "-o", "json", "-o", "jsonpath={.users[*].user.token}")
c.Logf("oc config value: %s", out)
authValue := base64.StdEncoding.EncodeToString([]byte("unused:" + out))
auths := []string{}
for _, port := range []int{5000, 5005, 5006} {
auths = append(auths, fmt.Sprintf(`"localhost:%d": {
"auth": "%s",
"email": "unused"
}
}
}`, base64.StdEncoding.EncodeToString([]byte("unused:"+out)))
}`, port, authValue))
}
configJSON := `{"auths": {` + strings.Join(auths, ",") + `}}`
err = ioutil.WriteFile(filepath.Join(dockerDir, "config.json"), []byte(configJSON), 0600)
c.c.Assert(err, check.IsNil)
c.Assert(err, check.IsNil)
}
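The loop above generates a ~/.docker/config.json along these lines (whitespace normalized here; the auth value is the base64 encoding of "unused:" plus the oc token, elided below):

{
  "auths": {
    "localhost:5000": {"auth": "<base64>", "email": "unused"},
    "localhost:5005": {"auth": "<base64>", "email": "unused"},
    "localhost:5006": {"auth": "<base64>", "email": "unused"}
  }
}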
// relaxImageSignerPermissions opens up the system:image-signer permissions so that
// anyone can work with signatures
// FIXME: This also allows anyone to DoS anyone else; this design is really not all
// that workable, but it is the best we can do for now.
func (c *openshiftCluster) relaxImageSignerPermissions() {
cmd := c.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
func (cluster *openshiftCluster) relaxImageSignerPermissions(c *check.C) {
cmd := cluster.clusterCmd(adminKUBECONFIG, "oadm", "policy", "add-cluster-role-to-group", "system:image-signer", "system:authenticated")
out, err := cmd.CombinedOutput()
c.c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
c.Assert(err, check.IsNil, check.Commentf("%s", string(out)))
c.Assert(string(out), check.Equals, "cluster role \"system:image-signer\" added: \"system:authenticated\"\n")
}
// tearDown stops the cluster services and deletes (only some!) of the state.
func (c *openshiftCluster) tearDown() {
if c.registry != nil && c.registry.Process != nil {
c.registry.Process.Kill()
func (cluster *openshiftCluster) tearDown(c *check.C) {
for i := len(cluster.processes) - 1; i >= 0; i-- {
cluster.processes[i].Process.Kill()
}
if c.master != nil && c.master.Process != nil {
c.master.Process.Kill()
}
if c.workingDir != "" {
os.RemoveAll(c.workingDir)
if cluster.workingDir != "" {
os.RemoveAll(cluster.workingDir)
}
}

View File

@@ -109,6 +109,7 @@ http:
}
cmd := exec.Command(binary, confPath)
consumeAndLogOutputs(c, fmt.Sprintf("registry-%s", url), cmd)
if err := cmd.Start(); err != nil {
os.RemoveAll(tmp)
if os.IsNotExist(err) {

View File

@@ -1,7 +1,9 @@
package main
import (
"bytes"
"io"
"io/ioutil"
"net"
"os/exec"
"strings"
@@ -150,3 +152,25 @@ func modifyEnviron(env []string, name, value string) []string {
}
return append(res, prefix+value)
}
// fileFromFixture applies edits to inputPath and returns a path to the temporary file.
// Callers should defer os.Remove(the_returned_path)
func fileFromFixture(c *check.C, inputPath string, edits map[string]string) string {
contents, err := ioutil.ReadFile(inputPath)
c.Assert(err, check.IsNil)
for template, value := range edits {
updated := bytes.Replace(contents, []byte(template), []byte(value), -1)
c.Assert(bytes.Equal(updated, contents), check.Equals, false, check.Commentf("Replacing %s in %#v failed", template, string(contents))) // Verify that the template has matched something and we are not silently ignoring it.
contents = updated
}
file, err := ioutil.TempFile("", "policy.json")
c.Assert(err, check.IsNil)
path := file.Name()
_, err = file.Write(contents)
c.Assert(err, check.IsNil)
err = file.Close()
c.Assert(err, check.IsNil)
return path
}

View File

@@ -23,7 +23,7 @@ golang.org/x/text master
github.com/docker/distribution master
github.com/docker/libtrust master
github.com/opencontainers/runc master
github.com/opencontainers/image-spec v1.0.0-rc4
github.com/opencontainers/image-spec v1.0.0-rc5
# -- start OCI image validation requirements.
github.com/opencontainers/runtime-spec v1.0.0-rc4
github.com/opencontainers/image-tools v0.1.0

View File

@@ -7,13 +7,13 @@ import (
"io"
"io/ioutil"
"reflect"
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
@@ -22,11 +22,6 @@ import (
"github.com/pkg/errors"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
type digestingReader struct {
source io.Reader
digester digest.Digester
@@ -186,8 +181,16 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
canModifyManifest := len(sigs) == 0
manifestUpdates := types.ManifestUpdateOptions{}
manifestUpdates.InformationOnly.Destination = dest
if err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest); err != nil {
if err := updateEmbeddedDockerReference(&manifestUpdates, dest, src, canModifyManifest); err != nil {
return err
}
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest)
if err != nil {
return err
}
@@ -210,54 +213,58 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return err
}
pendingImage := src
if !reflect.DeepEqual(manifestUpdates, types.ManifestUpdateOptions{InformationOnly: manifestUpdates.InformationOnly}) {
if !canModifyManifest {
return errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
manifestUpdates.InformationOnly.Destination = dest
pendingImage, err = src.UpdatedImage(manifestUpdates)
if err != nil {
return errors.Wrap(err, "Error creating an updated image manifest")
}
}
manifest, _, err := pendingImage.Manifest()
// With docker/distribution registries we do not know whether the registry accepts schema2 or schema1 only;
// and at least with the OpenShift registry "acceptschema2" option, there is no way to detect the support
// without actually trying to upload something and getting a types.ManifestTypeRejectedError.
// So, try the preferred manifest MIME type. If the process succeeds, fine…
manifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
return errors.Wrap(err, "Error reading manifest")
}
logrus.Debugf("Writing manifest using preferred type %s failed: %v", preferredManifestMIMEType, err)
// … if it fails, _and_ the failure is because the manifest is rejected, we may have other options.
if _, isManifestRejected := errors.Cause(err).(types.ManifestTypeRejectedError); !isManifestRejected || len(otherManifestMIMETypeCandidates) == 0 {
// We don’t have other options.
// In principle the code below would handle this as well, but the resulting error message is fairly ugly.
// Don’t bother the user with MIME types if we have no choice.
return err
}
// If the original MIME type is acceptable, determineManifestConversion always uses it as preferredManifestMIMEType.
// So if we are here, we will definitely be trying to convert the manifest.
// With !canModifyManifest, that would just be a string of repeated failures for the same reason,
// so let’s bail out early and with a better error message.
if !canModifyManifest {
return errors.Wrap(err, "Writing manifest failed (and converting it is not possible)")
}
if err := ic.copyConfig(pendingImage); err != nil {
return err
// errs is a list of errors when trying various manifest types. Also serves as an "upload succeeded" flag when set to nil.
errs := []string{fmt.Sprintf("%s(%v)", preferredManifestMIMEType, err)}
for _, manifestMIMEType := range otherManifestMIMETypeCandidates {
logrus.Debugf("Trying to use manifest type %s…", manifestMIMEType)
manifestUpdates.ManifestMIMEType = manifestMIMEType
attemptedManifest, err := ic.copyUpdatedConfigAndManifest()
if err != nil {
logrus.Debugf("Upload of manifest type %s failed: %v", manifestMIMEType, err)
errs = append(errs, fmt.Sprintf("%s(%v)", manifestMIMEType, err))
continue
}
// We have successfully uploaded a manifest.
manifest = attemptedManifest
errs = nil // Mark this as a success so that we don't abort below.
break
}
if errs != nil {
return fmt.Errorf("Uploading manifest failed, attempted the following formats: %s", strings.Join(errs, ", "))
}
}
if options.SignBy != "" {
mech, err := signature.NewGPGSigningMechanism()
newSig, err := createSignature(dest, manifest, options.SignBy, reportWriter)
if err != nil {
return errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
writeReport("Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, options.SignBy)
if err != nil {
return errors.Wrap(err, "Error creating signature")
return err
}
sigs = append(sigs, newSig)
}
writeReport("Writing manifest to image destination\n")
if err := dest.PutManifest(manifest); err != nil {
return errors.Wrap(err, "Error writing manifest")
}
writeReport("Storing signatures\n")
if err := dest.PutSignatures(sigs); err != nil {
return errors.Wrap(err, "Error writing signatures")
@@ -270,6 +277,24 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return nil
}
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()
if destRef == nil {
return nil // Destination does not care about Docker references
}
if !src.EmbeddedDockerReferenceConflicts(destRef) {
return nil // No reference embedded in the manifest, or it matches destRef already.
}
if !canModifyManifest {
return errors.Errorf("Copying a schema1 image with an embedded Docker reference to %s (Docker reference %s) would invalidate existing signatures. Explicitly enable signature removal to proceed anyway",
transports.ImageName(dest.Reference()), destRef.String())
}
manifestUpdates.EmbeddedDockerReference = destRef
return nil
}
// copyLayers copies layers from src/rawSource to dest, using and updating ic.manifestUpdates if necessary and ic.canModifyManifest.
func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
@@ -322,6 +347,45 @@ func layerDigestsDiffer(a, b []types.BlobInfo) bool {
return false
}
// copyUpdatedConfigAndManifest updates the image per ic.manifestUpdates, if necessary,
// stores the resulting config and manifest to the destination, and returns the stored manifest.
func (ic *imageCopier) copyUpdatedConfigAndManifest() ([]byte, error) {
pendingImage := ic.src
if !reflect.DeepEqual(*ic.manifestUpdates, types.ManifestUpdateOptions{InformationOnly: ic.manifestUpdates.InformationOnly}) {
if !ic.canModifyManifest {
return nil, errors.Errorf("Internal error: copy needs an updated manifest but that was known to be forbidden")
}
if !ic.diffIDsAreNeeded && ic.src.UpdatedImageNeedsLayerDiffIDs(*ic.manifestUpdates) {
// We have set ic.diffIDsAreNeeded based on the preferred MIME type returned by determineManifestConversion.
// So, this can only happen if we are trying to upload using one of the other MIME type candidates.
// Because UpdatedImageNeedsLayerDiffIDs is true only when converting from s1 to s2, this case should only arise
// when ic.dest.SupportedManifestMIMETypes() includes both s1 and s2, the upload using s1 failed, and we are now trying s2.
// Supposedly s2-only registries do not exist or are extremely rare, so failing with this error message is good enough for now.
// If handling such registries turns out to be necessary, we could compute ic.diffIDsAreNeeded based on the full list of manifest MIME type candidates.
return nil, errors.Errorf("Can not convert image to %s, preparing DiffIDs for this case is not supported", ic.manifestUpdates.ManifestMIMEType)
}
pi, err := ic.src.UpdatedImage(*ic.manifestUpdates)
if err != nil {
return nil, errors.Wrap(err, "Error creating an updated image manifest")
}
pendingImage = pi
}
manifest, _, err := pendingImage.Manifest()
if err != nil {
return nil, errors.Wrap(err, "Error reading manifest")
}
if err := ic.copyConfig(pendingImage); err != nil {
return nil, err
}
fmt.Fprintf(ic.reportWriter, "Writing manifest to image destination\n")
if err := ic.dest.PutManifest(manifest); err != nil {
return nil, errors.Wrap(err, "Error writing manifest")
}
return manifest, nil
}
// copyConfig copies config.json, if any, from src to dest.
func (ic *imageCopier) copyConfig(src types.Image) error {
srcInfo := src.ConfigInfo()
@@ -575,41 +639,3 @@ func compressGoroutine(dest *io.PipeWriter, src io.Reader) {
_, err = io.Copy(zipper, src) // Sets err to nil, i.e. causes dest.Close()
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) error {
if len(destSupportedManifestMIMETypes) == 0 {
return nil // Anything goes
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return errors.Wrap(err, "Error reading manifest")
}
if _, ok := supportedByDest[srcType]; ok {
logrus.Debugf("Manifest MIME type %s is declared supported by the destination", srcType)
return nil
}
// OK, we should convert the manifest.
if !canModifyManifest {
logrus.Debugf("Manifest MIME type %s is not supported by the destination, but we can't modify the manifest, hoping for the best...")
return nil // Take our chances - FIXME? Or should we fail without trying?
}
var chosenType = destSupportedManifestMIMETypes[0] // This one is known to be supported.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
chosenType = t
break
}
}
logrus.Debugf("Will convert manifest from MIME type %s to %s", srcType, chosenType)
manifestUpdates.ManifestMIMEType = chosenType
return nil
}

vendor/github.com/containers/image/copy/manifest.go (new vendored file, 102 lines added)
View File

@@ -0,0 +1,102 @@
package copy
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
// Prefer v2s2 to v2s1 because v2s2 does not need to be changed when uploading to a different location.
// Include v2s1 signed but not v2s1 unsigned, because docker/distribution requires a signature even if the unsigned MIME type is used.
var preferredManifestMIMETypes = []string{manifest.DockerV2Schema2MediaType, manifest.DockerV2Schema1SignedMediaType}
// orderedSet is a list of strings (MIME types in our case), with each string appearing at most once.
type orderedSet struct {
list []string
included map[string]struct{}
}
// newOrderedSet creates a correctly initialized orderedSet.
// [Sometimes it would be really nice if Golang had constructors…]
func newOrderedSet() *orderedSet {
return &orderedSet{
list: []string{},
included: map[string]struct{}{},
}
}
// append adds s to the end of os, only if it is not included already.
func (os *orderedSet) append(s string) {
if _, ok := os.included[s]; !ok {
os.list = append(os.list, s)
os.included[s] = struct{}{}
}
}
// determineManifestConversion updates manifestUpdates to convert manifest to a supported MIME type, if necessary and canModifyManifest.
// Note that the conversion will only happen later, through src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
}
supportedByDest := map[string]struct{}{}
for _, t := range destSupportedManifestMIMETypes {
supportedByDest[t] = struct{}{}
}
// destSupportedManifestMIMETypes is a static guess; a particular registry may still only support a subset of the types.
// So, build a list of types to try in order of decreasing preference.
// FIXME? This treats manifest.DockerV2Schema1SignedMediaType and manifest.DockerV2Schema1MediaType as distinct,
// although we are not really making any conversion, and it is very unlikely that a destination would support one but not the other.
// In practice, schema1 is probably the lowest common denominator, so we would expect to try the first one of the MIME types
// and never attempt the other one.
prioritizedTypes := newOrderedSet()
// First of all, prefer to keep the original manifest unmodified.
if _, ok := supportedByDest[srcType]; ok {
prioritizedTypes.append(srcType)
}
if !canModifyManifest {
// We could also drop the !canModifyManifest parameter and have the caller
// make the choice; it is already doing that to an extent, to improve error
// messages. But it is nice to hide the “if !canModifyManifest, do no conversion”
// special case in here; the caller can then worry (or not) only about a good UI.
logrus.Debugf("We can't modify the manifest, hoping for the best...")
return srcType, []string{}, nil // Take our chances - FIXME? Or should we fail without trying?
}
// Then use our list of preferred types.
for _, t := range preferredManifestMIMETypes {
if _, ok := supportedByDest[t]; ok {
prioritizedTypes.append(t)
}
}
// Finally, try anything else the destination supports.
for _, t := range destSupportedManifestMIMETypes {
prioritizedTypes.append(t)
}
logrus.Debugf("Manifest has MIME type %s, ordered candidate list [%s]", srcType, strings.Join(prioritizedTypes.list, ", "))
if len(prioritizedTypes.list) == 0 { // Coverage: destSupportedManifestMIMETypes is not empty (or we would have exited in the “Anything goes” case above), so this should never happen.
return "", nil, errors.New("Internal error: no candidate MIME types")
}
preferredType := prioritizedTypes.list[0]
if preferredType != srcType {
manifestUpdates.ManifestMIMEType = preferredType
} else {
logrus.Debugf("... will first try using the original manifest unmodified")
}
return preferredType, prioritizedTypes.list[1:], nil
}
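A hedged sketch of the caller side: dest, src, manifestUpdates, and canModifyManifest are assumed from the surrounding copy code, and retrying on rejection is the intended use of the returned alternatives.
preferredType, otherTypes, err := determineManifestConversion(&manifestUpdates, src, dest.SupportedManifestMIMETypes(), canModifyManifest)
if err != nil {
	return err
}
for _, mimeType := range append([]string{preferredType}, otherTypes...) {
	// Set manifestUpdates.ManifestMIMEType to mimeType when it differs from the
	// source type (determineManifestConversion already did this for the preferred
	// type), regenerate the manifest via src.UpdatedImage(manifestUpdates), and
	// try dest.PutManifest. On types.ManifestTypeRejectedError, continue with the
	// next candidate; any other error aborts the copy.
}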

vendor/github.com/containers/image/copy/sign.go (new vendored file)

@@ -0,0 +1,35 @@
package copy
import (
"fmt"
"io"
"github.com/containers/image/signature"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/pkg/errors"
)
// createSignature creates a new signature of manifest at (identified by) dest using keyIdentity.
func createSignature(dest types.ImageDestination, manifest []byte, keyIdentity string, reportWriter io.Writer) ([]byte, error) {
mech, err := signature.NewGPGSigningMechanism()
if err != nil {
return nil, errors.Wrap(err, "Error initializing GPG")
}
defer mech.Close()
if err := mech.SupportsSigning(); err != nil {
return nil, errors.Wrap(err, "Signing not supported")
}
dockerReference := dest.Reference().DockerReference()
if dockerReference == nil {
return nil, errors.Errorf("Cannot determine canonical Docker reference for destination %s", transports.ImageName(dest.Reference()))
}
fmt.Fprintf(reportWriter, "Signing manifest\n")
newSig, err := signature.SignDockerManifest(manifest, dockerReference.String(), mech, keyIdentity)
if err != nil {
return nil, errors.Wrap(err, "Error creating signature")
}
return newSig, nil
}


@@ -118,6 +118,10 @@ func (d *dirImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dirImageDestination) PutManifest(manifest []byte) error {
return ioutil.WriteFile(d.ref.manifestPath(), manifest, 0644)
}


@@ -16,6 +16,9 @@ import (
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
@@ -209,6 +212,10 @@ func (d *dockerImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInf
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -233,16 +240,31 @@ func (d *dockerImageDestination) PutManifest(m []byte) error {
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
body, err := ioutil.ReadAll(res.Body)
if err == nil {
logrus.Debugf("Error body %s", string(body))
err = errors.Wrapf(client.HandleErrorResponse(res), "Error uploading manifest to %s", path)
if isManifestInvalidError(errors.Cause(err)) {
err = types.ManifestTypeRejectedError{Err: err}
}
logrus.Debugf("Error uploading manifest, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading manifest to %s, status %d", path, res.StatusCode)
return err
}
return nil
}
// isManifestInvalidError returns true iff err from client.HandleErrorResponse is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
errors, ok := err.(errcode.Errors)
if !ok || len(errors) == 0 {
return false
}
ec, ok := errors[0].(errcode.ErrorCoder)
if !ok {
return false
}
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
return ec.ErrorCode() == v2.ErrorCodeManifestInvalid || ec.ErrorCode() == v2.ErrorCodeTagInvalid
}
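To make the classification concrete, a small sketch fabricating the kind of error client.HandleErrorResponse surfaces (the message text is made up; WithMessage is part of docker/distribution's errcode package):
fabricated := errcode.Errors{v2.ErrorCodeManifestInvalid.WithMessage("manifest invalid")}
// isManifestInvalidError(fabricated) == true, so the PutManifest error above is
// wrapped in types.ManifestTypeRejectedError and the copy code can retry with a
// different manifest type.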
func (d *dockerImageDestination) PutSignatures(signatures [][]byte) error {
// Do not fail if we don't really need to support signatures.
if len(signatures) == 0 {


@@ -156,10 +156,13 @@ func (d *Destination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest sends the given manifest blob to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate
// between schema versions.
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *Destination) PutManifest(m []byte) error {
// We do not bother with types.ManifestTypeRejectedError; our .SupportedManifestMIMETypes() above is already providing only one alternative,
// so the caller trying a different manifest kind would be pointless.
var man schema2Manifest
if err := json.Unmarshal(m, &man); err != nil {
return errors.Wrap(err, "Error parsing manifest")


@@ -135,6 +135,27 @@ func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return layers
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
// This is a bit convoluted: We can't just have a "get embedded docker reference" method
// and have the “does it conflict” logic in the generic copy code, because the manifest does not actually
// embed a full docker/distribution reference, but only the repo name and tag (without the host name).
// So we would have to provide a “return repo without host name, and tag” getter for the generic code,
// which would be very awkward. Instead, we do the matching here in schema1-specific code, and all the
// generic copy code needs to know about is reference.Named and that a manifest may need updating
// for some destinations.
name := reference.Path(ref)
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
} else {
tag = ""
}
return m.Name != name || m.Tag != tag
}
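A worked example, assuming the vendored reference package's normalized parsing:
ref, _ := reference.ParseNormalizedNamed("docker.io/library/busybox:latest")
// reference.Path(ref) == "library/busybox"; the tag is "latest".
// A schema1 manifest with Name "library/busybox" and Tag "latest" does not conflict;
// any other Name/Tag pair does, and is corrected through
// ManifestUpdateOptions.EmbeddedDockerReference in UpdatedImage below.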
func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
v1 := &v1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), v1); err != nil {
@@ -173,6 +194,14 @@ func (m *manifestSchema1) UpdatedImage(options types.ManifestUpdateOptions) (typ
copy.FSLayers[(len(options.LayerInfos)-1)-i].BlobSum = info.Digest
}
}
if options.EmbeddedDockerReference != nil {
copy.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.Tag = tagged.Tag()
} else {
copy.Tag = ""
}
}
switch options.ManifestMIMEType {
case "": // No conversion, OK


@@ -9,6 +9,7 @@ import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
@@ -140,6 +141,13 @@ func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestSchema2) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -180,6 +188,7 @@ func (m *manifestSchema2) UpdatedImage(options types.ManifestUpdateOptions) (typ
copy.LayersDescriptors[i].URLs = info.URLs
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK


@@ -3,6 +3,7 @@ package image
import (
"time"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/pkg/strslice"
"github.com/containers/image/types"
@@ -72,6 +73,10 @@ type genericManifest interface {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
imageInspectInfo() (*types.ImageInspectInfo, error) // To be called by inspectManifest
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but InformationOnly.LayerDiffIDs can be very expensive to compute


@@ -4,6 +4,7 @@ import (
"encoding/json"
"io/ioutil"
"github.com/containers/image/docker/reference"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
@@ -107,6 +108,13 @@ func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
return blobs
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
func (m *manifestOCI1) imageInspectInfo() (*types.ImageInspectInfo, error) {
config, err := m.ConfigBlob()
if err != nil {
@@ -146,6 +154,7 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
copy.LayersDescriptors[i].Size = info.Size
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
switch options.ManifestMIMEType {
case "": // No conversion, OK


@@ -54,7 +54,7 @@ func GuessMIMEType(manifest []byte) string {
}
switch meta.MediaType {
case DockerV2Schema2MediaType, DockerV2ListMediaType, imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageManifestList: // A recognized type.
case DockerV2Schema2MediaType, DockerV2ListMediaType: // A recognized type.
return meta.MediaType
}
// This is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures (i.e. computing the correct manifest digest).
@@ -64,7 +64,31 @@ func GuessMIMEType(manifest []byte) string {
return DockerV2Schema1SignedMediaType
}
return DockerV2Schema1MediaType
case 2: // Really should not happen, meta.MediaType should have been set. But given the data, this is our best guess.
case 2:
// Best effort to understand whether this is an OCI image, since mediaType
// is no longer present in the manifest for OCI.
// For Docker v2s2, meta.MediaType should have been set; but given the data, this is our best guess.
ociMan := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
Layers []imgspecv1.Descriptor `json:"layers"`
}{}
if err := json.Unmarshal(manifest, &ociMan); err != nil {
return ""
}
if ociMan.Config.MediaType == imgspecv1.MediaTypeImageConfig && len(ociMan.Layers) != 0 {
return imgspecv1.MediaTypeImageManifest
}
ociIndex := struct {
Manifests []imgspecv1.Descriptor `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &ociIndex); err != nil {
return ""
}
if len(ociIndex.Manifests) != 0 && ociIndex.Manifests[0].MediaType == imgspecv1.MediaTypeImageManifest {
return imgspecv1.MediaTypeImageIndex
}
return DockerV2Schema2MediaType
}
return ""


@@ -6,22 +6,30 @@ import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
"github.com/pkg/errors"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspec "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type ociImageDestination struct {
ref ociReference
ref ociReference
index imgspecv1.ImageIndex
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
return &ociImageDestination{ref: ref}
index := imgspecv1.ImageIndex{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
}
return &ociImageDestination{ref: ref, index: index}
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -138,6 +146,10 @@ func (d *ociImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo,
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ociImageDestination) PutManifest(m []byte) error {
digest, err := manifest.Digest(m)
if err != nil {
@@ -148,10 +160,6 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
// TODO(runcom): be aware and add support for OCI manifest list
desc.MediaType = imgspecv1.MediaTypeImageManifest
desc.Size = int64(len(m))
data, err := json.Marshal(desc)
if err != nil {
return err
}
blobPath, err := d.ref.blobPath(digest)
if err != nil {
@@ -163,15 +171,19 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
if err := ioutil.WriteFile(blobPath, m, 0644); err != nil {
return err
}
// TODO(runcom): ugly here?
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
}
descriptorPath := d.ref.descriptorPath(d.ref.tag)
if err := ensureParentDirectoryExists(descriptorPath); err != nil {
return err
}
return ioutil.WriteFile(descriptorPath, data, 0644)
annotations := make(map[string]string)
annotations["org.opencontainers.ref.name"] = d.ref.tag
desc.Annotations = annotations
d.index.Manifests = append(d.index.Manifests, imgspecv1.ManifestDescriptor{
Descriptor: desc,
Platform: imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
},
})
return nil
}
func ensureDirectoryExists(path string) error {
@@ -200,5 +212,12 @@ func (d *ociImageDestination) PutSignatures(signatures [][]byte) error {
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (d *ociImageDestination) Commit() error {
return nil
if err := ioutil.WriteFile(d.ref.ociLayoutPath(), []byte(`{"imageLayoutVersion": "1.0.0"}`), 0644); err != nil {
return err
}
indexJSON, err := json.Marshal(d.index)
if err != nil {
return err
}
return ioutil.WriteFile(d.ref.indexPath(), indexJSON, 0644)
}
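Assuming a single manifest was stored under tag "latest" on linux/amd64, the index.json written by Commit() now looks roughly like this (digest and size are placeholders):
const exampleIndexJSON = `{
  "schemaVersion": 2,
  "manifests": [{
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:…",
    "size": 1234,
    "annotations": {"org.opencontainers.ref.name": "latest"},
    "platform": {"architecture": "amd64", "os": "linux"}
  }]
}`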


@@ -1,7 +1,6 @@
package layout
import (
"encoding/json"
"io"
"io/ioutil"
"os"
@@ -12,12 +11,17 @@ import (
)
type ociImageSource struct {
ref ociReference
ref ociReference
descriptor imgspecv1.ManifestDescriptor
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ref ociReference) types.ImageSource {
return &ociImageSource{ref: ref}
func newImageSource(ref ociReference) (types.ImageSource, error) {
descriptor, err := ref.getManifestDescriptor()
if err != nil {
return nil, err
}
return &ociImageSource{ref: ref, descriptor: descriptor}, nil
}
// Reference returns the reference used to set up this source.
@@ -33,19 +37,7 @@ func (s *ociImageSource) Close() error {
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
func (s *ociImageSource) GetManifest() ([]byte, string, error) {
descriptorPath := s.ref.descriptorPath(s.ref.tag)
data, err := ioutil.ReadFile(descriptorPath)
if err != nil {
return nil, "", err
}
desc := imgspecv1.Descriptor{}
err = json.Unmarshal(data, &desc)
if err != nil {
return nil, "", err
}
manifestPath, err := s.ref.blobPath(digest.Digest(desc.Digest))
manifestPath, err := s.ref.blobPath(digest.Digest(s.descriptor.Digest))
if err != nil {
return nil, "", err
}
@@ -54,7 +46,7 @@ func (s *ociImageSource) GetManifest() ([]byte, string, error) {
return nil, "", err
}
return m, desc.MediaType, nil
return m, s.descriptor.MediaType, nil
}
func (s *ociImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {


@@ -1,7 +1,9 @@
package layout
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
@@ -12,6 +14,7 @@ import (
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -176,16 +179,49 @@ func (ref ociReference) PolicyConfigurationNamespaces() []string {
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
src := newImageSource(ref)
src, err := newImageSource(ref)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.ManifestDescriptor, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return imgspecv1.ManifestDescriptor{}, err
}
defer indexJSON.Close()
index := imgspecv1.ImageIndex{}
if err := json.NewDecoder(indexJSON).Decode(&index); err != nil {
return imgspecv1.ManifestDescriptor{}, err
}
var d *imgspecv1.ManifestDescriptor
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.ref.name"]
if !ok {
continue
}
if refName == ref.tag {
d = &md
break
}
}
if d == nil {
return imgspecv1.ManifestDescriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
}
return *d, nil
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ociReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return newImageSource(ref), nil
return newImageSource(ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
@@ -199,11 +235,16 @@ func (ref ociReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for oci: images")
}
// ociLayoutPathPath returns a path for the oci-layout within a directory using OCI conventions.
// ociLayoutPath returns a path for the oci-layout within a directory using OCI conventions.
func (ref ociReference) ociLayoutPath() string {
return filepath.Join(ref.dir, "oci-layout")
}
// indexPath returns a path for the index.json within a directory using OCI conventions.
func (ref ociReference) indexPath() string {
return filepath.Join(ref.dir, "index.json")
}
// blobPath returns a path for a blob within a directory using OCI image-layout conventions.
func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
if err := digest.Validate(); err != nil {
@@ -211,8 +252,3 @@ func (ref ociReference) blobPath(digest digest.Digest) (string, error) {
}
return filepath.Join(ref.dir, "blobs", digest.Algorithm().String(), digest.Hex()), nil
}
// descriptorPath returns a path for the manifest within a directory using OCI conventions.
func (ref ociReference) descriptorPath(digest string) string {
return filepath.Join(ref.dir, "refs", digest)
}


@@ -338,10 +338,7 @@ func (d *openshiftImageDestination) Close() error {
}
func (d *openshiftImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
return d.docker.SupportedManifestMIMETypes()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
@@ -383,6 +380,10 @@ func (d *openshiftImageDestination) ReapplyBlob(info types.BlobInfo) (types.Blob
return d.docker.ReapplyBlob(info)
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *openshiftImageDestination) PutManifest(m []byte) error {
manifestDigest, err := manifest.Digest(m)
if err != nil {


@@ -0,0 +1,299 @@
package ostree
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type blobToImport struct {
Size int64
Digest digest.Digest
BlobPath string
}
type descriptor struct {
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
}
type manifestSchema struct {
ConfigDescriptor descriptor `json:"config"`
LayersDescriptors []descriptor `json:"layers"`
}
type ostreeImageDestination struct {
ref ostreeReference
manifest string
schema manifestSchema
tmpDirPath string
blobs map[string]*blobToImport
}
// newImageDestination returns an ImageDestination for writing to an existing ostree repository.
func newImageDestination(ref ostreeReference, tmpDirPath string) (types.ImageDestination, error) {
tmpDirPath = filepath.Join(tmpDirPath, ref.branchName)
if err := ensureDirectoryExists(tmpDirPath); err != nil {
return nil, err
}
return &ostreeImageDestination{ref, "", manifestSchema{}, tmpDirPath, map[string]*blobToImport{}}, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *ostreeImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *ostreeImageDestination) Close() error {
return os.RemoveAll(d.tmpDirPath)
}
func (d *ostreeImageDestination) SupportedManifestMIMETypes() []string {
return []string{
manifest.DockerV2Schema2MediaType,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *ostreeImageDestination) SupportsSignatures() error {
return nil
}
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *ostreeImageDestination) ShouldCompressLayers() bool {
return false
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
return types.BlobInfo{}, err
}
blobPath := filepath.Join(tmpDir, "content")
blobFile, err := os.Create(blobPath)
if err != nil {
return types.BlobInfo{}, err
}
defer blobFile.Close()
digester := digest.Canonical.Digester()
tee := io.TeeReader(stream, digester.Hash())
size, err := io.Copy(blobFile, tee)
if err != nil {
return types.BlobInfo{}, err
}
computedDigest := digester.Digest()
if inputInfo.Size != -1 && size != inputInfo.Size {
return types.BlobInfo{}, errors.Errorf("Size mismatch when copying %s, expected %d, got %d", computedDigest, inputInfo.Size, size)
}
if err := blobFile.Sync(); err != nil {
return types.BlobInfo{}, err
}
hash := computedDigest.Hex()
d.blobs[hash] = &blobToImport{Size: size, Digest: computedDigest, BlobPath: blobPath}
return types.BlobInfo{Digest: computedDigest, Size: size}, nil
}
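The digest-while-copying pattern used above, in isolation (a self-contained sketch):
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/opencontainers/go-digest"
)

func main() {
	src := strings.NewReader("hello layer")
	digester := digest.Canonical.Digester()
	tee := io.TeeReader(src, digester.Hash()) // bytes read are also hashed
	n, err := io.Copy(ioutil.Discard, tee)
	if err != nil {
		panic(err)
	}
	fmt.Println(n, digester.Digest()) // byte count and sha256:… of the copied data
}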
func fixFiles(dir string, usermode bool) error {
entries, err := ioutil.ReadDir(dir)
if err != nil {
return err
}
for _, info := range entries {
fullpath := filepath.Join(dir, info.Name())
if info.Mode()&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
if err := os.Remove(fullpath); err != nil {
return err
}
continue
}
if info.IsDir() {
if usermode {
if err := os.Chmod(fullpath, info.Mode()|0700); err != nil {
return err
}
}
err = fixFiles(fullpath, usermode)
if err != nil {
return err
}
} else if usermode && (info.Mode().IsRegular() || (info.Mode()&os.ModeSymlink) != 0) {
if err := os.Chmod(fullpath, info.Mode()|0600); err != nil {
return err
}
}
}
return nil
}
func (d *ostreeImageDestination) importBlob(blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
return err
}
defer func() {
os.Remove(blob.BlobPath)
os.RemoveAll(destinationPath)
}()
if os.Getuid() == 0 {
if err := archive.UntarPath(blob.BlobPath, destinationPath); err != nil {
return err
}
if err := fixFiles(destinationPath, false); err != nil {
return err
}
} else {
if err := os.MkdirAll(destinationPath, 0755); err != nil {
return err
}
if err := exec.Command("tar", "-C", destinationPath, "--no-same-owner", "--no-same-permissions", "--delay-directory-restore", "-xf", blob.BlobPath).Run(); err != nil {
return err
}
if err := fixFiles(destinationPath, true); err != nil {
return err
}
}
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch,
fmt.Sprintf("--tree=dir=%s", destinationPath)).Run()
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch, filepath.Dir(blob.BlobPath)).Run()
}
func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, error) {
branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
output, err := exec.Command("ostree", "show", "--repo", d.ref.repo, "--print-metadata-key=docker.size", branch).CombinedOutput()
if err != nil {
if bytes.Contains(output, []byte("not found")) || bytes.Contains(output, []byte("No such")) {
return false, -1, nil
}
return false, -1, err
}
size, err := strconv.ParseInt(strings.Trim(string(output), "'\n"), 10, 64)
if err != nil {
return false, -1, err
}
return true, size, nil
}
func (d *ostreeImageDestination) ReapplyBlob(info types.BlobInfo) (types.BlobInfo, error) {
return info, nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *ostreeImageDestination) PutManifest(manifest []byte) error {
d.manifest = string(manifest)
if err := json.Unmarshal(manifest, &d.schema); err != nil {
return err
}
manifestPath := filepath.Join(d.tmpDirPath, d.ref.manifestPath())
if err := ensureParentDirectoryExists(manifestPath); err != nil {
return err
}
return ioutil.WriteFile(manifestPath, manifest, 0644)
}
func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
path := filepath.Join(d.tmpDirPath, d.ref.signaturePath(0))
if err := ensureParentDirectoryExists(path); err != nil {
return err
}
for i, sig := range signatures {
signaturePath := filepath.Join(d.tmpDirPath, d.ref.signaturePath(i))
if err := ioutil.WriteFile(signaturePath, sig, 0644); err != nil {
return err
}
}
return nil
}
func (d *ostreeImageDestination) Commit() error {
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
// if the blob is not present in d.blobs then it is already stored in OSTree,
// and we don't need to import it.
if blob == nil {
continue
}
err := d.importBlob(blob)
if err != nil {
return err
}
}
hash := d.schema.ConfigDescriptor.Digest.Hex()
blob := d.blobs[hash]
if blob != nil {
err := d.importConfig(blob)
if err != nil {
return err
}
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
err := exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.manifest=%s", string(d.manifest)),
fmt.Sprintf("--branch=ociimage/%s", d.ref.branchName),
manifestPath).Run()
return err
}
func ensureDirectoryExists(path string) error {
if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
if err := os.MkdirAll(path, 0755); err != nil {
return err
}
}
return nil
}
func ensureParentDirectoryExists(path string) error {
return ensureDirectoryExists(filepath.Dir(path))
}


@@ -0,0 +1,235 @@
package ostree
import (
"bytes"
"fmt"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/transports"
"github.com/containers/image/types"
)
const defaultOSTreeRepo = "/ostree/repo"
// Transport is an ImageTransport for ostree paths.
var Transport = ostreeTransport{}
type ostreeTransport struct{}
func (t ostreeTransport) Name() string {
return "ostree"
}
func init() {
transports.Register(Transport)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes key
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched; it can "only" cause user confusion.
// The scope passed to this function will not be ""; that value is always allowed.
func (t ostreeTransport) ValidatePolicyConfigurationScope(scope string) error {
sep := strings.Index(scope, ":")
if sep < 0 {
return errors.Errorf("Invalid ostree: scope %s: Must include a repo", scope)
}
repo := scope[:sep]
if !strings.HasPrefix(repo, "/") {
return errors.Errorf("Invalid ostree: scope %s: repository must be an absolute path", scope)
}
cleaned := filepath.Clean(repo)
if cleaned != repo {
return errors.Errorf(`Invalid ostree: scope %s: Uses non-canonical path format, perhaps try with path %s`, scope, cleaned)
}
// FIXME? In the namespaces within a repo,
// we could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// ostreeReference is an ImageReference for ostree paths.
type ostreeReference struct {
image string
branchName string
repo string
}
func (t ostreeTransport) ParseReference(ref string) (types.ImageReference, error) {
var repo = ""
var image = ""
s := strings.SplitN(ref, "@/", 2)
if len(s) == 1 {
image, repo = s[0], defaultOSTreeRepo
} else {
image, repo = s[0], "/"+s[1]
}
return NewReference(image, repo)
}
// NewReference returns an OSTree reference for a specified repo and image.
func NewReference(image string, repo string) (types.ImageReference, error) {
// image is not _really_ in a containers/image/docker/reference format;
// as far as the libOSTree ociimage/* namespace is concerned, it is more or
// less an arbitrary string with an implied tag.
// We use the reference.* parsers basically for the default tag name in
// reference.TagNameOnly, and incidentally for some character set and length
// restrictions.
var ostreeImage reference.Named
s := strings.SplitN(image, ":", 2)
named, err := reference.WithName(s[0])
if err != nil {
return nil, err
}
if len(s) == 1 {
ostreeImage = reference.TagNameOnly(named)
} else {
ostreeImage, err = reference.WithTag(named, s[1])
if err != nil {
return nil, err
}
}
resolved, err := explicitfilepath.ResolvePathToFullyExplicit(repo)
if err != nil {
// With os.IsNotExist(err), the parent directory of repo does not exist either;
// that should ordinarily not happen, but it would be a bit weird to reject
// references which do not specify a repo just because the implicit defaultOSTreeRepo
// does not exist.
if os.IsNotExist(err) && repo == defaultOSTreeRepo {
resolved = repo
} else {
return nil, err
}
}
// This is necessary to prevent directory paths returned by PolicyConfigurationNamespaces
// from being ambiguous with values of PolicyConfigurationIdentity.
if strings.Contains(resolved, ":") {
return nil, errors.Errorf("Invalid OSTreeCI reference %s@%s: path %s contains a colon", image, repo, resolved)
}
return ostreeReference{
image: ostreeImage.String(),
branchName: encodeOStreeRef(ostreeImage.String()),
repo: resolved,
}, nil
}
func (ref ostreeReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref ostreeReference) StringWithinTransport() string {
return fmt.Sprintf("%s@%s", ref.image, ref.repo)
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref ostreeReference) DockerReference() reference.Named {
return nil
}
func (ref ostreeReference) PolicyConfigurationIdentity() string {
return fmt.Sprintf("%s:%s", ref.repo, ref.image)
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref ostreeReference) PolicyConfigurationNamespaces() []string {
s := strings.SplitN(ref.image, ":", 2)
if len(s) != 2 { // Coverage: Should never happen, NewReference above ensures ref.image has a :tag.
panic(fmt.Sprintf("Internal inconsistency: ref.image value %q does not have a :tag", ref.image))
}
name := s[0]
res := []string{}
for {
res = append(res, fmt.Sprintf("%s:%s", ref.repo, name))
lastSlash := strings.LastIndex(name, "/")
if lastSlash == -1 {
break
}
name = name[:lastSlash]
}
return res
}
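A worked example of the resulting search order (a sketch mirroring the loop above):
// namespacesSketch("/ostree/repo", "example.com/ns/busybox") returns
//   /ostree/repo:example.com/ns/busybox
//   /ostree/repo:example.com/ns
//   /ostree/repo:example.com
func namespacesSketch(repo, name string) []string {
	res := []string{}
	for {
		res = append(res, fmt.Sprintf("%s:%s", repo, name))
		i := strings.LastIndex(name, "/")
		if i == -1 {
			break
		}
		name = name[:i]
	}
	return res
}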
// NewImage returns a types.Image for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned Image.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
func (ref ostreeReference) NewImage(ctx *types.SystemContext) (types.Image, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// The caller must call .Close() on the returned ImageSource.
func (ref ostreeReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
return nil, errors.New("Reading ostree: images is currently not supported")
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref ostreeReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
var tmpDir string
if ctx == nil || ctx.OSTreeTmpDirPath == "" {
tmpDir = os.TempDir()
} else {
tmpDir = ctx.OSTreeTmpDirPath
}
return newImageDestination(ref, tmpDir)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref ostreeReference) DeleteImage(ctx *types.SystemContext) error {
return errors.Errorf("Deleting images not implemented for ostree: images")
}
var ostreeRefRegexp = regexp.MustCompile(`^[A-Za-z0-9.-]$`)
func encodeOStreeRef(in string) string {
var buffer bytes.Buffer
for i := range in {
sub := in[i : i+1]
if ostreeRefRegexp.MatchString(sub) {
buffer.WriteString(sub)
} else {
buffer.WriteString(fmt.Sprintf("_%02X", sub[0]))
}
}
return buffer.String()
}
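Every byte outside [A-Za-z0-9.-] is escaped as _XX (uppercase hex), for example:
// encodeOStreeRef("example.com/busybox:latest") == "example.com_2Fbusybox_3Alatest"
// ('/' is 0x2F, ':' is 0x3A)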
// manifestPath returns a path for the manifest within an ostree repository using our conventions.
func (ref ostreeReference) manifestPath() string {
return filepath.Join("manifest", "manifest.json")
}
// signaturePath returns a path for a signature within an ostree repository using our conventions.
func (ref ostreeReference) signaturePath(index int) string {
return filepath.Join("manifest", fmt.Sprintf("signature-%d", index+1))
}


@@ -71,14 +71,9 @@ type storageImage struct {
// newImageSource sets us up to read out an image, which needs to already exist.
func newImageSource(imageRef storageReference) (*storageImageSource, error) {
id := imageRef.resolveID()
if id == "" {
logrus.Errorf("no image matching reference %q found", imageRef.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := imageRef.transport.store.GetImage(id)
img, err := imageRef.resolveImage()
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", id)
return nil, err
}
image := &storageImageSource{
imageRef: imageRef,
@@ -143,9 +138,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
Size: -1,
}
// Try to read an initial snippet of the blob.
header := make([]byte, 10240)
n, err := stream.Read(header)
if err != nil && err != io.EOF {
buf := [archive.HeaderSize]byte{}
n, err := io.ReadAtLeast(stream, buf[:], len(buf))
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return errorBlobInfo, err
}
// Set up to read the whole blob (the initial snippet, plus the rest)
@@ -159,9 +154,9 @@ func (s *storageImageDestination) putBlob(stream io.Reader, blobinfo types.BlobI
}
hash := ""
counter := ioutils.NewWriteCounter(hasher.Hash())
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), stream)
defragmented := io.MultiReader(bytes.NewBuffer(buf[:n]), stream)
multi := io.TeeReader(defragmented, counter)
if (n > 0) && archive.IsArchive(header[:n]) {
if (n > 0) && archive.IsArchive(buf[:n]) {
// It's a filesystem layer. If it's not the first one in the
// image, we assume that the most recently added layer is its
// parent.
@@ -336,21 +331,37 @@ func (s *storageImageDestination) Commit() error {
}
img, err := s.imageRef.transport.store.CreateImage(s.ID, nil, lastLayer, "", nil)
if err != nil {
logrus.Debugf("error creating image: %q", err)
return err
if err != storage.ErrDuplicateID {
logrus.Debugf("error creating image: %q", err)
return errors.Wrapf(err, "error creating image %q", s.ID)
}
img, err = s.imageRef.transport.store.GetImage(s.ID)
if err != nil {
return errors.Wrapf(err, "error reading image %q", s.ID)
}
if img.TopLayer != lastLayer {
logrus.Debugf("error creating image: image with ID %q exists, but uses different layers", err)
return errors.Wrapf(err, "image with ID %q already exists, but uses a different top layer", s.ID)
}
logrus.Debugf("reusing image ID %q", img.ID)
} else {
logrus.Debugf("created new image ID %q", img.ID)
}
logrus.Debugf("created new image ID %q", img.ID)
s.ID = img.ID
names := img.Names
if s.Tag != "" {
// We have a name to set, so move the name to this image.
if err := s.imageRef.transport.store.SetNames(img.ID, []string{s.Tag}); err != nil {
names = append(names, s.Tag)
}
// We have names to set, so move those names to this image.
if len(names) > 0 {
if err := s.imageRef.transport.store.SetNames(img.ID, names); err != nil {
if _, err2 := s.imageRef.transport.store.DeleteImage(img.ID, true); err2 != nil {
logrus.Debugf("error deleting incomplete image %q: %v", img.ID, err2)
}
logrus.Debugf("error setting names on image %q: %v", img.ID, err)
return err
}
logrus.Debugf("set name of image %q to %q", img.ID, s.Tag)
logrus.Debugf("set names of image %q to %v", img.ID, names)
}
// Save the data blobs to disk, and drop their contents from memory.
keys := []ddigest.Digest{}
@@ -409,6 +420,10 @@ func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
}
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (s *storageImageDestination) PutManifest(manifest []byte) error {
s.Manifest = make([]byte, len(manifest))
copy(s.Manifest, manifest)


@@ -6,6 +6,8 @@ import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/types"
"github.com/containers/storage/storage"
"github.com/pkg/errors"
)
// A storageReference holds an arbitrary name and/or an ID, which is a 32-byte
@@ -32,15 +34,36 @@ func newReference(transport storageTransport, reference, id string, name referen
}
// Resolve the reference's name to an image ID in the store, if there's already
// one present with the same name or ID.
func (s *storageReference) resolveID() string {
// one present with the same name or ID, and return the image.
func (s *storageReference) resolveImage() (*storage.Image, error) {
if s.id == "" {
image, err := s.transport.store.GetImage(s.reference)
if image != nil && err == nil {
s.id = image.ID
}
}
return s.id
if s.id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
img, err := s.transport.store.GetImage(s.id)
if err != nil {
return nil, errors.Wrapf(err, "error reading image %q", s.id)
}
if s.reference != "" {
nameMatch := false
for _, name := range img.Names {
if name == s.reference {
nameMatch = true
break
}
}
if !nameMatch {
logrus.Errorf("no image matching reference %q found", s.StringWithinTransport())
return nil, ErrNoSuchImage
}
}
return img, nil
}
// Return a Transport object that defaults to using the same store that we used
@@ -103,14 +126,13 @@ func (s storageReference) NewImage(ctx *types.SystemContext) (types.Image, error
}
func (s storageReference) DeleteImage(ctx *types.SystemContext) error {
id := s.resolveID()
if id == "" {
logrus.Errorf("reference %q does not resolve to an image ID", s.StringWithinTransport())
return ErrNoSuchImage
img, err := s.resolveImage()
if err != nil {
return err
}
layers, err := s.transport.store.DeleteImage(id, true)
layers, err := s.transport.store.DeleteImage(img.ID, true)
if err == nil {
logrus.Debugf("deleted image %q", id)
logrus.Debugf("deleted image %q", img.ID)
for _, layer := range layers {
logrus.Debugf("deleted layer %q", layer)
}


@@ -2,7 +2,6 @@ package storage
import (
"path/filepath"
"regexp"
"strings"
"github.com/pkg/errors"
@@ -30,7 +29,6 @@ var (
// ErrPathNotAbsolute is returned when a graph root is not an absolute
// path name.
ErrPathNotAbsolute = errors.New("path name is not absolute")
idRegexp = regexp.MustCompile("^(sha256:)?([0-9a-fA-F]{64})$")
)
// StoreTransport is an ImageTransport that uses a storage.Store to parse
@@ -100,9 +98,12 @@ func (s storageTransport) ParseStoreReference(store storage.Store, ref string) (
return nil, err
}
}
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil {
return nil, err
sum, err = digest.Parse(refInfo[1])
if err != nil || sum.Validate() != nil {
sum, err = digest.Parse("sha256:" + refInfo[1])
if err != nil || sum.Validate() != nil {
return nil, err
}
}
} else { // Coverage: len(refInfo) is always 1 or 2
// Anything else: store specified in a form we don't
@@ -285,7 +286,7 @@ func verboseName(name reference.Named) string {
name = reference.TagNameOnly(name)
tag := ""
if tagged, ok := name.(reference.NamedTagged); ok {
tag = tagged.Tag()
tag = ":" + tagged.Tag()
}
return name.Name() + ":" + tag
return name.Name() + tag
}


@@ -12,6 +12,7 @@ import (
_ "github.com/containers/image/docker/daemon"
_ "github.com/containers/image/oci/layout"
_ "github.com/containers/image/openshift"
_ "github.com/containers/image/ostree"
_ "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/types"


@@ -167,8 +167,11 @@ type ImageDestination interface {
HasBlob(info BlobInfo) (bool, int64, error)
// ReapplyBlob informs the image destination that a blob for which HasBlob previously returned true would have been passed to PutBlob if it had returned false. Like HasBlob and unlike PutBlob, the digest can not be empty. If the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree.
ReapplyBlob(info BlobInfo) (BlobInfo, error)
// PutManifest writes manifest to the destination.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
PutManifest([]byte) error
// If the destination is in principle available but refuses this manifest type (e.g. it does not recognize the schema),
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
PutManifest(manifest []byte) error
PutSignatures(signatures [][]byte) error
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
@@ -177,6 +180,16 @@ type ImageDestination interface {
Commit() error
}
// ManifestTypeRejectedError is returned by ImageDestination.PutManifest if the destination is in principle available
// but refuses specifically this manifest type, while it may accept a different manifest type.
type ManifestTypeRejectedError struct { // We only use a struct to allow a type assertion, without limiting the contents of the error otherwise.
Err error
}
func (e ManifestTypeRejectedError) Error() string {
return e.Err.Error()
}
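A hedged sketch of the intended caller-side handling (dest and manifestBlob are assumed):
if err := dest.PutManifest(manifestBlob); err != nil {
	if _, ok := err.(types.ManifestTypeRejectedError); !ok {
		return err
	}
	// The destination is reachable but rejects this manifest type;
	// retry with another candidate MIME type instead of failing the copy.
}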
// UnparsedImage is an Image-to-be; until it is verified and accepted, it only carries its identity and caches manifest and signature blobs.
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
@@ -213,6 +226,10 @@ type Image interface {
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with the destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect() (*ImageInspectInfo, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
@@ -232,8 +249,9 @@ type Image interface {
// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedManifest
type ManifestUpdateOptions struct {
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
ManifestMIMEType string
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls) which should replace the originals, in order (the root layer first, and then successive layered layers)
EmbeddedDockerReference reference.Named
ManifestMIMEType string
// The values below are NOT requests to modify the image; they provide optional context which may or may not be used.
InformationOnly ManifestUpdateInformation
}
@@ -299,6 +317,8 @@ type SystemContext struct {
// Note that this field is used mainly to integrate containers/image into projectatomic/docker
// in order to not break any existing docker's integration tests.
DockerDisableV1Ping bool
// Directory to use for OSTree temporary files
OSTreeTmpDirPath string
}
// ProgressProperties is used to pass information from the copy code to a monitor which


@@ -1,31 +0,0 @@
github.com/Sirupsen/logrus 7f4b1adc791766938c29457bed0703fb9134421a
github.com/containers/storage 5cbbc6bafb45bd7ef10486b673deb3b81bb3b787
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution df5327f76fb6468b84a87771e361762b8be23fdb
github.com/docker/docker 75843d36aa5c3eaade50da005f9e0ff2602f3d5e
github.com/docker/go-connections 7da10c8c50cad14494ec818dcdfb6506265c0086
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/ghodss/yaml 04f313413ffd65ce25f2541bfd2b2ceec5c0908c
github.com/gorilla/context 08b5f424b9271eedf6f9f0ce86cb9396ed337a42
github.com/gorilla/mux 94e7d24fd285520f3d12ae998f7fdd6b5393d453
github.com/imdario/mergo 50d4dbd4eb0e84778abe37cefef140271d96fade
github.com/mattn/go-runewidth 14207d285c6c197daabb5c9793d63e7af9ab2d50
github.com/mattn/go-shellwords 005a0944d84452842197c2108bd9168ced206f78
github.com/mistifyio/go-zfs c0224de804d438efd11ea6e52ada8014537d6062
github.com/mtrmac/gpgme b2432428689ca58c2b8e8dea9449d3295cf96fc9
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc4
github.com/opencontainers/runc 6b1d0e76f239ffb435445e5ae316d2676c07c6e3
github.com/pborman/uuid 1b00554d822231195d1babd97ff4a781231955c9
github.com/pkg/errors 248dadf4e9068a0b3e79f02ed0a610d935de5302
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/vbatts/tar-split bd4c5d64c3e9297f410025a3b1bd0c58f659e721
golang.org/x/crypto 453249f01cfeb54c3d549ddb75ff152ca243f9d8
golang.org/x/net 6b27048ae5e6ad1ef927e72e437531493de612fe
golang.org/x/sys 075e574b89e4c2d22f2286a7e2b919519c6f3547
gopkg.in/cheggaaa/pb.v1 d7e6ca3010b6f084d8056847f55d7f572f180678
gopkg.in/yaml.v2 a3f3340b5840cee44f372bddb5880fcbc419b46a
k8s.io/client-go bcde30fb7eaed76fd98a36b4120321b94995ffb6
github.com/xeipuuv/gojsonschema master


@@ -19,7 +19,7 @@ Additional documentation about how this group operates:
- [Releases](RELEASES.md)
- [Project Documentation](project.md)
The _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/governance/oci-scope-table).
The _optional_ and _base_ layers of all OCI projects are tracked in the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
## Running an OCI Image
@@ -39,7 +39,7 @@ To support this UX the OCI Image Format contains sufficient information to launc
**Q: Why doesn't this project mention distribution?**
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/governance/oci-scope-table).
A: Distribution, for example using HTTP as both Docker v2.2 and AppC do today, is currently out of scope on the [OCI Scope Table](https://www.opencontainers.org/about/oci-scope-table).
There has been [some discussion on the TOB mailing list](https://groups.google.com/a/opencontainers.org/d/msg/tob/A3JnmI-D-6Y/tLuptPDHAgAJ) to make distribution an optional layer, but this topic is a work in progress.
**Q: Why a new project?**


@@ -1,16 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package schema defines the OCI image media types, schema definitions and validation functions.
package schema


@@ -1,44 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"encoding/json"
"io"
"go4.org/errorutil"
)
// A SyntaxError is a description of a JSON syntax error
// including line, column and offset in the JSON file.
type SyntaxError struct {
msg string
Line, Col int
Offset int64
}
func (e *SyntaxError) Error() string { return e.msg }
// WrapSyntaxError checks whether the given error is a *json.SyntaxError
// and converts it into a *schema.SyntaxError containing line/col information using the given reader.
// If the given error is not a *json.SyntaxError it is returned unchanged.
func WrapSyntaxError(r io.Reader, err error) error {
if serr, ok := err.(*json.SyntaxError); ok {
line, col, _ := errorutil.HighlightBytePosition(r, serr.Offset)
return &SyntaxError{serr.Error(), line, col, serr.Offset}
}
return err
}
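
This vendored helper is being dropped here, but for reference, a minimal sketch of how a caller would use `WrapSyntaxError` to turn a `*json.SyntaxError` offset into line/column information; the malformed JSON input is made up for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/opencontainers/image-spec/schema"
)

func main() {
	// Deliberately malformed JSON: the trailing comma triggers a *json.SyntaxError.
	raw := []byte(`{"schemaVersion": 2,}`)

	var v interface{}
	if err := json.Unmarshal(raw, &v); err != nil {
		// WrapSyntaxError re-reads the document to attach line/column info.
		err = schema.WrapSyntaxError(bytes.NewReader(raw), err)
		if serr, ok := err.(*schema.SyntaxError); ok {
			fmt.Printf("syntax error at line %d, col %d (offset %d): %v\n",
				serr.Line, serr.Col, serr.Offset, serr)
		}
	}
}
```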

View File

@@ -1,335 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"bytes"
"compress/gzip"
"encoding/base64"
"io/ioutil"
"net/http"
"os"
"path"
"sync"
"time"
)
type _escLocalFS struct{}
var _escLocal _escLocalFS
type _escStaticFS struct{}
var _escStatic _escStaticFS
type _escDirectory struct {
fs http.FileSystem
name string
}
type _escFile struct {
compressed string
size int64
modtime int64
local string
isDir bool
once sync.Once
data []byte
name string
}
func (_escLocalFS) Open(name string) (http.File, error) {
f, present := _escData[path.Clean(name)]
if !present {
return nil, os.ErrNotExist
}
return os.Open(f.local)
}
func (_escStaticFS) prepare(name string) (*_escFile, error) {
f, present := _escData[path.Clean(name)]
if !present {
return nil, os.ErrNotExist
}
var err error
f.once.Do(func() {
f.name = path.Base(name)
if f.size == 0 {
return
}
var gr *gzip.Reader
b64 := base64.NewDecoder(base64.StdEncoding, bytes.NewBufferString(f.compressed))
gr, err = gzip.NewReader(b64)
if err != nil {
return
}
f.data, err = ioutil.ReadAll(gr)
})
if err != nil {
return nil, err
}
return f, nil
}
func (fs _escStaticFS) Open(name string) (http.File, error) {
f, err := fs.prepare(name)
if err != nil {
return nil, err
}
return f.File()
}
func (dir _escDirectory) Open(name string) (http.File, error) {
return dir.fs.Open(dir.name + name)
}
func (f *_escFile) File() (http.File, error) {
type httpFile struct {
*bytes.Reader
*_escFile
}
return &httpFile{
Reader: bytes.NewReader(f.data),
_escFile: f,
}, nil
}
func (f *_escFile) Close() error {
return nil
}
func (f *_escFile) Readdir(count int) ([]os.FileInfo, error) {
return nil, nil
}
func (f *_escFile) Stat() (os.FileInfo, error) {
return f, nil
}
func (f *_escFile) Name() string {
return f.name
}
func (f *_escFile) Size() int64 {
return f.size
}
func (f *_escFile) Mode() os.FileMode {
return 0
}
func (f *_escFile) ModTime() time.Time {
return time.Unix(f.modtime, 0)
}
func (f *_escFile) IsDir() bool {
return f.isDir
}
func (f *_escFile) Sys() interface{} {
return f
}
// _escFS returns a http.Filesystem for the embedded assets. If useLocal is true,
// the filesystem's contents are instead used.
func _escFS(useLocal bool) http.FileSystem {
if useLocal {
return _escLocal
}
return _escStatic
}
// _escDir returns a http.Filesystem for the embedded assets on a given prefix dir.
// If useLocal is true, the filesystem's contents are instead used.
func _escDir(useLocal bool, name string) http.FileSystem {
if useLocal {
return _escDirectory{fs: _escLocal, name: name}
}
return _escDirectory{fs: _escStatic, name: name}
}
// _escFSByte returns the named file from the embedded assets. If useLocal is
// true, the filesystem's contents are instead used.
func _escFSByte(useLocal bool, name string) ([]byte, error) {
if useLocal {
f, err := _escLocal.Open(name)
if err != nil {
return nil, err
}
b, err := ioutil.ReadAll(f)
f.Close()
return b, err
}
f, err := _escStatic.prepare(name)
if err != nil {
return nil, err
}
return f.data, nil
}
// _escFSMustByte is the same as _escFSByte, but panics if name is not present.
func _escFSMustByte(useLocal bool, name string) []byte {
b, err := _escFSByte(useLocal, name)
if err != nil {
panic(err)
}
return b
}
// _escFSString is the string version of _escFSByte.
func _escFSString(useLocal bool, name string) (string, error) {
b, err := _escFSByte(useLocal, name)
return string(b), err
}
// _escFSMustString is the string version of _escFSMustByte.
func _escFSMustString(useLocal bool, name string) string {
return string(_escFSMustByte(useLocal, name))
}
var _escData = map[string]*_escFile{
"/config-schema.json": {
local: "config-schema.json",
size: 774,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/5SRvW7rMAyFdz+F4WS8ju7QKWsfoEPHooMqUzEDWFRJZgiKvHv1EzcxEBTuEsSH/M4R
ya+mbbsBxDFGRQrdvu1eIoRnCmoxALfpn8dD+xrBoUdnS9e/jG3FjTDZjIyqcW/MUSj0Vd0RH8zA1mv/
/8lUbVM5HGZEEkMpzc1pUrDabXCyBzCu5FdSzxEySx9HcFq1yMmBFUFSJY+TNMdgFYYf4Q4VZQzVruie
eLKaK0NCesUJulK71JbOnnQk/sVq2c1uRE2POzGsZUjWdl53cde9ZfDl8eClr+VdvsLGJAUD5mvJvMOF
FxOpl797XbmF14iixOdHY1hme76tO+1mug9dHTtHXLlLM/+WN3QMnyfkcvK3B5e4bXo5ffp4by7NdwAA
AP//XlvgsQYDAAA=
`,
},
"/content-descriptor.json": {
local: "content-descriptor.json",
size: 956,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/5STP2/bMBDFd3+KgxIgSxx1CDoYQZZ279BuRQeaPEmXmn96PCNVC3/3HkUpttsigRfD
fHyP9zvy9HsF0DjMlikJxdBsoPmUMHyIQQwFZCj/MAh8nE2R4XNCSx1ZMyVuyxHX2Q7oTYkPImnTtk85
hnVV7yL3rWPTyfrdfVu1q5ojt0SyZqJWtkvlPMWqu3Uv1WtOxoQlGbdPaKVqiTXPQph1pzSmmkdH5ks1
V+nffmVAmHzlUIgdFIGxQ1YadHBSY4pf617JOezymrzp8a40e6WQHQUqx+b2WHiKHWq6yfTrLZRiAQqw
HQXzhTj/AaEg7+/PIRz1mOUNDMujXnfPJg1kQV/Bfs97DzW7YFWW24JblsmIIAe4eRhMHh43DwP+NE6H
xZvdnHy8ufAiZ9izBva8y6/gG9hRZSxG6Dh6eNYuBoWkPEODNyNsEVx8DruolO4ItkyXYTbjUSZBf1r3
xJmFKfQvVt3pIntTLllpqZn1w2r5nVppGH/sibF8BF//HtjTiTl/OF18Wx1WfwIAAP//z1UgVLwDAAA=
`,
},
"/defs-config.json": {
local: "defs-config.json",
size: 2236,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/+RVzY7TMBC+5yksw7Gwd6673JCKVAEHhCo3Ge/OEnvMeIKI0L47TrZ0888qpac9VG0m
/n7m89j9nSmlC4g5YxAkr98pfQMWPTZPUQXDgnlVGlZCahvAX5MXgx5YpV8Wb9UuQI4Wc9PiN4+EJ4ZE
2GikYt4uPz2nitQBGkE63EMuLbStB6YASRdiZ3Wqf4rAvUqHIwqjv9WnVw+bJ9z7X4EiFB+JJQ7xrxls
g0+W49v7SP7VVcf9lTNh1zJvHz1O8/ufc7YMs6n1pvsKBdzQxkIjSWpGVLgOhF6G2uRh2/T0tSfQl1u0
uGDzH1b7dgeWF134qiz7TF2eb5MRXLvixfb+mcrKwWicn9n/2qm/dFdfiL8n2Rtcdc4/mAOUl45kN7Hx
/z2SrPt9ZNdMJDaec4EWaO0ei1FEl7+tjuuXtrQnC75yoz3TpamBo17O7JQCw48KGYoez1MGQ3dZl/Fv
5ncYhbg+J/ScwQiMbqql7i2xM9JOY4K+EXQwPfGmkjtadVaOrvaHehWanIPxP89zoOCC1Pt2J+fgB6IS
jNdz5yFrPg/ZnwAAAP//3oH4m7wIAAA=
`,
},
"/defs-image.json": {
local: "defs-image.json",
size: 2916,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/8SWTU/bTBDH7/kU+xj0cMiLaVUhNUJIVbn0xIGeigIa1uN4qL3r7m6gIfJ3767t+D3Q
lKg9wc6u//P/zc443owY8wLUXFFqSApvzrxLDEmQW2mWgjLEVzEoZiS7SlF8lsIACVTsSwJLZNcpcgqJ
Q/74pNCrBKyeS2GDCQYEX9cpViEbpMAljIxJ9dz3pZXnW3k9k2rpax5hAj65VH4tMdkKmELQ00aRWNbx
FIxBlePc3nyafoPp8+n046L+97+j4/+nt3ez8WJzOnn3/izzf+/YsZenyIpMXkBL1KaJ1CmmiZBxtU6N
XCpII+LMEvHvepWw4lkmQ+YOyfsH5GbCSOTLEoCdnEego4v5eYQ/IbClTiAun7w42bMOBdLdeDZdjOd2
FTrWcYcoAUGhVb8sOaR6w4X1tXqOC+46rvDHihS6PDdlrNU9kzqo6bm1Li+jEUljMKFUiVeGFnVhlDVv
ext1A29Hm+661/ys49jeocIQlS0JBqyDlUsc23337JHfmJBGV1dnsy7k617cMdc792uDek8/1o2ePWgp
2sZImLMPw6Z6bf/vWv+FypYuBwlWKtav+AcWU2HSHWahkgl7shiRdUm6dM0SWLN7ZIF8ErG0NoO2s22b
g1Kwbm+RwaTrYfcol7uum8FV3hKQ19jLBjGrAeig7jfHlcog2lBrDU5xvgOKR5acm5XCLpzUTaJFS3HH
wPY1u7t/TOu/YLV/T63trA92OFtW7I1mZo82Q9HlhzNVib7VXIjgKn7YktWqO+31R7RIOTimr4I1J3II
9BEUgei+Q/eu10vF+ktgwy+hUfPv9uMChJAG2l+Ge99qU6T6PZcCr8LW22azx09dAul1jnrdAW6UezP0
7hOrOPZ60IvRdpWNstGvAAAA//83ffekZAsAAA==
`,
},
"/defs.json": {
local: "defs.json",
size: 3193,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/7RWTXPaMBC98ys8tEfa2PIX9NYp/cghAzOZnjo9uGYBtSCpstxpmuG/VzLGWPZiMKWH
JPau9r23T6tYzwPHGS4gSyUVinI2fOMMp7CkjJq3zMkzWDhqLXm+WvNc6UdwZgLYO85UQhlI51FASpc0
TYry0R6vAtB4hkIHKVPj6k2/qycBhk3HYQWyqCwSW127zbc698oj42M4+V2GPRIXwd2oQvaivtA+iSMM
3MRb8D7pC0+8IA7GfhRgHFWyRRQFfYkmhPh+TFw/GodBHEeu6yKMyCqLOr9idzAeEoYt3N57gwFHYei3
oXvvCwYdkEkwiWIyaeP33g4M3xsHQRQHgRv7sTsJQ4KZ70VzbnBlnZAzmC114EsZcKpUkX4pwWSHL+5q
B+6utLxauBvh1YduWL7Z1FaXT18RL26pUDt7U4WZkpSt+is8cOzrb6tpm4jHAnb/Gxsl/u07pOo4SSJR
Wj+bSy5AKgpZrUinXz97o50V6mphUP/bEjXbU/9nUSXWGVGf76d1IafHRh94q/DjtYVvpUyeZktdn2EW
JCZ9dIAq2Da6xqmMHrTDkm9v/ZWU+EbbPB/oBuaJWmMM9brD+vfs13kDG+ItgE+c/6gjiBNTImxRHWxV
C9hh1DatsstwMNVNNLDa7wAzPp0Z4pLPGHLTmSocRhnvpw+JEI1/Lac2YM0zZZ2WDsr6iWlalh5ufrcA
y+gfuBIFdeSB50xd4kbGc5leSN09kPryrChLysvzP8NxYd+bO6EuGfFy/Pp9MqoplfAzpxIW1hfU6nnU
MrVJbn8cB+ZnN/gbAAD//0JyEpx5DAAA
`,
},
"/image-layout-schema.json": {
local: "image-layout-schema.json",
size: 414,
modtime: 1485388791,
compressed: `
H4sIAAAJbogA/2yPsU7EMAyG9z6FFRhpUySmW286CekGJBbEEFpfmxNNQuIinVDfHSduYbhbWvmPP3/2
TwWgekxdtIGsd2oH6hjQ7b0jYx1GOExmQHg2Fz8TvHQjTkY9ZOo+ScHESBR2Wp+Td7WkjY+D7qM5Ud0+
acnuhLP9hiRmPMu6TZYKJt3aZnH9WcRC0iVgZv3HGbs1C5EnRLKY+CVfkw2ZlI1feaicJW/X135LB/gT
0Ihw3B/gyly4zZ4oWjf85+jmifO3tebksWmbVq31e/kv/F3KwhG/Zhux/0NurVtlbql+AwAA//8bwMuB
ngEAAA==
`,
},
"/image-manifest-schema.json": {
local: "image-manifest-schema.json",
size: 921,
modtime: 1485389045,
compressed: `
H4sIAAAJbogA/5ySMU/rMBSF9/yKq7RjU7/39KauTB0QA4gFMZjkJrlVbQfbRVSo/51ru6Y1ZUAdc+xz
7ndP/FEB1B261tLkyeh6BfXdhPrGaC9Jo4W1kgPCrdTUo/NwP2FLPbUy3l4E+9y1IyoZrKP300qIjTO6
SerS2EF0Vva++fNfJG2WfNRli2OP4altnuqiLd0WFAiEOhIkr99PGNzmZYPtUZssZ1hP6PgkLMZainjk
xLRcki93fhjJQU+47cClDdGBHxHicMjDIeXBWwoE6UBqIO1xQBspYvh1m4kS9ist73oxRpEmtVN89u+k
yfesRemQTmoG6Gk4b2BusQ+xAQ21b3Ijxi7D/6sL+1buGevcnqmktXJfMK09qnD176mPo5LNv53O8wsK
qbXx8eUVKNfV3WyJOz+PXHyvpsPeNdEVoWaCBe483i6cVWaNpLXF1x1ZDFhPP73D8p+UFfPHc3WoPgMA
AP//UcoRdpkDAAA=
`,
},
"/manifest-list-schema.json": {
local: "manifest-list-schema.json",
size: 873,
modtime: 1485389045,
compressed: `
H4sIAAAJbogA/6SSv0/7MBDF9/wVp/Q7flMjxNQVFiQQA4gFMZjk0lzV2OHORVSo/zv+EZdEZUDqUqnP
fu8+7+KvAqBsUGqmwZE15QrKhwHNtTVOk0GG216vEe61oRbFwR35n8cBa2qp1tHyP2T8k7rDXgd/59yw
Umoj1lRJXVpeq4Z166qLK5W0RfJRky3iPdaPrvNoibZ0W1HAUP2IUW09Rgpw+wFDhH3bYD1qA/sgdoTi
T0JFr6WcZx+baib5tP1TRwIt4bYBSTVRwHUIkQBmBJBC4SOlghbQBsg4XCNHlDjhjI5qjn2MzK1PZvVk
qN/1/uzyR9OfWYvSIZ2UeZJM15GTNbPeTzo47Kf3widnbMPNBlupIvsyfPOF8oKnCAuVY5ubccuWyzHh
MGPRxlgX39OM5pzVTSOPPf4EPXUWmTWSlozvO2IMWC+/PayT1fr/r8Wh+A4AAP//b2/SMmkDAAA=
`,
},
"/": {
isDir: true,
local: "/",
},
}
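
The generated `_esc*` helpers above store each schema file as a gzip-compressed, base64-encoded blob and inflate it lazily on first access. A standalone sketch of that decode path, with a made-up payload standing in for the embedded schema data:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"io/ioutil"
)

func main() {
	// Hypothetical payload: "hello" gzipped then base64-encoded, mirroring
	// the `compressed` fields in _escData above.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte("hello"))
	zw.Close()
	encoded := base64.StdEncoding.EncodeToString(buf.Bytes())

	// The decode path used by (_escStaticFS).prepare: base64 -> gzip -> bytes.
	b64 := base64.NewDecoder(base64.StdEncoding, bytes.NewBufferString(encoded))
	gr, err := gzip.NewReader(b64)
	if err != nil {
		panic(err)
	}
	data, err := ioutil.ReadAll(gr)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // hello
}
```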

View File

@@ -1,21 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
// Generates an embedded http.FileSystem for all schema files
// using esc (https://github.com/mjibson/esc).
// This should generally be invoked with `make schema-fs`
//go:generate esc -private -pkg=schema -ignore=.*go .

View File

@@ -1,50 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"net/http"
"github.com/opencontainers/image-spec/specs-go/v1"
)
// Media types for the OCI image formats
const (
MediaTypeDescriptor Validator = v1.MediaTypeDescriptor
MediaTypeManifest Validator = v1.MediaTypeImageManifest
MediaTypeManifestList Validator = v1.MediaTypeImageManifestList
MediaTypeImageConfig Validator = v1.MediaTypeImageConfig
MediaTypeImageLayer unimplemented = v1.MediaTypeImageLayer
)
var (
// fs stores the embedded http.FileSystem
// having the OCI JSON schema files in root "/".
fs = _escFS(false)
// specs maps OCI schema media types to schema files.
specs = map[Validator]string{
MediaTypeDescriptor: "content-descriptor.json",
MediaTypeManifest: "image-manifest-schema.json",
MediaTypeManifestList: "manifest-list-schema.json",
MediaTypeImageConfig: "config-schema.json",
}
)
// FileSystem returns an in-memory file system including the schema files.
// The schema files are located at the root directory.
func FileSystem() http.FileSystem {
return fs
}
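
For reference, `FileSystem()` exposes the embedded schemas as a standard `http.FileSystem`, so a caller can open them like ordinary files; a minimal sketch:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/opencontainers/image-spec/schema"
)

func main() {
	// Open one of the embedded schema files from the in-memory filesystem.
	f, err := schema.FileSystem().Open("/image-manifest-schema.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	data, err := ioutil.ReadAll(f)
	if err != nil {
		panic(err)
	}
	fmt.Printf("schema is %d bytes\n", len(data))
}
```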

View File

@@ -1,119 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package schema
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/xeipuuv/gojsonschema"
)
// Validator wraps a media type string identifier
// and implements validation against a JSON schema.
type Validator string
type validateDescendantsFunc func(r io.Reader) error
var mapValidateDescendants = map[Validator]validateDescendantsFunc{
MediaTypeManifest: validateManifestDescendants,
}
// ValidationError contains all the errors that happened during validation.
type ValidationError struct {
Errs []error
}
func (e ValidationError) Error() string {
return fmt.Sprintf("%v", e.Errs)
}
// Validate validates the given reader against the schema of the wrapped media type.
func (v Validator) Validate(src io.Reader) error {
buf, err := ioutil.ReadAll(src)
if err != nil {
return errors.Wrap(err, "unable to read the document file")
}
if f, ok := mapValidateDescendants[v]; ok {
if f == nil {
return fmt.Errorf("internal error: mapValidateDescendents[%q] is nil", v)
}
err = f(bytes.NewReader(buf))
if err != nil {
return err
}
}
sl := gojsonschema.NewReferenceLoaderFileSystem("file:///"+specs[v], fs)
ml := gojsonschema.NewStringLoader(string(buf))
result, err := gojsonschema.Validate(sl, ml)
if err != nil {
return errors.Wrapf(
WrapSyntaxError(bytes.NewReader(buf), err),
"schema %s: unable to validate", v)
}
if result.Valid() {
return nil
}
errs := make([]error, 0, len(result.Errors()))
for _, desc := range result.Errors() {
errs = append(errs, fmt.Errorf("%s", desc))
}
return ValidationError{
Errs: errs,
}
}
type unimplemented string
func (v unimplemented) Validate(src io.Reader) error {
return fmt.Errorf("%s: unimplemented", v)
}
func validateManifestDescendants(r io.Reader) error {
header := v1.Manifest{}
buf, err := ioutil.ReadAll(r)
if err != nil {
return errors.Wrapf(err, "error reading the io stream")
}
err = json.Unmarshal(buf, &header)
if err != nil {
return errors.Wrap(err, "manifest format mismatch")
}
if header.Config.MediaType != string(v1.MediaTypeImageConfig) {
fmt.Printf("warning: config %s has an unknown media type: %s\n", header.Config.Digest, header.Config.MediaType)
}
for _, layer := range header.Layers {
if layer.MediaType != string(v1.MediaTypeImageLayer) &&
layer.MediaType != string(v1.MediaTypeImageLayerNonDistributable) {
fmt.Printf("warning: layer %s has an unknown media type: %s\n", layer.Digest, layer.MediaType)
}
}
return nil
}
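
A minimal sketch of how callers drove this validator, using a deliberately incomplete manifest that the JSON schema should reject:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/opencontainers/image-spec/schema"
)

func main() {
	// A deliberately incomplete manifest; schema validation should reject it.
	manifest := []byte(`{"schemaVersion": 2}`)

	err := schema.MediaTypeManifest.Validate(bytes.NewReader(manifest))
	if err != nil {
		// A schema.ValidationError carries one error per failed constraint.
		if verr, ok := err.(schema.ValidationError); ok {
			for _, e := range verr.Errs {
				fmt.Println("validation:", e)
			}
		} else {
			fmt.Println(err)
		}
	}
}
```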

View File

@@ -30,4 +30,7 @@ type Descriptor struct {
// URLs specifies a list of URLs from which this object MAY be downloaded
URLs []string `json:"urls,omitempty"`
// Annotations contains arbitrary metadata relating to the targeted content.
Annotations map[string]string `json:"annotations,omitempty"`
}

View File

@@ -50,14 +50,14 @@ type ManifestDescriptor struct {
Platform Platform `json:"platform"`
}
// ManifestList references manifests for various platforms.
// This structure provides `application/vnd.oci.image.manifest.list.v1+json` mediatype when marshalled to JSON.
type ManifestList struct {
// ImageIndex references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type ImageIndex struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []ManifestDescriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the manifest list.
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}

View File

@@ -14,18 +14,15 @@
package v1
import "regexp"
// ImageLayoutVersion is the version of ImageLayout
const ImageLayoutVersion = "1.0.0"
const (
// ImageLayoutFile is the file name of oci image layout file
ImageLayoutFile = "oci-layout"
// ImageLayoutVersion is the version of ImageLayout
ImageLayoutVersion = "1.0.0"
)
// ImageLayout is the structure in the "oci-layout" file, found in the root
// of an OCI Image-layout directory.
type ImageLayout struct {
Version string `json:"imageLayoutVersion"`
}
var (
// RefsRegexp matches requirement of image-layout 'refs' charset.
RefsRegexp = regexp.MustCompile(`^[a-zA-Z0-9-._]+$`)
)

View File

@@ -16,7 +16,7 @@ package v1
import "github.com/opencontainers/image-spec/specs-go"
// Manifest provides `application/vnd.oci.image.manifest.list.v1+json` mediatype structure when marshalled to JSON.
// Manifest provides `application/vnd.oci.image.manifest.v1+json` mediatype structure when marshalled to JSON.
type Manifest struct {
specs.Versioned
@@ -27,6 +27,6 @@ type Manifest struct {
// Layers is an indexed list of layers referenced by the manifest.
Layers []Descriptor `json:"layers"`
// Annotations contains arbitrary metadata for the manifest list.
// Annotations contains arbitrary metadata for the manifest.
Annotations map[string]string `json:"annotations,omitempty"`
}

View File

@@ -21,8 +21,8 @@ const (
// MediaTypeImageManifest specifies the media type for an image manifest.
MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"
// MediaTypeImageManifestList specifies the media type for an image manifest list.
MediaTypeImageManifestList = "application/vnd.oci.image.manifest.list.v1+json"
// MediaTypeImageIndex specifies the media type for an image index.
MediaTypeImageIndex = "application/vnd.oci.image.index.v1+json"
// MediaTypeImageLayer is the media type used for layers referenced by the manifest.
MediaTypeImageLayer = "application/vnd.oci.image.layer.v1.tar"

View File

@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc4"
VersionDev = "-rc5"
)
// Version is the specification version that the package types support.

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,177 +0,0 @@
# image-tools [![Build Status](https://travis-ci.org/opencontainers/image-tools.svg?branch=master)](https://travis-ci.org/opencontainers/image-tools)
`image-tools` is a collection of tools for working with the [OCI image format specification](https://github.com/opencontainers/image-spec).
## Install
It is recommended that you use `go get` to download the individual command tools.
```
$ go get -d github.com/opencontainers/image-tools/cmd/oci-unpack
$ cd $GOPATH/src/github.com/opencontainers/image-tools/
$ make all
$ sudo make install
```
## Uninstall
```
$ sudo make uninstall
```
## Example
### Obtaining an image
The following examples assume you have an [image-layout](https://github.com/opencontainers/image-spec/blob/v1.0.0-rc2/image-layout.md) tar archive at `busybox-oci`.
One way to acquire that image is with [skopeo](https://github.com/projectatomic/skopeo#installing):
```
$ skopeo copy docker://busybox oci:busybox-oci
```
### oci-create-runtime-bundle
More information about `oci-create-runtime-bundle` can be found in its [man page](./cmd/oci-create-runtime-bundle/oci-create-runtime-bundle.1.md)
```
$ mkdir busybox-bundle
$ oci-create-runtime-bundle --ref latest busybox-oci busybox-bundle
$ cd busybox-bundle && sudo runc run busybox
```
### oci-image-validate
More information about `oci-image-validate` can be found in its [man page](./cmd/oci-image-validate/oci-image-validate.1.md)
```
$ oci-image-validate --type imageLayout --ref latest busybox-oci
busybox-oci: OK
```
### oci-unpack
More information about `oci-unpack` can be found in its [man page](./cmd/oci-unpack/oci-unpack.1.md)
```
$ mkdir busybox-bundle
$ oci-unpack --ref latest busybox-oci busybox-bundle
$ tree busybox-bundle
busybox-bundle
├── bin
│   ├── [
│   ├── [[
│   ├── acpid
│   ├── addgroup
│   ├── add-shell
[...]
```
# Contributing
Development happens on GitHub. Issues are used for bugs and actionable items and longer discussions can happen on the [mailing list](#mailing-list).
The code is licensed under the Apache 2.0 license found in the `LICENSE` file of this repository.
## Code of Conduct
Participation in the OpenContainers community is governed by [OpenContainer's Code of Conduct](https://github.com/opencontainers/tob/blob/d2f9d68c1332870e40693fe077d311e0742bc73d/code-of-conduct.md).
## Discuss your design
The project welcomes submissions, but please let everyone know what you are working on.
Before undertaking a nontrivial change to this repository, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.
This gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.
It also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.
Typos and grammatical errors can go straight to a pull-request.
When in doubt, start on the [mailing-list](#mailing-list).
## Weekly Call
The contributors and maintainers of all OCI projects have a weekly meeting Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: 888-587-9088 or 860-706-8529 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived to the [wiki](https://github.com/opencontainers/runtime-spec/wiki) for those who are unable to join the call.
## Mailing List
You can subscribe and join the mailing list on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev).
## IRC
OCI discussion happens on #opencontainers on Freenode ([logs][irc-logs]).
## Git commit
### Sign your work
The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.
The rules are pretty simple: if you can certify the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions).
You can add the sign off when creating the git commit via `git commit -s`.
### Commit Style
Simple house-keeping for clean git history.
Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit).
1. Separate the subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how
* If there was important/useful/essential conversation or information, copy or include a reference
8. When possible, use one keyword to scope the change in the subject (e.g. "README: ...", "runtime: ...")
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/

View File

@@ -1,109 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"encoding/json"
"io"
"io/ioutil"
"net/http"
"os"
"github.com/opencontainers/image-spec/schema"
"github.com/pkg/errors"
)
// supported autodetection types
const (
TypeImageLayout = "imageLayout"
TypeImage = "image"
TypeManifest = "manifest"
TypeManifestList = "manifestList"
TypeConfig = "config"
)
// Autodetect detects the validation type for the given path
// or an error if the validation type could not be resolved.
func Autodetect(path string) (string, error) {
fi, err := os.Stat(path)
if err != nil {
return "", errors.Wrapf(err, "unable to access path") // err from os.Stat includes path name
}
if fi.IsDir() {
return TypeImageLayout, nil
}
f, err := os.Open(path)
if err != nil {
return "", errors.Wrap(err, "unable to open file") // os.Open includes the filename
}
defer f.Close()
buf, err := ioutil.ReadAll(io.LimitReader(f, 512)) // read some initial bytes to detect content
if err != nil {
return "", errors.Wrap(err, "unable to read")
}
mimeType := http.DetectContentType(buf)
switch mimeType {
case "application/x-gzip", "application/octet-stream":
return TypeImage, nil
case "text/plain; charset=utf-8":
// might be a JSON file, will be handled below
default:
return "", errors.New("unknown file type")
}
if _, err := f.Seek(0, os.SEEK_SET); err != nil {
return "", errors.Wrap(err, "unable to seek")
}
header := struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
Config interface{} `json:"config"`
}{}
if err := json.NewDecoder(f).Decode(&header); err != nil {
if _, errSeek := f.Seek(0, os.SEEK_SET); errSeek != nil {
return "", errors.Wrap(err, "unable to seek")
}
e := errors.Wrap(
schema.WrapSyntaxError(f, err),
"unable to parse JSON",
)
return "", e
}
switch {
case header.MediaType == string(schema.MediaTypeManifest):
return TypeManifest, nil
case header.MediaType == string(schema.MediaTypeManifestList):
return TypeManifestList, nil
case header.MediaType == "" && header.SchemaVersion == 0 && header.Config != nil:
// config files don't have mediaType/schemaVersion header
return TypeConfig, nil
}
return "", errors.New("unknown media type")
}
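
For reference, a minimal sketch of driving `Autodetect` from the now-removed image-tools vendor; the `busybox-oci` path is hypothetical (e.g. produced by the skopeo copy shown in the README above):

```go
package main

import (
	"fmt"
	"log"

	"github.com/opencontainers/image-tools/image"
)

func main() {
	// Hypothetical path: an image-layout directory or tar archive, e.g. from
	// `skopeo copy docker://busybox oci:busybox-oci`.
	validationType, err := image.Autodetect("busybox-oci")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("detected validation type:", validationType)
}
```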

View File

@@ -1,126 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/opencontainers/image-spec/schema"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
)
type config v1.Image
func findConfig(w walker, d *descriptor) (*config, error) {
var c config
cpath := filepath.Join("blobs", d.algo(), d.hash())
switch err := w.walk(func(path string, info os.FileInfo, r io.Reader) error {
if info.IsDir() || filepath.Clean(path) != cpath {
return nil
}
buf, err := ioutil.ReadAll(r)
if err != nil {
return errors.Wrapf(err, "%s: error reading config", path)
}
if err := schema.MediaTypeImageConfig.Validate(bytes.NewReader(buf)); err != nil {
return errors.Wrapf(err, "%s: config validation failed", path)
}
if err := json.Unmarshal(buf, &c); err != nil {
return err
}
return errEOW
}); err {
case nil:
return nil, fmt.Errorf("%s: config not found", cpath)
case errEOW:
return &c, nil
default:
return nil, err
}
}
func (c *config) runtimeSpec(rootfs string) (*specs.Spec, error) {
if c.OS != "linux" {
return nil, fmt.Errorf("%s: unsupported OS", c.OS)
}
var s specs.Spec
s.Version = specs.Version
// we should at least apply the default spec, otherwise this is totally useless
s.Process.Terminal = true
s.Root.Path = rootfs
s.Process.Cwd = "/"
if c.Config.WorkingDir != "" {
s.Process.Cwd = c.Config.WorkingDir
}
s.Process.Env = append(s.Process.Env, c.Config.Env...)
s.Process.Args = append(s.Process.Args, c.Config.Entrypoint...)
s.Process.Args = append(s.Process.Args, c.Config.Cmd...)
if len(s.Process.Args) == 0 {
s.Process.Args = append(s.Process.Args, "sh")
}
if uid, err := strconv.Atoi(c.Config.User); err == nil {
s.Process.User.UID = uint32(uid)
} else if ug := strings.Split(c.Config.User, ":"); len(ug) == 2 {
uid, err := strconv.Atoi(ug[0])
if err != nil {
return nil, errors.New("config.User: unsupported uid format")
}
gid, err := strconv.Atoi(ug[1])
if err != nil {
return nil, errors.New("config.User: unsupported gid format")
}
s.Process.User.UID = uint32(uid)
s.Process.User.GID = uint32(gid)
} else if c.Config.User != "" {
return nil, errors.New("config.User: unsupported format")
}
s.Platform.OS = c.OS
s.Platform.Arch = c.Architecture
s.Linux = &specs.Linux{}
for vol := range c.Config.Volumes {
s.Mounts = append(
s.Mounts,
specs.Mount{
Destination: vol,
Type: "bind",
Options: []string{"rbind"},
},
)
}
return &s, nil
}
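
The `config.User` handling above accepts either a bare uid or a `uid:gid` pair and rejects named users. A standalone sketch of just that parsing logic; the `parseUser` helper is hypothetical, not part of the vendored code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUser mirrors the uid/uid:gid handling in (*config).runtimeSpec above;
// like that code, it does not support named-user lookup.
func parseUser(user string) (uid, gid uint32, err error) {
	if user == "" {
		return 0, 0, nil // no user configured; runtime defaults apply
	}
	if u, err := strconv.Atoi(user); err == nil {
		return uint32(u), 0, nil
	}
	if ug := strings.Split(user, ":"); len(ug) == 2 {
		u, err := strconv.Atoi(ug[0])
		if err != nil {
			return 0, 0, fmt.Errorf("config.User: unsupported uid format")
		}
		g, err := strconv.Atoi(ug[1])
		if err != nil {
			return 0, 0, fmt.Errorf("config.User: unsupported gid format")
		}
		return uint32(u), uint32(g), nil
	}
	return 0, 0, fmt.Errorf("config.User: unsupported format")
}

func main() {
	uid, gid, err := parseUser("1000:1000")
	fmt.Println(uid, gid, err) // 1000 1000 <nil>
}
```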

View File

@@ -1,138 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
type descriptor struct {
MediaType string `json:"mediaType"`
Digest string `json:"digest"`
Size int64 `json:"size"`
}
func (d *descriptor) algo() string {
pts := strings.SplitN(d.Digest, ":", 2)
if len(pts) != 2 {
return ""
}
return pts[0]
}
func (d *descriptor) hash() string {
pts := strings.SplitN(d.Digest, ":", 2)
if len(pts) != 2 {
return ""
}
return pts[1]
}
func listReferences(w walker) (map[string]*descriptor, error) {
refs := make(map[string]*descriptor)
if err := w.walk(func(path string, info os.FileInfo, r io.Reader) error {
if info.IsDir() || !strings.HasPrefix(path, "refs") {
return nil
}
var d descriptor
if err := json.NewDecoder(r).Decode(&d); err != nil {
return err
}
refs[info.Name()] = &d
return nil
}); err != nil {
return nil, err
}
return refs, nil
}
func findDescriptor(w walker, name string) (*descriptor, error) {
var d descriptor
dpath := filepath.Join("refs", name)
switch err := w.walk(func(path string, info os.FileInfo, r io.Reader) error {
if info.IsDir() || filepath.Clean(path) != dpath {
return nil
}
if err := json.NewDecoder(r).Decode(&d); err != nil {
return err
}
return errEOW
}); err {
case nil:
return nil, fmt.Errorf("%s: descriptor not found", dpath)
case errEOW:
return &d, nil
default:
return nil, err
}
}
func (d *descriptor) validate(w walker, mts []string) error {
var found bool
for _, mt := range mts {
if d.MediaType == mt {
found = true
break
}
}
if !found {
return fmt.Errorf("invalid descriptor MediaType %q", d.MediaType)
}
rc, err := w.Get(*d)
if err != nil {
return err
}
defer rc.Close()
return d.validateContent(rc)
}
func (d *descriptor) validateContent(r io.Reader) error {
parsed, err := digest.Parse(d.Digest)
if err != nil {
return err
}
verifier := parsed.Verifier()
n, err := io.Copy(verifier, r)
if err != nil {
return errors.Wrap(err, "error generating hash")
}
if n != d.Size {
return errors.New("size mismatch")
}
if !verifier.Verified() {
return errors.New("digest mismatch")
}
return nil
}
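
`validateContent` streams the blob through a go-digest verifier and checks both size and digest. The same verification pattern in isolation, with an inline string standing in for the descriptor's content:

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/opencontainers/go-digest"
)

func main() {
	content := "hello world"

	// Compute the expected digest up front, as a descriptor would carry it.
	expected := digest.FromString(content)

	// Stream the content through a verifier, as validateContent does.
	verifier := expected.Verifier()
	n, err := io.Copy(verifier, strings.NewReader(content))
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes, verified: %v\n", n, verifier.Verified())
}
```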

View File

@@ -1,16 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package image defines methods for validating and unpacking OCI images.
package image

View File

@@ -1,210 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"encoding/json"
"fmt"
"log"
"os"
"path/filepath"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// ValidateLayout walks through the given file tree and validates the manifest
// pointed to by the given refs or returns an error if the validation failed.
func ValidateLayout(src string, refs []string, out *log.Logger) error {
return validate(newPathWalker(src), refs, out)
}
// Validate walks through the given .tar file and validates the manifest
// pointed to by the given refs or returns an error if the validation failed.
func Validate(tarFile string, refs []string, out *log.Logger) error {
f, err := os.Open(tarFile)
if err != nil {
return errors.Wrap(err, "unable to open file")
}
defer f.Close()
return validate(newTarWalker(tarFile, f), refs, out)
}
var validRefMediaTypes = []string{
v1.MediaTypeImageManifest,
v1.MediaTypeImageManifestList,
}
func validate(w walker, refs []string, out *log.Logger) error {
ds, err := listReferences(w)
if err != nil {
return err
}
if len(refs) == 0 && len(ds) == 0 {
// TODO(runcom): ugly, we'll need a better way and library
// to express log levels.
// see https://github.com/opencontainers/image-spec/issues/288
out.Print("WARNING: no descriptors found")
}
if len(refs) == 0 {
for ref := range ds {
refs = append(refs, ref)
}
}
for _, ref := range refs {
d, ok := ds[ref]
if !ok {
// TODO(runcom):
// soften this error to a warning if the user didn't ask for any specific reference
// with --ref but she's just validating the whole image.
return fmt.Errorf("reference %s not found", ref)
}
if err = d.validate(w, validRefMediaTypes); err != nil {
return err
}
m, err := findManifest(w, d)
if err != nil {
return err
}
if err := m.validate(w); err != nil {
return err
}
if out != nil {
out.Printf("reference %q: OK", ref)
}
}
return nil
}
// UnpackLayout walks through the file tree given by src and, using the layers
// specified in the manifest pointed to by the given ref, unpacks all layers in
// the given destination directory or returns an error if the unpacking failed.
func UnpackLayout(src, dest, ref string) error {
return unpack(newPathWalker(src), dest, ref)
}
// Unpack walks through the given .tar file and, using the layers specified in
// the manifest pointed to by the given ref, unpacks all layers in the given
// destination directory or returns an error if the unpacking failed.
func Unpack(tarFile, dest, ref string) error {
f, err := os.Open(tarFile)
if err != nil {
return errors.Wrap(err, "unable to open file")
}
defer f.Close()
return unpack(newTarWalker(tarFile, f), dest, ref)
}
func unpack(w walker, dest, refName string) error {
ref, err := findDescriptor(w, refName)
if err != nil {
return err
}
if err = ref.validate(w, validRefMediaTypes); err != nil {
return err
}
m, err := findManifest(w, ref)
if err != nil {
return err
}
if err = m.validate(w); err != nil {
return err
}
return m.unpack(w, dest)
}
// CreateRuntimeBundleLayout walks through the file tree given by src and
// creates an OCI runtime bundle in the given destination dest
// or returns an error if the unpacking failed.
func CreateRuntimeBundleLayout(src, dest, ref, root string) error {
return createRuntimeBundle(newPathWalker(src), dest, ref, root)
}
// CreateRuntimeBundle walks through the given .tar file and
// creates an OCI runtime bundle in the given destination dest
// or returns an error if the unpacking failed.
func CreateRuntimeBundle(tarFile, dest, ref, root string) error {
f, err := os.Open(tarFile)
if err != nil {
return errors.Wrap(err, "unable to open file")
}
defer f.Close()
return createRuntimeBundle(newTarWalker(tarFile, f), dest, ref, root)
}
func createRuntimeBundle(w walker, dest, refName, rootfs string) error {
ref, err := findDescriptor(w, refName)
if err != nil {
return err
}
if err = ref.validate(w, validRefMediaTypes); err != nil {
return err
}
m, err := findManifest(w, ref)
if err != nil {
return err
}
if err = m.validate(w); err != nil {
return err
}
c, err := findConfig(w, &m.Config)
if err != nil {
return err
}
if _, err = os.Stat(dest); err != nil {
if os.IsNotExist(err) {
if err2 := os.MkdirAll(dest, 0755); err2 != nil {
return err2
}
} else {
return err
}
}
err = m.unpack(w, filepath.Join(dest, rootfs))
if err != nil {
return err
}
spec, err := c.runtimeSpec(rootfs)
if err != nil {
return err
}
f, err := os.Create(filepath.Join(dest, "config.json"))
if err != nil {
return err
}
defer f.Close()
return json.NewEncoder(f).Encode(spec)
}
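
For reference, a minimal sketch of the public entry points in this now-removed package; the `busybox-oci` layout and `latest` ref are hypothetical:

```go
package main

import (
	"log"
	"os"

	"github.com/opencontainers/image-tools/image"
)

func main() {
	logger := log.New(os.Stdout, "", 0)

	// Hypothetical layout directory with a "latest" ref, e.g. created by
	// `skopeo copy docker://busybox oci:busybox-oci`.
	if err := image.ValidateLayout("busybox-oci", []string{"latest"}, logger); err != nil {
		log.Fatal(err)
	}

	// Unpack the same ref into a rootfs directory.
	if err := image.UnpackLayout("busybox-oci", "busybox-bundle/rootfs", "latest"); err != nil {
		log.Fatal(err)
	}
}
```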

View File

@@ -1,258 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"archive/tar"
"bytes"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
"github.com/opencontainers/image-spec/schema"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
type manifest struct {
Config descriptor `json:"config"`
Layers []descriptor `json:"layers"`
}
func findManifest(w walker, d *descriptor) (*manifest, error) {
var m manifest
mpath := filepath.Join("blobs", d.algo(), d.hash())
switch err := w.walk(func(path string, info os.FileInfo, r io.Reader) error {
if info.IsDir() || filepath.Clean(path) != mpath {
return nil
}
buf, err := ioutil.ReadAll(r)
if err != nil {
return errors.Wrapf(err, "%s: error reading manifest", path)
}
if err := schema.MediaTypeManifest.Validate(bytes.NewReader(buf)); err != nil {
return errors.Wrapf(err, "%s: manifest validation failed", path)
}
if err := json.Unmarshal(buf, &m); err != nil {
return err
}
return errEOW
}); err {
case nil:
return nil, fmt.Errorf("%s: manifest not found", mpath)
case errEOW:
return &m, nil
default:
return nil, err
}
}
func (m *manifest) validate(w walker) error {
if err := m.Config.validate(w, []string{v1.MediaTypeImageConfig}); err != nil {
return errors.Wrap(err, "config validation failed")
}
validLayerMediaTypes := []string{
v1.MediaTypeImageLayer,
v1.MediaTypeImageLayerGzip,
v1.MediaTypeImageLayerNonDistributable,
v1.MediaTypeImageLayerNonDistributableGzip,
}
for _, d := range m.Layers {
if err := d.validate(w, validLayerMediaTypes); err != nil {
return errors.Wrap(err, "layer validation failed")
}
}
return nil
}
func (m *manifest) unpack(w walker, dest string) (retErr error) {
// error out if the dest directory is not empty
s, err := ioutil.ReadDir(dest)
if err != nil && !os.IsNotExist(err) {
return errors.Wrap(err, "unable to open file") // err contains dest
}
if len(s) > 0 {
return fmt.Errorf("%s is not empty", dest)
}
defer func() {
// if we encounter error during unpacking
// clean up the partially-unpacked destination
if retErr != nil {
if err := os.RemoveAll(dest); err != nil {
fmt.Printf("Error: failed to remove partially-unpacked destination %v", err)
}
}
}()
for _, d := range m.Layers {
switch err := w.walk(func(path string, info os.FileInfo, r io.Reader) error {
if info.IsDir() {
return nil
}
dd, err := filepath.Rel(filepath.Join("blobs", d.algo()), filepath.Clean(path))
if err != nil || d.hash() != dd {
return nil
}
if err := unpackLayer(dest, r); err != nil {
return errors.Wrap(err, "error extracting layer")
}
return errEOW
}); err {
case nil:
return fmt.Errorf("%s: layer not found", dest)
case errEOW:
default:
return err
}
}
return nil
}
func unpackLayer(dest string, r io.Reader) error {
entries := make(map[string]bool)
gz, err := gzip.NewReader(r)
if err != nil {
return errors.Wrap(err, "error creating gzip reader")
}
defer gz.Close()
var dirs []*tar.Header
tr := tar.NewReader(gz)
loop:
for {
hdr, err := tr.Next()
switch err {
case io.EOF:
break loop
case nil:
// success, continue below
default:
return errors.Wrapf(err, "error advancing tar stream")
}
hdr.Name = filepath.Clean(hdr.Name)
if !strings.HasSuffix(hdr.Name, string(os.PathSeparator)) {
// Not the root directory, ensure that the parent directory exists
parent := filepath.Dir(hdr.Name)
parentPath := filepath.Join(dest, parent)
if _, err2 := os.Lstat(parentPath); err2 != nil && os.IsNotExist(err2) {
if err3 := os.MkdirAll(parentPath, 0755); err3 != nil {
return err3
}
}
}
path := filepath.Join(dest, hdr.Name)
if entries[path] {
return fmt.Errorf("duplicate entry for %s", path)
}
entries[path] = true
rel, err := filepath.Rel(dest, path)
if err != nil {
return err
}
info := hdr.FileInfo()
if strings.HasPrefix(rel, ".."+string(os.PathSeparator)) {
return fmt.Errorf("%q is outside of %q", hdr.Name, dest)
}
if strings.HasPrefix(info.Name(), ".wh.") {
path = strings.Replace(path, ".wh.", "", 1)
if err := os.RemoveAll(path); err != nil {
return errors.Wrap(err, "unable to delete whiteout path")
}
continue loop
}
switch hdr.Typeflag {
case tar.TypeDir:
if fi, err := os.Lstat(path); !(err == nil && fi.IsDir()) {
if err2 := os.MkdirAll(path, info.Mode()); err2 != nil {
return errors.Wrap(err2, "error creating directory")
}
}
case tar.TypeReg, tar.TypeRegA:
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, info.Mode())
if err != nil {
return errors.Wrap(err, "unable to open file")
}
if _, err := io.Copy(f, tr); err != nil {
f.Close()
return errors.Wrap(err, "unable to copy")
}
f.Close()
case tar.TypeLink:
target := filepath.Join(dest, hdr.Linkname)
if !strings.HasPrefix(target, dest) {
return fmt.Errorf("invalid hardlink %q -> %q", target, hdr.Linkname)
}
if err := os.Link(target, path); err != nil {
return err
}
case tar.TypeSymlink:
target := filepath.Join(filepath.Dir(path), hdr.Linkname)
if !strings.HasPrefix(target, dest) {
return fmt.Errorf("invalid symlink %q -> %q", path, hdr.Linkname)
}
if err := os.Symlink(hdr.Linkname, path); err != nil {
return err
}
case tar.TypeXGlobalHeader:
return nil
}
// Directory mtimes must be handled at the end to avoid further
// file creation in them to modify the directory mtime
if hdr.Typeflag == tar.TypeDir {
dirs = append(dirs, hdr)
}
}
for _, hdr := range dirs {
path := filepath.Join(dest, hdr.Name)
finfo := hdr.FileInfo()
// I believe the old version was using time.Now().UTC() to overcome an
// invalid-argument error from chtimes, but this way we lose hdr.AccessTime...
if err := os.Chtimes(path, time.Now().UTC(), finfo.ModTime()); err != nil {
return errors.Wrap(err, "error changing time")
}
}
return nil
}

View File

@@ -1,21 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
// SpecURL is the URL for the image-spec repository
var SpecURL = "https://github.com/opencontainers/image-spec"
// IssuesURL is the URL for the issues of image-tools
var IssuesURL = "https://github.com/opencontainers/image-tools/issues"

View File

@@ -1,84 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"archive/tar"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
)
type reader interface {
Get(desc descriptor) (io.ReadCloser, error)
}
type tarReader struct {
name string
}
func (r *tarReader) Get(desc descriptor) (io.ReadCloser, error) {
f, err := os.Open(r.name)
if err != nil {
return nil, err
}
defer f.Close()
tr := tar.NewReader(f)
loop:
for {
hdr, err := tr.Next()
switch err {
case io.EOF:
break loop
case nil:
// success, continue below
default:
return nil, err
}
if hdr.Name == filepath.Join("blobs", desc.algo(), desc.hash()) &&
!hdr.FileInfo().IsDir() {
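// f is closed by the deferred Close when we return, so buffer the
// blob in memory instead of returning a reader over the tar stream.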
buf, err := ioutil.ReadAll(tr)
if err != nil {
return nil, err
}
return ioutil.NopCloser(bytes.NewReader(buf)), nil
}
}
return nil, fmt.Errorf("object not found")
}
type layoutReader struct {
root string
}
func (r *layoutReader) Get(desc descriptor) (io.ReadCloser, error) {
name := filepath.Join(r.root, "blobs", desc.algo(), desc.hash())
info, err := os.Stat(name)
if err != nil {
return nil, err
}
if info.IsDir() {
return nil, fmt.Errorf("object is dir")
}
return os.Open(name)
}

View File

@@ -1,121 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package image
import (
"archive/tar"
"fmt"
"io"
"os"
"path/filepath"
"github.com/pkg/errors"
)
var (
errEOW = fmt.Errorf("end of walk") // error to signal stop walking
)
// walkFunc is a function type that gets called for each file or directory visited by the Walker.
type walkFunc func(path string, _ os.FileInfo, _ io.Reader) error
// walker is the interface that walks through a file tree,
// calling walk for each file or directory in the tree.
type walker interface {
walk(walkFunc) error
reader
}
type tarWalker struct {
r io.ReadSeeker
tarReader
}
// newTarWalker returns a Walker that walks through .tar files.
func newTarWalker(tarFile string, r io.ReadSeeker) walker {
return &tarWalker{r, tarReader{name: tarFile}}
}
func (w *tarWalker) walk(f walkFunc) error {
if _, err := w.r.Seek(0, os.SEEK_SET); err != nil {
return errors.Wrapf(err, "unable to reset")
}
tr := tar.NewReader(w.r)
loop:
for {
hdr, err := tr.Next()
switch err {
case io.EOF:
break loop
case nil:
// success, continue below
default:
return errors.Wrapf(err, "error advancing tar stream")
}
info := hdr.FileInfo()
if err := f(hdr.Name, info, tr); err != nil {
return err
}
}
return nil
}
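// eofReader stands in for a file body when walking directories, so a
// walkFunc always receives a usable io.Reader.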
type eofReader struct{}
func (eofReader) Read(_ []byte) (int, error) {
return 0, io.EOF
}
type pathWalker struct {
root string
layoutReader
}
// newPathWalker returns a Walker that walks through directories
// starting at the given root path. It does not follow symlinks.
func newPathWalker(root string) walker {
return &pathWalker{root, layoutReader{root: root}}
}
func (w *pathWalker) walk(f walkFunc) error {
return filepath.Walk(w.root, func(path string, info os.FileInfo, err error) error {
// We MUST check the error value to make sure the os.FileInfo is available;
// otherwise we risk a panic on a nil info.
if err != nil {
return errors.Wrap(err, "error walking path")
}
rel, err := filepath.Rel(w.root, path)
if err != nil {
return errors.Wrap(err, "error walking path") // err from filepath.Walk includes path name
}
if info.IsDir() { // behave like a tar reader for directories
return f(rel, info, eofReader{})
}
file, err := os.Open(path)
if err != nil {
return errors.Wrap(err, "unable to open file") // os.Open includes the path
}
defer file.Close()
return f(rel, info, file)
})
}

View File

@@ -1,191 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright 2015 The Linux Foundation.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,147 +0,0 @@
# Open Container Initiative Runtime Specification
The [Open Container Initiative](http://www.opencontainers.org/) develops specifications for standards on Operating System process and application containers.
The specification can be found [here](spec.md).
## Table of Contents
Additional documentation about how this group operates:
- [Code of Conduct](https://github.com/opencontainers/tob/blob/d2f9d68c1332870e40693fe077d311e0742bc73d/code-of-conduct.md)
- [Style and Conventions](style.md)
- [Roadmap](ROADMAP.md)
- [Implementations](implementations.md)
- [Releases](RELEASES.md)
- [Project](project.md)
- [Charter][charter]
## Use Cases
To provide context for users the following section gives example use cases for each part of the spec.
### Application Bundle Builders
Application bundle builders can create a [bundle](bundle.md) directory that includes all of the files required for launching an application as a container.
The bundle contains an OCI [configuration file](config.md) where the builder can specify host-independent details such as [which executable to launch](config.md#process) and host-specific settings such as [mount](config.md#mounts) locations, [hook](config.md#hooks) paths, Linux [namespaces](config-linux.md#namespaces) and [cgroups](config-linux.md#control-groups).
Because the configuration includes host-specific settings, application bundle directories copied between two hosts may require configuration adjustments.
### Hook Developers
[Hook](config.md#hooks) developers can extend the functionality of an OCI-compliant runtime by hooking into a container's lifecycle with an external application.
Example use cases include sophisticated network configuration, volume garbage collection, etc.
### Runtime Developers
Runtime developers can build runtime implementations that run OCI-compliant bundles and container configuration, containing low-level OS and host specific details, on a particular platform.
## Releases
There is a loose [Road Map](./ROADMAP.md).
During the `0.x` series of OCI releases we make no backwards compatibility guarantees and intend to break the schema during this series.
## Contributing
Development of the spec happens on GitHub.
Issues are used for bugs and actionable items; longer discussions can happen on the [mailing list](#mailing-list).
The specification and code is licensed under the Apache 2.0 license found in the [LICENSE](./LICENSE) file.
### Discuss your design
The project welcomes submissions, but please let everyone know what you are working on.
Before undertaking a nontrivial change to this specification, send mail to the [mailing list](#mailing-list) to discuss what you plan to do.
This gives everyone a chance to validate the design, helps prevent duplication of effort, and ensures that the idea fits.
It also guarantees that the design is sound before code is written; a GitHub pull-request is not the place for high-level discussions.
Typos and grammatical errors can go straight to a pull-request.
When in doubt, start on the [mailing-list](#mailing-list).
### Weekly Call
The contributors and maintainers of all OCI projects have a weekly meeting Wednesdays at 2:00 PM (USA Pacific).
Everyone is welcome to participate via [UberConference web][UberConference] or audio-only: 415-968-0849 (no PIN needed).
An initial agenda will be posted to the [mailing list](#mailing-list) earlier in the week, and everyone is welcome to propose additional topics or suggest other agenda alterations there.
Minutes are posted to the [mailing list](#mailing-list) and minutes from past calls are archived to the [wiki](https://github.com/opencontainers/runtime-spec/wiki) for those who are unable to join the call.
### Mailing List
You can subscribe and join the mailing list on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev).
### IRC
OCI discussion happens on #opencontainers on Freenode ([logs][irc-logs]).
### Git commit
#### Sign your work
The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch.
The rules are pretty simple: if you can certify the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions).
You can add the sign off when creating the git commit via `git commit -s`.
#### Commit Style
Simple house-keeping for clean git history.
Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit).
1. Separate the subject from body with a blank line
2. Limit the subject line to 50 characters
3. Capitalize the subject line
4. Do not end the subject line with a period
5. Use the imperative mood in the subject line
6. Wrap the body at 72 characters
7. Use the body to explain what and why vs. how
* If there was important/useful/essential conversation or information, copy or include a reference
8. When possible, use one keyword to scope the change in the subject (e.g. "README: ...", "runtime: ...")
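For example, a commit message following these rules might look like this (a hypothetical subject and body, for illustration only):
```
runtime: clarify mount option handling

Explain that fstab-style options are passed through unmodified, and
reference the mailing-list thread where this behavior was discussed.

Signed-off-by: Joe Smith <joe@gmail.com>
```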
[UberConference]: https://www.uberconference.com/opencontainers
[irc-logs]: http://ircbot.wl.linuxfoundation.org/eavesdrop/%23opencontainers/
[charter]: https://www.opencontainers.org/about/governance

View File

@@ -1,535 +0,0 @@
package specs
import "os"
// Spec is the base configuration for the container.
type Spec struct {
// Version of the Open Container Runtime Specification with which the bundle complies.
Version string `json:"ociVersion"`
// Platform specifies the configuration's target platform.
Platform Platform `json:"platform"`
// Process configures the container process.
Process Process `json:"process"`
// Root configures the container's root filesystem.
Root Root `json:"root"`
// Hostname configures the container's hostname.
Hostname string `json:"hostname,omitempty"`
// Mounts configures additional mounts (on top of Root).
Mounts []Mount `json:"mounts,omitempty"`
// Hooks configures callbacks for container lifecycle events.
Hooks *Hooks `json:"hooks,omitempty"`
// Annotations contains arbitrary metadata for the container.
Annotations map[string]string `json:"annotations,omitempty"`
// Linux is platform specific configuration for Linux based containers.
Linux *Linux `json:"linux,omitempty" platform:"linux"`
// Solaris is platform specific configuration for Solaris containers.
Solaris *Solaris `json:"solaris,omitempty" platform:"solaris"`
// Windows is platform specific configuration for Windows based containers, including Hyper-V containers.
Windows *Windows `json:"windows,omitempty" platform:"windows"`
}
// Process contains information to start a specific application inside the container.
type Process struct {
// Terminal creates an interactive terminal for the container.
Terminal bool `json:"terminal,omitempty"`
// ConsoleSize specifies the size of the console.
ConsoleSize Box `json:"consoleSize,omitempty"`
// User specifies user information for the process.
User User `json:"user"`
// Args specifies the binary and arguments for the application to execute.
Args []string `json:"args"`
// Env populates the process environment for the process.
Env []string `json:"env,omitempty"`
// Cwd is the current working directory for the process and must be
// relative to the container's root.
Cwd string `json:"cwd"`
// Capabilities are Linux capabilities that are kept for the container.
Capabilities []string `json:"capabilities,omitempty" platform:"linux"`
// Rlimits specifies rlimit options to apply to the process.
Rlimits []LinuxRlimit `json:"rlimits,omitempty" platform:"linux"`
// NoNewPrivileges controls whether additional privileges could be gained by processes in the container.
NoNewPrivileges bool `json:"noNewPrivileges,omitempty" platform:"linux"`
// ApparmorProfile specifies the apparmor profile for the container.
ApparmorProfile string `json:"apparmorProfile,omitempty" platform:"linux"`
// SelinuxLabel specifies the selinux context that the container process is run as.
SelinuxLabel string `json:"selinuxLabel,omitempty" platform:"linux"`
}
// Box specifies dimensions of a rectangle. Used for specifying the size of a console.
type Box struct {
// Height is the vertical dimension of a box.
Height uint `json:"height"`
// Width is the horizontal dimension of a box.
Width uint `json:"width"`
}
// User specifies specific user (and group) information for the container process.
type User struct {
// UID is the user id.
UID uint32 `json:"uid" platform:"linux,solaris"`
// GID is the group id.
GID uint32 `json:"gid" platform:"linux,solaris"`
// AdditionalGids are additional group ids set for the container's process.
AdditionalGids []uint32 `json:"additionalGids,omitempty" platform:"linux,solaris"`
// Username is the user name.
Username string `json:"username,omitempty" platform:"windows"`
}
// Root contains information about the container's root filesystem on the host.
type Root struct {
// Path is the absolute path to the container's root filesystem.
Path string `json:"path"`
// Readonly makes the root filesystem for the container readonly before the process is executed.
Readonly bool `json:"readonly,omitempty"`
}
// Platform specifies OS and arch information for the host system that the container
// is created for.
type Platform struct {
// OS is the operating system.
OS string `json:"os"`
// Arch is the architecture
Arch string `json:"arch"`
}
// Mount specifies a mount for a container.
type Mount struct {
// Destination is the path where the mount will be placed relative to the container's root. The path and child directories MUST exist; a runtime MUST NOT automatically create directories for a mount point.
Destination string `json:"destination"`
// Type specifies the mount kind.
Type string `json:"type"`
// Source specifies the source path of the mount. In the case of bind mounts on
// Linux based systems this would be the file on the host.
Source string `json:"source"`
// Options are fstab style mount options.
Options []string `json:"options,omitempty"`
}
// Hook specifies a command that is run at a particular event in the lifecycle of a container
type Hook struct {
Path string `json:"path"`
Args []string `json:"args,omitempty"`
Env []string `json:"env,omitempty"`
Timeout *int `json:"timeout,omitempty"`
}
// Hooks for container setup and teardown
type Hooks struct {
// Prestart is a list of hooks to be run before the container process is executed.
// On Linux, they are run after the container namespaces are created.
Prestart []Hook `json:"prestart,omitempty"`
// Poststart is a list of hooks to be run after the container process is started.
Poststart []Hook `json:"poststart,omitempty"`
// Poststop is a list of hooks to be run after the container process exits.
Poststop []Hook `json:"poststop,omitempty"`
}
// Linux contains platform specific configuration for Linux based containers.
type Linux struct {
// UIDMapping specifies user mappings for supporting user namespaces on Linux.
UIDMappings []LinuxIDMapping `json:"uidMappings,omitempty"`
// GIDMapping specifies group mappings for supporting user namespaces on Linux.
GIDMappings []LinuxIDMapping `json:"gidMappings,omitempty"`
// Sysctl are a set of key value pairs that are set for the container on start
Sysctl map[string]string `json:"sysctl,omitempty"`
// Resources contain cgroup information for handling resource constraints
// for the container
Resources *LinuxResources `json:"resources,omitempty"`
// CgroupsPath specifies the path to cgroups that are created and/or joined by the container.
// The path is expected to be relative to the cgroups mountpoint.
// If resources are specified, the cgroups at CgroupsPath will be updated based on resources.
CgroupsPath string `json:"cgroupsPath,omitempty"`
// Namespaces contains the namespaces that are created and/or joined by the container
Namespaces []LinuxNamespace `json:"namespaces,omitempty"`
// Devices are a list of device nodes that are created for the container
Devices []LinuxDevice `json:"devices,omitempty"`
// Seccomp specifies the seccomp security settings for the container.
Seccomp *LinuxSeccomp `json:"seccomp,omitempty"`
// RootfsPropagation is the rootfs mount propagation mode for the container.
RootfsPropagation string `json:"rootfsPropagation,omitempty"`
// MaskedPaths masks over the provided paths inside the container.
MaskedPaths []string `json:"maskedPaths,omitempty"`
// ReadonlyPaths sets the provided paths as RO inside the container.
ReadonlyPaths []string `json:"readonlyPaths,omitempty"`
// MountLabel specifies the selinux context for the mounts in the container.
MountLabel string `json:"mountLabel,omitempty"`
}
// LinuxNamespace is the configuration for a Linux namespace
type LinuxNamespace struct {
// Type is the type of Linux namespace
Type LinuxNamespaceType `json:"type"`
// Path is a path to an existing namespace persisted on disk that can be joined
// and is of the same type
Path string `json:"path,omitempty"`
}
// LinuxNamespaceType is one of the Linux namespaces
type LinuxNamespaceType string
const (
// PIDNamespace for isolating process IDs
PIDNamespace LinuxNamespaceType = "pid"
// NetworkNamespace for isolating network devices, stacks, ports, etc
NetworkNamespace = "network"
// MountNamespace for isolating mount points
MountNamespace = "mount"
// IPCNamespace for isolating System V IPC, POSIX message queues
IPCNamespace = "ipc"
// UTSNamespace for isolating hostname and NIS domain name
UTSNamespace = "uts"
// UserNamespace for isolating user and group IDs
UserNamespace = "user"
// CgroupNamespace for isolating cgroup hierarchies
CgroupNamespace = "cgroup"
)
// LinuxIDMapping specifies UID/GID mappings
type LinuxIDMapping struct {
// HostID is the starting UID/GID on the host to be mapped to 'ContainerID'
HostID uint32 `json:"hostID"`
// ContainerID is the starting UID/GID in the container
ContainerID uint32 `json:"containerID"`
// Size is the number of IDs to be mapped
Size uint32 `json:"size"`
}
// LinuxRlimit type and restrictions
type LinuxRlimit struct {
// Type of the rlimit to set
Type string `json:"type"`
// Hard is the hard limit for the specified type
Hard uint64 `json:"hard"`
// Soft is the soft limit for the specified type
Soft uint64 `json:"soft"`
}
// LinuxHugepageLimit structure corresponds to limiting kernel hugepages
type LinuxHugepageLimit struct {
// Pagesize is the hugepage size
Pagesize string `json:"pageSize"`
// Limit is the limit of "hugepagesize" hugetlb usage
Limit int64 `json:"limit"`
}
// LinuxInterfacePriority for network interfaces
type LinuxInterfacePriority struct {
// Name is the name of the network interface
Name string `json:"name"`
// Priority for the interface
Priority uint32 `json:"priority"`
}
// linuxBlockIODevice holds major:minor format supported in blkio cgroup
type linuxBlockIODevice struct {
// Major is the device's major number.
Major int64 `json:"major"`
// Minor is the device's minor number.
Minor int64 `json:"minor"`
}
// LinuxWeightDevice struct holds a `major:minor weight` pair for blkioWeightDevice
type LinuxWeightDevice struct {
linuxBlockIODevice
// Weight is the bandwidth rate for the device, range is from 10 to 1000
Weight *uint16 `json:"weight,omitempty"`
// LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, range is from 10 to 1000, CFQ scheduler only
LeafWeight *uint16 `json:"leafWeight,omitempty"`
}
// LinuxThrottleDevice struct holds a `major:minor rate_per_second` pair
type LinuxThrottleDevice struct {
linuxBlockIODevice
// Rate is the IO rate limit per cgroup per device
Rate uint64 `json:"rate"`
}
// LinuxBlockIO for Linux cgroup 'blkio' resource management
type LinuxBlockIO struct {
// Specifies per cgroup weight, range is from 10 to 1000
Weight *uint16 `json:"blkioWeight,omitempty"`
// Specifies tasks' weight in the given cgroup while competing with the cgroup's child cgroups, range is from 10 to 1000, CFQ scheduler only
LeafWeight *uint16 `json:"blkioLeafWeight,omitempty"`
// Weight per cgroup per device, can override BlkioWeight
WeightDevice []LinuxWeightDevice `json:"blkioWeightDevice,omitempty"`
// IO read rate limit per cgroup per device, bytes per second
ThrottleReadBpsDevice []LinuxThrottleDevice `json:"blkioThrottleReadBpsDevice,omitempty"`
// IO write rate limit per cgroup per device, bytes per second
ThrottleWriteBpsDevice []LinuxThrottleDevice `json:"blkioThrottleWriteBpsDevice,omitempty"`
// IO read rate limit per cgroup per device, IO per second
ThrottleReadIOPSDevice []LinuxThrottleDevice `json:"blkioThrottleReadIOPSDevice,omitempty"`
// IO write rate limit per cgroup per device, IO per second
ThrottleWriteIOPSDevice []LinuxThrottleDevice `json:"blkioThrottleWriteIOPSDevice,omitempty"`
}
// LinuxMemory for Linux cgroup 'memory' resource management
type LinuxMemory struct {
// Memory limit (in bytes).
Limit *int64 `json:"limit,omitempty"`
// Memory reservation or soft_limit (in bytes).
Reservation *int64 `json:"reservation,omitempty"`
// Total memory limit (memory + swap).
Swap *int64 `json:"swap,omitempty"`
// Kernel memory limit (in bytes).
Kernel *int64 `json:"kernel,omitempty"`
// Kernel memory limit for tcp (in bytes)
KernelTCP *int64 `json:"kernelTCP,omitempty"`
// How aggressively the kernel will swap memory pages. Range from 0 to 100.
Swappiness *uint64 `json:"swappiness,omitempty"`
}
// LinuxCPU for Linux cgroup 'cpu' resource management
type LinuxCPU struct {
// CPU shares (relative weight (ratio) vs. other cgroups with cpu shares).
Shares *uint64 `json:"shares,omitempty"`
// CPU hardcap limit (in usecs). Allowed cpu time in a given period.
Quota *int64 `json:"quota,omitempty"`
// CPU period to be used for hardcapping (in usecs).
Period *uint64 `json:"period,omitempty"`
// How much time realtime scheduling may use (in usecs).
RealtimeRuntime *int64 `json:"realtimeRuntime,omitempty"`
// CPU period to be used for realtime scheduling (in usecs).
RealtimePeriod *uint64 `json:"realtimePeriod,omitempty"`
// CPUs to use within the cpuset. Default is to use any CPU available.
Cpus string `json:"cpus,omitempty"`
// List of memory nodes in the cpuset. Default is to use any available memory node.
Mems string `json:"mems,omitempty"`
}
// LinuxPids for Linux cgroup 'pids' resource management (Linux 4.3)
type LinuxPids struct {
// Maximum number of PIDs. Default is "no limit".
Limit int64 `json:"limit"`
}
// LinuxNetwork identification and priority configuration
type LinuxNetwork struct {
// Set class identifier for container's network packets
ClassID *uint32 `json:"classID,omitempty"`
// Set priority of network traffic for container
Priorities []LinuxInterfacePriority `json:"priorities,omitempty"`
}
// LinuxResources has container runtime resource constraints
type LinuxResources struct {
// Devices configures the device whitelist.
Devices []LinuxDeviceCgroup `json:"devices,omitempty"`
// DisableOOMKiller disables the OOM killer for out of memory conditions
DisableOOMKiller *bool `json:"disableOOMKiller,omitempty"`
// Specify an oom_score_adj for the container.
OOMScoreAdj *int `json:"oomScoreAdj,omitempty"`
// Memory restriction configuration
Memory *LinuxMemory `json:"memory,omitempty"`
// CPU resource restriction configuration
CPU *LinuxCPU `json:"cpu,omitempty"`
// Task resource restriction configuration.
Pids *LinuxPids `json:"pids,omitempty"`
// BlockIO restriction configuration
BlockIO *LinuxBlockIO `json:"blockIO,omitempty"`
// Hugetlb limit (in bytes)
HugepageLimits []LinuxHugepageLimit `json:"hugepageLimits,omitempty"`
// Network restriction configuration
Network *LinuxNetwork `json:"network,omitempty"`
}
// LinuxDevice represents the mknod information for a Linux special device file
type LinuxDevice struct {
// Path to the device.
Path string `json:"path"`
// Device type, block, char, etc.
Type string `json:"type"`
// Major is the device's major number.
Major int64 `json:"major"`
// Minor is the device's minor number.
Minor int64 `json:"minor"`
// FileMode permission bits for the device.
FileMode *os.FileMode `json:"fileMode,omitempty"`
// UID of the device.
UID *uint32 `json:"uid,omitempty"`
// Gid of the device.
GID *uint32 `json:"gid,omitempty"`
}
// LinuxDeviceCgroup represents a device rule for the whitelist controller
type LinuxDeviceCgroup struct {
// Allow or deny
Allow bool `json:"allow"`
// Device type, block, char, etc.
Type string `json:"type,omitempty"`
// Major is the device's major number.
Major *int64 `json:"major,omitempty"`
// Minor is the device's minor number.
Minor *int64 `json:"minor,omitempty"`
// Cgroup access permissions format, rwm.
Access string `json:"access,omitempty"`
}
// LinuxSeccomp represents syscall restrictions
type LinuxSeccomp struct {
DefaultAction LinuxSeccompAction `json:"defaultAction"`
Architectures []Arch `json:"architectures"`
Syscalls []LinuxSyscall `json:"syscalls,omitempty"`
}
// Solaris contains platform specific configuration for Solaris application containers.
type Solaris struct {
// SMF FMRI which should go "online" before we start the container process.
Milestone string `json:"milestone,omitempty"`
// Maximum set of privileges any process in this container can obtain.
LimitPriv string `json:"limitpriv,omitempty"`
// The maximum amount of shared memory allowed for this container.
MaxShmMemory string `json:"maxShmMemory,omitempty"`
// Specification for automatic creation of network resources for this container.
Anet []SolarisAnet `json:"anet,omitempty"`
// Set limit on the amount of CPU time that can be used by container.
CappedCPU *SolarisCappedCPU `json:"cappedCPU,omitempty"`
// The physical and swap caps on the memory that can be used by this container.
CappedMemory *SolarisCappedMemory `json:"cappedMemory,omitempty"`
}
// SolarisCappedCPU allows users to set limit on the amount of CPU time that can be used by container.
type SolarisCappedCPU struct {
Ncpus string `json:"ncpus,omitempty"`
}
// SolarisCappedMemory allows users to set the physical and swap caps on the memory that can be used by this container.
type SolarisCappedMemory struct {
Physical string `json:"physical,omitempty"`
Swap string `json:"swap,omitempty"`
}
// SolarisAnet provides the specification for automatic creation of network resources for this container.
type SolarisAnet struct {
// Specify a name for the automatically created VNIC datalink.
Linkname string `json:"linkname,omitempty"`
// Specify the link over which the VNIC will be created.
Lowerlink string `json:"lowerLink,omitempty"`
// The set of IP addresses that the container can use.
Allowedaddr string `json:"allowedAddress,omitempty"`
// Specifies whether allowedAddress limitation is to be applied to the VNIC.
Configallowedaddr string `json:"configureAllowedAddress,omitempty"`
// The value of the optional default router.
Defrouter string `json:"defrouter,omitempty"`
// Enable one or more types of link protection.
Linkprotection string `json:"linkProtection,omitempty"`
// Set the VNIC's macAddress
Macaddress string `json:"macAddress,omitempty"`
}
// Windows defines the runtime configuration for Windows based containers, including Hyper-V containers.
type Windows struct {
// Resources contains information for handling resource constraints for the container.
Resources *WindowsResources `json:"resources,omitempty"`
}
// WindowsResources has container runtime resource constraints for containers running on Windows.
type WindowsResources struct {
// Memory restriction configuration.
Memory *WindowsMemoryResources `json:"memory,omitempty"`
// CPU resource restriction configuration.
CPU *WindowsCPUResources `json:"cpu,omitempty"`
// Storage restriction configuration.
Storage *WindowsStorageResources `json:"storage,omitempty"`
// Network restriction configuration.
Network *WindowsNetworkResources `json:"network,omitempty"`
}
// WindowsMemoryResources contains memory resource management settings.
type WindowsMemoryResources struct {
// Memory limit in bytes.
Limit *uint64 `json:"limit,omitempty"`
// Memory reservation in bytes.
Reservation *uint64 `json:"reservation,omitempty"`
}
// WindowsCPUResources contains CPU resource management settings.
type WindowsCPUResources struct {
// Number of CPUs available to the container.
Count *uint64 `json:"count,omitempty"`
// CPU shares (relative weight to other containers with cpu shares). Range is from 1 to 10000.
Shares *uint16 `json:"shares,omitempty"`
// Percent of available CPUs usable by the container.
Percent *uint8 `json:"percent,omitempty"`
}
// WindowsStorageResources contains storage resource management settings.
type WindowsStorageResources struct {
// Specifies maximum Iops for the system drive.
Iops *uint64 `json:"iops,omitempty"`
// Specifies maximum bytes per second for the system drive.
Bps *uint64 `json:"bps,omitempty"`
// Sandbox size specifies the minimum size of the system drive in bytes.
SandboxSize *uint64 `json:"sandboxSize,omitempty"`
}
// WindowsNetworkResources contains network resource management settings.
type WindowsNetworkResources struct {
// EgressBandwidth is the maximum egress bandwidth in bytes per second.
EgressBandwidth *uint64 `json:"egressBandwidth,omitempty"`
}
// Arch used for additional architectures
type Arch string
// Additional architectures permitted to be used for system calls
// By default only the native architecture of the kernel is permitted
const (
ArchX86 Arch = "SCMP_ARCH_X86"
ArchX86_64 Arch = "SCMP_ARCH_X86_64"
ArchX32 Arch = "SCMP_ARCH_X32"
ArchARM Arch = "SCMP_ARCH_ARM"
ArchAARCH64 Arch = "SCMP_ARCH_AARCH64"
ArchMIPS Arch = "SCMP_ARCH_MIPS"
ArchMIPS64 Arch = "SCMP_ARCH_MIPS64"
ArchMIPS64N32 Arch = "SCMP_ARCH_MIPS64N32"
ArchMIPSEL Arch = "SCMP_ARCH_MIPSEL"
ArchMIPSEL64 Arch = "SCMP_ARCH_MIPSEL64"
ArchMIPSEL64N32 Arch = "SCMP_ARCH_MIPSEL64N32"
ArchPPC Arch = "SCMP_ARCH_PPC"
ArchPPC64 Arch = "SCMP_ARCH_PPC64"
ArchPPC64LE Arch = "SCMP_ARCH_PPC64LE"
ArchS390 Arch = "SCMP_ARCH_S390"
ArchS390X Arch = "SCMP_ARCH_S390X"
)
// LinuxSeccompAction taken upon Seccomp rule match
type LinuxSeccompAction string
// Define actions for Seccomp rules
const (
ActKill LinuxSeccompAction = "SCMP_ACT_KILL"
ActTrap LinuxSeccompAction = "SCMP_ACT_TRAP"
ActErrno LinuxSeccompAction = "SCMP_ACT_ERRNO"
ActTrace LinuxSeccompAction = "SCMP_ACT_TRACE"
ActAllow LinuxSeccompAction = "SCMP_ACT_ALLOW"
)
// LinuxSeccompOperator used to match syscall arguments in Seccomp
type LinuxSeccompOperator string
// Define operators for syscall arguments in Seccomp
const (
OpNotEqual LinuxSeccompOperator = "SCMP_CMP_NE"
OpLessThan LinuxSeccompOperator = "SCMP_CMP_LT"
OpLessEqual LinuxSeccompOperator = "SCMP_CMP_LE"
OpEqualTo LinuxSeccompOperator = "SCMP_CMP_EQ"
OpGreaterEqual LinuxSeccompOperator = "SCMP_CMP_GE"
OpGreaterThan LinuxSeccompOperator = "SCMP_CMP_GT"
OpMaskedEqual LinuxSeccompOperator = "SCMP_CMP_MASKED_EQ"
)
// LinuxSeccompArg used for matching specific syscall arguments in Seccomp
type LinuxSeccompArg struct {
Index uint `json:"index"`
Value uint64 `json:"value"`
ValueTwo uint64 `json:"valueTwo"`
Op LinuxSeccompOperator `json:"op"`
}
// LinuxSyscall is used to match a syscall in Seccomp
type LinuxSyscall struct {
Name string `json:"name"`
Action LinuxSeccompAction `json:"action"`
Args []LinuxSeccompArg `json:"args,omitempty"`
}
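Illustrative only (not part of the vendored file): a minimal Spec assembled from the types above, with hypothetical values.
// hypothetical example, in package specs
var exampleSpec = Spec{
	Version:  Version, // "1.0.0-rc4" at this revision
	Platform: Platform{OS: "linux", Arch: "amd64"},
	Process: Process{
		User: User{UID: 0, GID: 0},
		Args: []string{"sh"},
		Cwd:  "/",
	},
	Root: Root{Path: "rootfs", Readonly: true},
}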

View File

@@ -1,17 +0,0 @@
package specs
// State holds information about the runtime state of the container.
type State struct {
// Version is the version of the specification that is supported.
Version string `json:"ociVersion"`
// ID is the container ID
ID string `json:"id"`
// Status is the runtime state of the container.
Status string `json:"status"`
// Pid is the process ID for the container process.
Pid int `json:"pid"`
// BundlePath is the path to the container's bundle directory.
BundlePath string `json:"bundlePath"`
// Annotations are the annotations associated with the container.
Annotations map[string]string `json:"annotations,omitempty"`
}

View File

@@ -1,18 +0,0 @@
package specs
import "fmt"
const (
// VersionMajor is for API-incompatible changes
VersionMajor = 1
// VersionMinor is for functionality in a backwards-compatible manner
VersionMinor = 0
// VersionPatch is for backwards-compatible bug fixes
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
VersionDev = "-rc4"
)
// Version is the specification version that the package types support.
var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev)
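// With the constants above, Version evaluates to "1.0.0-rc4".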

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015 xeipuuv
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,8 +0,0 @@
# gojsonpointer
An implementation of JSON Pointer - Go language
## References
http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07
### Note
Section 4 (Evaluation) of the reference above, starting with 'If the currently referenced value is a JSON array, the reference token MUST contain either...', is not implemented.
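A short usage sketch in Go (the import path and error handling are assumptions; the API matches the file below):
```
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonpointer"
)

func main() {
	doc := map[string]interface{}{
		"foo": []interface{}{"bar", "baz"},
	}
	p, err := gojsonpointer.NewJsonPointer("/foo/1")
	if err != nil {
		panic(err)
	}
	node, kind, err := p.Get(doc)
	fmt.Println(node, kind, err) // baz string <nil>
}
```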

View File

@@ -1,190 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonpointer
// repository-desc An implementation of JSON Pointer - Go language
//
// description Main and unique file.
//
// created 25-02-2013
package gojsonpointer
import (
"errors"
"fmt"
"reflect"
"strconv"
"strings"
)
const (
const_empty_pointer = ``
const_pointer_separator = `/`
const_invalid_start = `JSON pointer must be empty or start with a "` + const_pointer_separator + `"`
)
type implStruct struct {
mode string // "SET" or "GET"
inDocument interface{}
setInValue interface{}
getOutNode interface{}
getOutKind reflect.Kind
outError error
}
type JsonPointer struct {
referenceTokens []string
}
// NewJsonPointer parses the given string JSON pointer and returns an object
func NewJsonPointer(jsonPointerString string) (p JsonPointer, err error) {
// Pointer to the root of the document
if len(jsonPointerString) == 0 {
// Keep referenceTokens nil
return
}
if jsonPointerString[0] != '/' {
return p, errors.New(const_invalid_start)
}
p.referenceTokens = strings.Split(jsonPointerString[1:], const_pointer_separator)
return
}
// Uses the pointer to retrieve a value from a JSON document
func (p *JsonPointer) Get(document interface{}) (interface{}, reflect.Kind, error) {
is := &implStruct{mode: "GET", inDocument: document}
p.implementation(is)
return is.getOutNode, is.getOutKind, is.outError
}
// Uses the pointer to update a value in a JSON document
func (p *JsonPointer) Set(document interface{}, value interface{}) (interface{}, error) {
is := &implStruct{mode: "SET", inDocument: document, setInValue: value}
p.implementation(is)
return document, is.outError
}
// Both Get and Set functions use the same implementation to avoid code duplication
func (p *JsonPointer) implementation(i *implStruct) {
kind := reflect.Invalid
// Full document when empty
if len(p.referenceTokens) == 0 {
i.getOutNode = i.inDocument
i.getOutKind = kind
i.outError = nil
return
}
node := i.inDocument
for ti, token := range p.referenceTokens {
isLastToken := ti == len(p.referenceTokens)-1
switch v := node.(type) {
case map[string]interface{}:
decodedToken := decodeReferenceToken(token)
if _, ok := v[decodedToken]; ok {
node = v[decodedToken]
if isLastToken && i.mode == "SET" {
v[decodedToken] = i.setInValue
}
} else {
i.outError = fmt.Errorf("Object has no key '%s'", decodedToken)
i.getOutKind = reflect.Map
i.getOutNode = nil
return
}
case []interface{}:
tokenIndex, err := strconv.Atoi(token)
if err != nil {
i.outError = fmt.Errorf("Invalid array index '%s'", token)
i.getOutKind = reflect.Slice
i.getOutNode = nil
return
}
if tokenIndex < 0 || tokenIndex >= len(v) {
i.outError = fmt.Errorf("Out of bound array[0,%d] index '%d'", len(v), tokenIndex)
i.getOutKind = reflect.Slice
i.getOutNode = nil
return
}
node = v[tokenIndex]
if isLastToken && i.mode == "SET" {
v[tokenIndex] = i.setInValue
}
default:
i.outError = fmt.Errorf("Invalid token reference '%s'", token)
i.getOutKind = reflect.ValueOf(node).Kind()
i.getOutNode = nil
return
}
}
i.getOutNode = node
i.getOutKind = reflect.ValueOf(node).Kind()
i.outError = nil
}
// Pointer to string representation function
func (p *JsonPointer) String() string {
if len(p.referenceTokens) == 0 {
return const_empty_pointer
}
pointerString := const_pointer_separator + strings.Join(p.referenceTokens, const_pointer_separator)
return pointerString
}
// Specific JSON pointer encoding here
// ~0 => ~
// ~1 => /
// ... and vice versa
func decodeReferenceToken(token string) string {
step1 := strings.Replace(token, `~1`, `/`, -1)
step2 := strings.Replace(step1, `~0`, `~`, -1)
return step2
}
func encodeReferenceToken(token string) string {
step1 := strings.Replace(token, `~`, `~0`, -1)
step2 := strings.Replace(step1, `/`, `~1`, -1)
return step2
}
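A short sketch of these escaping rules in practice (same imports as the sketch above; the document is made up):
```go
// "~1" decodes to "/" and "~0" decodes to "~", so the pointer below
// addresses document["a/b"]["m~n"].
document := map[string]interface{}{
	"a/b": map[string]interface{}{
		"m~n": "value",
	},
}

pointer, _ := gojsonpointer.NewJsonPointer("/a~1b/m~0n")
value, _, _ := pointer.Get(document)
fmt.Println(value) // value
```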

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015 xeipuuv
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,10 +0,0 @@
# gojsonreference
An implementation of JSON Reference - Go language
## Dependencies
https://github.com/xeipuuv/gojsonpointer
## References
http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07
http://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03
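A minimal sketch of resolving a child reference against a base with the package below; the example.com URL is made up:
```go
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonreference"
)

func main() {
	// A made-up base schema URL and a fragment-only child reference.
	base, err := gojsonreference.NewJsonReference("http://example.com/schemas/root.json")
	if err != nil {
		panic(err)
	}
	child, err := gojsonreference.NewJsonReference("#/definitions/user")
	if err != nil {
		panic(err)
	}

	// Inherits resolves the child against the base, RFC 3986 style.
	resolved, err := base.Inherits(child)
	if err != nil {
		panic(err)
	}
	fmt.Println(resolved.String())
	// http://example.com/schemas/root.json#/definitions/user

	// The fragment is exposed as a JSON pointer.
	fmt.Println(resolved.GetPointer().String()) // /definitions/user
}
```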

View File

@@ -1,141 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonreference
// repository-desc An implementation of JSON Reference - Go language
//
// description Main and unique file.
//
// created 26-02-2013
package gojsonreference
import (
"errors"
"github.com/xeipuuv/gojsonpointer"
"net/url"
"path/filepath"
"runtime"
"strings"
)
const (
const_fragment_char = `#`
)
func NewJsonReference(jsonReferenceString string) (JsonReference, error) {
var r JsonReference
err := r.parse(jsonReferenceString)
return r, err
}
type JsonReference struct {
referenceUrl *url.URL
referencePointer gojsonpointer.JsonPointer
HasFullUrl bool
HasUrlPathOnly bool
HasFragmentOnly bool
HasFileScheme bool
HasFullFilePath bool
}
func (r *JsonReference) GetUrl() *url.URL {
return r.referenceUrl
}
func (r *JsonReference) GetPointer() *gojsonpointer.JsonPointer {
return &r.referencePointer
}
func (r *JsonReference) String() string {
if r.referenceUrl != nil {
return r.referenceUrl.String()
}
if r.HasFragmentOnly {
return const_fragment_char + r.referencePointer.String()
}
return r.referencePointer.String()
}
func (r *JsonReference) IsCanonical() bool {
return (r.HasFileScheme && r.HasFullFilePath) || (!r.HasFileScheme && r.HasFullUrl)
}
// "Constructor", parses the given string JSON reference
func (r *JsonReference) parse(jsonReferenceString string) (err error) {
r.referenceUrl, err = url.Parse(jsonReferenceString)
if err != nil {
return
}
refUrl := r.referenceUrl
if refUrl.Scheme != "" && refUrl.Host != "" {
r.HasFullUrl = true
} else {
if refUrl.Path != "" {
r.HasUrlPathOnly = true
} else if refUrl.RawQuery == "" && refUrl.Fragment != "" {
r.HasFragmentOnly = true
}
}
r.HasFileScheme = refUrl.Scheme == "file"
if runtime.GOOS == "windows" {
// on Windows, a file URL may have an extra leading slash, and if it
// doesn't then its first component will be treated as the host by the
// Go runtime
if refUrl.Host == "" && strings.HasPrefix(refUrl.Path, "/") {
r.HasFullFilePath = filepath.IsAbs(refUrl.Path[1:])
} else {
r.HasFullFilePath = filepath.IsAbs(refUrl.Host + refUrl.Path)
}
} else {
r.HasFullFilePath = filepath.IsAbs(refUrl.Path)
}
// an invalid JSON-pointer error means the URL has no JSON-pointer fragment; simply ignore the error
r.referencePointer, _ = gojsonpointer.NewJsonPointer(refUrl.Fragment)
return
}
// Creates a new reference from a parent and a child
// If the child cannot inherit from the parent, an error is returned
func (r *JsonReference) Inherits(child JsonReference) (*JsonReference, error) {
childUrl := child.GetUrl()
parentUrl := r.GetUrl()
if childUrl == nil {
return nil, errors.New("childUrl is nil!")
}
if parentUrl == nil {
return nil, errors.New("parentUrl is nil!")
}
ref, err := NewJsonReference(parentUrl.ResolveReference(childUrl).String())
if err != nil {
return nil, err
}
return &ref, err
}
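A short sketch of how parse classifies typical inputs (same imports as the sketch above; the URLs are made up, and the file case assumes a non-Windows host):
```go
r1, _ := gojsonreference.NewJsonReference("#/definitions/user")
fmt.Println(r1.HasFragmentOnly, r1.HasFullUrl) // true false

r2, _ := gojsonreference.NewJsonReference("http://example.com/schema.json")
fmt.Println(r2.HasFullUrl, r2.IsCanonical()) // true true

r3, _ := gojsonreference.NewJsonReference("file:///home/me/schema.json")
fmt.Println(r3.HasFileScheme, r3.HasFullFilePath, r3.IsCanonical()) // true true true
```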

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015 xeipuuv
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,236 +0,0 @@
[![Build Status](https://travis-ci.org/xeipuuv/gojsonschema.svg)](https://travis-ci.org/xeipuuv/gojsonschema)
# gojsonschema
## Description
An implementation of JSON Schema, based on IETF's draft v4 - Go language
References:
* http://json-schema.org
* http://json-schema.org/latest/json-schema-core.html
* http://json-schema.org/latest/json-schema-validation.html
## Installation
```
go get github.com/xeipuuv/gojsonschema
```
Dependencies:
* [github.com/xeipuuv/gojsonpointer](https://github.com/xeipuuv/gojsonpointer)
* [github.com/xeipuuv/gojsonreference](https://github.com/xeipuuv/gojsonreference)
* [github.com/stretchr/testify/assert](https://github.com/stretchr/testify#assert-package)
## Usage
### Example
```go
package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

func main() {
	schemaLoader := gojsonschema.NewReferenceLoader("file:///home/me/schema.json")
	documentLoader := gojsonschema.NewReferenceLoader("file:///home/me/document.json")

	result, err := gojsonschema.Validate(schemaLoader, documentLoader)
	if err != nil {
		panic(err.Error())
	}

	if result.Valid() {
		fmt.Printf("The document is valid\n")
	} else {
		fmt.Printf("The document is not valid. See errors:\n")
		for _, desc := range result.Errors() {
			fmt.Printf("- %s\n", desc)
		}
	}
}
```
#### Loaders
There are various ways to load your JSON data. To load your schemas and documents, first declare an appropriate loader:
* Web / HTTP, using a reference:
```go
loader := gojsonschema.NewReferenceLoader("http://www.some_host.com/schema.json")
```
* Local file, using a reference:
```go
loader := gojsonschema.NewReferenceLoader("file:///home/me/schema.json")
```
References use URI syntax; the scheme prefix (file://) and a full path to the file are required.
* JSON strings:
```go
loader := gojsonschema.NewStringLoader(`{"type": "string"}`)
```
* Custom Go types:
```go
m := map[string]interface{}{"type": "string"}
loader := gojsonschema.NewGoLoader(m)
```
And with Go structs:
```go
type Root struct {
	Users []User `json:"users"`
}

type User struct {
	Name string `json:"name"`
}

...

data := Root{}
data.Users = append(data.Users, User{"John"})
data.Users = append(data.Users, User{"Sophia"})
data.Users = append(data.Users, User{"Bill"})

loader := gojsonschema.NewGoLoader(data)
```
#### Validation
Once the loaders are set, validation is easy:
```go
result, err := gojsonschema.Validate(schemaLoader, documentLoader)
```
Alternatively, you might want to load the schema once and reuse it for multiple validations:
```go
schema, err := gojsonschema.NewSchema(schemaLoader)
...
result1, err := schema.Validate(documentLoader1)
...
result2, err := schema.Validate(documentLoader2)
...
// etc ...
```
To check the result:
```go
if result.Valid() {
	fmt.Printf("The document is valid\n")
} else {
	fmt.Printf("The document is not valid. See errors:\n")
	for _, err := range result.Errors() {
		// err implements the ResultError interface
		fmt.Printf("- %s\n", err)
	}
}
```
## Working with Errors
The library uses string error codes, which you can customize by creating your own implementation of the gojsonschema locale interface and setting it:
```go
gojsonschema.Locale = YourCustomLocale{}
```
However, each error also carries additional contextual information.
**err.Type()**: *string* Returns the "type" of error that occurred; you can also type-check against the concrete error types listed below.
Note: an error of type RequiredError has an err.Type() return value of "required".
"required": RequiredError
"invalid_type": InvalidTypeError
"number_any_of": NumberAnyOfError
"number_one_of": NumberOneOfError
"number_all_of": NumberAllOfError
"number_not": NumberNotError
"missing_dependency": MissingDependencyError
"internal": InternalError
"enum": EnumError
"array_no_additional_items": ArrayNoAdditionalItemsError
"array_min_items": ArrayMinItemsError
"array_max_items": ArrayMaxItemsError
"unique": ItemsMustBeUniqueError
"array_min_properties": ArrayMinPropertiesError
"array_max_properties": ArrayMaxPropertiesError
"additional_property_not_allowed": AdditionalPropertyNotAllowedError
"invalid_property_pattern": InvalidPropertyPatternError
"string_gte": StringLengthGTEError
"string_lte": StringLengthLTEError
"pattern": DoesNotMatchPatternError
"multiple_of": MultipleOfError
"number_gte": NumberGTEError
"number_gt": NumberGTError
"number_lte": NumberLTEError
"number_lt": NumberLTError
**err.Value()**: *interface{}* Returns the value given
**err.Context()**: *gojsonschema.jsonContext* Returns the context. This has a String() method that will print something like this: (root).firstName
**err.Field()**: *string* Returns the fieldname in the format firstName, or for embedded properties, person.firstName. This returns the same as the String() method on *err.Context()* but removes the (root). prefix.
**err.Description()**: *string* The error description. This is based on the locale you are using. See the beginning of this section for overwriting the locale with a custom implementation.
**err.Details()**: *gojsonschema.ErrorDetails* Returns a map[string]interface{} of additional error details specific to the error. For example, GTE errors will have a "min" value, LTE will have a "max" value. See errors.go for a full description of all the error details. Every error always contains a "field" key that holds the value of *err.Field()*
Note that in most cases err.Details() is used to generate replacement strings for your locales rather than being used directly. These strings follow the text/template format, e.g.:
```
{{.field}} must be greater than or equal to {{.min}}
```
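For example, a minimal sketch of a custom locale that overrides a single template by embedding DefaultLocale; the MyLocale name and message text are hypothetical:
```go
// MyLocale overrides one message; all other messages fall through
// to the embedded DefaultLocale.
type MyLocale struct {
	gojsonschema.DefaultLocale
}

func (l MyLocale) Required() string {
	// RequiredError details provide {{.property}}; {{.field}} is always set.
	return `property {{.property}} is mandatory`
}

func init() {
	// Install the locale before validating.
	gojsonschema.Locale = MyLocale{}
}
```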
## Formats
JSON Schema allows for an optional "format" property to validate strings against well-known formats. gojsonschema ships with all of the formats defined in the spec, which you can use like this:
```json
{"type": "string", "format": "email"}
```
Available formats: date-time, hostname, email, ipv4, ipv6, uri, uuid, regex.
For repetitive or more complex formats, you can create custom format checkers and add them to gojsonschema like this:
```go
// Define the format checker
type RoleFormatChecker struct{}

// Ensure it meets the gojsonschema.FormatChecker interface
func (f RoleFormatChecker) IsFormat(input string) bool {
	return strings.HasPrefix(input, "ROLE_")
}

// Add it to the library
gojsonschema.FormatCheckers.Add("role", RoleFormatChecker{})
```
Now use it in your JSON schema:
```json
{"type": "string", "format": "role"}
```
## Uses
gojsonschema uses the following test suite:
https://github.com/json-schema/JSON-Schema-Test-Suite

View File

@@ -1,274 +0,0 @@
package gojsonschema
import (
"bytes"
"sync"
"text/template"
)
var errorTemplates errorTemplate = errorTemplate{template.New("errors-new"), sync.RWMutex{}}
// template.Template is not thread-safe for writing, so some locking is done
// sync.RWMutex is used for efficiently locking when new templates are created
type errorTemplate struct {
*template.Template
sync.RWMutex
}
type (
// RequiredError. ErrorDetails: property string
RequiredError struct {
ResultErrorFields
}
// InvalidTypeError. ErrorDetails: expected, given
InvalidTypeError struct {
ResultErrorFields
}
// NumberAnyOfError. ErrorDetails: -
NumberAnyOfError struct {
ResultErrorFields
}
// NumberOneOfError. ErrorDetails: -
NumberOneOfError struct {
ResultErrorFields
}
// NumberAllOfError. ErrorDetails: -
NumberAllOfError struct {
ResultErrorFields
}
// NumberNotError. ErrorDetails: -
NumberNotError struct {
ResultErrorFields
}
// MissingDependencyError. ErrorDetails: dependency
MissingDependencyError struct {
ResultErrorFields
}
// InternalError. ErrorDetails: error
InternalError struct {
ResultErrorFields
}
// EnumError. ErrorDetails: allowed
EnumError struct {
ResultErrorFields
}
// ArrayNoAdditionalItemsError. ErrorDetails: -
ArrayNoAdditionalItemsError struct {
ResultErrorFields
}
// ArrayMinItemsError. ErrorDetails: min
ArrayMinItemsError struct {
ResultErrorFields
}
// ArrayMaxItemsError. ErrorDetails: max
ArrayMaxItemsError struct {
ResultErrorFields
}
// ItemsMustBeUniqueError. ErrorDetails: type
ItemsMustBeUniqueError struct {
ResultErrorFields
}
// ArrayMinPropertiesError. ErrorDetails: min
ArrayMinPropertiesError struct {
ResultErrorFields
}
// ArrayMaxPropertiesError. ErrorDetails: max
ArrayMaxPropertiesError struct {
ResultErrorFields
}
// AdditionalPropertyNotAllowedError. ErrorDetails: property
AdditionalPropertyNotAllowedError struct {
ResultErrorFields
}
// InvalidPropertyPatternError. ErrorDetails: property, pattern
InvalidPropertyPatternError struct {
ResultErrorFields
}
// StringLengthGTEError. ErrorDetails: min
StringLengthGTEError struct {
ResultErrorFields
}
// StringLengthLTEError. ErrorDetails: max
StringLengthLTEError struct {
ResultErrorFields
}
// DoesNotMatchPatternError. ErrorDetails: pattern
DoesNotMatchPatternError struct {
ResultErrorFields
}
// DoesNotMatchFormatError. ErrorDetails: format
DoesNotMatchFormatError struct {
ResultErrorFields
}
// MultipleOfError. ErrorDetails: multiple
MultipleOfError struct {
ResultErrorFields
}
// NumberGTEError. ErrorDetails: min
NumberGTEError struct {
ResultErrorFields
}
// NumberGTError. ErrorDetails: min
NumberGTError struct {
ResultErrorFields
}
// NumberLTEError. ErrorDetails: max
NumberLTEError struct {
ResultErrorFields
}
// NumberLTError. ErrorDetails: max
NumberLTError struct {
ResultErrorFields
}
)
// newError takes a ResultError type and sets the type, context, description, details, value, and field
func newError(err ResultError, context *jsonContext, value interface{}, locale locale, details ErrorDetails) {
var t string
var d string
switch err.(type) {
case *RequiredError:
t = "required"
d = locale.Required()
case *InvalidTypeError:
t = "invalid_type"
d = locale.InvalidType()
case *NumberAnyOfError:
t = "number_any_of"
d = locale.NumberAnyOf()
case *NumberOneOfError:
t = "number_one_of"
d = locale.NumberOneOf()
case *NumberAllOfError:
t = "number_all_of"
d = locale.NumberAllOf()
case *NumberNotError:
t = "number_not"
d = locale.NumberNot()
case *MissingDependencyError:
t = "missing_dependency"
d = locale.MissingDependency()
case *InternalError:
t = "internal"
d = locale.Internal()
case *EnumError:
t = "enum"
d = locale.Enum()
case *ArrayNoAdditionalItemsError:
t = "array_no_additional_items"
d = locale.ArrayNoAdditionalItems()
case *ArrayMinItemsError:
t = "array_min_items"
d = locale.ArrayMinItems()
case *ArrayMaxItemsError:
t = "array_max_items"
d = locale.ArrayMaxItems()
case *ItemsMustBeUniqueError:
t = "unique"
d = locale.Unique()
case *ArrayMinPropertiesError:
t = "array_min_properties"
d = locale.ArrayMinProperties()
case *ArrayMaxPropertiesError:
t = "array_max_properties"
d = locale.ArrayMaxProperties()
case *AdditionalPropertyNotAllowedError:
t = "additional_property_not_allowed"
d = locale.AdditionalPropertyNotAllowed()
case *InvalidPropertyPatternError:
t = "invalid_property_pattern"
d = locale.InvalidPropertyPattern()
case *StringLengthGTEError:
t = "string_gte"
d = locale.StringGTE()
case *StringLengthLTEError:
t = "string_lte"
d = locale.StringLTE()
case *DoesNotMatchPatternError:
t = "pattern"
d = locale.DoesNotMatchPattern()
case *DoesNotMatchFormatError:
t = "format"
d = locale.DoesNotMatchFormat()
case *MultipleOfError:
t = "multiple_of"
d = locale.MultipleOf()
case *NumberGTEError:
t = "number_gte"
d = locale.NumberGTE()
case *NumberGTError:
t = "number_gt"
d = locale.NumberGT()
case *NumberLTEError:
t = "number_lte"
d = locale.NumberLTE()
case *NumberLTError:
t = "number_lt"
d = locale.NumberLT()
}
err.SetType(t)
err.SetContext(context)
err.SetValue(value)
err.SetDetails(details)
details["field"] = err.Field()
err.SetDescription(formatErrorDescription(d, details))
}
// formatErrorDescription takes a string in the default text/template
// format and converts it to a string with replacements. The fields come
// from the ErrorDetails map and vary for each type of error.
func formatErrorDescription(s string, details ErrorDetails) string {
var tpl *template.Template
var descrAsBuffer bytes.Buffer
var err error
errorTemplates.RLock()
tpl = errorTemplates.Lookup(s)
errorTemplates.RUnlock()
if tpl == nil {
errorTemplates.Lock()
tpl = errorTemplates.New(s)
tpl, err = tpl.Parse(s)
errorTemplates.Unlock()
if err != nil {
return err.Error()
}
}
err = tpl.Execute(&descrAsBuffer, details)
if err != nil {
return err.Error()
}
return descrAsBuffer.String()
}

View File

@@ -1,194 +0,0 @@
package gojsonschema
import (
"net"
"net/url"
"reflect"
"regexp"
"strings"
"time"
)
type (
// FormatChecker is the interface all formatters added to FormatCheckerChain must implement
FormatChecker interface {
IsFormat(input string) bool
}
// FormatCheckerChain holds the formatters
FormatCheckerChain struct {
formatters map[string]FormatChecker
}
// EmailFormatChecker verifies email address formats
EmailFormatChecker struct{}
// IPV4FormatChecker verifies IP addresses in the ipv4 format
IPV4FormatChecker struct{}
// IPV6FormatChecker verifies IP addresses in the ipv6 format
IPV6FormatChecker struct{}
// DateTimeFormatChecker verifies date/time formats per RFC3339 5.6
//
// Valid formats:
// Partial Time: HH:MM:SS
// Full Date: YYYY-MM-DD
// Full Time: HH:MM:SS followed by "Z" or a ±hh:mm offset
// Date Time: YYYY-MM-DDTHH:MM:SS followed by "Z" or a ±hh:mm offset
//
// Where
// YYYY = 4DIGIT year
// MM = 2DIGIT month ; 01-12
// DD = 2DIGIT day-month ; 01-28, 01-29, 01-30, 01-31 based on month/year
// HH = 2DIGIT hour ; 00-23
// MM = 2DIGIT minute ; 00-59
// SS = 2DIGIT second ; 00-58, 00-60 based on leap second rules
// T = Literal
// Z = Literal
//
// Note: Nanoseconds are also supported in all formats
//
// http://tools.ietf.org/html/rfc3339#section-5.6
DateTimeFormatChecker struct{}
// URIFormatChecker validates a URI with a valid Scheme per RFC3986
URIFormatChecker struct{}
// HostnameFormatChecker validates a hostname is in the correct format
HostnameFormatChecker struct{}
// UUIDFormatChecker validates a UUID is in the correct format
UUIDFormatChecker struct{}
// RegexFormatChecker validates a regex is in the correct format
RegexFormatChecker struct{}
)
var (
// FormatCheckers holds the valid format checkers, and is a public variable
// so library users can add custom format checkers
FormatCheckers = FormatCheckerChain{
formatters: map[string]FormatChecker{
"date-time": DateTimeFormatChecker{},
"hostname": HostnameFormatChecker{},
"email": EmailFormatChecker{},
"ipv4": IPV4FormatChecker{},
"ipv6": IPV6FormatChecker{},
"uri": URIFormatChecker{},
"uuid": UUIDFormatChecker{},
"regex": RegexFormatChecker{},
},
}
// Regex credit: https://github.com/asaskevich/govalidator
rxEmail = regexp.MustCompile("^(((([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|((\\x22)((((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(([\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(\\([\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(((\\x20|\\x09)*(\\x0d\\x0a))?(\\x20|\\x09)+)?(\\x22)))@((([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|\\.|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])([a-zA-Z]|\\d|-|\\.|_|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*([a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
// Regex credit: https://www.socketloop.com/tutorials/golang-validate-hostname
rxHostname = regexp.MustCompile(`^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$`)
rxUUID = regexp.MustCompile("^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$")
)
// Add adds a FormatChecker to the FormatCheckerChain
// The name used will be the value used for the format key in your json schema
func (c *FormatCheckerChain) Add(name string, f FormatChecker) *FormatCheckerChain {
c.formatters[name] = f
return c
}
// Remove deletes a FormatChecker from the FormatCheckerChain (if it exists)
func (c *FormatCheckerChain) Remove(name string) *FormatCheckerChain {
delete(c.formatters, name)
return c
}
// Has checks to see if the FormatCheckerChain holds a FormatChecker with the given name
func (c *FormatCheckerChain) Has(name string) bool {
_, ok := c.formatters[name]
return ok
}
// IsFormat will check an input against a FormatChecker with the given name
// to see if it is the correct format
func (c *FormatCheckerChain) IsFormat(name string, input interface{}) bool {
f, ok := c.formatters[name]
if !ok {
return false
}
if !isKind(input, reflect.String) {
return false
}
inputString := input.(string)
return f.IsFormat(inputString)
}
func (f EmailFormatChecker) IsFormat(input string) bool {
return rxEmail.MatchString(input)
}
// Credit: https://github.com/asaskevich/govalidator
func (f IPV4FormatChecker) IsFormat(input string) bool {
ip := net.ParseIP(input)
return ip != nil && strings.Contains(input, ".")
}
// Credit: https://github.com/asaskevich/govalidator
func (f IPV6FormatChecker) IsFormat(input string) bool {
ip := net.ParseIP(input)
return ip != nil && strings.Contains(input, ":")
}
func (f DateTimeFormatChecker) IsFormat(input string) bool {
formats := []string{
"15:04:05",
"15:04:05Z07:00",
"2006-01-02",
time.RFC3339,
time.RFC3339Nano,
}
for _, format := range formats {
if _, err := time.Parse(format, input); err == nil {
return true
}
}
return false
}
func (f URIFormatChecker) IsFormat(input string) bool {
u, err := url.Parse(input)
if err != nil || u.Scheme == "" {
return false
}
return true
}
func (f HostnameFormatChecker) IsFormat(input string) bool {
return rxHostname.MatchString(input) && len(input) < 256
}
func (f UUIDFormatChecker) IsFormat(input string) bool {
return rxUUID.MatchString(input)
}
// IsFormat implements FormatChecker interface.
func (f RegexFormatChecker) IsFormat(input string) bool {
if input == "" {
return true
}
_, err := regexp.Compile(input)
if err != nil {
return false
}
return true
}
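A short usage sketch of the FormatCheckerChain API above (a fragment; the inputs are made up):
```go
// Built-in checkers can be queried directly.
fmt.Println(gojsonschema.FormatCheckers.IsFormat("email", "user@example.com")) // true

// Has and Remove manage the chain at runtime; removing a checker
// disables that format for subsequent validations.
fmt.Println(gojsonschema.FormatCheckers.Has("uuid")) // true
gojsonschema.FormatCheckers.Remove("uuid")
fmt.Println(gojsonschema.FormatCheckers.Has("uuid")) // false
```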

View File

@@ -1,37 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Very simple log wrapper.
// Used for debugging/testing purposes.
//
// created 01-01-2015
package gojsonschema
import (
"log"
)
const internalLogEnabled = false
func internalLog(format string, v ...interface{}) {
log.Printf(format, v...)
}

View File

@@ -1,72 +0,0 @@
// Copyright 2013 MongoDB, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author tolsen
// author-github https://github.com/tolsen
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Implements a persistent (immutable w/ shared structure) singly-linked list of strings for the purpose of storing a json context
//
// created 04-09-2013
package gojsonschema
import "bytes"
// jsonContext implements a persistent linked-list of strings
type jsonContext struct {
head string
tail *jsonContext
}
func newJsonContext(head string, tail *jsonContext) *jsonContext {
return &jsonContext{head, tail}
}
// String displays the context in reverse.
// This plays well with the data structure's persistent nature with
// Cons and a json document's tree structure.
func (c *jsonContext) String(del ...string) string {
byteArr := make([]byte, 0, c.stringLen())
buf := bytes.NewBuffer(byteArr)
c.writeStringToBuffer(buf, del)
return buf.String()
}
func (c *jsonContext) stringLen() int {
length := 0
if c.tail != nil {
length = c.tail.stringLen() + 1 // add 1 for "."
}
length += len(c.head)
return length
}
func (c *jsonContext) writeStringToBuffer(buf *bytes.Buffer, del []string) {
if c.tail != nil {
c.tail.writeStringToBuffer(buf, del)
if len(del) > 0 {
buf.WriteString(del[0])
} else {
buf.WriteString(".")
}
}
buf.WriteString(c.head)
}
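A package-internal sketch of the list above (newJsonContext is unexported, so this only compiles inside gojsonschema; the context values are made up):
```go
// Contexts grow by consing a new head onto an existing tail.
root := newJsonContext("(root)", nil)
name := newJsonContext("firstName", root)

fmt.Println(name.String())    // (root).firstName
fmt.Println(name.String("/")) // (root)/firstName
```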

View File

@@ -1,340 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Different strategies to load JSON files.
// Includes References (file and HTTP), JSON strings and Go types.
//
// created 01-02-2015
package gojsonschema
import (
"bytes"
"encoding/json"
"errors"
"io"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
"github.com/xeipuuv/gojsonreference"
)
var osFS = osFileSystem(os.Open)
// JSON loader interface
type JSONLoader interface {
JsonSource() interface{}
LoadJSON() (interface{}, error)
JsonReference() (gojsonreference.JsonReference, error)
LoaderFactory() JSONLoaderFactory
}
type JSONLoaderFactory interface {
New(source string) JSONLoader
}
type DefaultJSONLoaderFactory struct {
}
type FileSystemJSONLoaderFactory struct {
fs http.FileSystem
}
func (d DefaultJSONLoaderFactory) New(source string) JSONLoader {
return &jsonReferenceLoader{
fs: osFS,
source: source,
}
}
func (f FileSystemJSONLoaderFactory) New(source string) JSONLoader {
return &jsonReferenceLoader{
fs: f.fs,
source: source,
}
}
// osFileSystem is a functional wrapper for os.Open that implements http.FileSystem.
type osFileSystem func(string) (*os.File, error)
func (o osFileSystem) Open(name string) (http.File, error) {
return o(name)
}
// JSON Reference loader
// references are used to load JSONs from files and HTTP
type jsonReferenceLoader struct {
fs http.FileSystem
source string
}
func (l *jsonReferenceLoader) JsonSource() interface{} {
return l.source
}
func (l *jsonReferenceLoader) JsonReference() (gojsonreference.JsonReference, error) {
return gojsonreference.NewJsonReference(l.JsonSource().(string))
}
func (l *jsonReferenceLoader) LoaderFactory() JSONLoaderFactory {
return &FileSystemJSONLoaderFactory{
fs: l.fs,
}
}
// NewReferenceLoader returns a JSON reference loader using the given source and the local OS file system.
func NewReferenceLoader(source string) *jsonReferenceLoader {
return &jsonReferenceLoader{
fs: osFS,
source: source,
}
}
// NewReferenceLoaderFileSystem returns a JSON reference loader using the given source and file system.
func NewReferenceLoaderFileSystem(source string, fs http.FileSystem) *jsonReferenceLoader {
return &jsonReferenceLoader{
fs: fs,
source: source,
}
}
func (l *jsonReferenceLoader) LoadJSON() (interface{}, error) {
var err error
reference, err := gojsonreference.NewJsonReference(l.JsonSource().(string))
if err != nil {
return nil, err
}
refToUrl := reference
refToUrl.GetUrl().Fragment = ""
var document interface{}
if reference.HasFileScheme {
filename := strings.Replace(refToUrl.GetUrl().Path, "file://", "", -1)
if runtime.GOOS == "windows" {
// on Windows, a file URL may have an extra leading slash, use slashes
// instead of backslashes, and have spaces escaped
if strings.HasPrefix(filename, "/") {
filename = filename[1:]
}
filename = filepath.FromSlash(filename)
}
document, err = l.loadFromFile(filename)
if err != nil {
return nil, err
}
} else {
document, err = l.loadFromHTTP(refToUrl.String())
if err != nil {
return nil, err
}
}
return document, nil
}
func (l *jsonReferenceLoader) loadFromHTTP(address string) (interface{}, error) {
resp, err := http.Get(address)
if err != nil {
return nil, err
}
// must return HTTP Status 200 OK
if resp.StatusCode != http.StatusOK {
return nil, errors.New(formatErrorDescription(Locale.HttpBadStatus(), ErrorDetails{"status": resp.Status}))
}
bodyBuff, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
return decodeJsonUsingNumber(bytes.NewReader(bodyBuff))
}
func (l *jsonReferenceLoader) loadFromFile(path string) (interface{}, error) {
f, err := l.fs.Open(path)
if err != nil {
return nil, err
}
defer f.Close()
bodyBuff, err := ioutil.ReadAll(f)
if err != nil {
return nil, err
}
return decodeJsonUsingNumber(bytes.NewReader(bodyBuff))
}
// JSON string loader
type jsonStringLoader struct {
source string
}
func (l *jsonStringLoader) JsonSource() interface{} {
return l.source
}
func (l *jsonStringLoader) JsonReference() (gojsonreference.JsonReference, error) {
return gojsonreference.NewJsonReference("#")
}
func (l *jsonStringLoader) LoaderFactory() JSONLoaderFactory {
return &DefaultJSONLoaderFactory{}
}
func NewStringLoader(source string) *jsonStringLoader {
return &jsonStringLoader{source: source}
}
func (l *jsonStringLoader) LoadJSON() (interface{}, error) {
return decodeJsonUsingNumber(strings.NewReader(l.JsonSource().(string)))
}
// JSON bytes loader
type jsonBytesLoader struct {
source []byte
}
func (l *jsonBytesLoader) JsonSource() interface{} {
return l.source
}
func (l *jsonBytesLoader) JsonReference() (gojsonreference.JsonReference, error) {
return gojsonreference.NewJsonReference("#")
}
func (l *jsonBytesLoader) LoaderFactory() JSONLoaderFactory {
return &DefaultJSONLoaderFactory{}
}
func NewBytesLoader(source []byte) *jsonBytesLoader {
return &jsonBytesLoader{source: source}
}
func (l *jsonBytesLoader) LoadJSON() (interface{}, error) {
return decodeJsonUsingNumber(bytes.NewReader(l.JsonSource().([]byte)))
}
// JSON Go (types) loader
// used to load JSONs from the code as maps, interface{}, structs ...
type jsonGoLoader struct {
source interface{}
}
func (l *jsonGoLoader) JsonSource() interface{} {
return l.source
}
func (l *jsonGoLoader) JsonReference() (gojsonreference.JsonReference, error) {
return gojsonreference.NewJsonReference("#")
}
func (l *jsonGoLoader) LoaderFactory() JSONLoaderFactory {
return &DefaultJSONLoaderFactory{}
}
func NewGoLoader(source interface{}) *jsonGoLoader {
return &jsonGoLoader{source: source}
}
func (l *jsonGoLoader) LoadJSON() (interface{}, error) {
// convert it to compliant JSON first to avoid type mismatches
jsonBytes, err := json.Marshal(l.JsonSource())
if err != nil {
return nil, err
}
return decodeJsonUsingNumber(bytes.NewReader(jsonBytes))
}
type jsonIOLoader struct {
buf *bytes.Buffer
}
func NewReaderLoader(source io.Reader) (*jsonIOLoader, io.Reader) {
buf := &bytes.Buffer{}
return &jsonIOLoader{buf: buf}, io.TeeReader(source, buf)
}
func NewWriterLoader(source io.Writer) (*jsonIOLoader, io.Writer) {
buf := &bytes.Buffer{}
return &jsonIOLoader{buf: buf}, io.MultiWriter(source, buf)
}
func (l *jsonIOLoader) JsonSource() interface{} {
return l.buf.String()
}
func (l *jsonIOLoader) LoadJSON() (interface{}, error) {
return decodeJsonUsingNumber(l.buf)
}
func (l *jsonIOLoader) JsonReference() (gojsonreference.JsonReference, error) {
return gojsonreference.NewJsonReference("#")
}
func (l *jsonIOLoader) LoaderFactory() JSONLoaderFactory {
return &DefaultJSONLoaderFactory{}
}
func decodeJsonUsingNumber(r io.Reader) (interface{}, error) {
var document interface{}
decoder := json.NewDecoder(r)
decoder.UseNumber()
err := decoder.Decode(&document)
if err != nil {
return nil, err
}
return document, nil
}
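A short sketch of the io.Reader loader above: the returned reader must be drained before validating, because the io.TeeReader is what fills the loader's buffer (resp.Body and schemaLoader are assumed to exist):
```go
loader, reader := gojsonschema.NewReaderLoader(resp.Body)

// Consume the reader first; this also copies the bytes into the loader.
var payload map[string]interface{}
if err := json.NewDecoder(reader).Decode(&payload); err != nil {
	return err
}

// Now the loader can replay the captured document for validation.
result, err := gojsonschema.Validate(schemaLoader, loader)
```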

View File

@@ -1,280 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Contains const string and messages.
//
// created 01-01-2015
package gojsonschema
type (
// locale is an interface for defining custom error strings
locale interface {
Required() string
InvalidType() string
NumberAnyOf() string
NumberOneOf() string
NumberAllOf() string
NumberNot() string
MissingDependency() string
Internal() string
Enum() string
ArrayNotEnoughItems() string
ArrayNoAdditionalItems() string
ArrayMinItems() string
ArrayMaxItems() string
Unique() string
ArrayMinProperties() string
ArrayMaxProperties() string
AdditionalPropertyNotAllowed() string
InvalidPropertyPattern() string
StringGTE() string
StringLTE() string
DoesNotMatchPattern() string
DoesNotMatchFormat() string
MultipleOf() string
NumberGTE() string
NumberGT() string
NumberLTE() string
NumberLT() string
// Schema validations
RegexPattern() string
GreaterThanZero() string
MustBeOfA() string
MustBeOfAn() string
CannotBeUsedWithout() string
CannotBeGT() string
MustBeOfType() string
MustBeValidRegex() string
MustBeValidFormat() string
MustBeGTEZero() string
KeyCannotBeGreaterThan() string
KeyItemsMustBeOfType() string
KeyItemsMustBeUnique() string
ReferenceMustBeCanonical() string
NotAValidType() string
Duplicated() string
HttpBadStatus() string
// ErrorFormat
ErrorFormat() string
}
// DefaultLocale is the default locale for this package
DefaultLocale struct{}
)
func (l DefaultLocale) Required() string {
return `{{.property}} is required`
}
func (l DefaultLocale) InvalidType() string {
return `Invalid type. Expected: {{.expected}}, given: {{.given}}`
}
func (l DefaultLocale) NumberAnyOf() string {
return `Must validate at least one schema (anyOf)`
}
func (l DefaultLocale) NumberOneOf() string {
return `Must validate one and only one schema (oneOf)`
}
func (l DefaultLocale) NumberAllOf() string {
return `Must validate all the schemas (allOf)`
}
func (l DefaultLocale) NumberNot() string {
return `Must not validate the schema (not)`
}
func (l DefaultLocale) MissingDependency() string {
return `Has a dependency on {{.dependency}}`
}
func (l DefaultLocale) Internal() string {
return `Internal Error {{.error}}`
}
func (l DefaultLocale) Enum() string {
return `{{.field}} must be one of the following: {{.allowed}}`
}
func (l DefaultLocale) ArrayNoAdditionalItems() string {
return `No additional items allowed on array`
}
func (l DefaultLocale) ArrayNotEnoughItems() string {
return `Not enough items on array to match positional list of schema`
}
func (l DefaultLocale) ArrayMinItems() string {
return `Array must have at least {{.min}} items`
}
func (l DefaultLocale) ArrayMaxItems() string {
return `Array must have at most {{.max}} items`
}
func (l DefaultLocale) Unique() string {
return `{{.type}} items must be unique`
}
func (l DefaultLocale) ArrayMinProperties() string {
return `Must have at least {{.min}} properties`
}
func (l DefaultLocale) ArrayMaxProperties() string {
return `Must have at most {{.max}} properties`
}
func (l DefaultLocale) AdditionalPropertyNotAllowed() string {
return `Additional property {{.property}} is not allowed`
}
func (l DefaultLocale) InvalidPropertyPattern() string {
return `Property "{{.property}}" does not match pattern {{.pattern}}`
}
func (l DefaultLocale) StringGTE() string {
return `String length must be greater than or equal to {{.min}}`
}
func (l DefaultLocale) StringLTE() string {
return `String length must be less than or equal to {{.max}}`
}
func (l DefaultLocale) DoesNotMatchPattern() string {
return `Does not match pattern '{{.pattern}}'`
}
func (l DefaultLocale) DoesNotMatchFormat() string {
return `Does not match format '{{.format}}'`
}
func (l DefaultLocale) MultipleOf() string {
return `Must be a multiple of {{.multiple}}`
}
func (l DefaultLocale) NumberGTE() string {
return `Must be greater than or equal to {{.min}}`
}
func (l DefaultLocale) NumberGT() string {
return `Must be greater than {{.min}}`
}
func (l DefaultLocale) NumberLTE() string {
return `Must be less than or equal to {{.max}}`
}
func (l DefaultLocale) NumberLT() string {
return `Must be less than {{.max}}`
}
// Schema validators
func (l DefaultLocale) RegexPattern() string {
return `Invalid regex pattern '{{.pattern}}'`
}
func (l DefaultLocale) GreaterThanZero() string {
return `{{.number}} must be strictly greater than 0`
}
func (l DefaultLocale) MustBeOfA() string {
return `{{.x}} must be of a {{.y}}`
}
func (l DefaultLocale) MustBeOfAn() string {
return `{{.x}} must be of an {{.y}}`
}
func (l DefaultLocale) CannotBeUsedWithout() string {
return `{{.x}} cannot be used without {{.y}}`
}
func (l DefaultLocale) CannotBeGT() string {
return `{{.x}} cannot be greater than {{.y}}`
}
func (l DefaultLocale) MustBeOfType() string {
return `{{.key}} must be of type {{.type}}`
}
func (l DefaultLocale) MustBeValidRegex() string {
return `{{.key}} must be a valid regex`
}
func (l DefaultLocale) MustBeValidFormat() string {
return `{{.key}} must be a valid format {{.given}}`
}
func (l DefaultLocale) MustBeGTEZero() string {
return `{{.key}} must be greater than or equal to 0`
}
func (l DefaultLocale) KeyCannotBeGreaterThan() string {
return `{{.key}} cannot be greater than {{.y}}`
}
func (l DefaultLocale) KeyItemsMustBeOfType() string {
return `{{.key}} items must be {{.type}}`
}
func (l DefaultLocale) KeyItemsMustBeUnique() string {
return `{{.key}} items must be unique`
}
func (l DefaultLocale) ReferenceMustBeCanonical() string {
return `Reference {{.reference}} must be canonical`
}
func (l DefaultLocale) NotAValidType() string {
return `{{.type}} is not a valid type -- `
}
func (l DefaultLocale) Duplicated() string {
return `{{.type}} type is duplicated`
}
func (l DefaultLocale) HttpBadStatus() string {
return `Could not read schema from HTTP, response status is {{.status}}`
}
// Replacement options: field, description, context, value
func (l DefaultLocale) ErrorFormat() string {
return `{{.field}}: {{.description}}`
}
const (
STRING_NUMBER = "number"
STRING_ARRAY_OF_STRINGS = "array of strings"
STRING_ARRAY_OF_SCHEMAS = "array of schemas"
STRING_SCHEMA = "schema"
STRING_SCHEMA_OR_ARRAY_OF_STRINGS = "schema or array of strings"
STRING_PROPERTIES = "properties"
STRING_DEPENDENCY = "dependency"
STRING_PROPERTY = "property"
STRING_UNDEFINED = "undefined"
STRING_CONTEXT_ROOT = "(root)"
STRING_ROOT_SCHEMA_PROPERTY = "(root)"
)
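// Illustrative sketch (not part of the vendored source): since the package
// exposes the Locale variable (declared in schema.go) and DefaultLocale is
// exported, callers can replace individual messages by embedding
// DefaultLocale and overriding only the methods they care about:
//
//   type terseLocale struct {
//       gojsonschema.DefaultLocale
//   }
//
//   func (terseLocale) Required() string {
//       return `missing required property {{.property}}`
//   }
//
//   func init() {
//       gojsonschema.Locale = terseLocale{} // only Required() changes; the rest fall through
//   }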

View File

@@ -1,172 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Result and ResultError implementations.
//
// created 01-01-2015
package gojsonschema
import (
"fmt"
"strings"
)
type (
// ErrorDetails is a map of details specific to each error.
// While the values will vary, every error will contain a "field" value
ErrorDetails map[string]interface{}
// ResultError is the interface that library errors must implement
ResultError interface {
Field() string
SetType(string)
Type() string
SetContext(*jsonContext)
Context() *jsonContext
SetDescription(string)
Description() string
SetValue(interface{})
Value() interface{}
SetDetails(ErrorDetails)
Details() ErrorDetails
String() string
}
// ResultErrorFields holds the fields for each ResultError implementation.
// ResultErrorFields implements the ResultError interface, so custom errors
// can be defined by just embedding this type
ResultErrorFields struct {
errorType string // A string with the type of error (i.e. invalid_type)
context *jsonContext // Tree like notation of the part that failed the validation. ex (root).a.b ...
description string // A human readable error message
value interface{} // Value given by the JSON file that is the source of the error
details ErrorDetails
}
Result struct {
errors []ResultError
// Scores how well the validation matched. Useful in generating
// better error messages for anyOf and oneOf.
score int
}
)
// Field outputs the field name without the root context
// i.e. firstName or person.firstName instead of (root).firstName or (root).person.firstName
func (v *ResultErrorFields) Field() string {
if p, ok := v.Details()["property"]; ok {
if str, isString := p.(string); isString {
return str
}
}
return strings.TrimPrefix(v.context.String(), STRING_ROOT_SCHEMA_PROPERTY+".")
}
func (v *ResultErrorFields) SetType(errorType string) {
v.errorType = errorType
}
func (v *ResultErrorFields) Type() string {
return v.errorType
}
func (v *ResultErrorFields) SetContext(context *jsonContext) {
v.context = context
}
func (v *ResultErrorFields) Context() *jsonContext {
return v.context
}
func (v *ResultErrorFields) SetDescription(description string) {
v.description = description
}
func (v *ResultErrorFields) Description() string {
return v.description
}
func (v *ResultErrorFields) SetValue(value interface{}) {
v.value = value
}
func (v *ResultErrorFields) Value() interface{} {
return v.value
}
func (v *ResultErrorFields) SetDetails(details ErrorDetails) {
v.details = details
}
func (v *ResultErrorFields) Details() ErrorDetails {
return v.details
}
func (v ResultErrorFields) String() string {
// as a fallback, the value is displayed go style
valueString := fmt.Sprintf("%v", v.value)
// marshal the go value to json
if v.value == nil {
valueString = TYPE_NULL
} else {
if vs, err := marshalToJsonString(v.value); err == nil {
if vs == nil {
valueString = TYPE_NULL
} else {
valueString = *vs
}
}
}
return formatErrorDescription(Locale.ErrorFormat(), ErrorDetails{
"context": v.context.String(),
"description": v.description,
"value": valueString,
"field": v.Field(),
})
}
func (v *Result) Valid() bool {
return len(v.errors) == 0
}
func (v *Result) Errors() []ResultError {
return v.errors
}
func (v *Result) addError(err ResultError, context *jsonContext, value interface{}, details ErrorDetails) {
newError(err, context, value, Locale, details)
v.errors = append(v.errors, err)
v.score -= 2 // results in a net -1 when added to the +1 we get at the end of the validation function
}
// Used to copy errors from a sub-schema to the main one
func (v *Result) mergeErrors(otherResult *Result) {
v.errors = append(v.errors, otherResult.Errors()...)
v.score += otherResult.score
}
func (v *Result) incrementScore() {
v.score++
}
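// Illustrative sketch (not part of the vendored source): a typical consumer
// checks Valid() and, on failure, walks Errors() to report each ResultError:
//
//   result, err := gojsonschema.Validate(schemaLoader, documentLoader)
//   if err != nil {
//       return err // the schema or document could not even be loaded
//   }
//   if !result.Valid() {
//       for _, e := range result.Errors() {
//           // Field() strips the (root). prefix; Description() is the
//           // locale-formatted message; Details() holds the raw values.
//           fmt.Printf("- %s: %s\n", e.Field(), e.Description())
//       }
//   }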

View File

@@ -1,930 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Defines Schema, the main entry to every subSchema.
// Contains the parsing logic and error checking.
//
// created 26-02-2013
package gojsonschema
import (
// "encoding/json"
"errors"
"reflect"
"regexp"
"github.com/xeipuuv/gojsonreference"
)
var (
// Locale is the default locale to use
// Library users can overwrite with their own implementation
Locale locale = DefaultLocale{}
)
func NewSchema(l JSONLoader) (*Schema, error) {
ref, err := l.JsonReference()
if err != nil {
return nil, err
}
d := Schema{}
d.pool = newSchemaPool(l.LoaderFactory())
d.documentReference = ref
d.referencePool = newSchemaReferencePool()
var doc interface{}
if ref.String() != "" {
// Get document from schema pool
spd, err := d.pool.GetDocument(d.documentReference)
if err != nil {
return nil, err
}
doc = spd.Document
} else {
// Load JSON directly
doc, err = l.LoadJSON()
if err != nil {
return nil, err
}
d.pool.SetStandaloneDocument(doc)
}
err = d.parse(doc)
if err != nil {
return nil, err
}
return &d, nil
}
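// Illustrative sketch (not part of the vendored source): NewSchema parses the
// schema once, so the resulting *Schema can be reused to validate many
// documents instead of re-parsing per call as package-level Validate does;
// schemaJSON and documents are hypothetical names:
//
//   schema, err := gojsonschema.NewSchema(gojsonschema.NewStringLoader(schemaJSON))
//   if err != nil {
//       return err
//   }
//   for _, doc := range documents { // documents is a [][]byte
//       result, err := schema.Validate(gojsonschema.NewBytesLoader(doc))
//       ...
//   }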
type Schema struct {
documentReference gojsonreference.JsonReference
rootSchema *subSchema
pool *schemaPool
referencePool *schemaReferencePool
}
func (d *Schema) parse(document interface{}) error {
d.rootSchema = &subSchema{property: STRING_ROOT_SCHEMA_PROPERTY}
return d.parseSchema(document, d.rootSchema)
}
func (d *Schema) SetRootSchemaName(name string) {
d.rootSchema.property = name
}
// Parses a subSchema
//
// Pretty long function (sorry :) )... but pretty straightforward, repetitive and boring.
// Not much magic involved here, most of the job is to validate the key names and their values,
// then the values are copied into subSchema struct
//
func (d *Schema) parseSchema(documentNode interface{}, currentSchema *subSchema) error {
if !isKind(documentNode, reflect.Map) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_OBJECT,
"given": STRING_SCHEMA,
},
))
}
m := documentNode.(map[string]interface{})
if currentSchema == d.rootSchema {
currentSchema.ref = &d.documentReference
}
// $subSchema
if existsMapKey(m, KEY_SCHEMA) {
if !isKind(m[KEY_SCHEMA], reflect.String) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING,
"given": KEY_SCHEMA,
},
))
}
schemaRef := m[KEY_SCHEMA].(string)
schemaReference, err := gojsonreference.NewJsonReference(schemaRef)
if err != nil {
return err
}
currentSchema.subSchema = &schemaReference
}
// $ref
if existsMapKey(m, KEY_REF) && !isKind(m[KEY_REF], reflect.String) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING,
"given": KEY_REF,
},
))
}
if k, ok := m[KEY_REF].(string); ok {
jsonReference, err := gojsonreference.NewJsonReference(k)
if err != nil {
return err
}
if jsonReference.HasFullUrl {
currentSchema.ref = &jsonReference
} else {
inheritedReference, err := currentSchema.ref.Inherits(jsonReference)
if err != nil {
return err
}
currentSchema.ref = inheritedReference
}
if sch, ok := d.referencePool.Get(currentSchema.ref.String() + k); ok {
currentSchema.refSchema = sch
} else {
err := d.parseReference(documentNode, currentSchema, k)
if err != nil {
return err
}
return nil
}
}
// definitions
if existsMapKey(m, KEY_DEFINITIONS) {
if isKind(m[KEY_DEFINITIONS], reflect.Map) {
currentSchema.definitions = make(map[string]*subSchema)
for dk, dv := range m[KEY_DEFINITIONS].(map[string]interface{}) {
if isKind(dv, reflect.Map) {
newSchema := &subSchema{property: KEY_DEFINITIONS, parent: currentSchema, ref: currentSchema.ref}
currentSchema.definitions[dk] = newSchema
err := d.parseSchema(dv, newSchema)
if err != nil {
return errors.New(err.Error())
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_ARRAY_OF_SCHEMAS,
"given": KEY_DEFINITIONS,
},
))
}
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_ARRAY_OF_SCHEMAS,
"given": KEY_DEFINITIONS,
},
))
}
}
// id
if existsMapKey(m, KEY_ID) && !isKind(m[KEY_ID], reflect.String) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING,
"given": KEY_ID,
},
))
}
if k, ok := m[KEY_ID].(string); ok {
currentSchema.id = &k
}
// title
if existsMapKey(m, KEY_TITLE) && !isKind(m[KEY_TITLE], reflect.String) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING,
"given": KEY_TITLE,
},
))
}
if k, ok := m[KEY_TITLE].(string); ok {
currentSchema.title = &k
}
// description
if existsMapKey(m, KEY_DESCRIPTION) && !isKind(m[KEY_DESCRIPTION], reflect.String) {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING,
"given": KEY_DESCRIPTION,
},
))
}
if k, ok := m[KEY_DESCRIPTION].(string); ok {
currentSchema.description = &k
}
// type
if existsMapKey(m, KEY_TYPE) {
if isKind(m[KEY_TYPE], reflect.String) {
if k, ok := m[KEY_TYPE].(string); ok {
err := currentSchema.types.Add(k)
if err != nil {
return err
}
}
} else {
if isKind(m[KEY_TYPE], reflect.Slice) {
arrayOfTypes := m[KEY_TYPE].([]interface{})
for _, typeInArray := range arrayOfTypes {
if reflect.ValueOf(typeInArray).Kind() != reflect.String {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING + "/" + STRING_ARRAY_OF_STRINGS,
"given": KEY_TYPE,
},
))
} else {
currentSchema.types.Add(typeInArray.(string))
}
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_STRING + "/" + STRING_ARRAY_OF_STRINGS,
"given": KEY_TYPE,
},
))
}
}
}
// properties
if existsMapKey(m, KEY_PROPERTIES) {
err := d.parseProperties(m[KEY_PROPERTIES], currentSchema)
if err != nil {
return err
}
}
// additionalProperties
if existsMapKey(m, KEY_ADDITIONAL_PROPERTIES) {
if isKind(m[KEY_ADDITIONAL_PROPERTIES], reflect.Bool) {
currentSchema.additionalProperties = m[KEY_ADDITIONAL_PROPERTIES].(bool)
} else if isKind(m[KEY_ADDITIONAL_PROPERTIES], reflect.Map) {
newSchema := &subSchema{property: KEY_ADDITIONAL_PROPERTIES, parent: currentSchema, ref: currentSchema.ref}
currentSchema.additionalProperties = newSchema
err := d.parseSchema(m[KEY_ADDITIONAL_PROPERTIES], newSchema)
if err != nil {
return errors.New(err.Error())
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_BOOLEAN + "/" + STRING_SCHEMA,
"given": KEY_ADDITIONAL_PROPERTIES,
},
))
}
}
// patternProperties
if existsMapKey(m, KEY_PATTERN_PROPERTIES) {
if isKind(m[KEY_PATTERN_PROPERTIES], reflect.Map) {
patternPropertiesMap := m[KEY_PATTERN_PROPERTIES].(map[string]interface{})
if len(patternPropertiesMap) > 0 {
currentSchema.patternProperties = make(map[string]*subSchema)
for k, v := range patternPropertiesMap {
_, err := regexp.MatchString(k, "")
if err != nil {
return errors.New(formatErrorDescription(
Locale.RegexPattern(),
ErrorDetails{"pattern": k},
))
}
newSchema := &subSchema{property: k, parent: currentSchema, ref: currentSchema.ref}
err = d.parseSchema(v, newSchema)
if err != nil {
return errors.New(err.Error())
}
currentSchema.patternProperties[k] = newSchema
}
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_SCHEMA,
"given": KEY_PATTERN_PROPERTIES,
},
))
}
}
// dependencies
if existsMapKey(m, KEY_DEPENDENCIES) {
err := d.parseDependencies(m[KEY_DEPENDENCIES], currentSchema)
if err != nil {
return err
}
}
// items
if existsMapKey(m, KEY_ITEMS) {
if isKind(m[KEY_ITEMS], reflect.Slice) {
for _, itemElement := range m[KEY_ITEMS].([]interface{}) {
if isKind(itemElement, reflect.Map) {
newSchema := &subSchema{parent: currentSchema, property: KEY_ITEMS}
newSchema.ref = currentSchema.ref
currentSchema.AddItemsChild(newSchema)
err := d.parseSchema(itemElement, newSchema)
if err != nil {
return err
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_SCHEMA + "/" + STRING_ARRAY_OF_SCHEMAS,
"given": KEY_ITEMS,
},
))
}
currentSchema.itemsChildrenIsSingleSchema = false
}
} else if isKind(m[KEY_ITEMS], reflect.Map) {
newSchema := &subSchema{parent: currentSchema, property: KEY_ITEMS}
newSchema.ref = currentSchema.ref
currentSchema.AddItemsChild(newSchema)
err := d.parseSchema(m[KEY_ITEMS], newSchema)
if err != nil {
return err
}
currentSchema.itemsChildrenIsSingleSchema = true
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_SCHEMA + "/" + STRING_ARRAY_OF_SCHEMAS,
"given": KEY_ITEMS,
},
))
}
}
// additionalItems
if existsMapKey(m, KEY_ADDITIONAL_ITEMS) {
if isKind(m[KEY_ADDITIONAL_ITEMS], reflect.Bool) {
currentSchema.additionalItems = m[KEY_ADDITIONAL_ITEMS].(bool)
} else if isKind(m[KEY_ADDITIONAL_ITEMS], reflect.Map) {
newSchema := &subSchema{property: KEY_ADDITIONAL_ITEMS, parent: currentSchema, ref: currentSchema.ref}
currentSchema.additionalItems = newSchema
err := d.parseSchema(m[KEY_ADDITIONAL_ITEMS], newSchema)
if err != nil {
return errors.New(err.Error())
}
} else {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": TYPE_BOOLEAN + "/" + STRING_SCHEMA,
"given": KEY_ADDITIONAL_ITEMS,
},
))
}
}
// validation : number / integer
if existsMapKey(m, KEY_MULTIPLE_OF) {
multipleOfValue := mustBeNumber(m[KEY_MULTIPLE_OF])
if multipleOfValue == nil {
return errors.New(formatErrorDescription(
Locale.InvalidType(),
ErrorDetails{
"expected": STRING_NUMBER,
"given": KEY_MULTIPLE_OF,
},
))
}
if *multipleOfValue <= 0 {
return errors.New(formatErrorDescription(
Locale.GreaterThanZero(),
ErrorDetails{"number": KEY_MULTIPLE_OF},
))
}
currentSchema.multipleOf = multipleOfValue
}
if existsMapKey(m, KEY_MINIMUM) {
minimumValue := mustBeNumber(m[KEY_MINIMUM])
if minimumValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_MINIMUM, "y": STRING_NUMBER},
))
}
currentSchema.minimum = minimumValue
}
if existsMapKey(m, KEY_EXCLUSIVE_MINIMUM) {
if isKind(m[KEY_EXCLUSIVE_MINIMUM], reflect.Bool) {
if currentSchema.minimum == nil {
return errors.New(formatErrorDescription(
Locale.CannotBeUsedWithout(),
ErrorDetails{"x": KEY_EXCLUSIVE_MINIMUM, "y": KEY_MINIMUM},
))
}
exclusiveMinimumValue := m[KEY_EXCLUSIVE_MINIMUM].(bool)
currentSchema.exclusiveMinimum = exclusiveMinimumValue
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_EXCLUSIVE_MINIMUM, "y": TYPE_BOOLEAN},
))
}
}
if existsMapKey(m, KEY_MAXIMUM) {
maximumValue := mustBeNumber(m[KEY_MAXIMUM])
if maximumValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_MAXIMUM, "y": STRING_NUMBER},
))
}
currentSchema.maximum = maximumValue
}
if existsMapKey(m, KEY_EXCLUSIVE_MAXIMUM) {
if isKind(m[KEY_EXCLUSIVE_MAXIMUM], reflect.Bool) {
if currentSchema.maximum == nil {
return errors.New(formatErrorDescription(
Locale.CannotBeUsedWithout(),
ErrorDetails{"x": KEY_EXCLUSIVE_MAXIMUM, "y": KEY_MAXIMUM},
))
}
exclusiveMaximumValue := m[KEY_EXCLUSIVE_MAXIMUM].(bool)
currentSchema.exclusiveMaximum = exclusiveMaximumValue
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_EXCLUSIVE_MAXIMUM, "y": STRING_NUMBER},
))
}
}
if currentSchema.minimum != nil && currentSchema.maximum != nil {
if *currentSchema.minimum > *currentSchema.maximum {
return errors.New(formatErrorDescription(
Locale.CannotBeGT(),
ErrorDetails{"x": KEY_MINIMUM, "y": KEY_MAXIMUM},
))
}
}
// validation : string
if existsMapKey(m, KEY_MIN_LENGTH) {
minLengthIntegerValue := mustBeInteger(m[KEY_MIN_LENGTH])
if minLengthIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MIN_LENGTH, "y": TYPE_INTEGER},
))
}
if *minLengthIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MIN_LENGTH},
))
}
currentSchema.minLength = minLengthIntegerValue
}
if existsMapKey(m, KEY_MAX_LENGTH) {
maxLengthIntegerValue := mustBeInteger(m[KEY_MAX_LENGTH])
if maxLengthIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MAX_LENGTH, "y": TYPE_INTEGER},
))
}
if *maxLengthIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MAX_LENGTH},
))
}
currentSchema.maxLength = maxLengthIntegerValue
}
if currentSchema.minLength != nil && currentSchema.maxLength != nil {
if *currentSchema.minLength > *currentSchema.maxLength {
return errors.New(formatErrorDescription(
Locale.CannotBeGT(),
ErrorDetails{"x": KEY_MIN_LENGTH, "y": KEY_MAX_LENGTH},
))
}
}
if existsMapKey(m, KEY_PATTERN) {
if isKind(m[KEY_PATTERN], reflect.String) {
regexpObject, err := regexp.Compile(m[KEY_PATTERN].(string))
if err != nil {
return errors.New(formatErrorDescription(
Locale.MustBeValidRegex(),
ErrorDetails{"key": KEY_PATTERN},
))
}
currentSchema.pattern = regexpObject
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_PATTERN, "y": TYPE_STRING},
))
}
}
if existsMapKey(m, KEY_FORMAT) {
formatString, ok := m[KEY_FORMAT].(string)
if ok && FormatCheckers.Has(formatString) {
currentSchema.format = formatString
} else {
return errors.New(formatErrorDescription(
Locale.MustBeValidFormat(),
ErrorDetails{"key": KEY_FORMAT, "given": m[KEY_FORMAT]},
))
}
}
// validation : object
if existsMapKey(m, KEY_MIN_PROPERTIES) {
minPropertiesIntegerValue := mustBeInteger(m[KEY_MIN_PROPERTIES])
if minPropertiesIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MIN_PROPERTIES, "y": TYPE_INTEGER},
))
}
if *minPropertiesIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MIN_PROPERTIES},
))
}
currentSchema.minProperties = minPropertiesIntegerValue
}
if existsMapKey(m, KEY_MAX_PROPERTIES) {
maxPropertiesIntegerValue := mustBeInteger(m[KEY_MAX_PROPERTIES])
if maxPropertiesIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MAX_PROPERTIES, "y": TYPE_INTEGER},
))
}
if *maxPropertiesIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MAX_PROPERTIES},
))
}
currentSchema.maxProperties = maxPropertiesIntegerValue
}
if currentSchema.minProperties != nil && currentSchema.maxProperties != nil {
if *currentSchema.minProperties > *currentSchema.maxProperties {
return errors.New(formatErrorDescription(
Locale.KeyCannotBeGreaterThan(),
ErrorDetails{"key": KEY_MIN_PROPERTIES, "y": KEY_MAX_PROPERTIES},
))
}
}
if existsMapKey(m, KEY_REQUIRED) {
if isKind(m[KEY_REQUIRED], reflect.Slice) {
requiredValues := m[KEY_REQUIRED].([]interface{})
for _, requiredValue := range requiredValues {
if isKind(requiredValue, reflect.String) {
err := currentSchema.AddRequired(requiredValue.(string))
if err != nil {
return err
}
} else {
return errors.New(formatErrorDescription(
Locale.KeyItemsMustBeOfType(),
ErrorDetails{"key": KEY_REQUIRED, "type": TYPE_STRING},
))
}
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_REQUIRED, "y": TYPE_ARRAY},
))
}
}
// validation : array
if existsMapKey(m, KEY_MIN_ITEMS) {
minItemsIntegerValue := mustBeInteger(m[KEY_MIN_ITEMS])
if minItemsIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MIN_ITEMS, "y": TYPE_INTEGER},
))
}
if *minItemsIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MIN_ITEMS},
))
}
currentSchema.minItems = minItemsIntegerValue
}
if existsMapKey(m, KEY_MAX_ITEMS) {
maxItemsIntegerValue := mustBeInteger(m[KEY_MAX_ITEMS])
if maxItemsIntegerValue == nil {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_MAX_ITEMS, "y": TYPE_INTEGER},
))
}
if *maxItemsIntegerValue < 0 {
return errors.New(formatErrorDescription(
Locale.MustBeGTEZero(),
ErrorDetails{"key": KEY_MAX_ITEMS},
))
}
currentSchema.maxItems = maxItemsIntegerValue
}
if existsMapKey(m, KEY_UNIQUE_ITEMS) {
if isKind(m[KEY_UNIQUE_ITEMS], reflect.Bool) {
currentSchema.uniqueItems = m[KEY_UNIQUE_ITEMS].(bool)
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfA(),
ErrorDetails{"x": KEY_UNIQUE_ITEMS, "y": TYPE_BOOLEAN},
))
}
}
// validation : all
if existsMapKey(m, KEY_ENUM) {
if isKind(m[KEY_ENUM], reflect.Slice) {
for _, v := range m[KEY_ENUM].([]interface{}) {
err := currentSchema.AddEnum(v)
if err != nil {
return err
}
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_ENUM, "y": TYPE_ARRAY},
))
}
}
// validation : subSchema
if existsMapKey(m, KEY_ONE_OF) {
if isKind(m[KEY_ONE_OF], reflect.Slice) {
for _, v := range m[KEY_ONE_OF].([]interface{}) {
newSchema := &subSchema{property: KEY_ONE_OF, parent: currentSchema, ref: currentSchema.ref}
currentSchema.AddOneOf(newSchema)
err := d.parseSchema(v, newSchema)
if err != nil {
return err
}
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_ONE_OF, "y": TYPE_ARRAY},
))
}
}
if existsMapKey(m, KEY_ANY_OF) {
if isKind(m[KEY_ANY_OF], reflect.Slice) {
for _, v := range m[KEY_ANY_OF].([]interface{}) {
newSchema := &subSchema{property: KEY_ANY_OF, parent: currentSchema, ref: currentSchema.ref}
currentSchema.AddAnyOf(newSchema)
err := d.parseSchema(v, newSchema)
if err != nil {
return err
}
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_ANY_OF, "y": TYPE_ARRAY},
))
}
}
if existsMapKey(m, KEY_ALL_OF) {
if isKind(m[KEY_ALL_OF], reflect.Slice) {
for _, v := range m[KEY_ALL_OF].([]interface{}) {
newSchema := &subSchema{property: KEY_ALL_OF, parent: currentSchema, ref: currentSchema.ref}
currentSchema.AddAllOf(newSchema)
err := d.parseSchema(v, newSchema)
if err != nil {
return err
}
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_ANY_OF, "y": TYPE_ARRAY},
))
}
}
if existsMapKey(m, KEY_NOT) {
if isKind(m[KEY_NOT], reflect.Map) {
newSchema := &subSchema{property: KEY_NOT, parent: currentSchema, ref: currentSchema.ref}
currentSchema.SetNot(newSchema)
err := d.parseSchema(m[KEY_NOT], newSchema)
if err != nil {
return err
}
} else {
return errors.New(formatErrorDescription(
Locale.MustBeOfAn(),
ErrorDetails{"x": KEY_NOT, "y": TYPE_OBJECT},
))
}
}
return nil
}
func (d *Schema) parseReference(documentNode interface{}, currentSchema *subSchema, reference string) error {
var refdDocumentNode interface{}
jsonPointer := currentSchema.ref.GetPointer()
standaloneDocument := d.pool.GetStandaloneDocument()
if standaloneDocument != nil {
var err error
refdDocumentNode, _, err = jsonPointer.Get(standaloneDocument)
if err != nil {
return err
}
} else {
dsp, err := d.pool.GetDocument(*currentSchema.ref)
if err != nil {
return err
}
refdDocumentNode, _, err = jsonPointer.Get(dsp.Document)
if err != nil {
return err
}
}
if !isKind(refdDocumentNode, reflect.Map) {
return errors.New(formatErrorDescription(
Locale.MustBeOfType(),
ErrorDetails{"key": STRING_SCHEMA, "type": TYPE_OBJECT},
))
}
// returns the loaded referenced subSchema for the caller to update its current subSchema
newSchemaDocument := refdDocumentNode.(map[string]interface{})
newSchema := &subSchema{property: KEY_REF, parent: currentSchema, ref: currentSchema.ref}
d.referencePool.Add(currentSchema.ref.String()+reference, newSchema)
err := d.parseSchema(newSchemaDocument, newSchema)
if err != nil {
return err
}
currentSchema.refSchema = newSchema
return nil
}
func (d *Schema) parseProperties(documentNode interface{}, currentSchema *subSchema) error {
if !isKind(documentNode, reflect.Map) {
return errors.New(formatErrorDescription(
Locale.MustBeOfType(),
ErrorDetails{"key": STRING_PROPERTIES, "type": TYPE_OBJECT},
))
}
m := documentNode.(map[string]interface{})
for k := range m {
schemaProperty := k
newSchema := &subSchema{property: schemaProperty, parent: currentSchema, ref: currentSchema.ref}
currentSchema.AddPropertiesChild(newSchema)
err := d.parseSchema(m[k], newSchema)
if err != nil {
return err
}
}
return nil
}
func (d *Schema) parseDependencies(documentNode interface{}, currentSchema *subSchema) error {
if !isKind(documentNode, reflect.Map) {
return errors.New(formatErrorDescription(
Locale.MustBeOfType(),
ErrorDetails{"key": KEY_DEPENDENCIES, "type": TYPE_OBJECT},
))
}
m := documentNode.(map[string]interface{})
currentSchema.dependencies = make(map[string]interface{})
for k := range m {
switch reflect.ValueOf(m[k]).Kind() {
case reflect.Slice:
values := m[k].([]interface{})
var valuesToRegister []string
for _, value := range values {
if !isKind(value, reflect.String) {
return errors.New(formatErrorDescription(
Locale.MustBeOfType(),
ErrorDetails{
"key": STRING_DEPENDENCY,
"type": STRING_SCHEMA_OR_ARRAY_OF_STRINGS,
},
))
} else {
valuesToRegister = append(valuesToRegister, value.(string))
}
currentSchema.dependencies[k] = valuesToRegister
}
case reflect.Map:
depSchema := &subSchema{property: k, parent: currentSchema, ref: currentSchema.ref}
err := d.parseSchema(m[k], depSchema)
if err != nil {
return err
}
currentSchema.dependencies[k] = depSchema
default:
return errors.New(formatErrorDescription(
Locale.MustBeOfType(),
ErrorDetails{
"key": STRING_DEPENDENCY,
"type": STRING_SCHEMA_OR_ARRAY_OF_STRINGS,
},
))
}
}
return nil
}

View File

@@ -1,109 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Defines resources pooling.
// Eases referencing and avoids downloading the same resource twice.
//
// created 26-02-2013
package gojsonschema
import (
"errors"
"github.com/xeipuuv/gojsonreference"
)
type schemaPoolDocument struct {
Document interface{}
}
type schemaPool struct {
schemaPoolDocuments map[string]*schemaPoolDocument
standaloneDocument interface{}
jsonLoaderFactory JSONLoaderFactory
}
func newSchemaPool(f JSONLoaderFactory) *schemaPool {
p := &schemaPool{}
p.schemaPoolDocuments = make(map[string]*schemaPoolDocument)
p.standaloneDocument = nil
p.jsonLoaderFactory = f
return p
}
func (p *schemaPool) SetStandaloneDocument(document interface{}) {
p.standaloneDocument = document
}
func (p *schemaPool) GetStandaloneDocument() (document interface{}) {
return p.standaloneDocument
}
func (p *schemaPool) GetDocument(reference gojsonreference.JsonReference) (*schemaPoolDocument, error) {
if internalLogEnabled {
internalLog("Get Document ( %s )", reference.String())
}
var err error
// It is not possible to load anything that is not canonical...
if !reference.IsCanonical() {
return nil, errors.New(formatErrorDescription(
Locale.ReferenceMustBeCanonical(),
ErrorDetails{"reference": reference},
))
}
refToUrl := reference
refToUrl.GetUrl().Fragment = ""
var spd *schemaPoolDocument
// Try to find the requested document in the pool
for k := range p.schemaPoolDocuments {
if k == refToUrl.String() {
spd = p.schemaPoolDocuments[k]
}
}
if spd != nil {
if internalLogEnabled {
internalLog(" From pool")
}
return spd, nil
}
jsonReferenceLoader := p.jsonLoaderFactory.New(reference.String())
document, err := jsonReferenceLoader.LoadJSON()
if err != nil {
return nil, err
}
spd = &schemaPoolDocument{Document: document}
// add the document to the pool for potential later use
p.schemaPoolDocuments[refToUrl.String()] = spd
return spd, nil
}

View File

@@ -1,67 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Pool of referenced schemas.
//
// created 25-06-2013
package gojsonschema
import (
"fmt"
)
type schemaReferencePool struct {
documents map[string]*subSchema
}
func newSchemaReferencePool() *schemaReferencePool {
p := &schemaReferencePool{}
p.documents = make(map[string]*subSchema)
return p
}
func (p *schemaReferencePool) Get(ref string) (r *subSchema, o bool) {
if internalLogEnabled {
internalLog(fmt.Sprintf("Schema Reference ( %s )", ref))
}
if sch, ok := p.documents[ref]; ok {
if internalLogEnabled {
internalLog(fmt.Sprintf(" From pool"))
}
return sch, true
}
return nil, false
}
func (p *schemaReferencePool) Add(ref string, sch *subSchema) {
if internalLogEnabled {
internalLog(fmt.Sprintf("Add Schema Reference %s to pool", ref))
}
p.documents[ref] = sch
}

View File

@@ -1,83 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Helper structure to handle schema types, and the combination of them.
//
// created 28-02-2013
package gojsonschema
import (
"errors"
"fmt"
"strings"
)
type jsonSchemaType struct {
types []string
}
// Is the schema typed? i.e. does it contain at least one type?
// When not typed, the schema does not need any type validation
func (t *jsonSchemaType) IsTyped() bool {
return len(t.types) > 0
}
func (t *jsonSchemaType) Add(etype string) error {
if !isStringInSlice(JSON_TYPES, etype) {
return errors.New(formatErrorDescription(Locale.NotAValidType(), ErrorDetails{"type": etype}))
}
if t.Contains(etype) {
return errors.New(formatErrorDescription(Locale.Duplicated(), ErrorDetails{"type": etype}))
}
t.types = append(t.types, etype)
return nil
}
func (t *jsonSchemaType) Contains(etype string) bool {
for _, v := range t.types {
if v == etype {
return true
}
}
return false
}
func (t *jsonSchemaType) String() string {
if len(t.types) == 0 {
return STRING_UNDEFINED // should never happen
}
// Displayed as a list [type1,type2,...]
if len(t.types) > 1 {
return fmt.Sprintf("[%s]", strings.Join(t.types, ","))
}
// Only one type: name only
return t.types[0]
}

View File

@@ -1,227 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Defines the structure of a sub-subSchema.
// A sub-subSchema can contain other sub-schemas.
//
// created 27-02-2013
package gojsonschema
import (
"errors"
"regexp"
"strings"
"github.com/xeipuuv/gojsonreference"
)
const (
KEY_SCHEMA = "$subSchema"
KEY_ID = "$id"
KEY_REF = "$ref"
KEY_TITLE = "title"
KEY_DESCRIPTION = "description"
KEY_TYPE = "type"
KEY_ITEMS = "items"
KEY_ADDITIONAL_ITEMS = "additionalItems"
KEY_PROPERTIES = "properties"
KEY_PATTERN_PROPERTIES = "patternProperties"
KEY_ADDITIONAL_PROPERTIES = "additionalProperties"
KEY_DEFINITIONS = "definitions"
KEY_MULTIPLE_OF = "multipleOf"
KEY_MINIMUM = "minimum"
KEY_MAXIMUM = "maximum"
KEY_EXCLUSIVE_MINIMUM = "exclusiveMinimum"
KEY_EXCLUSIVE_MAXIMUM = "exclusiveMaximum"
KEY_MIN_LENGTH = "minLength"
KEY_MAX_LENGTH = "maxLength"
KEY_PATTERN = "pattern"
KEY_FORMAT = "format"
KEY_MIN_PROPERTIES = "minProperties"
KEY_MAX_PROPERTIES = "maxProperties"
KEY_DEPENDENCIES = "dependencies"
KEY_REQUIRED = "required"
KEY_MIN_ITEMS = "minItems"
KEY_MAX_ITEMS = "maxItems"
KEY_UNIQUE_ITEMS = "uniqueItems"
KEY_ENUM = "enum"
KEY_ONE_OF = "oneOf"
KEY_ANY_OF = "anyOf"
KEY_ALL_OF = "allOf"
KEY_NOT = "not"
)
type subSchema struct {
// basic subSchema meta properties
id *string
title *string
description *string
property string
// Types associated with the subSchema
types jsonSchemaType
// Reference url
ref *gojsonreference.JsonReference
// Schema referenced
refSchema *subSchema
// Json reference
subSchema *gojsonreference.JsonReference
// hierarchy
parent *subSchema
definitions map[string]*subSchema
definitionsChildren []*subSchema
itemsChildren []*subSchema
itemsChildrenIsSingleSchema bool
propertiesChildren []*subSchema
// validation : number / integer
multipleOf *float64
maximum *float64
exclusiveMaximum bool
minimum *float64
exclusiveMinimum bool
// validation : string
minLength *int
maxLength *int
pattern *regexp.Regexp
format string
// validation : object
minProperties *int
maxProperties *int
required []string
dependencies map[string]interface{}
additionalProperties interface{}
patternProperties map[string]*subSchema
// validation : array
minItems *int
maxItems *int
uniqueItems bool
additionalItems interface{}
// validation : all
enum []string
// validation : subSchema
oneOf []*subSchema
anyOf []*subSchema
allOf []*subSchema
not *subSchema
}
func (s *subSchema) AddEnum(i interface{}) error {
is, err := marshalToJsonString(i)
if err != nil {
return err
}
if isStringInSlice(s.enum, *is) {
return errors.New(formatErrorDescription(
Locale.KeyItemsMustBeUnique(),
ErrorDetails{"key": KEY_ENUM},
))
}
s.enum = append(s.enum, *is)
return nil
}
func (s *subSchema) ContainsEnum(i interface{}) (bool, error) {
is, err := marshalToJsonString(i)
if err != nil {
return false, err
}
return isStringInSlice(s.enum, *is), nil
}
func (s *subSchema) AddOneOf(subSchema *subSchema) {
s.oneOf = append(s.oneOf, subSchema)
}
func (s *subSchema) AddAllOf(subSchema *subSchema) {
s.allOf = append(s.allOf, subSchema)
}
func (s *subSchema) AddAnyOf(subSchema *subSchema) {
s.anyOf = append(s.anyOf, subSchema)
}
func (s *subSchema) SetNot(subSchema *subSchema) {
s.not = subSchema
}
func (s *subSchema) AddRequired(value string) error {
if isStringInSlice(s.required, value) {
return errors.New(formatErrorDescription(
Locale.KeyItemsMustBeUnique(),
ErrorDetails{"key": KEY_REQUIRED},
))
}
s.required = append(s.required, value)
return nil
}
func (s *subSchema) AddDefinitionChild(child *subSchema) {
s.definitionsChildren = append(s.definitionsChildren, child)
}
func (s *subSchema) AddItemsChild(child *subSchema) {
s.itemsChildren = append(s.itemsChildren, child)
}
func (s *subSchema) AddPropertiesChild(child *subSchema) {
s.propertiesChildren = append(s.propertiesChildren, child)
}
func (s *subSchema) PatternPropertiesString() string {
if s.patternProperties == nil || len(s.patternProperties) == 0 {
return STRING_UNDEFINED // should never happen
}
patternPropertiesKeySlice := []string{}
for pk := range s.patternProperties {
patternPropertiesKeySlice = append(patternPropertiesKeySlice, `"`+pk+`"`)
}
if len(patternPropertiesKeySlice) == 1 {
return patternPropertiesKeySlice[0]
}
return "[" + strings.Join(patternPropertiesKeySlice, ",") + "]"
}

View File

@@ -1,58 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Contains const types for schema and JSON.
//
// created 28-02-2013
package gojsonschema
const (
TYPE_ARRAY = `array`
TYPE_BOOLEAN = `boolean`
TYPE_INTEGER = `integer`
TYPE_NUMBER = `number`
TYPE_NULL = `null`
TYPE_OBJECT = `object`
TYPE_STRING = `string`
)
var JSON_TYPES []string
var SCHEMA_TYPES []string
func init() {
JSON_TYPES = []string{
TYPE_ARRAY,
TYPE_BOOLEAN,
TYPE_INTEGER,
TYPE_NUMBER,
TYPE_NULL,
TYPE_OBJECT,
TYPE_STRING}
SCHEMA_TYPES = []string{
TYPE_ARRAY,
TYPE_BOOLEAN,
TYPE_INTEGER,
TYPE_NUMBER,
TYPE_OBJECT,
TYPE_STRING}
}

View File

@@ -1,208 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Various utility functions.
//
// created 26-02-2013
package gojsonschema
import (
"encoding/json"
"fmt"
"math"
"reflect"
"strconv"
)
func isKind(what interface{}, kind reflect.Kind) bool {
target := what
if isJsonNumber(what) {
// JSON Numbers are strings!
target = *mustBeNumber(what)
}
return reflect.ValueOf(target).Kind() == kind
}
func existsMapKey(m map[string]interface{}, k string) bool {
_, ok := m[k]
return ok
}
func isStringInSlice(s []string, what string) bool {
for i := range s {
if s[i] == what {
return true
}
}
return false
}
func marshalToJsonString(value interface{}) (*string, error) {
mBytes, err := json.Marshal(value)
if err != nil {
return nil, err
}
sBytes := string(mBytes)
return &sBytes, nil
}
func isJsonNumber(what interface{}) bool {
switch what.(type) {
case json.Number:
return true
}
return false
}
func checkJsonNumber(what interface{}) (isValidFloat64 bool, isValidInt64 bool, isValidInt32 bool) {
jsonNumber := what.(json.Number)
f64, errFloat64 := jsonNumber.Float64()
s64 := strconv.FormatFloat(f64, 'f', -1, 64)
_, errInt64 := strconv.ParseInt(s64, 10, 64)
isValidFloat64 = errFloat64 == nil
isValidInt64 = errInt64 == nil
_, errInt32 := strconv.ParseInt(s64, 10, 32)
isValidInt32 = isValidInt64 && errInt32 == nil
return
}
// same as ECMA Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER
const (
max_json_float = float64(1<<53 - 1) // 9007199254740991.0 2^53 - 1
min_json_float = -float64(1<<53 - 1) // -9007199254740991.0  -(2^53 - 1)
)
func isFloat64AnInteger(f float64) bool {
if math.IsNaN(f) || math.IsInf(f, 0) || f < min_json_float || f > max_json_float {
return false
}
return f == float64(int64(f)) || f == float64(uint64(f))
}
func mustBeInteger(what interface{}) *int {
if isJsonNumber(what) {
number := what.(json.Number)
_, _, isValidInt32 := checkJsonNumber(number)
if isValidInt32 {
int64Value, err := number.Int64()
if err != nil {
return nil
}
int32Value := int(int64Value)
return &int32Value
} else {
return nil
}
}
return nil
}
func mustBeNumber(what interface{}) *float64 {
if isJsonNumber(what) {
number := what.(json.Number)
float64Value, err := number.Float64()
if err == nil {
return &float64Value
} else {
return nil
}
}
return nil
}
// formats a number so that it is displayed as the smallest string possible
func resultErrorFormatJsonNumber(n json.Number) string {
if int64Value, err := n.Int64(); err == nil {
return fmt.Sprintf("%d", int64Value)
}
float64Value, _ := n.Float64()
return fmt.Sprintf("%g", float64Value)
}
// formats a number so that it is displayed as the smallest string possible
func resultErrorFormatNumber(n float64) string {
if isFloat64AnInteger(n) {
return fmt.Sprintf("%d", int64(n))
}
return fmt.Sprintf("%g", n)
}
func convertDocumentNode(val interface{}) interface{} {
if lval, ok := val.([]interface{}); ok {
res := []interface{}{}
for _, v := range lval {
res = append(res, convertDocumentNode(v))
}
return res
}
if mval, ok := val.(map[interface{}]interface{}); ok {
res := map[string]interface{}{}
for k, v := range mval {
res[k.(string)] = convertDocumentNode(v)
}
return res
}
return val
}
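// Illustrative sketch (not part of the vendored source): the number helpers
// above depend on documents being decoded with json.Decoder.UseNumber (see
// decodeJsonUsingNumber in the loader file), so large integers survive intact
// instead of being rounded through float64:
//
//   d := json.NewDecoder(strings.NewReader(`{"n": 9007199254740993}`))
//   d.UseNumber()
//   var v map[string]interface{}
//   _ = d.Decode(&v)
//   n := v["n"].(json.Number) // "9007199254740993": 2^53+1, not representable as float64
//   i, _ := n.Int64()         // exact int64 value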

View File

@@ -1,832 +0,0 @@
// Copyright 2015 xeipuuv ( https://github.com/xeipuuv )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author xeipuuv
// author-github https://github.com/xeipuuv
// author-mail xeipuuv@gmail.com
//
// repository-name gojsonschema
// repository-desc An implementation of JSON Schema, based on IETF's draft v4 - Go language.
//
// description Extends Schema and subSchema, implements the validation phase.
//
// created 28-02-2013
package gojsonschema
import (
"encoding/json"
"reflect"
"regexp"
"strconv"
"strings"
"unicode/utf8"
)
func Validate(ls JSONLoader, ld JSONLoader) (*Result, error) {
var err error
// load schema
schema, err := NewSchema(ls)
if err != nil {
return nil, err
}
// begin validation
return schema.Validate(ld)
}
func (v *Schema) Validate(l JSONLoader) (*Result, error) {
// load document
root, err := l.LoadJSON()
if err != nil {
return nil, err
}
// begin validation
result := &Result{}
context := newJsonContext(STRING_CONTEXT_ROOT, nil)
v.rootSchema.validateRecursive(v.rootSchema, root, result, context)
return result, nil
}
func (v *subSchema) subValidateWithContext(document interface{}, context *jsonContext) *Result {
result := &Result{}
v.validateRecursive(v, document, result, context)
return result
}
// Walker function to validate the json recursively against the subSchema
func (v *subSchema) validateRecursive(currentSubSchema *subSchema, currentNode interface{}, result *Result, context *jsonContext) {
if internalLogEnabled {
internalLog("validateRecursive %s", context.String())
internalLog(" %v", currentNode)
}
// Handle referenced schemas, returns directly when a $ref is found
if currentSubSchema.refSchema != nil {
v.validateRecursive(currentSubSchema.refSchema, currentNode, result, context)
return
}
// Check for null value
if currentNode == nil {
if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_NULL) {
result.addError(
new(InvalidTypeError),
context,
currentNode,
ErrorDetails{
"expected": currentSubSchema.types.String(),
"given": TYPE_NULL,
},
)
return
}
currentSubSchema.validateSchema(currentSubSchema, currentNode, result, context)
		v.validateCommon(currentSubSchema, currentNode, result, context)
	} else { // Not a null value

		if isJsonNumber(currentNode) {

			value := currentNode.(json.Number)

			_, isValidInt64, _ := checkJsonNumber(value)

			validType := currentSubSchema.types.Contains(TYPE_NUMBER) || (isValidInt64 && currentSubSchema.types.Contains(TYPE_INTEGER))
			if currentSubSchema.types.IsTyped() && !validType {
				givenType := TYPE_INTEGER
				if !isValidInt64 {
					givenType = TYPE_NUMBER
				}
				result.addError(
					new(InvalidTypeError),
					context,
					currentNode,
					ErrorDetails{
						"expected": currentSubSchema.types.String(),
						"given":    givenType,
					},
				)
				return
			}

			currentSubSchema.validateSchema(currentSubSchema, value, result, context)
			v.validateNumber(currentSubSchema, value, result, context)
			v.validateCommon(currentSubSchema, value, result, context)
			v.validateString(currentSubSchema, value, result, context)

		} else {

			rValue := reflect.ValueOf(currentNode)
			rKind := rValue.Kind()

			switch rKind {

			// Slice => JSON array
			case reflect.Slice:

				if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_ARRAY) {
					result.addError(
						new(InvalidTypeError),
						context,
						currentNode,
						ErrorDetails{
							"expected": currentSubSchema.types.String(),
							"given":    TYPE_ARRAY,
						},
					)
					return
				}

				castCurrentNode := currentNode.([]interface{})

				currentSubSchema.validateSchema(currentSubSchema, castCurrentNode, result, context)
				v.validateArray(currentSubSchema, castCurrentNode, result, context)
				v.validateCommon(currentSubSchema, castCurrentNode, result, context)

			// Map => JSON object
			case reflect.Map:

				if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_OBJECT) {
					result.addError(
						new(InvalidTypeError),
						context,
						currentNode,
						ErrorDetails{
							"expected": currentSubSchema.types.String(),
							"given":    TYPE_OBJECT,
						},
					)
					return
				}

				castCurrentNode, ok := currentNode.(map[string]interface{})
				if !ok {
					castCurrentNode = convertDocumentNode(currentNode).(map[string]interface{})
				}

				currentSubSchema.validateSchema(currentSubSchema, castCurrentNode, result, context)
				v.validateObject(currentSubSchema, castCurrentNode, result, context)
				v.validateCommon(currentSubSchema, castCurrentNode, result, context)

				for _, pSchema := range currentSubSchema.propertiesChildren {
					nextNode, ok := castCurrentNode[pSchema.property]
					if ok {
						subContext := newJsonContext(pSchema.property, context)
						v.validateRecursive(pSchema, nextNode, result, subContext)
					}
				}

			// Simple JSON values : string, number, boolean
			case reflect.Bool:

				if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_BOOLEAN) {
					result.addError(
						new(InvalidTypeError),
						context,
						currentNode,
						ErrorDetails{
							"expected": currentSubSchema.types.String(),
							"given":    TYPE_BOOLEAN,
						},
					)
					return
				}

				value := currentNode.(bool)

				currentSubSchema.validateSchema(currentSubSchema, value, result, context)
				v.validateNumber(currentSubSchema, value, result, context)
				v.validateCommon(currentSubSchema, value, result, context)
				v.validateString(currentSubSchema, value, result, context)

			case reflect.String:

				if currentSubSchema.types.IsTyped() && !currentSubSchema.types.Contains(TYPE_STRING) {
					result.addError(
						new(InvalidTypeError),
						context,
						currentNode,
						ErrorDetails{
							"expected": currentSubSchema.types.String(),
							"given":    TYPE_STRING,
						},
					)
					return
				}

				value := currentNode.(string)

				currentSubSchema.validateSchema(currentSubSchema, value, result, context)
				v.validateNumber(currentSubSchema, value, result, context)
				v.validateCommon(currentSubSchema, value, result, context)
				v.validateString(currentSubSchema, value, result, context)
			}
		}
	}

	result.incrementScore()
}
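Worth noting: the integer/number split above only works because the document was decoded with json.Decoder.UseNumber, which keeps numeric tokens as json.Number instead of collapsing them to float64. A minimal, stdlib-only sketch of that behaviour (the sample values are hypothetical, not from this repo):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"a": 2, "b": 2.5}`))
	dec.UseNumber() // numbers arrive as json.Number, so "2" and "2.5" stay distinguishable

	var doc map[string]interface{}
	if err := dec.Decode(&doc); err != nil {
		panic(err)
	}
	for _, k := range []string{"a", "b"} {
		n := doc[k].(json.Number)
		_, err := n.Int64() // succeeds for "2", fails for "2.5"
		fmt.Printf("%s=%s fitsInt64=%v\n", k, n, err == nil)
	}
}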
// Different kinds of validation follow here: subSchema / common / array / object / string...
func (v *subSchema) validateSchema(currentSubSchema *subSchema, currentNode interface{}, result *Result, context *jsonContext) {

	if internalLogEnabled {
		internalLog("validateSchema %s", context.String())
		internalLog(" %v", currentNode)
	}

	if len(currentSubSchema.anyOf) > 0 {

		validatedAnyOf := false
		var bestValidationResult *Result

		for _, anyOfSchema := range currentSubSchema.anyOf {
			if !validatedAnyOf {
				validationResult := anyOfSchema.subValidateWithContext(currentNode, context)
				validatedAnyOf = validationResult.Valid()

				if !validatedAnyOf && (bestValidationResult == nil || validationResult.score > bestValidationResult.score) {
					bestValidationResult = validationResult
				}
			}
		}
		if !validatedAnyOf {

			result.addError(new(NumberAnyOfError), context, currentNode, ErrorDetails{})

			if bestValidationResult != nil {
				// add error messages of closest matching subSchema as
				// that's probably the one the user was trying to match
				result.mergeErrors(bestValidationResult)
			}
		}
	}

	if len(currentSubSchema.oneOf) > 0 {

		nbValidated := 0
		var bestValidationResult *Result

		for _, oneOfSchema := range currentSubSchema.oneOf {
			validationResult := oneOfSchema.subValidateWithContext(currentNode, context)
			if validationResult.Valid() {
				nbValidated++
			} else if nbValidated == 0 && (bestValidationResult == nil || validationResult.score > bestValidationResult.score) {
				bestValidationResult = validationResult
			}
		}

		if nbValidated != 1 {

			result.addError(new(NumberOneOfError), context, currentNode, ErrorDetails{})

			if nbValidated == 0 {
				// add error messages of closest matching subSchema as
				// that's probably the one the user was trying to match
				result.mergeErrors(bestValidationResult)
			}
		}
	}

	if len(currentSubSchema.allOf) > 0 {
		nbValidated := 0

		for _, allOfSchema := range currentSubSchema.allOf {
			validationResult := allOfSchema.subValidateWithContext(currentNode, context)
			if validationResult.Valid() {
				nbValidated++
			}
			result.mergeErrors(validationResult)
		}

		if nbValidated != len(currentSubSchema.allOf) {
			result.addError(new(NumberAllOfError), context, currentNode, ErrorDetails{})
		}
	}

	if currentSubSchema.not != nil {
		validationResult := currentSubSchema.not.subValidateWithContext(currentNode, context)
		if validationResult.Valid() {
			result.addError(new(NumberNotError), context, currentNode, ErrorDetails{})
		}
	}

	if currentSubSchema.dependencies != nil && len(currentSubSchema.dependencies) > 0 {
		if isKind(currentNode, reflect.Map) {
			for elementKey := range currentNode.(map[string]interface{}) {
				if dependency, ok := currentSubSchema.dependencies[elementKey]; ok {
					switch dependency := dependency.(type) {

					case []string:
						for _, dependOnKey := range dependency {
							if _, dependencyResolved := currentNode.(map[string]interface{})[dependOnKey]; !dependencyResolved {
								result.addError(
									new(MissingDependencyError),
									context,
									currentNode,
									ErrorDetails{"dependency": dependOnKey},
								)
							}
						}

					case *subSchema:
						dependency.validateRecursive(dependency, currentNode, result, context)
					}
				}
			}
		}
	}

	result.incrementScore()
}
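The combinators above implement the draft-04 semantics: anyOf needs at least one matching branch, oneOf exactly one, allOf all of them, and not inverts a subschema. A minimal sketch of oneOf through the public API, assuming this vendored code is github.com/xeipuuv/gojsonschema (the import path and API are an assumption inferred from the matching identifiers, not something this diff states):

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream of this vendored validator
)

func main() {
	schema := gojsonschema.NewStringLoader(`{
		"oneOf": [
			{"type": "integer"},
			{"minimum": 2}
		]
	}`)
	// 1 matches only the first branch and 2.5 only the second: both valid.
	// 3 matches both branches, so oneOf (exactly one) fails.
	for _, doc := range []string{`1`, `2.5`, `3`} {
		res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(doc))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> valid=%v\n", doc, res.Valid())
	}
}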
func (v *subSchema) validateCommon(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) {

	if internalLogEnabled {
		internalLog("validateCommon %s", context.String())
		internalLog(" %v", value)
	}

	// enum:
	if len(currentSubSchema.enum) > 0 {
		has, err := currentSubSchema.ContainsEnum(value)
		if err != nil {
			result.addError(new(InternalError), context, value, ErrorDetails{"error": err})
		}
		if !has {
			result.addError(
				new(EnumError),
				context,
				value,
				ErrorDetails{
					"allowed": strings.Join(currentSubSchema.enum, ", "),
				},
			)
		}
	}

	result.incrementScore()
}
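validateCommon currently only covers enum: the instance must equal one of the listed JSON values. A sketch under the same gojsonschema assumption as above (the exact printed error wording is the library's and is not guaranteed here):

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream path
)

func main() {
	schema := gojsonschema.NewStringLoader(`{"enum": ["red", "green", "blue"]}`)
	res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(`"yellow"`))
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Valid()) // false
	for _, e := range res.Errors() {
		fmt.Println(e) // an EnumError listing the allowed values
	}
}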
func (v *subSchema) validateArray(currentSubSchema *subSchema, value []interface{}, result *Result, context *jsonContext) {

	if internalLogEnabled {
		internalLog("validateArray %s", context.String())
		internalLog(" %v", value)
	}

	nbValues := len(value)

	// "items" given as a single schema: every element of the instance array
	// is validated against that one schema.
	if currentSubSchema.itemsChildrenIsSingleSchema {
		for i := range value {
			subContext := newJsonContext(strconv.Itoa(i), context)
			validationResult := currentSubSchema.itemsChildren[0].subValidateWithContext(value[i], subContext)
			result.mergeErrors(validationResult)
		}
	} else {
		// "items" given as an array of schemas: tuple validation, with
		// "additionalItems" governing elements beyond the tuple.
		if currentSubSchema.itemsChildren != nil && len(currentSubSchema.itemsChildren) > 0 {

			nbItems := len(currentSubSchema.itemsChildren)

			// while we have both schemas and values, check them against each other
			for i := 0; i != nbItems && i != nbValues; i++ {
				subContext := newJsonContext(strconv.Itoa(i), context)
				validationResult := currentSubSchema.itemsChildren[i].subValidateWithContext(value[i], subContext)
				result.mergeErrors(validationResult)
			}

			if nbItems < nbValues {
				// we have fewer schemas than elements in the instance array,
				// but that might be ok if "additionalItems" is specified.
				switch currentSubSchema.additionalItems.(type) {
				case bool:
					if !currentSubSchema.additionalItems.(bool) {
						result.addError(new(ArrayNoAdditionalItemsError), context, value, ErrorDetails{})
					}
				case *subSchema:
					additionalItemSchema := currentSubSchema.additionalItems.(*subSchema)
					for i := nbItems; i != nbValues; i++ {
						subContext := newJsonContext(strconv.Itoa(i), context)
						validationResult := additionalItemSchema.subValidateWithContext(value[i], subContext)
						result.mergeErrors(validationResult)
					}
				}
			}
		}
	}

	// minItems & maxItems
	if currentSubSchema.minItems != nil {
		if nbValues < int(*currentSubSchema.minItems) {
			result.addError(
				new(ArrayMinItemsError),
				context,
				value,
				ErrorDetails{"min": *currentSubSchema.minItems},
			)
		}
	}
	if currentSubSchema.maxItems != nil {
		if nbValues > int(*currentSubSchema.maxItems) {
			result.addError(
				new(ArrayMaxItemsError),
				context,
				value,
				ErrorDetails{"max": *currentSubSchema.maxItems},
			)
		}
	}

	// uniqueItems:
	if currentSubSchema.uniqueItems {
		var stringifiedItems []string
		for _, v := range value {
			vString, err := marshalToJsonString(v)
			if err != nil {
				result.addError(new(InternalError), context, value, ErrorDetails{"err": err})
				continue // vString is nil on error; skip the item rather than dereference nil
			}
			if isStringInSlice(stringifiedItems, *vString) {
				result.addError(
					new(ItemsMustBeUniqueError),
					context,
					value,
					ErrorDetails{"type": TYPE_ARRAY},
				)
			}
			stringifiedItems = append(stringifiedItems, *vString)
		}
	}

	result.incrementScore()
}
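Both "items" forms above are easiest to see with tuple validation: one schema per position, with additionalItems deciding what happens past the tuple. A sketch, again under the assumed xeipuuv/gojsonschema import path:

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream path
)

func main() {
	schema := gojsonschema.NewStringLoader(`{
		"items": [{"type": "string"}, {"type": "integer"}],
		"additionalItems": false
	}`)
	// ["a", 1] fits the two positional schemas; the extra true in the
	// second document trips ArrayNoAdditionalItemsError.
	for _, doc := range []string{`["a", 1]`, `["a", 1, true]`} {
		res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(doc))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> valid=%v\n", doc, res.Valid())
	}
}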
func (v *subSchema) validateObject(currentSubSchema *subSchema, value map[string]interface{}, result *Result, context *jsonContext) {

	if internalLogEnabled {
		internalLog("validateObject %s", context.String())
		internalLog(" %v", value)
	}

	// minProperties & maxProperties:
	if currentSubSchema.minProperties != nil {
		if len(value) < int(*currentSubSchema.minProperties) {
			result.addError(
				new(ArrayMinPropertiesError),
				context,
				value,
				ErrorDetails{"min": *currentSubSchema.minProperties},
			)
		}
	}
	if currentSubSchema.maxProperties != nil {
		if len(value) > int(*currentSubSchema.maxProperties) {
			result.addError(
				new(ArrayMaxPropertiesError),
				context,
				value,
				ErrorDetails{"max": *currentSubSchema.maxProperties},
			)
		}
	}

	// required:
	for _, requiredProperty := range currentSubSchema.required {
		_, ok := value[requiredProperty]
		if ok {
			result.incrementScore()
		} else {
			result.addError(
				new(RequiredError),
				context,
				value,
				ErrorDetails{"property": requiredProperty},
			)
		}
	}

	// additionalProperty & patternProperty:
	if currentSubSchema.additionalProperties != nil {

		switch currentSubSchema.additionalProperties.(type) {
		case bool:

			if !currentSubSchema.additionalProperties.(bool) {

				for pk := range value {

					found := false
					for _, spValue := range currentSubSchema.propertiesChildren {
						if pk == spValue.property {
							found = true
						}
					}

					pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context)

					if found {
						if pp_has && !pp_match {
							result.addError(
								new(AdditionalPropertyNotAllowedError),
								context,
								value[pk],
								ErrorDetails{"property": pk},
							)
						}
					} else {
						if !pp_has || !pp_match {
							result.addError(
								new(AdditionalPropertyNotAllowedError),
								context,
								value[pk],
								ErrorDetails{"property": pk},
							)
						}
					}
				}
			}

		case *subSchema:

			additionalPropertiesSchema := currentSubSchema.additionalProperties.(*subSchema)
			for pk := range value {

				found := false
				for _, spValue := range currentSubSchema.propertiesChildren {
					if pk == spValue.property {
						found = true
					}
				}

				pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context)

				if found {
					if pp_has && !pp_match {
						validationResult := additionalPropertiesSchema.subValidateWithContext(value[pk], context)
						result.mergeErrors(validationResult)
					}
				} else {
					if !pp_has || !pp_match {
						validationResult := additionalPropertiesSchema.subValidateWithContext(value[pk], context)
						result.mergeErrors(validationResult)
					}
				}
			}
		}
	} else {

		for pk := range value {

			pp_has, pp_match := v.validatePatternProperty(currentSubSchema, pk, value[pk], result, context)

			if pp_has && !pp_match {
				result.addError(
					new(InvalidPropertyPatternError),
					context,
					value[pk],
					ErrorDetails{
						"property": pk,
						"pattern":  currentSubSchema.PatternPropertiesString(),
					},
				)
			}
		}
	}

	result.incrementScore()
}
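The found / pp_has / pp_match bookkeeping above encodes the draft-04 rule that a key only counts as "additional" if it is neither declared under properties nor matched by any patternProperties regex. A sketch under the same gojsonschema assumption:

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream path
)

func main() {
	schema := gojsonschema.NewStringLoader(`{
		"properties": {"name": {}},
		"patternProperties": {"^x-": {}},
		"additionalProperties": false
	}`)
	// "name" is declared, "x-trace" matches a pattern, "other" is neither
	// and is therefore rejected as an additional property.
	for _, doc := range []string{`{"name": "a"}`, `{"x-trace": 1}`, `{"other": true}`} {
		res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(doc))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> valid=%v\n", doc, res.Valid())
	}
}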
func (v *subSchema) validatePatternProperty(currentSubSchema *subSchema, key string, value interface{}, result *Result, context *jsonContext) (has bool, matched bool) {

	if internalLogEnabled {
		internalLog("validatePatternProperty %s", context.String())
		internalLog(" %s %v", key, value)
	}

	has = false
	validatedkey := false

	for pk, pv := range currentSubSchema.patternProperties {
		if matches, _ := regexp.MatchString(pk, key); matches {
			has = true
			subContext := newJsonContext(key, context)
			validationResult := pv.subValidateWithContext(value, subContext)
			result.mergeErrors(validationResult)
			if validationResult.Valid() {
				validatedkey = true
			}
		}
	}

	if !validatedkey {
		return has, false
	}

	result.incrementScore()

	return has, true
}
func (v *subSchema) validateString(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) {

	// Ignore JSON numbers
	if isJsonNumber(value) {
		return
	}

	// Ignore non-strings
	if !isKind(value, reflect.String) {
		return
	}

	if internalLogEnabled {
		internalLog("validateString %s", context.String())
		internalLog(" %v", value)
	}

	stringValue := value.(string)

	// minLength & maxLength:
	if currentSubSchema.minLength != nil {
		if utf8.RuneCount([]byte(stringValue)) < int(*currentSubSchema.minLength) {
			result.addError(
				new(StringLengthGTEError),
				context,
				value,
				ErrorDetails{"min": *currentSubSchema.minLength},
			)
		}
	}
	if currentSubSchema.maxLength != nil {
		if utf8.RuneCount([]byte(stringValue)) > int(*currentSubSchema.maxLength) {
			result.addError(
				new(StringLengthLTEError),
				context,
				value,
				ErrorDetails{"max": *currentSubSchema.maxLength},
			)
		}
	}

	// pattern:
	if currentSubSchema.pattern != nil {
		if !currentSubSchema.pattern.MatchString(stringValue) {
			result.addError(
				new(DoesNotMatchPatternError),
				context,
				value,
				ErrorDetails{"pattern": currentSubSchema.pattern},
			)
		}
	}

	// format
	if currentSubSchema.format != "" {
		if !FormatCheckers.IsFormat(currentSubSchema.format, stringValue) {
			result.addError(
				new(DoesNotMatchFormatError),
				context,
				value,
				ErrorDetails{"format": currentSubSchema.format},
			)
		}
	}

	result.incrementScore()
}
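Note that minLength and maxLength are measured with utf8.RuneCount, so the limits apply to characters, not bytes. A sketch under the same gojsonschema assumption:

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream path
)

func main() {
	schema := gojsonschema.NewStringLoader(`{"type": "string", "maxLength": 4}`)
	// "日本語" is 9 bytes but only 3 runes, so it passes maxLength 4;
	// "héllo" is 5 runes and fails.
	for _, doc := range []string{`"日本語"`, `"héllo"`} {
		res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(doc))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> valid=%v\n", doc, res.Valid())
	}
}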
func (v *subSchema) validateNumber(currentSubSchema *subSchema, value interface{}, result *Result, context *jsonContext) {

	// Ignore non-numbers
	if !isJsonNumber(value) {
		return
	}

	if internalLogEnabled {
		internalLog("validateNumber %s", context.String())
		internalLog(" %v", value)
	}

	number := value.(json.Number)
	float64Value, _ := number.Float64()

	// multipleOf:
	if currentSubSchema.multipleOf != nil {
		if !isFloat64AnInteger(float64Value / *currentSubSchema.multipleOf) {
			result.addError(
				new(MultipleOfError),
				context,
				resultErrorFormatJsonNumber(number),
				ErrorDetails{"multiple": *currentSubSchema.multipleOf},
			)
		}
	}

	// maximum & exclusiveMaximum:
	if currentSubSchema.maximum != nil {
		if currentSubSchema.exclusiveMaximum {
			if float64Value >= *currentSubSchema.maximum {
				result.addError(
					new(NumberLTError),
					context,
					resultErrorFormatJsonNumber(number),
					ErrorDetails{
						"max": resultErrorFormatNumber(*currentSubSchema.maximum),
					},
				)
			}
		} else {
			if float64Value > *currentSubSchema.maximum {
				result.addError(
					new(NumberLTEError),
					context,
					resultErrorFormatJsonNumber(number),
					ErrorDetails{
						"max": resultErrorFormatNumber(*currentSubSchema.maximum),
					},
				)
			}
		}
	}

	// minimum & exclusiveMinimum:
	if currentSubSchema.minimum != nil {
		if currentSubSchema.exclusiveMinimum {
			if float64Value <= *currentSubSchema.minimum {
				result.addError(
					new(NumberGTError),
					context,
					resultErrorFormatJsonNumber(number),
					ErrorDetails{
						"min": resultErrorFormatNumber(*currentSubSchema.minimum),
					},
				)
			}
		} else {
			if float64Value < *currentSubSchema.minimum {
				result.addError(
					new(NumberGTEError),
					context,
					resultErrorFormatJsonNumber(number),
					ErrorDetails{
						"min": resultErrorFormatNumber(*currentSubSchema.minimum),
					},
				)
			}
		}
	}

	result.incrementScore()
}
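As the boolean fields above show, this is draft-04 behaviour: exclusiveMaximum and exclusiveMinimum are flags that tighten maximum and minimum from inclusive to strict bounds (later drafts turned them into standalone numbers instead). A sketch under the same gojsonschema assumption:

package main

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema" // assumed upstream path
)

func main() {
	schema := gojsonschema.NewStringLoader(`{"maximum": 10, "exclusiveMaximum": true}`)
	// With the flag set the bound is strict: 9.99 passes, 10 now fails.
	for _, doc := range []string{`9.99`, `10`} {
		res, err := gojsonschema.Validate(schema, gojsonschema.NewStringLoader(doc))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> valid=%v\n", doc, res.Valid())
	}
}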

vendor/go4.org/LICENSE generated vendored

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/go4.org/README.md generated vendored

@@ -1,88 +0,0 @@
# go4
[![travis badge](https://travis-ci.org/camlistore/go4.svg?branch=master)](https://travis-ci.org/camlistore/go4 "Travis CI")
[go4.org](http://go4.org) is a collection of packages for
Go programmers.
They started out living in [Camlistore](https://camlistore.org)'s repo
and elsewhere but they have nothing to do with Camlistore, so we're
moving them here.
## Details
* **single repo**. go4 is a single repo. That means things can be
changed and rearranged globally atomically with ease and
confidence.
* **no backwards compatibility**. go4 makes no backwards compatibility
promises. If you want to use go4, vendor it. And next time you
update your vendor tree, update to the latest API if things in go4
changed. The plan is to eventually provide tools to make this
easier.
* **forward progress** because we have no backwards compatibility,
it's always okay to change things to make things better. That also
means the bar for contributions is lower. We don't have to get the
API 100% correct in the first commit.
* **no Go version policy** go4 packages are usually built and tested
with the latest Go stable version. However, go4 has no overarching
version policy; each package can declare its own set of supported
Go versions.
* **code review** contributions must be code-reviewed. We're trying
out Gerrithub, to see if we can find a mix of Github Pull Requests
and Gerrit that works well for many people. We'll see.
* **CLA compliant** contributors must agree to the Google CLA (the
same as Go itself). This ensures we can move things into Go as
necessary in the future. It also makes lawyers at various
companies happy. The CLA is **not** a copyright *assignment*; you
retain the copyright on your work. The CLA just says that your
work is open source and you have permission to open source it. See
https://golang.org/doc/contribute.html#tmp_6
* **docs, tests, portability** all code should be documented in the
normal Go style, have tests, and be portable to different
operating systems and architectures. We'll try to get builders in
place to help run the tests on different OS/arches. For now we
have Travis at least.
## Contributing
To add code to go4, send a pull request or push a change to Gerrithub.
We assume you already have your $GOPATH set and the go4 code cloned at
$GOPATH/src/go4.org. For example:
* `git clone https://review.gerrithub.io/camlistore/go4 $GOPATH/src/go4.org`
### To push a code review to Gerrithub directly:
* Sign in to [http://gerrithub.io](http://gerrithub.io "Gerrithub") with your Github account.
* Install the git hook that adds the magic "Change-Id" line to your commit messages:
`curl "https://camlistore.googlesource.com/camlistore/+/master/misc/commit-msg.githook?format=TEXT" | base64 -d > $GOPATH/src/go4.org/.git/hooks/commit-msg`
* make changes
* commit (the unit of code review is a single commit identified by the Change-ID, **NOT** a series of commits on a branch)
* `git push ssh://$YOUR_GITHUB_USERNAME@review.gerrithub.io:29418/camlistore/go4 HEAD:refs/for/master`
### Using Github Pull Requests
* send a pull request with a single commit
* create a Gerrithub code review at https://review.gerrithub.io/plugins/github-plugin/static/pullrequests.html, selecting the pull request you just created.
### Problems contributing?
* Please file an issue or contact the [Camlistore mailing list](https://groups.google.com/forum/#!forum/camlistore) for any problems with the above.
See [https://review.gerrithub.io/Documentation/user-upload.html](https://review.gerrithub.io/Documentation/user-upload.html) for more generic documentation.
(TODO: more docs on Gerrit, integrate git-codereview, etc.)


@@ -1,58 +0,0 @@
/*
Copyright 2011 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package errorutil helps make better error messages.
package errorutil // import "go4.org/errorutil"

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// HighlightBytePosition takes a reader and the location in bytes of a parse
// error (for instance, from json.SyntaxError.Offset) and returns the line, column,
// and pretty-printed context around the error with an arrow indicating the exact
// position of the syntax error.
func HighlightBytePosition(f io.Reader, pos int64) (line, col int, highlight string) {
	line = 1
	br := bufio.NewReader(f)
	lastLine := ""
	thisLine := new(bytes.Buffer)
	for n := int64(0); n < pos; n++ {
		b, err := br.ReadByte()
		if err != nil {
			break
		}
		if b == '\n' {
			lastLine = thisLine.String()
			thisLine.Reset()
			line++
			col = 1
		} else {
			col++
			thisLine.WriteByte(b)
		}
	}
	if line > 1 {
		highlight += fmt.Sprintf("%5d: %s\n", line-1, lastLine)
	}
	highlight += fmt.Sprintf("%5d: %s\n", line, thisLine.String())
	highlight += fmt.Sprintf("%s^\n", strings.Repeat(" ", col+5))
	return
}
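For context on what this removed vendored package did: the typical pairing was json.SyntaxError.Offset with HighlightBytePosition to point at the offending byte. A hedged sketch of that usage (the sample input is hypothetical):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"go4.org/errorutil"
)

func main() {
	input := []byte("{\"a\": 1,\n \"b\" 2}") // missing colon before 2
	var v interface{}
	if err := json.Unmarshal(input, &v); err != nil {
		if se, ok := err.(*json.SyntaxError); ok {
			line, col, highlight := errorutil.HighlightBytePosition(bytes.NewReader(input), se.Offset)
			fmt.Printf("syntax error at line %d, column %d:\n%s", line, col, highlight)
		}
	}
}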


@@ -1,4 +1,4 @@
 package version
 
 // Version is the version of the build.
-const Version = "0.1.19"
+const Version = "0.1.20"