Merge pull request #3954 from deitch/sbom-inheritor

sbom support
This commit is contained in:
Avi Deitcher 2023-11-14 06:16:56 -08:00 committed by GitHub
commit bbd9b85fc1
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
91 changed files with 6621 additions and 883 deletions

docs/sbom.md Normal file

@ -0,0 +1,72 @@
# Software Bill-of-Materials

LinuxKit bootable images are composed of existing OCI images.
OCI images, when built, are often scanned to create a
software bill-of-materials (SBoM). The buildkit builder
system itself contains the [ability to integrate SBoM scanning and generation into the build process](https://docs.docker.com/build/attestations/sbom/).

When LinuxKit composes an operating system image using `linuxkit build`,
it will, by default, combine the SBoMs of all the OCI images used to create
the final image.

It looks for SBoMs in the following locations:

* [image attestation storage](https://docs.docker.com/build/attestations/attestation-storage/)

Future support for [OCI Image-Spec v1.1 Artifacts](https://github.com/opencontainers/image-spec)
is under consideration, and will be reviewed when it is generally available.

When building packages with `linuxkit pkg build`, linuxkit can also generate an SBoM for the
package, which can later be consumed by `linuxkit build`.
## Consuming SBoM From Packages

When `linuxkit build` is run, it does the following to handle SBoMs:

1. For each OCI image that it processes:
   1. Check if the image contains an SBoM attestation; if not, skip this step.
   1. Retrieve the SBoM attestation.
1. After generating the root filesystem, combine all of the individual SBoMs into a single unified SBoM.
1. Save the single output SBoM into the root of the image as `sbom.spdx.json`.

Currently, only SPDX json format is supported.
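For orientation, the sketch below shows how this consolidation is driven programmatically, using the `moby.NewSbomGenerator` and `moby.BuildOpts.SbomGenerator` API added in this PR. It is illustrative only: the function and package names, the hard-coded filename and architecture, and the import path are assumptions, and the config `m` and writer `w` are assumed to be prepared the way `cmd/linuxkit/build.go` prepares them.

```go
package example

import (
	"fmt"
	"io"

	"github.com/linuxkit/linuxkit/src/cmd/linuxkit/moby"
)

// buildWithSBoM is a minimal sketch of how `linuxkit build` wires SBoM
// consolidation into an image build. Passing a nil SbomGenerator in BuildOpts
// skips consolidation, which is what --no-sbom does.
func buildWithSBoM(m moby.Moby, w io.Writer) error {
	// false => keep the fixed creation timestamp so the build stays reproducible;
	// true corresponds to --sbom-current-time=true.
	gen, err := moby.NewSbomGenerator("sbom.spdx.json", false)
	if err != nil {
		return fmt.Errorf("error creating sbom generator: %v", err)
	}
	return moby.Build(m, w, moby.BuildOpts{
		Arch:          "amd64",
		SbomGenerator: gen,
	})
}
```
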
### SBoM Scanner and Output Format

By default, linuxkit combines the SBoMs into a single file in SPDX json format,
saved as `sbom.spdx.json`.

In addition, to assist with reproducible builds, the creation date/time of the SBoM is
a fixed date/time set by linuxkit, rather than the current date/time. Note, however, that even
with a fixed date/time, reproducible builds depend on reproducible SBoMs in the underlying container images.
This is not always the case, as the unique IDs for each package and file may or may not be deterministic.

These defaults can be overridden using the following CLI flags:

* `--no-sbom`: do not find and consolidate the SBoMs
* `--sbom-output <filename>`: the filename to save the output to in the image.
* `--sbom-current-time true|false`: whether or not to use the current time for the SBoM creation date/time (default `false`)
### Disable SBoM for Images

To disable SBoM consolidation when running `linuxkit build`, use the CLI flag `--no-sbom`.
## Generating SBoM For Packages

When `linuxkit pkg build` is run, by default it enables generating an SBoM using the
[SBoM generating capabilities of buildkit](https://www.docker.com/blog/generate-sboms-with-buildkit/).
This means that it inherits all of those capabilities as well, and saves the SBoM in the same location,
as an attestation on the image.

### SBoM Scanner

By default, buildkit runs [syft](http://hub.docker.com/r/anchore/syft) with output format SPDX json,
specifically via its integration image [buildkit-syft-scanner](docker.io/docker/buildkit-syft-scanner).

You can select a different image to run a scanner, provided it complies with the
[buildkit SBoM protocol](https://github.com/moby/buildkit/blob/master/docs/attestations/sbom-protocol.md),
by passing the CLI flag `--sbom-scanner <image>`.

### Disable SBoM for Packages

To disable SBoM generation when running `linuxkit pkg build`, use the CLI flag `--sbom-scanner=false`.
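Internally, the scanner choice is passed to buildkit as the `attest:sbom` frontend attribute, as the `pkglib` changes later in this PR show. The sketch below condenses that mapping into a standalone helper; the function and package names are illustrative.

```go
package example

import "fmt"

// sbomFrontendAttrs is a condensed sketch of how `linuxkit pkg build` maps the
// --sbom-scanner flag onto buildkit's SBoM frontend attribute ("attest:sbom").
// sbomScan is false only when --sbom-scanner=false was given; an empty
// scannerImage means "use buildkit's default scanner image".
func sbomFrontendAttrs(sbomScan bool, scannerImage string) map[string]string {
	attrs := map[string]string{}
	if !sbomScan {
		return attrs
	}
	var value string
	if scannerImage != "" {
		value = fmt.Sprintf("generator=%s", scannerImage)
	}
	attrs["attest:sbom"] = value
	return attrs
}
```
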


@ -15,7 +15,10 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
const defaultNameForStdin = "moby" const (
defaultNameForStdin = "moby"
defaultSbomFilename = "sbom.spdx.json"
)
type formatList []string type formatList []string
@ -37,17 +40,20 @@ func (f *formatList) Type() string {
func buildCmd() *cobra.Command {
var (
name string
dir string
outputFile string
sizeString string
pull bool
docker bool
decompressKernel bool
arch string
cacheDir flagOverEnvVarOverDefaultString
buildFormats formatList
outputTypes = moby.OutputTypes()
+noSbom bool
+sbomOutputFilename string
+sbomCurrentTime bool
)
cmd := &cobra.Command{
Use: "build",
@ -192,7 +198,14 @@ The generated image can be in one of multiple formats which can be run on variou
if moby.Streamable(buildFormats[0]) {
tp = buildFormats[0]
}
-err = moby.Build(m, w, moby.BuildOpts{Pull: pull, BuilderType: tp, DecompressKernel: decompressKernel, CacheDir: cacheDir.String(), DockerCache: docker, Arch: arch})
+var sbomGenerator *moby.SbomGenerator
+if !noSbom {
+sbomGenerator, err = moby.NewSbomGenerator(sbomOutputFilename, sbomCurrentTime)
+if err != nil {
+return fmt.Errorf("error creating sbom generator: %v", err)
+}
+}
+err = moby.Build(m, w, moby.BuildOpts{Pull: pull, BuilderType: tp, DecompressKernel: decompressKernel, CacheDir: cacheDir.String(), DockerCache: docker, Arch: arch, SbomGenerator: sbomGenerator})
if err != nil {
return fmt.Errorf("%v", err)
}
@ -224,6 +237,9 @@ The generated image can be in one of multiple formats which can be run on variou
cmd.Flags().VarP(&buildFormats, "format", "f", "Formats to create [ "+strings.Join(outputTypes, " ")+" ]")
cacheDir = flagOverEnvVarOverDefaultString{def: defaultLinuxkitCache(), envVar: envVarCacheDir}
cmd.Flags().Var(&cacheDir, "cache", fmt.Sprintf("Directory for caching and finding cached image, overrides env var %s", envVarCacheDir))
+cmd.Flags().BoolVar(&noSbom, "no-sbom", false, "suppress consolidation of sboms on input container images to a single sbom and saving in the output filesystem")
+cmd.Flags().BoolVar(&sbomCurrentTime, "sbom-current-time", false, "whether to use the current time as the build time in the sbom; this will make the build non-reproducible (default false)")
+cmd.Flags().StringVar(&sbomOutputFilename, "sbom-output", defaultSbomFilename, "filename to save the output to in the root filesystem")
return cmd
}


@ -28,6 +28,24 @@ func matchPlatformsOSArch(platforms ...v1.Platform) match.Matcher {
}
}
// matchAllAnnotations returns a matcher that matches descriptors whose annotations contain all of the given annotations
func matchAllAnnotations(annotations map[string]string) match.Matcher {
return func(desc v1.Descriptor) bool {
if desc.Annotations == nil {
return false
}
if len(annotations) == 0 {
return true
}
for key, value := range annotations {
if aValue, ok := desc.Annotations[key]; !ok || aValue != value {
return false
}
}
return true
}
}
func (p *Provider) findImage(imageName, architecture string) (v1.Image, error) {
root, err := p.FindRoot(imageName)
if err != nil {
@ -50,6 +68,18 @@ func (p *Provider) findImage(imageName, architecture string) (v1.Image, error) {
return nil, fmt.Errorf("no image found for %s", imageName) return nil, fmt.Errorf("no image found for %s", imageName)
} }
func (p *Provider) findIndex(imageName string) (v1.ImageIndex, error) {
root, err := p.FindRoot(imageName)
if err != nil {
return nil, err
}
ii, err := root.ImageIndex()
if err != nil {
return nil, fmt.Errorf("no image index found for %s", imageName)
}
return ii, nil
}
// FindDescriptor get the first descriptor pointed to by the image reference, whether tagged or digested
func (p *Provider) FindDescriptor(ref *reference.Spec) (*v1.Descriptor, error) {
index, err := p.cache.ImageIndex()


@ -1,6 +1,7 @@
package cache
import (
+"bytes"
"encoding/json"
"fmt"
"io"
@ -8,12 +9,24 @@ import (
"github.com/containerd/containerd/reference" "github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/name" "github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1" v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/match"
"github.com/google/go-containerregistry/pkg/v1/mutate" "github.com/google/go-containerregistry/pkg/v1/mutate"
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/tarball" "github.com/google/go-containerregistry/pkg/v1/tarball"
intoto "github.com/in-toto/in-toto-golang/in_toto"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec" lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
imagespec "github.com/opencontainers/image-spec/specs-go/v1" imagespec "github.com/opencontainers/image-spec/specs-go/v1"
) )
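// annotation keys, values, and media types that buildkit uses in attestation storage to mark SBoM attestation manifests and their in-toto/SPDX predicate layers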
const (
annotationDockerReferenceType = "vnd.docker.reference.type"
annotationAttestationManifest = "attestation-manifest"
annotationDockerReferenceDigest = "vnd.docker.reference.digest"
annotationInTotoPredicateType = "in-toto.io/predicate-type"
annotationSPDXDoc = "https://spdx.dev/Document"
inTotoJsonMediaType = "application/vnd.in-toto+json"
)
// ImageSource a source for an image in the OCI distribution cache.
// Implements a spec.ImageSource.
type ImageSource struct {
@ -23,6 +36,11 @@ type ImageSource struct {
descriptor *v1.Descriptor
}
type spdxStatement struct {
intoto.StatementHeader
Predicate json.RawMessage `json:"predicate"`
}
// NewSource return an ImageSource for a specific ref and architecture in the given
// cache directory.
func (p *Provider) NewSource(ref *reference.Spec, architecture string, descriptor *v1.Descriptor) lktspec.ImageSource {
@ -101,3 +119,95 @@ func (c ImageSource) V1TarReader(overrideName string) (io.ReadCloser, error) {
func (c ImageSource) Descriptor() *v1.Descriptor {
return c.descriptor
}
// SBoMs returns the SBoMs for the image, if any are available
func (c ImageSource) SBoMs() ([]io.ReadCloser, error) {
index, err := c.provider.findIndex(c.ref.String())
// if it is not an index, there is no attestation manifest to find, so there are no SBoMs
if err != nil {
return nil, nil
}
// get the digest of the manifest that represents our targeted architecture
descs, err := partial.FindManifests(index, matchPlatformsOSArch(v1.Platform{OS: "linux", Architecture: c.architecture}))
if err != nil {
return nil, err
}
if len(descs) < 1 {
return nil, fmt.Errorf("no manifest found for %s arch %s", c.ref.String(), c.architecture)
}
if len(descs) > 1 {
return nil, fmt.Errorf("multiple manifests found for %s arch %s", c.ref.String(), c.architecture)
}
// we have the target manifest; now find the attestation manifest whose annotations reference its digest
desc := descs[0]
annotations := map[string]string{
annotationDockerReferenceType: annotationAttestationManifest,
annotationDockerReferenceDigest: desc.Digest.String(),
}
descs, err = partial.FindManifests(index, matchAllAnnotations(annotations))
if err != nil {
return nil, err
}
if len(descs) > 1 {
return nil, fmt.Errorf("multiple manifests found for %s arch %s", c.ref.String(), c.architecture)
}
if len(descs) < 1 {
return nil, nil
}
// get the layers for the first descriptor
images, err := partial.FindImages(index, match.Digests(descs[0].Digest))
if err != nil {
return nil, err
}
if len(images) < 1 {
return nil, fmt.Errorf("no attestation image found for %s arch %s, even though the manifest exists", c.ref.String(), c.architecture)
}
if len(images) > 1 {
return nil, fmt.Errorf("multiple attestation images found for %s arch %s", c.ref.String(), c.architecture)
}
image := images[0]
manifest, err := image.Manifest()
if err != nil {
return nil, err
}
layers, err := image.Layers()
if err != nil {
return nil, err
}
if len(manifest.Layers) != len(layers) {
return nil, fmt.Errorf("manifest layers and image layers do not match for the attestation for %s arch %s", c.ref.String(), c.architecture)
}
var readers []io.ReadCloser
for i, layer := range manifest.Layers {
annotations := layer.Annotations
if annotations[annotationInTotoPredicateType] != annotationSPDXDoc || layer.MediaType != inTotoJsonMediaType {
continue
}
// get the actual blob of the layer
layer, err := layers[i].Compressed()
if err != nil {
return nil, err
}
// read the layer, we want just the predicate, stripping off the header
var buf bytes.Buffer
if _, err := io.Copy(&buf, layer); err != nil {
return nil, err
}
layer.Close()
var stmt spdxStatement
if err := json.Unmarshal(buf.Bytes(), &stmt); err != nil {
return nil, err
}
if stmt.PredicateType != annotationSPDXDoc {
return nil, fmt.Errorf("unexpected predicate type %s", stmt.PredicateType)
}
sbom := stmt.Predicate
readers = append(readers, io.NopCloser(bytes.NewReader(sbom)))
}
// return the SPDX predicate readers we found, if any
return readers, nil
}


@ -115,13 +115,13 @@ func (p *Provider) ImagePull(ref *reference.Spec, trustedRef, architecture strin
// ImageLoad takes an OCI format image tar stream and writes it locally. It should be
// efficient and only write missing blobs, based on their content hash.
-func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
+func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) ([]v1.Descriptor, error) {
var (
tr = tar.NewReader(r)
index bytes.Buffer
)
if !util.IsValidOSArch(linux, architecture, "") {
-return ImageSource{}, fmt.Errorf("unknown arch %s", architecture)
+return nil, fmt.Errorf("unknown arch %s", architecture)
}
suffix := "-" + architecture
imageName := ref.String() + suffix
@ -132,7 +132,7 @@ func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Read
break // End of archive
}
if err != nil {
-return ImageSource{}, err
+return nil, err
}
// get the filename and decide what to do with the file on that basis
@ -153,7 +153,7 @@ func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Read
log.Debugf("saving %s to memory to parse", filename) log.Debugf("saving %s to memory to parse", filename)
// any errors should stop and get reported // any errors should stop and get reported
if _, err := io.Copy(&index, tr); err != nil { if _, err := io.Copy(&index, tr); err != nil {
return ImageSource{}, fmt.Errorf("error reading data for file %s : %v", filename, err) return nil, fmt.Errorf("error reading data for file %s : %v", filename, err)
} }
case strings.HasPrefix(filename, "blobs/sha256/"): case strings.HasPrefix(filename, "blobs/sha256/"):
// must have a file named blob/sha256/<hash> // must have a file named blob/sha256/<hash>
@ -166,54 +166,63 @@ func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Read
hash, err := v1.NewHash(fmt.Sprintf("%s:%s", parts[1], parts[2]))
if err != nil {
// malformed file
-return ImageSource{}, fmt.Errorf("invalid hash filename for %s: %v", filename, err)
+return nil, fmt.Errorf("invalid hash filename for %s: %v", filename, err)
}
log.Debugf("writing %s as hash %s", filename, hash)
if err := p.cache.WriteBlob(hash, io.NopCloser(tr)); err != nil {
-return ImageSource{}, fmt.Errorf("error reading data for file %s : %v", filename, err)
+return nil, fmt.Errorf("error reading data for file %s : %v", filename, err)
}
}
}
// update the index in the cache directory
-var descriptor *v1.Descriptor
+var descs []v1.Descriptor
if index.Len() != 0 {
im, err := v1.ParseIndexManifest(&index)
if err != nil {
-return ImageSource{}, fmt.Errorf("error reading index.json")
+return nil, fmt.Errorf("error reading index.json")
}
// in theory, we should support a tar stream with multiple images in it. However, how would we
// know which one gets the single name annotation we have? We will find some way in the future.
if len(im.Manifests) != 1 {
-return ImageSource{}, fmt.Errorf("currently only support OCI tar stream that has a single image")
+return nil, fmt.Errorf("currently only support OCI tar stream that has a single image")
}
if err := p.cache.RemoveDescriptors(match.Name(imageName)); err != nil {
-return ImageSource{}, fmt.Errorf("unable to remove old descriptors for %s: %v", imageName, err)
+return nil, fmt.Errorf("unable to remove old descriptors for %s: %v", imageName, err)
}
-for _, desc := range im.Manifests {
-// make sure that we have the correct image name annotation
-if desc.Annotations == nil {
-desc.Annotations = map[string]string{}
-}
-desc.Annotations[imagespec.AnnotationRefName] = imageName
-descriptor = &desc
-log.Debugf("appending descriptor %#v", descriptor)
-if err := p.cache.AppendDescriptor(desc); err != nil {
-return ImageSource{}, fmt.Errorf("error appending descriptor to layout index: %v", err)
-}
-}
+desc := im.Manifests[0]
+// is this an image or an index?
+if desc.MediaType.IsIndex() {
+rc, err := p.cache.Blob(desc.Digest)
+if err != nil {
+return nil, fmt.Errorf("unable to get index blob: %v", err)
+}
+ii, err := v1.ParseIndexManifest(rc)
+if err != nil {
+return nil, fmt.Errorf("unable to parse index blob: %v", err)
+}
+for _, m := range ii.Manifests {
+if m.MediaType.IsImage() {
+descs = append(descs, m)
+}
+}
+} else if desc.MediaType.IsImage() {
+descs = append(descs, desc)
+}
+for _, desc := range descs {
+if desc.Platform != nil && desc.Platform.Architecture == architecture {
+// make sure that we have the correct image name annotation
+if desc.Annotations == nil {
+desc.Annotations = map[string]string{}
+}
+desc.Annotations[imagespec.AnnotationRefName] = imageName
+log.Debugf("appending descriptor %#v", desc)
+if err := p.cache.AppendDescriptor(desc); err != nil {
+return nil, fmt.Errorf("error appending descriptor to layout index: %v", err)
+}
+}
+}
}
-if descriptor != nil && descriptor.Platform == nil {
-descriptor.Platform = &v1.Platform{
-OS: linux,
-Architecture: architecture,
-}
-}
-return p.NewSource(
-ref,
-architecture,
-descriptor,
-), nil
+return descs, nil
}
// IndexWrite takes an image name and creates an index for the targets to which it points.
@ -255,35 +264,82 @@ func (p *Provider) IndexWrite(ref *reference.Spec, descriptors ...v1.Descriptor)
return ImageSource{}, fmt.Errorf("unable to get hash of existing index: %v", err) return ImageSource{}, fmt.Errorf("unable to get hash of existing index: %v", err)
} }
// we only care about avoiding duplicate arch/OS/Variant
-descReplace := map[string]v1.Descriptor{}
+var (
+descReplace = map[string]v1.Descriptor{}
+descNonReplace []v1.Descriptor
+)
for _, desc := range descriptors {
+// we do not replace "unknown" because those are attestations; we might remove attestations that point at things we remove
+if desc.Platform == nil || (desc.Platform.Architecture == "unknown" && desc.Platform.OS == "unknown") {
+descNonReplace = append(descNonReplace, desc)
+continue
+}
descReplace[fmt.Sprintf("%s/%s/%s", desc.Platform.OS, desc.Platform.Architecture, desc.Platform.OSVersion)] = desc
}
// now we can go through each one and see if it already exists, and, if so, replace it
-var manifests []v1.Descriptor
+// however, we do not replace attestations unless they point at something we are removing
+var (
+manifests []v1.Descriptor
+referencedDigests = map[string]bool{}
+)
for _, m := range manifest.Manifests {
if m.Platform != nil {
lookup := fmt.Sprintf("%s/%s/%s", m.Platform.OS, m.Platform.Architecture, m.Platform.OSVersion)
if desc, ok := descReplace[lookup]; ok {
manifests = append(manifests, desc)
+referencedDigests[desc.Digest.String()] = true
// already added, so do not need it in the lookup list any more
delete(descReplace, lookup)
continue
}
}
manifests = append(manifests, m)
+referencedDigests[m.Digest.String()] = true
}
// any left get added
for _, desc := range descReplace {
manifests = append(manifests, desc)
+referencedDigests[desc.Digest.String()] = true
}
-manifest.Manifests = manifests
+for _, desc := range descNonReplace {
+manifests = append(manifests, desc)
+referencedDigests[desc.Digest.String()] = true
+}
// before we complete, go through the manifests, and if any are attestations that point to something
// no longer there, remove them
// everything in the list already has its digest marked in the digests map, so we can just check that
manifest.Manifests = []v1.Descriptor{}
appliedManifests := map[v1.Hash]bool{}
for _, m := range manifests {
// we already added it; do not add it twice
if _, ok := appliedManifests[m.Digest]; ok {
continue
}
if len(m.Annotations) < 1 {
manifest.Manifests = append(manifest.Manifests, m)
appliedManifests[m.Digest] = true
continue
}
value, ok := m.Annotations[annotationDockerReferenceDigest]
if !ok {
manifest.Manifests = append(manifest.Manifests, m)
appliedManifests[m.Digest] = true
continue
}
if _, ok := referencedDigests[value]; ok {
manifest.Manifests = append(manifest.Manifests, m)
appliedManifests[m.Digest] = true
continue
}
// if we got this far, we have an attestation that points to something no longer in the index,
// so do not add it
}
im = *manifest
// remove the old index
if err := p.cache.RemoveBlob(oldhash); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old index file: %v", err)
}
} else {
// we did not have one, so create an index, store it, update the root index.json, and return
im = v1.IndexManifest{


@ -90,3 +90,8 @@ func (d ImageSource) V1TarReader(overrideName string) (io.ReadCloser, error) {
func (d ImageSource) Descriptor() *v1.Descriptor {
return nil
}
// SBoMs not supported in docker, but it is not an error, so just return nil.
func (d ImageSource) SBoMs() ([]io.ReadCloser, error) {
return nil, nil
}


@ -38,7 +38,7 @@ require (
github.com/rn/iso9660wrap v0.0.0-20171120145750-baf8d62ad315
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.6
github.com/sirupsen/logrus v1.9.0
-github.com/stretchr/testify v1.8.0
+github.com/stretchr/testify v1.8.4
github.com/surma/gocpio v1.0.2-0.20160926205914-fcb68777e7dc
github.com/vmware/govmomi v0.20.3
github.com/xeipuuv/gojsonschema v1.2.0
@ -54,6 +54,8 @@ require (
require (
github.com/Code-Hex/vz/v3 v3.0.0
+github.com/in-toto/in-toto-golang v0.5.0
+github.com/spdx/tools-golang v0.5.3
github.com/spf13/cobra v1.6.1
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
)
@ -67,6 +69,7 @@ require (
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
+github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092 // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.3.0 // indirect
@ -97,7 +100,6 @@ require (
github.com/gorilla/websocket v1.4.2 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
-github.com/in-toto/in-toto-golang v0.5.0 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/klauspost/compress v1.15.12 // indirect


@ -213,6 +213,8 @@ github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/alexflint/go-filemutex v1.1.0/go.mod h1:7P4iRhttt/nUvUOrYIhcpMzv2G6CY9UnI16Z+UJqRyk=
+github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092 h1:aM1rlcoLz8y5B2r4tTLMiVTrMtpfY0O8EScKJxaSaEc=
+github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092/go.mod h1:rYqSE9HbjzpHTI74vwPvae4ZVYZd1lue2ta6xHPdblA=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
@ -1329,6 +1331,9 @@ github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE
github.com/sourcegraph/go-diff v0.5.1/go.mod h1:j2dHj3m8aZgQO8lMTcTnBcXkRRRqi34cd2MNlA9u1mE=
github.com/sourcegraph/go-diff v0.5.3/go.mod h1:v9JDtjCE4HHHCZGId75rg8gkKKa98RVjBcBGsVmMmak=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/spdx/gordf v0.0.0-20201111095634-7098f93598fb/go.mod h1:uKWaldnbMnjsSAXRurWqqrdyZen1R7kxl8TkmWk2OyM=
+github.com/spdx/tools-golang v0.5.3 h1:ialnHeEYUC4+hkm5vJm4qz2x+oEJbS0mAMFrNXdQraY=
+github.com/spdx/tools-golang v0.5.3/go.mod h1:/ETOahiAo96Ob0/RAIBmFZw6XN0yTnyr/uFZm2NTMhI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
@ -1363,6 +1368,7 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
@ -1372,8 +1378,9 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/surma/gocpio v1.0.2-0.20160926205914-fcb68777e7dc h1:iA3Eg1OVd2o0M4M+0PBsBBssMz98L8CUH7x0xVkuyUA=
github.com/surma/gocpio v1.0.2-0.20160926205914-fcb68777e7dc/go.mod h1:zaLNaN+EDnfSnNdWPJJf9OZxWF817w5dt8JNzF9LCVI=
@ -2388,5 +2395,6 @@ sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZa
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
+sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=
sourcegraph.com/sqs/pbtypes v1.0.0/go.mod h1:3AciMUv4qUuRHRHhOG4TZOB+72GdPVz5k+c648qsFS4=


@ -9,7 +9,7 @@ import (
// drop-in 100% compatible replacement and 17% faster than compress/gzip.
gzip "github.com/klauspost/pgzip"
-"github.com/surma/gocpio"
+cpio "github.com/surma/gocpio"
)
// Writer is an io.WriteCloser that writes to an initrd
@ -23,8 +23,6 @@ func typeconv(thdr *tar.Header) int64 {
switch thdr.Typeflag {
case tar.TypeReg:
return cpio.TYPE_REG
-case tar.TypeRegA:
-return cpio.TYPE_REG
// Currently hard links not supported very well :)
// Convert to relative symlink as absolute will not work in container
// cpio does support hardlinks but file contents still duplicated, so rely


@ -219,7 +219,14 @@ func Build(m Moby, w io.Writer, opts BuildOpts) error {
if addition != nil {
err = addition(iw)
if err != nil {
-return fmt.Errorf("Failed to add additional files: %v", err)
+return fmt.Errorf("failed to add additional files: %v", err)
}
}
+// complete the sbom consolidation
+if opts.SbomGenerator != nil {
+if err := opts.SbomGenerator.Close(iw); err != nil {
+return err
+}
+}


@ -16,11 +16,11 @@ import (
// dockerRun is outside the linuxkit/docker package, because that is for caching, this is
// used for running to build images. runEnv is passed through to the docker run command.
-func dockerRun(input io.Reader, output io.Writer, img string, runEnv []string, args ...string) error {
-log.Debugf("docker run %s (input): %s", img, strings.Join(args, " "))
+func dockerRun(input io.Reader, output io.Writer, img string, runEnv []string, imageArgs ...string) error {
+log.Debugf("docker run %s (input): %s", img, strings.Join(imageArgs, " "))
docker, err := exec.LookPath("docker")
if err != nil {
-return errors.New("Docker does not seem to be installed")
+return errors.New("docker does not seem to be installed")
}
env := os.Environ()
@ -36,12 +36,13 @@ func dockerRun(input io.Reader, output io.Writer, img string, runEnv []string, a
}
var errbuf strings.Builder
-args = []string{"run", "--network=none", "--log-driver=none", "--rm", "-i"}
+args := []string{"run", "--network=none", "--log-driver=none", "--rm", "-i"}
for _, e := range runEnv {
args = append(args, "-e", e)
}
args = append(args, img)
+args = append(args, imageArgs...)
cmd := exec.Command(docker, args...)
cmd.Stderr = &errbuf
cmd.Stdin = input


@ -310,6 +310,21 @@ func ImageTar(ref *reference.Spec, prefix string, tw tarWriter, resolv string, o
}
}
}
// save the sbom to the sbom writer
if opts.SbomGenerator != nil {
sboms, err := src.SBoMs()
if err != nil {
return err
}
for _, sbom := range sboms {
// the sbom generator will escape out any problematic characters for us
if err := opts.SbomGenerator.Add(prefix, sbom); err != nil {
return err
}
}
}
return nil
}


@ -8,4 +8,5 @@ type BuildOpts struct {
CacheDir string
DockerCache bool
Arch string
+SbomGenerator *SbomGenerator
}


@ -0,0 +1,113 @@
package moby
import (
"archive/tar"
"bytes"
"errors"
"fmt"
"io"
"path/filepath"
"time"
"github.com/google/uuid"
spdxjson "github.com/spdx/tools-golang/json"
"github.com/spdx/tools-golang/spdx"
spdxcommon "github.com/spdx/tools-golang/spdx/v2/common"
spdxversion "github.com/spdx/tools-golang/spdx/v2/v2_3"
)
// SbomGenerator consolidates the SBoMs of the input images and generates the combined sbom output
type SbomGenerator struct {
filename string
closed bool
sboms []*spdx.Document
buildTime time.Time
}
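// NewSbomGenerator returns an SbomGenerator that consolidates SBoMs into the named
// file in the output image; if currentBuildTime is true, the SBoM creation time is
// time.Now() rather than the fixed default time used for reproducible builds.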
func NewSbomGenerator(filename string, currentBuildTime bool) (*SbomGenerator, error) {
if filename == "" {
return nil, errors.New("filename must be specified")
}
buildTime := defaultModTime
if currentBuildTime {
buildTime = time.Now()
}
return &SbomGenerator{filename, false, nil, buildTime}, nil
}
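// Add parses a single SPDX json SBoM from the given reader, prefixes its file and
// package paths with prefix, and queues it for inclusion in the consolidated output.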
func (s *SbomGenerator) Add(prefix string, sbom io.ReadCloser) error {
if s.closed {
return fmt.Errorf("sbom generator already closed")
}
doc, err := spdxjson.Read(sbom)
if err != nil {
return err
}
if err := sbom.Close(); err != nil {
return err
}
// change any paths to include the prefix
for i := range doc.Files {
doc.Files[i].FileName = filepath.Join(prefix, doc.Files[i].FileName)
}
for i := range doc.Packages {
doc.Packages[i].PackageFileName = filepath.Join(prefix, doc.Packages[i].PackageFileName)
// we would also need to add the prefix to each of doc.Packages[i].Files[], but those are pointers,
// so they point to the actual file structs we handled above
}
s.sboms = append(s.sboms, doc)
return nil
}
// Close finalizes generation of the sbom, merging the collected SBoMs together, writing the output file to a tar stream,
// and cleaning up any temporary files.
func (s *SbomGenerator) Close(tw *tar.Writer) error {
// merge all of the sboms together
doc := spdx.Document{
SPDXVersion: spdxversion.Version,
DataLicense: spdxversion.DataLicense,
DocumentName: "sbom",
DocumentNamespace: fmt.Sprintf("https://github.com/linuxkit/linuxkit/sbom-%s", uuid.New().String()),
CreationInfo: &spdx.CreationInfo{
LicenseListVersion: "3.20",
Creators: []spdxcommon.Creator{
{CreatorType: "Organization", Creator: "LinuxKit"},
{CreatorType: "Tool", Creator: "linuxkit"},
},
Created: s.buildTime.UTC().Format("2006-01-02T15:04:05Z"),
},
SPDXIdentifier: spdxcommon.ElementID("DOCUMENT"),
}
for _, sbom := range s.sboms {
doc.Packages = append(doc.Packages, sbom.Packages...)
doc.Files = append(doc.Files, sbom.Files...)
doc.OtherLicenses = append(doc.OtherLicenses, sbom.OtherLicenses...)
doc.Relationships = append(doc.Relationships, sbom.Relationships...)
doc.Annotations = append(doc.Annotations, sbom.Annotations...)
doc.ExternalDocumentReferences = append(doc.ExternalDocumentReferences, sbom.ExternalDocumentReferences...)
}
var buf bytes.Buffer
if err := spdxjson.Write(&doc, &buf); err != nil {
return err
}
// create the tar header for the consolidated sbom file
hdr := &tar.Header{
Name: s.filename,
Typeflag: tar.TypeReg,
Mode: 0o644,
ModTime: defaultModTime,
Uid: int(0),
Gid: int(0),
Format: tar.FormatPAX,
Size: int64(buf.Len()),
}
if err := tw.WriteHeader(hdr); err != nil {
return err
}
if _, err := io.Copy(tw, &buf); err != nil && err != io.EOF {
return fmt.Errorf("failed to write sbom: %v", err)
}
s.closed = true
return nil
}


@ -14,7 +14,7 @@ import (
const (
buildersEnvVar = "LINUXKIT_BUILDERS"
envVarCacheDir = "LINUXKIT_CACHE"
-defaultBuilderImage = "moby/buildkit:v0.11.0-rc2"
+defaultBuilderImage = "moby/buildkit:v0.12.3"
)
// some logic clarification:
@ -44,6 +44,7 @@ func addCmdRunPkgBuildPush(cmd *cobra.Command, withPush bool) *cobra.Command {
nobuild bool
manifest bool
cacheDir = flagOverEnvVarOverDefaultString{def: defaultLinuxkitCache(), envVar: envVarCacheDir}
+sbomScanner string
)
cmd.RunE = func(cmd *cobra.Command, args []string) error {
@ -88,6 +89,10 @@ func addCmdRunPkgBuildPush(cmd *cobra.Command, withPush bool) *cobra.Command {
opts = append(opts, pkglib.WithBuildTargetDockerCache())
}
+if sbomScanner != "false" {
+opts = append(opts, pkglib.WithBuildSbomScanner(sbomScanner))
+}
// skipPlatformsMap contains platforms that should be skipped
skipPlatformsMap := make(map[string]bool)
if skipPlatforms != "" {
@ -196,6 +201,7 @@ func addCmdRunPkgBuildPush(cmd *cobra.Command, withPush bool) *cobra.Command {
cmd.Flags().StringVar(&release, "release", "", "Release the given version")
cmd.Flags().BoolVar(&nobuild, "nobuild", false, "Skip building the image before pushing, conflicts with -force")
cmd.Flags().BoolVar(&manifest, "manifest", true, "Create and push multi-arch manifest")
+cmd.Flags().StringVar(&sbomScanner, "sbom-scanner", "", "SBOM scanner to use, must match the buildkit spec; set to blank to use the buildkit default; set to 'false' for no scanning")
return cmd
}


@ -25,22 +25,24 @@ import (
)
type buildOpts struct {
skipBuild bool
force bool
pull bool
ignoreCache bool
push bool
release string
manifest bool
targetDocker bool
cacheDir string
cacheProvider lktspec.CacheProvider
platforms []imagespec.Platform
builders map[string]string
runner dockerRunner
writer io.Writer
builderImage string
builderRestart bool
+sbomScan bool
+sbomScannerImage string
}
// BuildOpt allows callers to specify options to Build
@ -175,6 +177,15 @@ func WithBuildIgnoreCache() BuildOpt {
}
}
// WithBuildSbomScanner when building an image, scan using the provided scanner image; if blank, uses the default
func WithBuildSbomScanner(scanner string) BuildOpt {
return func(bo *buildOpts) error {
bo.sbomScan = true
bo.sbomScannerImage = scanner
return nil
}
}
// Build builds the package
func (p Pkg) Build(bos ...BuildOpt) error {
var bo buildOpts
@ -366,17 +377,19 @@ func (p Pkg) Build(bos ...BuildOpt) error {
// build for each arch and save in the linuxkit cache
for _, platform := range platformsToBuild {
-desc, err := p.buildArch(ctx, d, c, bo.builderImage, platform.Architecture, bo.builderRestart, writer, bo, imageBuildOpts)
+builtDescs, err := p.buildArch(ctx, d, c, bo.builderImage, platform.Architecture, bo.builderRestart, writer, bo, imageBuildOpts)
if err != nil {
return fmt.Errorf("error building for arch %s: %v", platform.Architecture, err)
}
-if desc == nil {
+if len(builtDescs) == 0 {
return fmt.Errorf("no valid descriptor returned for image for arch %s", platform.Architecture)
}
-if desc.Platform == nil {
-return fmt.Errorf("descriptor for platform %v has no information on the platform: %#v", platform, desc)
+for i, desc := range builtDescs {
+if desc.Platform == nil {
+return fmt.Errorf("descriptor %d for platform %v has no information on the platform: %#v", i, platform, desc)
+}
}
-descs = append(descs, *desc)
+descs = append(descs, builtDescs...)
}
// after build is done:
@ -507,9 +520,9 @@ func (p Pkg) Build(bos ...BuildOpt) error {
}
// buildArch builds the package for a single arch
-func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvider, builderImage, arch string, restart bool, writer io.Writer, bo buildOpts, imageBuildOpts types.ImageBuildOptions) (*registry.Descriptor, error) {
+func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvider, builderImage, arch string, restart bool, writer io.Writer, bo buildOpts, imageBuildOpts types.ImageBuildOptions) ([]registry.Descriptor, error) {
var (
-desc *registry.Descriptor
+descs []registry.Descriptor
tagArch string
tag = p.Tag()
)
@ -527,7 +540,7 @@ func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvi
if err != nil {
return nil, fmt.Errorf("could not find root descriptor for %s: %v", ref, err)
}
-return desc, nil
+return []registry.Descriptor{*desc}, nil
}
fmt.Fprintf(writer, "No image pulled for arch %s, continuing with build\n", arch)
}
@ -559,12 +572,12 @@ func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvi
stdout = pipew
eg.Go(func() error {
-source, err := c.ImageLoad(&ref, arch, piper)
+d, err := c.ImageLoad(&ref, arch, piper)
// send the error down the channel
if err != nil {
fmt.Fprintf(stdout, "cache.ImageLoad goroutine ended with error: %v\n", err)
} else {
-desc = source.Descriptor()
+descs = d
}
piper.Close()
return err
@ -577,7 +590,7 @@ func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvi
if bo.ignoreCache {
passCache = nil
}
-if err := d.build(ctx, tagArch, p.path, builderName, builderImage, platform, restart, passCache, buildCtx.Reader(), stdout, imageBuildOpts); err != nil {
+if err := d.build(ctx, tagArch, p.path, builderName, builderImage, platform, restart, passCache, buildCtx.Reader(), stdout, bo.sbomScan, bo.sbomScannerImage, imageBuildOpts); err != nil {
stdoutCloser()
if strings.Contains(err.Error(), "executor failed running [/dev/.buildkit_qemu_emulator") {
return nil, fmt.Errorf("buildkit was unable to emulate %s. check binfmt has been set up and works for this platform: %v", platform, err)
@ -591,7 +604,7 @@ func (p Pkg) buildArch(ctx context.Context, d dockerRunner, c lktspec.CacheProvi
return nil, err
}
-return desc, nil
+return descs, nil
}
type buildCtx struct {


@ -3,11 +3,11 @@ package pkglib
import (
"bytes"
"context"
+"crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
-"math/rand"
"os"
"strings"
"testing"
@ -55,7 +55,7 @@ func (d *dockerMocker) contextSupportCheck() error {
func (d *dockerMocker) builder(_ context.Context, _, _, _ string, _ bool) (*buildkitClient.Client, error) {
return nil, fmt.Errorf("not implemented")
}
-func (d *dockerMocker) build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, builderRestart bool, c lktspec.CacheProvider, r io.Reader, stdout io.Writer, imageBuildOpts dockertypes.ImageBuildOptions) error {
+func (d *dockerMocker) build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, builderRestart bool, c lktspec.CacheProvider, r io.Reader, stdout io.Writer, sbomScan bool, sbomScannerImage string, imageBuildOpts dockertypes.ImageBuildOptions) error {
if !d.enableBuild {
return errors.New("build disabled")
}
@ -84,7 +84,7 @@ func (d *dockerMocker) load(src io.Reader) error {
func (d *dockerMocker) pull(img string) (bool, error) {
if d.enablePull {
b := make([]byte, 256)
-rand.Read(b)
+_, _ = rand.Read(b)
d.images[img] = b
return true, nil
}
@ -107,8 +107,15 @@ func (c *cacheMocker) ImagePull(ref *reference.Spec, trustedRef, architecture st
}
// make some random data for a layer
b := make([]byte, 256)
-rand.Read(b)
-return c.imageWriteStream(ref, architecture, bytes.NewReader(b))
+_, _ = rand.Read(b)
+descs, err := c.imageWriteStream(ref, architecture, bytes.NewReader(b))
+if err != nil {
+return nil, err
+}
+if len(descs) != 1 {
+return nil, fmt.Errorf("expected 1 descriptor, got %d", len(descs))
+}
+return c.NewSource(ref, architecture, &descs[0]), nil
}
func (c *cacheMocker) ImageInCache(ref *reference.Spec, trustedRef, architecture string) (bool, error) {
@ -129,14 +136,14 @@ func (c *cacheMocker) ImageInRegistry(ref *reference.Spec, trustedRef, architect
return false, nil
}
-func (c *cacheMocker) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
+func (c *cacheMocker) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) ([]registry.Descriptor, error) {
if !c.enableImageLoad {
return nil, errors.New("ImageLoad disabled")
}
return c.imageWriteStream(ref, architecture, r)
}
-func (c *cacheMocker) imageWriteStream(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
+func (c *cacheMocker) imageWriteStream(ref *reference.Spec, architecture string, r io.Reader) ([]registry.Descriptor, error) {
image := fmt.Sprintf("%s-%s", ref.String(), architecture)
// make some random data for a layer
@ -181,8 +188,7 @@ func (c *cacheMocker) imageWriteStream(ref *reference.Spec, architecture string,
},
}
c.appendImage(image, desc)
-return c.NewSource(ref, architecture, &desc), nil
+return []registry.Descriptor{desc}, nil
}
func (c *cacheMocker) IndexWrite(ref *reference.Spec, descriptors ...registry.Descriptor) (lktspec.ImageSource, error) {
@ -309,12 +315,15 @@ func (c cacheMockerSource) V1TarReader(overrideName string) (io.ReadCloser, erro
return nil, fmt.Errorf("no image found with ref: %s", c.ref.String()) return nil, fmt.Errorf("no image found with ref: %s", c.ref.String())
} }
b := make([]byte, 256) b := make([]byte, 256)
rand.Read(b) _, _ = rand.Read(b)
return io.NopCloser(bytes.NewReader(b)), nil return io.NopCloser(bytes.NewReader(b)), nil
} }
func (c cacheMockerSource) Descriptor() *registry.Descriptor { func (c cacheMockerSource) Descriptor() *registry.Descriptor {
return c.descriptor return c.descriptor
} }
func (c cacheMockerSource) SBoMs() ([]io.ReadCloser, error) {
return nil, nil
}
func TestBuild(t *testing.T) {
var (


@ -48,11 +48,12 @@ const (
buildkitSocketPath = "/run/buildkit/buildkitd.sock"
buildkitWaitServer = 30 // seconds
buildkitCheckInterval = 1 // seconds
+sbomFrontEndKey = "attest:sbom"
)
type dockerRunner interface {
tag(ref, tag string) error
-build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, restart bool, c spec.CacheProvider, r io.Reader, stdout io.Writer, imageBuildOpts types.ImageBuildOptions) error
+build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, restart bool, c spec.CacheProvider, r io.Reader, stdout io.Writer, sbomScan bool, sbomScannerImage string, imageBuildOpts types.ImageBuildOptions) error
save(tgt string, refs ...string) error
load(src io.Reader) error
pull(img string) (bool, error)
@ -401,7 +402,7 @@ func (dr *dockerRunnerImpl) tag(ref, tag string) error {
return dr.command(nil, nil, nil, "image", "tag", ref, tag)
}
-func (dr *dockerRunnerImpl) build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, restart bool, c spec.CacheProvider, stdin io.Reader, stdout io.Writer, imageBuildOpts types.ImageBuildOptions) error {
+func (dr *dockerRunnerImpl) build(ctx context.Context, tag, pkg, dockerContext, builderImage, platform string, restart bool, c spec.CacheProvider, stdin io.Reader, stdout io.Writer, sbomScan bool, sbomScannerImage string, imageBuildOpts types.ImageBuildOptions) error {
// ensure we have a builder
client, err := dr.builder(ctx, dockerContext, builderImage, platform, restart)
if err != nil {
@ -443,6 +444,14 @@ func (dr *dockerRunnerImpl) build(ctx context.Context, tag, pkg, dockerContext,
frontendAttrs[fmt.Sprintf("label:%s", k)] = v frontendAttrs[fmt.Sprintf("label:%s", k)] = v
} }
if sbomScan {
var sbomValue string
if sbomScannerImage != "" {
sbomValue = fmt.Sprintf("generator=%s", sbomScannerImage)
}
frontendAttrs[sbomFrontEndKey] = sbomValue
}
solveOpts := buildkitClient.SolveOpt{
Frontend: "dockerfile.v0",
FrontendAttrs: frontendAttrs,
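The `attest:sbom` key set in this hunk is buildkit's standard frontend attribute for SBoM attestations. As a hedged illustration of the two values the code above can produce (the custom scanner reference below is hypothetical):

```go
// illustrative only: the two shapes the attest:sbom frontend attribute takes above
frontendAttrs[sbomFrontEndKey] = ""                                            // no scanner image given: buildkit falls back to its default syft-based scanner
frontendAttrs[sbomFrontEndKey] = "generator=docker.io/example/sbom-scanner:v1" // sbomScannerImage supplied: use a custom generator image (hypothetical reference)
```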


@@ -33,7 +33,7 @@ type CacheProvider interface
IndexWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (ImageSource, error)
// ImageLoad takes an OCI format image tar stream in the io.Reader and writes it to the cache. It should be
// efficient and only write missing blobs, based on their content hash.
ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (ImageSource, error)
ImageLoad(ref *reference.Spec, architecture string, r io.Reader) ([]v1.Descriptor, error)
// DescriptorWrite writes a descriptor to the cache index; it validates that it has a name
// and replaces any existing one
DescriptorWrite(ref *reference.Spec, descriptors v1.Descriptor) (ImageSource, error)
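With this signature change, a caller of `ImageLoad` receives the written descriptors rather than an `ImageSource`, and can record them in the index itself. A minimal sketch under that assumption (the `cache`, `ref`, and `tarReader` variables are illustrative):

```go
// load an OCI image tar into the cache, then record the returned descriptors in the index
descs, err := cache.ImageLoad(ref, "amd64", tarReader)
if err != nil {
	return err
}
if _, err := cache.IndexWrite(ref, descs...); err != nil {
	return err
}
```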


@@ -18,4 +18,6 @@ type ImageSource interface
Descriptor() *v1.Descriptor
// V1TarReader get the image as v1 tarball, also compatible with `docker load`. If name arg is not "", override name of image in tarfile from default of image.
V1TarReader(overrideName string) (io.ReadCloser, error)
// SBoMs gets the SBoMs for the image, if any are available
SBoMs() ([]io.ReadCloser, error)
}
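The new `SBoMs()` method returns one reader per SBoM document attached to the image, which the consolidation step in `linuxkit build` can then merge. A minimal consumer sketch under that assumption (the `collectSBoMs` helper and its error handling are illustrative, not part of this change):

```go
// collectSBoMs reads every SBoM attached to an image source so the documents
// can be merged later; it assumes this package's ImageSource and the "io" package.
func collectSBoMs(src ImageSource) ([][]byte, error) {
	readers, err := src.SBoMs()
	if err != nil {
		return nil, err
	}
	var docs [][]byte
	for _, rc := range readers {
		b, err := io.ReadAll(rc)
		_ = rc.Close()
		if err != nil {
			return nil, err
		}
		docs = append(docs, b)
	}
	return docs, nil
}
```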


@@ -0,0 +1,10 @@
permit:
- BSD.*
- CC0.*
- MIT.*
- Apache.*
- MPL.*
- ISC
- WTFPL
ignore-packages:


@@ -0,0 +1,30 @@
# If you prefer the allow list template instead of the deny list, see community template:
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work
# tools
.tmp
# test output
test/results
# IDE project files
.idea


@@ -0,0 +1,78 @@
#issues:
# # The list of ids of default excludes to include or disable.
# include:
# - EXC0002 # disable excluding of issues about comments from golint
linters:
# inverted configuration with `enable-all` and `disable` is not scalable during updates of golangci-lint
disable-all: true
enable:
- asciicheck
- bodyclose
- depguard
- dogsled
- dupl
- errcheck
- exportloopref
- funlen
- gocognit
- goconst
- gocritic
- gocyclo
- gofmt
- goprintffuncname
- gosec
- gosimple
- govet
- ineffassign
- misspell
- nakedret
- nolintlint
- revive
- staticcheck
- stylecheck
- typecheck
- unconvert
- unparam
- unused
- whitespace
# do not enable...
# - gochecknoglobals
# - gochecknoinits # this is too aggressive
# - rowserrcheck disabled per generics https://github.com/golangci/golangci-lint/issues/2649
# - godot
# - godox
# - goerr113
# - goimports # we're using gosimports now instead to account for extra whitespaces (see https://github.com/golang/go/issues/20818)
# - golint # deprecated
# - gomnd # this is too aggressive
# - interfacer # this is a good idea, but is no longer supported and is prone to false positives
# - lll # without a way to specify per-line exception cases, this is not usable
# - maligned # this is an excellent linter, but tricky to optimize and we are not sensitive to memory layout optimizations
# - nestif
# - prealloc # following this rule isn't consistently a good idea, as it sometimes forces unnecessary allocations that result in less idiomatic code
# - scopelint # deprecated
# - testpackage
# - wsl # this doesn't have an auto-fixer yet and is pretty noisy (https://github.com/bombsimon/wsl/issues/90)
linters-settings:
funlen:
# Checks the number of lines in a function.
# If lower than 0, disable the check.
# Default: 60
lines: 140
# Checks the number of statements in a function.
# If lower than 0, disable the check.
# Default: 40
statements: 100
gocognit:
# Minimal code complexity to report
# Default: 30 (but we recommend 10-20)
min-complexity: 80
gocyclo:
# Minimal code complexity to report.
# Default: 30 (but we recommend 10-20)
min-complexity: 50


@@ -0,0 +1,86 @@
# Contributing to go-struct-converter
If you are looking to contribute to this project and want to open a GitHub pull request ("PR"), there are a few guidelines of what we are looking for in patches. Make sure you go through this document and ensure that your code proposal is aligned.
## Sign off your work
The `sign-off` is an added line at the end of the explanation for the commit, certifying that you wrote it or otherwise have the right to submit it as an open-source patch. By submitting a contribution, you agree to be bound by the terms of the DCO Version 1.1 and Apache License Version 2.0.
Signing off a commit certifies the below Developer's Certificate of Origin (DCO):
```text
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
All contributions to this project are licensed under the [Apache License Version 2.0, January 2004](http://www.apache.org/licenses/).
When committing your change, you can add the required line manually so that it looks like this:
```text
Signed-off-by: John Doe <john.doe@example.com>
```
Alternatively, configure your Git client with your name and email to use the `-s` flag when creating a commit:
```text
$ git config --global user.name "John Doe"
$ git config --global user.email "john.doe@example.com"
```
Creating a signed-off commit is then possible with `-s` or `--signoff`:
```text
$ git commit -s -m "this is a commit message"
```
To double-check that the commit was signed-off, look at the log output:
```text
$ git log -1
commit 37ceh170e4hb283bb73d958f2036ee5k07e7fde7 (HEAD -> issue-35, origin/main, main)
Author: John Doe <john.doe@example.com>
Date: Mon Aug 1 11:27:13 2020 -0400
this is a commit message
Signed-off-by: John Doe <john.doe@example.com>
```
[//]: # "TODO: Commit guidelines, granular commits"
[//]: # "TODO: Commit guidelines, descriptive messages"
[//]: # "TODO: Commit guidelines, commit title, extra body description"
[//]: # "TODO: PR title and description"
## Test your changes
Ensure that your changes have passed the test suite.
Simply run `make test` to have all tests run and validate changes work properly.
## Document your changes
When proposed changes are modifying user-facing functionality or output, it is expected the PR will include updates to the documentation as well.


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,81 @@
TEMPDIR = ./.tmp
# commands and versions
LINTCMD = $(TEMPDIR)/golangci-lint run --tests=false --timeout=5m --config .golangci.yaml
GOIMPORTS_CMD = $(TEMPDIR)/gosimports -local github.com/anchore
# tool versions
GOLANGCILINT_VERSION = v1.50.1
GOSIMPORTS_VERSION = v0.3.4
BOUNCER_VERSION = v0.4.0
# formatting variables
BOLD := $(shell tput -T linux bold)
PURPLE := $(shell tput -T linux setaf 5)
GREEN := $(shell tput -T linux setaf 2)
CYAN := $(shell tput -T linux setaf 6)
RED := $(shell tput -T linux setaf 1)
RESET := $(shell tput -T linux sgr0)
TITLE := $(BOLD)$(PURPLE)
SUCCESS := $(BOLD)$(GREEN)
# test variables
RESULTSDIR = test/results
COVER_REPORT = $(RESULTSDIR)/unit-coverage-details.txt
COVER_TOTAL = $(RESULTSDIR)/unit-coverage-summary.txt
# the quality gate lower threshold for unit test total % coverage (by function statements)
COVERAGE_THRESHOLD := 80
$(RESULTSDIR):
mkdir -p $(RESULTSDIR)
$(TEMPDIR):
mkdir -p $(TEMPDIR)
.PHONY: bootstrap-tools
bootstrap-tools: $(TEMPDIR)
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(TEMPDIR)/ $(GOLANGCILINT_VERSION)
curl -sSfL https://raw.githubusercontent.com/wagoodman/go-bouncer/master/bouncer.sh | sh -s -- -b $(TEMPDIR)/ $(BOUNCER_VERSION)
# the only difference between goimports and gosimports is that gosimports removes extra whitespace between import blocks (see https://github.com/golang/go/issues/20818)
GOBIN="$(realpath $(TEMPDIR))" go install github.com/rinchsan/gosimports/cmd/gosimports@$(GOSIMPORTS_VERSION)
.PHONY: static-analysis
static-analysis: check-licenses lint
.PHONY: lint
lint: ## Run gofmt + golangci lint checks
$(call title,Running linters)
# ensure there are no go fmt differences
@printf "files with gofmt issues: [$(shell gofmt -l -s .)]\n"
@test -z "$(shell gofmt -l -s .)"
# run all golangci-lint rules
$(LINTCMD)
@[ -z "$(shell $(GOIMPORTS_CMD) -d .)" ] || (echo "goimports needs to be fixed" && false)
# go tooling does not play well with certain filename characters, ensure the common cases don't result in future "go get" failures
$(eval MALFORMED_FILENAMES := $(shell find . | grep -e ':'))
@bash -c "[[ '$(MALFORMED_FILENAMES)' == '' ]] || (printf '\nfound unsupported filename characters:\n$(MALFORMED_FILENAMES)\n\n' && false)"
.PHONY: lint-fix
lint-fix: ## Auto-format all source code + run golangci lint fixers
$(call title,Running lint fixers)
gofmt -w -s .
$(GOIMPORTS_CMD) -w .
$(LINTCMD) --fix
go mod tidy
.PHONY: check-licenses
check-licenses: ## Ensure transitive dependencies are compliant with the current license policy
$(TEMPDIR)/bouncer check ./...
.PHONY: unit
unit: $(RESULTSDIR) ## Run unit tests (with coverage)
$(call title,Running unit tests)
go test -coverprofile $(COVER_REPORT) $(shell go list ./... | grep -v anchore/syft/test)
@go tool cover -func $(COVER_REPORT) | grep total | awk '{print substr($$3, 1, length($$3)-1)}' > $(COVER_TOTAL)
@echo "Coverage: $$(cat $(COVER_TOTAL))"
@if [ $$(echo "$$(cat $(COVER_TOTAL)) >= $(COVERAGE_THRESHOLD)" | bc -l) -ne 1 ]; then echo "$(RED)$(BOLD)Failed coverage quality gate (> $(COVERAGE_THRESHOLD)%)$(RESET)" && false; fi
.PHONY: test
test: unit


@@ -0,0 +1,166 @@
# Go `struct` Converter
A library for converting between Go structs.
```go
chain := converter.NewChain(V1{}, V2{}, V3{})
chain.Convert(myV1struct, &myV3struct)
```
## Details
At its core, this library provides a `Convert` function, which automatically
handles converting fields with the same name, and "convertable"
types. Some examples are:
* `string` -> `string`
* `string` -> `*string`
* `int` -> `string`
* `string` -> `[]string`
The automatic conversions are implemented when there is an obvious way
to convert between the types. A lot more automatic conversions happen
-- see [the converter tests](converter_test.go) for a more comprehensive
list of what is currently supported.
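For illustration, a minimal sketch of a direct `Convert` call exercising a couple of these automatic conversions (the `From`/`To` types and their field values are hypothetical):

```go
type From struct {
	Count int
	Tag   string
}

type To struct {
	Count string   // int -> string
	Tag   []string // string -> []string
}

from := From{Count: 3, Tag: "latest"}
to := To{}
_ = converter.Convert(from, &to)
// to.Count == "3", to.Tag == []string{"latest"}
```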
Not everything can be handled automatically, however, so there is also
a `ConvertFrom` interface any struct in the graph can implement to
perform custom conversion, similar to how the stdlib `MarshalJSON` and
`UnmarshalJSON` would be implemented.
Additionally, and maybe most importantly, there is a `converter.Chain` available,
which orchestrates conversions between _multiple versions_ of structs. This could
be thought of similar to database migrations: given a starting struct and a target
struct, the `chain.Convert` function iterates through every intermediary migration
in order to arrive at the target struct.
## Basic Usage
To illustrate usage we'll start with a few basic structs, some of which
implement the `ConvertFrom` interface due to breaking changes:
```go
// --------- V1 struct definition below ---------
type V1 struct {
Name string
OldField string
}
// --------- V2 struct definition below ---------
type V2 struct {
Name string
NewField string // this was a renamed field
}
func (to *V2) ConvertFrom(from interface{}) error {
if from, ok := from.(V1); ok { // forward migration
to.NewField = from.OldField
}
return nil
}
// --------- V3 struct definition below ---------
type V3 struct {
Name []string
FinalField []string // this field was renamed and the type was changed
}
func (to *V3) ConvertFrom(from interface{}) error {
if from, ok := from.(V2); ok { // forward migration
to.FinalField = []string{from.NewField}
}
return nil
}
```
Given these type definitions, we can easily set up a conversion chain
like this:
```go
chain := converter.NewChain(V1{}, V2{}, V3{})
```
This chain can then be used to convert from an _older version_ to a _newer
version_. This is because our `ConvertFrom` definitions are only handling
_forward_ migrations.
This chain can be used to convert from a `V1` struct to a `V3` struct easily,
like this:
```go
v1 := V1{ /* ... somehow get a populated v1 struct ... */ }
v3 := V3{}
chain.Convert(v1, &v3)
```
Since we've defined our chain as `V1` &rarr; `V2` &rarr; `V3`, the chain will execute
conversions to all intermediary structs (`V2`, in this case) and ultimately end
when we've populated the `v3` instance.
Note we haven't needed to define any conversions on the `Name` field of any structs
since this one is convertible between structs: `string` &rarr; `string` &rarr; `[]string`.
## Backwards Migrations
If we wanted to _also_ provide backwards migrations, we could also easily add a case
to the `ConvertFrom` methods. The whole set of structs would look something like this:
```go
// --------- V1 struct definition below ---------
type V1 struct {
Name string
OldField string
}
func (to *V1) ConvertFrom(from interface{}) error {
if from, ok := from.(V2); ok { // backward migration
to.OldField = from.NewField
}
return nil
}
// --------- V2 struct definition below ---------
type V2 struct {
Name string
NewField string
}
func (to *V2) ConvertFrom(from interface{}) error {
if from, ok := from.(V1); ok { // forward migration
to.NewField = from.OldField
}
if from, ok := from.(V3); ok { // backward migration
to.NewField = from.FinalField[0]
}
return nil
}
// --------- V3 struct definition below ---------
type V3 struct {
Name []string
FinalField []string
}
func (to *V3) ConvertFrom(from interface{}) error {
if from, ok := from.(V2); ok { // forward migration
to.FinalField = []string{from.NewField}
}
return nil
}
```
At this point we could convert in either direction: for example, a
`V3` struct could be converted to a `V1` struct, with the caveat that
data may be lost where the data shapes have changed between versions.
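For example, a backward conversion under the definitions above might look like this (the field values are illustrative):

```go
v3 := V3{Name: []string{"svc"}, FinalField: []string{"value"}}
v1 := V1{}
// walks V3 -> V2 -> V1; V1.Name keeps only the first element of V3.Name
if err := chain.Convert(v3, &v1); err != nil {
	// handle the conversion error
}
// v1.OldField == "value"
```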
## Contributing
If you would like to contribute to this repository, please see the
[CONTRIBUTING.md](CONTRIBUTING.md).


@@ -0,0 +1,95 @@
package converter
import (
"fmt"
"reflect"
)
// NewChain takes a set of structs, in order, to allow for accurate chain.Convert(from, &to) calls. NewChain should
// be called with struct values in a manner similar to this:
// converter.NewChain(v1.Document{}, v2.Document{}, v3.Document{})
func NewChain(structs ...interface{}) Chain {
out := Chain{}
for _, s := range structs {
typ := reflect.TypeOf(s)
if isPtr(typ) { // these shouldn't be pointers, but check just to be safe
typ = typ.Elem()
}
out.Types = append(out.Types, typ)
}
return out
}
// Chain holds a set of types with which to migrate through when a `chain.Convert` call is made
type Chain struct {
Types []reflect.Type
}
// Convert converts from one type in the chain to the target type, calling each conversion in between
func (c Chain) Convert(from interface{}, to interface{}) (err error) {
fromValue := reflect.ValueOf(from)
fromType := fromValue.Type()
// handle incoming pointers
for isPtr(fromType) {
fromValue = fromValue.Elem()
fromType = fromType.Elem()
}
toValuePtr := reflect.ValueOf(to)
toTypePtr := toValuePtr.Type()
if !isPtr(toTypePtr) {
return fmt.Errorf("TO struct provided not a pointer, unable to set values: %v", to)
}
// toValue must be a pointer but need a reference to the struct type directly
toValue := toValuePtr.Elem()
toType := toValue.Type()
fromIdx := -1
toIdx := -1
for i, typ := range c.Types {
if typ == fromType {
fromIdx = i
}
if typ == toType {
toIdx = i
}
}
if fromIdx == -1 {
return fmt.Errorf("invalid FROM type provided, not in the conversion chain: %s", fromType.Name())
}
if toIdx == -1 {
return fmt.Errorf("invalid TO type provided, not in the conversion chain: %s", toType.Name())
}
last := from
for i := fromIdx; i != toIdx; {
// skip the first index, because that is the from type - start with the next conversion in the chain
if fromIdx < toIdx {
i++
} else {
i--
}
var next interface{}
if i == toIdx {
next = to
} else {
nextVal := reflect.New(c.Types[i])
next = nextVal.Interface() // this will be a pointer, which is fine to pass to both from and to in Convert
}
if err = Convert(last, next); err != nil {
return err
}
last = next
}
return nil
}


@@ -0,0 +1,334 @@
package converter
import (
"fmt"
"reflect"
"strconv"
)
// ConvertFrom interface allows structs to define custom conversion functions if the automated reflection-based Convert
// is not able to convert properties due to name changes or other factors.
type ConvertFrom interface {
ConvertFrom(interface{}) error
}
// Convert takes two objects, e.g. v2_1.Document and &v2_2.Document{} and attempts to map all the properties from one
// to the other. After the automatic mapping, if a struct implements the ConvertFrom interface, this is called to
// perform any additional conversion logic necessary.
func Convert(from interface{}, to interface{}) error {
fromValue := reflect.ValueOf(from)
toValuePtr := reflect.ValueOf(to)
toTypePtr := toValuePtr.Type()
if !isPtr(toTypePtr) {
return fmt.Errorf("TO value provided was not a pointer, unable to set value: %v", to)
}
toValue, err := getValue(fromValue, toTypePtr)
if err != nil {
return err
}
// don't set nil values
if toValue == nilValue {
return nil
}
// toValuePtr is the passed-in pointer, toValue is also the same type of pointer
toValuePtr.Elem().Set(toValue.Elem())
return nil
}
func getValue(fromValue reflect.Value, targetType reflect.Type) (reflect.Value, error) {
var err error
fromType := fromValue.Type()
var toValue reflect.Value
// handle incoming pointer Types
if isPtr(fromType) {
if fromValue.IsNil() {
return nilValue, nil
}
fromValue = fromValue.Elem()
if !fromValue.IsValid() || fromValue.IsZero() {
return nilValue, nil
}
fromType = fromValue.Type()
}
baseTargetType := targetType
if isPtr(targetType) {
baseTargetType = targetType.Elem()
}
switch {
case isStruct(fromType) && isStruct(baseTargetType):
// this always creates a pointer type
toValue = reflect.New(baseTargetType)
toValue = toValue.Elem()
for i := 0; i < fromType.NumField(); i++ {
fromField := fromType.Field(i)
fromFieldValue := fromValue.Field(i)
toField, exists := baseTargetType.FieldByName(fromField.Name)
if !exists {
continue
}
toFieldType := toField.Type
toFieldValue := toValue.FieldByName(toField.Name)
newValue, err := getValue(fromFieldValue, toFieldType)
if err != nil {
return nilValue, err
}
if newValue == nilValue {
continue
}
toFieldValue.Set(newValue)
}
// allow structs to implement a custom convert function from previous/next version struct
if reflect.PtrTo(baseTargetType).Implements(convertFromType) {
convertFrom := toValue.Addr().MethodByName(convertFromName)
if !convertFrom.IsValid() {
return nilValue, fmt.Errorf("unable to get ConvertFrom method")
}
args := []reflect.Value{fromValue}
out := convertFrom.Call(args)
err := out[0].Interface()
if err != nil {
return nilValue, fmt.Errorf("an error occurred calling %s.%s: %v", baseTargetType.Name(), convertFromName, err)
}
}
case isSlice(fromType) && isSlice(baseTargetType):
if fromValue.IsNil() {
return nilValue, nil
}
length := fromValue.Len()
targetElementType := baseTargetType.Elem()
toValue = reflect.MakeSlice(baseTargetType, length, length)
for i := 0; i < length; i++ {
v, err := getValue(fromValue.Index(i), targetElementType)
if err != nil {
return nilValue, err
}
if v.IsValid() {
toValue.Index(i).Set(v)
}
}
case isMap(fromType) && isMap(baseTargetType):
if fromValue.IsNil() {
return nilValue, nil
}
keyType := baseTargetType.Key()
elementType := baseTargetType.Elem()
toValue = reflect.MakeMap(baseTargetType)
for _, fromKey := range fromValue.MapKeys() {
fromVal := fromValue.MapIndex(fromKey)
k, err := getValue(fromKey, keyType)
if err != nil {
return nilValue, err
}
v, err := getValue(fromVal, elementType)
if err != nil {
return nilValue, err
}
if k == nilValue || v == nilValue {
continue
}
if v == nilValue {
continue
}
if k.IsValid() && v.IsValid() {
toValue.SetMapIndex(k, v)
}
}
default:
// TODO determine if there are other conversions
toValue = fromValue
}
// handle non-pointer returns -- the reflect.New earlier always creates a pointer
if !isPtr(baseTargetType) {
toValue = fromPtr(toValue)
}
toValue, err = convertValueTypes(toValue, baseTargetType)
if err != nil {
return nilValue, err
}
// handle elements which are now pointers
if isPtr(targetType) {
toValue = toPtr(toValue)
}
return toValue, nil
}
// convertValueTypes takes a value and a target type, and attempts to convert
// between the Types - e.g. string -> int. when this function is called the value
func convertValueTypes(value reflect.Value, targetType reflect.Type) (reflect.Value, error) {
typ := value.Type()
switch {
// if the Types are the same, just return the value
case typ.Kind() == targetType.Kind():
return value, nil
case value.IsZero() && isPrimitive(targetType):
case isPrimitive(typ) && isPrimitive(targetType):
// get a string representation of the value
str := fmt.Sprintf("%v", value.Interface()) // TODO is there a better way to get a string representation?
var err error
var out interface{}
switch {
case isString(targetType):
out = str
case isBool(targetType):
out, err = strconv.ParseBool(str)
case isInt(targetType):
out, err = strconv.Atoi(str)
case isUint(targetType):
out, err = strconv.ParseUint(str, 10, 64)
case isFloat(targetType):
out, err = strconv.ParseFloat(str, 64)
}
if err != nil {
return nilValue, err
}
v := reflect.ValueOf(out)
v = v.Convert(targetType)
return v, nil
case isSlice(typ) && isSlice(targetType):
// this should already be handled in getValue
case isSlice(typ):
// this may be lossy
if value.Len() > 0 {
v := value.Index(0)
v, err := convertValueTypes(v, targetType)
if err != nil {
return nilValue, err
}
return v, nil
}
return convertValueTypes(nilValue, targetType)
case isSlice(targetType):
elementType := targetType.Elem()
v, err := convertValueTypes(value, elementType)
if err != nil {
return nilValue, err
}
if v == nilValue {
return v, nil
}
slice := reflect.MakeSlice(targetType, 1, 1)
slice.Index(0).Set(v)
return slice, nil
}
return nilValue, fmt.Errorf("unable to convert from: %v to %v", value.Interface(), targetType.Name())
}
func isPtr(typ reflect.Type) bool {
return typ.Kind() == reflect.Ptr
}
func isPrimitive(typ reflect.Type) bool {
return isString(typ) || isBool(typ) || isInt(typ) || isUint(typ) || isFloat(typ)
}
func isString(typ reflect.Type) bool {
return typ.Kind() == reflect.String
}
func isBool(typ reflect.Type) bool {
return typ.Kind() == reflect.Bool
}
func isInt(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Int,
reflect.Int8,
reflect.Int16,
reflect.Int32,
reflect.Int64:
return true
}
return false
}
func isUint(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Uint,
reflect.Uint8,
reflect.Uint16,
reflect.Uint32,
reflect.Uint64:
return true
}
return false
}
func isFloat(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Float32,
reflect.Float64:
return true
}
return false
}
func isStruct(typ reflect.Type) bool {
return typ.Kind() == reflect.Struct
}
func isSlice(typ reflect.Type) bool {
return typ.Kind() == reflect.Slice
}
func isMap(typ reflect.Type) bool {
return typ.Kind() == reflect.Map
}
func toPtr(val reflect.Value) reflect.Value {
typ := val.Type()
if !isPtr(typ) {
// this creates a pointer type inherently
ptrVal := reflect.New(typ)
ptrVal.Elem().Set(val)
val = ptrVal
}
return val
}
func fromPtr(val reflect.Value) reflect.Value {
if isPtr(val.Type()) {
val = val.Elem()
}
return val
}
// convertFromName constant to find the ConvertFrom method
const convertFromName = "ConvertFrom"
var (
// nilValue is returned in a number of cases when a value should not be set
nilValue = reflect.ValueOf(nil)
// convertFromType is the type to check for ConvertFrom implementations
convertFromType = reflect.TypeOf((*ConvertFrom)(nil)).Elem()
)


@@ -0,0 +1,550 @@
The tools-golang source code is provided and may be used, at your option,
under either:
* Apache License, version 2.0 (Apache-2.0), OR
* GNU General Public License, version 2.0 or later (GPL-2.0-or-later).
Copies of both licenses are included below.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
View File
@ -0,0 +1,399 @@
The tools-golang documentation is provided under the Creative Commons Attribution
4.0 International license (CC-BY-4.0), a copy of which is provided below.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.
View File
@ -0,0 +1,39 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package convert
import (
"fmt"
"reflect"
converter "github.com/anchore/go-struct-converter"
"github.com/spdx/tools-golang/spdx/common"
"github.com/spdx/tools-golang/spdx/v2/v2_1"
"github.com/spdx/tools-golang/spdx/v2/v2_2"
"github.com/spdx/tools-golang/spdx/v2/v2_3"
)
var DocumentChain = converter.NewChain(
v2_1.Document{},
v2_2.Document{},
v2_3.Document{},
)
// Document converts from one document to another document
// For example, converting a document to the latest version could be done like:
//
// sourceDoc := // e.g. a v2_2.Document from somewhere
// var targetDoc spdx.Document // this can be any document version
// err := convert.Document(sourceDoc, &targetDoc) // the target must be passed as a pointer
func Document(from common.AnyDocument, to common.AnyDocument) error {
if !IsPtr(to) {
return fmt.Errorf("struct to convert to must be a pointer")
}
from = FromPtr(from)
if reflect.TypeOf(from) == reflect.TypeOf(FromPtr(to)) {
reflect.ValueOf(to).Elem().Set(reflect.ValueOf(from))
return nil
}
return DocumentChain.Convert(from, to)
}
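The converter above is the entry point used to move a parsed document between SPDX versions, mirroring the example in its doc comment. A minimal usage sketch (the document name and field values are illustrative only):

```go
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/convert"
	"github.com/spdx/tools-golang/spdx"
	"github.com/spdx/tools-golang/spdx/v2/v2_2"
)

func main() {
	// A document parsed elsewhere as SPDX 2.2 (only a couple of fields set here).
	older := v2_2.Document{SPDXVersion: v2_2.Version, DocumentName: "example"}

	// The target must be passed as a pointer; spdx.Document aliases the latest model (v2.3).
	var latest spdx.Document
	if err := convert.Document(older, &latest); err != nil {
		fmt.Println("convert failed:", err)
		return
	}

	// Fields are copied across the version chain; each target's ConvertFrom hook
	// stamps its own version string.
	fmt.Println(latest.SPDXVersion, latest.DocumentName)
}
```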
View File
@ -0,0 +1,50 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package convert
import (
"fmt"
"reflect"
"github.com/spdx/tools-golang/spdx/common"
)
// FromPtr accepts a document or a document pointer and returns the direct struct reference
func FromPtr(doc common.AnyDocument) common.AnyDocument {
value := reflect.ValueOf(doc)
for value.Type().Kind() == reflect.Ptr {
value = value.Elem()
}
return value.Interface()
}
func IsPtr(obj common.AnyDocument) bool {
t := reflect.TypeOf(obj)
if t.Kind() == reflect.Interface {
t = t.Elem()
}
return t.Kind() == reflect.Ptr
}
func Describe(o interface{}) string {
value := reflect.ValueOf(o)
typ := value.Type()
prefix := ""
for typ.Kind() == reflect.Ptr {
prefix += "*"
value = value.Elem()
typ = value.Type()
}
str := limit(fmt.Sprintf("%+v", value.Interface()), 300)
name := fmt.Sprintf("%s.%s%s", typ.PkgPath(), prefix, typ.Name())
return fmt.Sprintf("%s: %s", name, str)
}
func limit(text string, length int) string {
if length <= 0 || len(text) <= length+3 {
return text
}
r := []rune(text)
r = r[:length]
return string(r) + "..."
}
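The behaviour of these reflection helpers is easier to see with concrete values. A quick sketch, assuming the v2_3 document type from this module:

```go
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/convert"
	"github.com/spdx/tools-golang/spdx/v2/v2_3"
)

func main() {
	doc := &v2_3.Document{DocumentName: "demo"}

	fmt.Println(convert.IsPtr(doc))  // true
	fmt.Println(convert.IsPtr(*doc)) // false

	// FromPtr unwraps any level of pointer and returns the struct value.
	val := convert.FromPtr(doc)
	fmt.Printf("%T\n", val) // v2_3.Document

	// Describe prints "<pkgpath>.*Document: {...}" with the value truncated to ~300 characters.
	fmt.Println(convert.Describe(doc))
}
```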
View File
@ -0,0 +1,83 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package json
import (
"bytes"
"encoding/json"
"fmt"
"io"
"github.com/spdx/tools-golang/convert"
"github.com/spdx/tools-golang/spdx"
"github.com/spdx/tools-golang/spdx/common"
"github.com/spdx/tools-golang/spdx/v2/v2_1"
"github.com/spdx/tools-golang/spdx/v2/v2_2"
"github.com/spdx/tools-golang/spdx/v2/v2_3"
)
// Read takes an io.Reader and returns a fully-parsed current model SPDX Document
// or an error if any error is encountered.
func Read(content io.Reader) (*spdx.Document, error) {
doc := spdx.Document{}
err := ReadInto(content, &doc)
return &doc, err
}
// ReadInto takes an io.Reader, reads in the SPDX document at whatever version it declares,
// and converts it into the version of the document passed in
func ReadInto(content io.Reader, doc common.AnyDocument) error {
if !convert.IsPtr(doc) {
return fmt.Errorf("doc to read into must be a pointer")
}
buf := new(bytes.Buffer)
_, err := buf.ReadFrom(content)
if err != nil {
return err
}
var data interface{}
err = json.Unmarshal(buf.Bytes(), &data)
if err != nil {
return err
}
val, ok := data.(map[string]interface{})
if !ok {
return fmt.Errorf("not a valid SPDX JSON document")
}
version, ok := val["spdxVersion"]
if !ok {
return fmt.Errorf("JSON document does not contain spdxVersion field")
}
switch version {
case v2_1.Version:
var doc v2_1.Document
err = json.Unmarshal(buf.Bytes(), &doc)
if err != nil {
return err
}
data = doc
case v2_2.Version:
var doc v2_2.Document
err = json.Unmarshal(buf.Bytes(), &doc)
if err != nil {
return err
}
data = doc
case v2_3.Version:
var doc v2_3.Document
err = json.Unmarshal(buf.Bytes(), &doc)
if err != nil {
return err
}
data = doc
default:
return fmt.Errorf("unsupported SPDX version: %s", version)
}
return convert.Document(data, doc)
}
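A usage sketch for the reader above; the file name is only illustrative:

```go
package main

import (
	"fmt"
	"os"

	spdxjson "github.com/spdx/tools-golang/json"
)

func main() {
	f, err := os.Open("sbom.spdx.json") // illustrative path
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	// Read detects the spdxVersion declared in the file (2.1, 2.2 or 2.3)
	// and converts it to the current model, *spdx.Document.
	doc, err := spdxjson.Read(f)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(doc.SPDXVersion, doc.DocumentName)
}
```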
View File
@ -0,0 +1,33 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package json
import (
"encoding/json"
"io"
"github.com/spdx/tools-golang/spdx/common"
)
type WriteOption func(*json.Encoder)
func Indent(indent string) WriteOption {
return func(e *json.Encoder) {
e.SetIndent("", indent)
}
}
func EscapeHTML(escape bool) WriteOption {
return func(e *json.Encoder) {
e.SetEscapeHTML(escape)
}
}
// Write takes an SPDX Document and an io.Writer, and writes the document to the writer in JSON format.
func Write(doc common.AnyDocument, w io.Writer, opts ...WriteOption) error {
e := json.NewEncoder(w)
for _, opt := range opts {
opt(e)
}
return e.Encode(doc)
}
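And the matching writer, with the functional options applied to the underlying encoder; the document contents and output path are a minimal illustration:

```go
package main

import (
	"fmt"
	"os"

	spdxjson "github.com/spdx/tools-golang/json"
	"github.com/spdx/tools-golang/spdx"
)

func main() {
	doc := &spdx.Document{
		SPDXVersion:    spdx.Version,
		DataLicense:    spdx.DataLicense,
		SPDXIdentifier: "DOCUMENT",
		DocumentName:   "demo",
	}

	out, err := os.Create("demo.spdx.json") // illustrative path
	if err != nil {
		fmt.Println("create failed:", err)
		return
	}
	defer out.Close()

	// Indent and EscapeHTML configure the json.Encoder before the document is encoded.
	if err := spdxjson.Write(doc, out, spdxjson.Indent("  "), spdxjson.EscapeHTML(false)); err != nil {
		fmt.Println("write failed:", err)
	}
}
```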
View File
@ -0,0 +1,6 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
// AnyDocument is a placeholder for allowing any SPDX document to be used in function args
type AnyDocument interface{}
View File
@ -0,0 +1,133 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
// Package spdx contains references to the latest spdx version
package spdx
import (
"github.com/spdx/tools-golang/spdx/v2/common"
latest "github.com/spdx/tools-golang/spdx/v2/v2_3"
)
const (
Version = latest.Version
DataLicense = latest.DataLicense
)
type (
Annotation = latest.Annotation
ArtifactOfProject = latest.ArtifactOfProject
CreationInfo = latest.CreationInfo
Document = latest.Document
ExternalDocumentRef = latest.ExternalDocumentRef
File = latest.File
OtherLicense = latest.OtherLicense
Package = latest.Package
PackageExternalReference = latest.PackageExternalReference
Relationship = latest.Relationship
Review = latest.Review
Snippet = latest.Snippet
)
type (
Annotator = common.Annotator
Checksum = common.Checksum
ChecksumAlgorithm = common.ChecksumAlgorithm
Creator = common.Creator
DocElementID = common.DocElementID
ElementID = common.ElementID
Originator = common.Originator
PackageVerificationCode = common.PackageVerificationCode
SnippetRange = common.SnippetRange
SnippetRangePointer = common.SnippetRangePointer
Supplier = common.Supplier
)
const (
SHA224 = common.SHA224
SHA1 = common.SHA1
SHA256 = common.SHA256
SHA384 = common.SHA384
SHA512 = common.SHA512
MD2 = common.MD2
MD4 = common.MD4
MD5 = common.MD5
MD6 = common.MD6
SHA3_256 = common.SHA3_256
SHA3_384 = common.SHA3_384
SHA3_512 = common.SHA3_512
BLAKE2b_256 = common.BLAKE2b_256
BLAKE2b_384 = common.BLAKE2b_384
BLAKE2b_512 = common.BLAKE2b_512
BLAKE3 = common.BLAKE3
ADLER32 = common.ADLER32
)
const (
// F.2 Security types
CategorySecurity = common.CategorySecurity
SecurityCPE23Type = common.TypeSecurityCPE23Type
SecurityCPE22Type = common.TypeSecurityCPE22Type
SecurityAdvisory = common.TypeSecurityAdvisory
SecurityFix = common.TypeSecurityFix
SecurityUrl = common.TypeSecurityUrl
SecuritySwid = common.TypeSecuritySwid
// F.3 Package-Manager types
CategoryPackageManager = common.CategoryPackageManager
PackageManagerMavenCentral = common.TypePackageManagerMavenCentral
PackageManagerNpm = common.TypePackageManagerNpm
PackageManagerNuGet = common.TypePackageManagerNuGet
PackageManagerBower = common.TypePackageManagerBower
PackageManagerPURL = common.TypePackageManagerPURL
// F.4 Persistent-Id types
CategoryPersistentId = common.CategoryPersistentId
TypePersistentIdSwh = common.TypePersistentIdSwh
TypePersistentIdGitoid = common.TypePersistentIdGitoid
// 11.1 Relationship field types
RelationshipDescribes = common.TypeRelationshipDescribe
RelationshipDescribedBy = common.TypeRelationshipDescribeBy
RelationshipContains = common.TypeRelationshipContains
RelationshipContainedBy = common.TypeRelationshipContainedBy
RelationshipDependsOn = common.TypeRelationshipDependsOn
RelationshipDependencyOf = common.TypeRelationshipDependencyOf
RelationshipBuildDependencyOf = common.TypeRelationshipBuildDependencyOf
RelationshipDevDependencyOf = common.TypeRelationshipDevDependencyOf
RelationshipOptionalDependencyOf = common.TypeRelationshipOptionalDependencyOf
RelationshipProvidedDependencyOf = common.TypeRelationshipProvidedDependencyOf
RelationshipTestDependencyOf = common.TypeRelationshipTestDependencyOf
RelationshipRuntimeDependencyOf = common.TypeRelationshipRuntimeDependencyOf
RelationshipExampleOf = common.TypeRelationshipExampleOf
RelationshipGenerates = common.TypeRelationshipGenerates
RelationshipGeneratedFrom = common.TypeRelationshipGeneratedFrom
RelationshipAncestorOf = common.TypeRelationshipAncestorOf
RelationshipDescendantOf = common.TypeRelationshipDescendantOf
RelationshipVariantOf = common.TypeRelationshipVariantOf
RelationshipDistributionArtifact = common.TypeRelationshipDistributionArtifact
RelationshipPatchFor = common.TypeRelationshipPatchFor
RelationshipPatchApplied = common.TypeRelationshipPatchApplied
RelationshipCopyOf = common.TypeRelationshipCopyOf
RelationshipFileAdded = common.TypeRelationshipFileAdded
RelationshipFileDeleted = common.TypeRelationshipFileDeleted
RelationshipFileModified = common.TypeRelationshipFileModified
RelationshipExpandedFromArchive = common.TypeRelationshipExpandedFromArchive
RelationshipDynamicLink = common.TypeRelationshipDynamicLink
RelationshipStaticLink = common.TypeRelationshipStaticLink
RelationshipDataFileOf = common.TypeRelationshipDataFileOf
RelationshipTestCaseOf = common.TypeRelationshipTestCaseOf
RelationshipBuildToolOf = common.TypeRelationshipBuildToolOf
RelationshipDevToolOf = common.TypeRelationshipDevToolOf
RelationshipTestOf = common.TypeRelationshipTestOf
RelationshipTestToolOf = common.TypeRelationshipTestToolOf
RelationshipDocumentationOf = common.TypeRelationshipDocumentationOf
RelationshipOptionalComponentOf = common.TypeRelationshipOptionalComponentOf
RelationshipMetafileOf = common.TypeRelationshipMetafileOf
RelationshipPackageOf = common.TypeRelationshipPackageOf
RelationshipAmends = common.TypeRelationshipAmends
RelationshipPrerequisiteFor = common.TypeRelationshipPrerequisiteFor
RelationshipHasPrerequisite = common.TypeRelationshipHasPrerequisite
RelationshipRequirementDescriptionFor = common.TypeRelationshipRequirementDescriptionFor
RelationshipSpecificationFor = common.TypeRelationshipSpecificationFor
RelationshipOther = common.TypeRelationshipOther
)
View File
@ -0,0 +1,44 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
import (
"encoding/json"
"fmt"
"strings"
)
type Annotator struct {
Annotator string
// AnnotatorType: one of "Person", "Organization" or "Tool"
AnnotatorType string
}
// UnmarshalJSON takes an annotator in the typical one-line format and parses it into an Annotator struct.
// This function is also used when unmarshalling YAML
func (a *Annotator) UnmarshalJSON(data []byte) error {
// annotator will simply be a string
annotatorStr := string(data)
annotatorStr = strings.Trim(annotatorStr, "\"")
annotatorFields := strings.SplitN(annotatorStr, ": ", 2)
if len(annotatorFields) != 2 {
return fmt.Errorf("failed to parse Annotator '%s'", annotatorStr)
}
a.AnnotatorType = annotatorFields[0]
a.Annotator = annotatorFields[1]
return nil
}
// MarshalJSON converts the receiver into a slice of bytes representing an Annotator in string form.
// This function is also used when marshalling to YAML
func (a Annotator) MarshalJSON() ([]byte, error) {
if a.Annotator != "" {
return json.Marshal(fmt.Sprintf("%s: %s", a.AnnotatorType, a.Annotator))
}
return []byte{}, nil
}
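The effect of the custom (un)marshallers is that an annotator travels as a single string on the wire. A small round-trip sketch (the name is made up):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
)

func main() {
	// The wire form is "<type>: <value>".
	var a common.Annotator
	if err := json.Unmarshal([]byte(`"Person: Jane Doe (jane@example.com)"`), &a); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Println(a.AnnotatorType, "/", a.Annotator) // Person / Jane Doe (jane@example.com)

	out, _ := json.Marshal(a)
	fmt.Println(string(out)) // "Person: Jane Doe (jane@example.com)"
}
```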
View File
@ -0,0 +1,34 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
// ChecksumAlgorithm represents the algorithm used to generate the file checksum in the Checksum struct.
type ChecksumAlgorithm string
// The checksum algorithms mentioned in the spec https://spdx.github.io/spdx-spec/4-file-information/#44-file-checksum
const (
SHA224 ChecksumAlgorithm = "SHA224"
SHA1 ChecksumAlgorithm = "SHA1"
SHA256 ChecksumAlgorithm = "SHA256"
SHA384 ChecksumAlgorithm = "SHA384"
SHA512 ChecksumAlgorithm = "SHA512"
MD2 ChecksumAlgorithm = "MD2"
MD4 ChecksumAlgorithm = "MD4"
MD5 ChecksumAlgorithm = "MD5"
MD6 ChecksumAlgorithm = "MD6"
SHA3_256 ChecksumAlgorithm = "SHA3-256"
SHA3_384 ChecksumAlgorithm = "SHA3-384"
SHA3_512 ChecksumAlgorithm = "SHA3-512"
BLAKE2b_256 ChecksumAlgorithm = "BLAKE2b-256"
BLAKE2b_384 ChecksumAlgorithm = "BLAKE2b-384"
BLAKE2b_512 ChecksumAlgorithm = "BLAKE2b-512"
BLAKE3 ChecksumAlgorithm = "BLAKE3"
ADLER32 ChecksumAlgorithm = "ADLER32"
)
// Checksum provides a unique identifier to match analysis information on each specific file in a package.
// The Algorithm field describes the ChecksumAlgorithm used and the Value represents the file checksum
type Checksum struct {
Algorithm ChecksumAlgorithm `json:"algorithm"`
Value string `json:"checksumValue"`
}
View File
@ -0,0 +1,44 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
import (
"encoding/json"
"fmt"
"strings"
)
// Creator is a wrapper around the Creator SPDX field. The SPDX field contains two values, which requires special
// handling in order to marshal/unmarshal it to/from Go data types.
type Creator struct {
Creator string
// CreatorType should be one of "Person", "Organization", or "Tool"
CreatorType string
}
// UnmarshalJSON takes an annotator in the typical one-line format and parses it into a Creator struct.
// This function is also used when unmarshalling YAML
func (c *Creator) UnmarshalJSON(data []byte) error {
str := string(data)
str = strings.Trim(str, "\"")
fields := strings.SplitN(str, ": ", 2)
if len(fields) != 2 {
return fmt.Errorf("failed to parse Creator '%s'", str)
}
c.CreatorType = fields[0]
c.Creator = fields[1]
return nil
}
// MarshalJSON converts the receiver into a slice of bytes representing a Creator in string form.
// This function is also used when marshalling to YAML
func (c Creator) MarshalJSON() ([]byte, error) {
if c.Creator != "" {
return json.Marshal(fmt.Sprintf("%s: %s", c.CreatorType, c.Creator))
}
return []byte{}, nil
}
View File
@ -0,0 +1,74 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
// Constants for various string types
const (
// F.2 Security types
CategorySecurity string = "SECURITY"
TypeSecurityCPE23Type string = "cpe23Type"
TypeSecurityCPE22Type string = "cpe22Type"
TypeSecurityAdvisory string = "advisory"
TypeSecurityFix string = "fix"
TypeSecurityUrl string = "url"
TypeSecuritySwid string = "swid"
// F.3 Package-Manager types
CategoryPackageManager string = "PACKAGE-MANAGER"
TypePackageManagerMavenCentral string = "maven-central"
TypePackageManagerNpm string = "npm"
TypePackageManagerNuGet string = "nuget"
TypePackageManagerBower string = "bower"
TypePackageManagerPURL string = "purl"
// F.4 Persistent-Id types
CategoryPersistentId string = "PERSISTENT-ID"
TypePersistentIdSwh string = "swh"
TypePersistentIdGitoid string = "gitoid"
// 11.1 Relationship field types
TypeRelationshipDescribe string = "DESCRIBES"
TypeRelationshipDescribeBy string = "DESCRIBED_BY"
TypeRelationshipContains string = "CONTAINS"
TypeRelationshipContainedBy string = "CONTAINED_BY"
TypeRelationshipDependsOn string = "DEPENDS_ON"
TypeRelationshipDependencyOf string = "DEPENDENCY_OF"
TypeRelationshipBuildDependencyOf string = "BUILD_DEPENDENCY_OF"
TypeRelationshipDevDependencyOf string = "DEV_DEPENDENCY_OF"
TypeRelationshipOptionalDependencyOf string = "OPTIONAL_DEPENDENCY_OF"
TypeRelationshipProvidedDependencyOf string = "PROVIDED_DEPENDENCY_OF"
TypeRelationshipTestDependencyOf string = "TEST_DEPENDENCY_OF"
TypeRelationshipRuntimeDependencyOf string = "RUNTIME_DEPENDENCY_OF"
TypeRelationshipExampleOf string = "EXAMPLE_OF"
TypeRelationshipGenerates string = "GENERATES"
TypeRelationshipGeneratedFrom string = "GENERATED_FROM"
TypeRelationshipAncestorOf string = "ANCESTOR_OF"
TypeRelationshipDescendantOf string = "DESCENDANT_OF"
TypeRelationshipVariantOf string = "VARIANT_OF"
TypeRelationshipDistributionArtifact string = "DISTRIBUTION_ARTIFACT"
TypeRelationshipPatchFor string = "PATCH_FOR"
TypeRelationshipPatchApplied string = "PATCH_APPLIED"
TypeRelationshipCopyOf string = "COPY_OF"
TypeRelationshipFileAdded string = "FILE_ADDED"
TypeRelationshipFileDeleted string = "FILE_DELETED"
TypeRelationshipFileModified string = "FILE_MODIFIED"
TypeRelationshipExpandedFromArchive string = "EXPANDED_FROM_ARCHIVE"
TypeRelationshipDynamicLink string = "DYNAMIC_LINK"
TypeRelationshipStaticLink string = "STATIC_LINK"
TypeRelationshipDataFileOf string = "DATA_FILE_OF"
TypeRelationshipTestCaseOf string = "TEST_CASE_OF"
TypeRelationshipBuildToolOf string = "BUILD_TOOL_OF"
TypeRelationshipDevToolOf string = "DEV_TOOL_OF"
TypeRelationshipTestOf string = "TEST_OF"
TypeRelationshipTestToolOf string = "TEST_TOOL_OF"
TypeRelationshipDocumentationOf string = "DOCUMENTATION_OF"
TypeRelationshipOptionalComponentOf string = "OPTIONAL_COMPONENT_OF"
TypeRelationshipMetafileOf string = "METAFILE_OF"
TypeRelationshipPackageOf string = "PACKAGE_OF"
TypeRelationshipAmends string = "AMENDS"
TypeRelationshipPrerequisiteFor string = "PREREQUISITE_FOR"
TypeRelationshipHasPrerequisite string = "HAS_PREREQUISITE"
TypeRelationshipRequirementDescriptionFor string = "REQUIREMENT_DESCRIPTION_FOR"
TypeRelationshipSpecificationFor string = "SPECIFICATION_FOR"
TypeRelationshipOther string = "OTHER"
)
View File
@ -0,0 +1,173 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
import (
"encoding/json"
"fmt"
"strings"
)
const (
spdxRefPrefix = "SPDXRef-"
documentRefPrefix = "DocumentRef-"
)
// ElementID represents the identifier string portion of an SPDX element
// identifier. DocElementID should be used for any attributes which can
// contain identifiers defined in a different SPDX document.
// ElementIDs should NOT contain the mandatory 'SPDXRef-' portion.
type ElementID string
// MarshalJSON returns an SPDXRef- prefixed JSON string
func (d ElementID) MarshalJSON() ([]byte, error) {
return json.Marshal(prefixElementId(d))
}
// UnmarshalJSON validates SPDXRef- prefixes and removes them when processing ElementIDs
func (d *ElementID) UnmarshalJSON(data []byte) error {
// SPDX identifier will simply be a string
idStr := string(data)
idStr = strings.Trim(idStr, "\"")
e, err := trimElementIdPrefix(idStr)
if err != nil {
return err
}
*d = e
return nil
}
// prefixElementId adds the SPDXRef- prefix to an element ID if it does not have one
func prefixElementId(id ElementID) string {
val := string(id)
if !strings.HasPrefix(val, spdxRefPrefix) {
return spdxRefPrefix + val
}
return val
}
// trimElementIdPrefix removes the SPDXRef- prefix from an element ID string or returns an error if it
// does not start with SPDXRef-
func trimElementIdPrefix(id string) (ElementID, error) {
// handle SPDXRef-
idFields := strings.SplitN(id, spdxRefPrefix, 2)
if len(idFields) != 2 {
return "", fmt.Errorf("failed to parse SPDX identifier '%s'", id)
}
e := ElementID(idFields[1])
return e, nil
}
// DocElementID represents an SPDX element identifier that could be defined
// in a different SPDX document, and therefore could have a "DocumentRef-"
// portion, such as Relationships and Annotations.
// ElementID is used for attributes in which a "DocumentRef-" portion cannot
// appear, such as a Package or File definition (since it is necessarily
// being defined in the present document).
// DocumentRefID will be the empty string for elements defined in the
// present document.
// DocElementIDs should NOT contain the mandatory 'DocumentRef-' or
// 'SPDXRef-' portions.
// SpecialID is used ONLY if the DocElementID matches a defined set of
// permitted special values for a particular field, e.g. "NONE" or
// "NOASSERTION" for the right-hand side of Relationships. If SpecialID
// is set, DocumentRefID and ElementRefID should be empty (and vice versa).
type DocElementID struct {
DocumentRefID string
ElementRefID ElementID
SpecialID string
}
// MarshalJSON converts the receiver into a slice of bytes representing a DocElementID in string form.
// This function is also used when marshalling to YAML
func (d DocElementID) MarshalJSON() ([]byte, error) {
if d.DocumentRefID != "" && d.ElementRefID != "" {
idStr := prefixElementId(d.ElementRefID)
return json.Marshal(fmt.Sprintf("%s%s:%s", documentRefPrefix, d.DocumentRefID, idStr))
} else if d.ElementRefID != "" {
return json.Marshal(prefixElementId(d.ElementRefID))
} else if d.SpecialID != "" {
return json.Marshal(d.SpecialID)
}
return []byte{}, fmt.Errorf("failed to marshal empty DocElementID")
}
// UnmarshalJSON takes an SPDX Identifier string and parses it into a DocElementID struct.
// This function is also used when unmarshalling YAML
func (d *DocElementID) UnmarshalJSON(data []byte) (err error) {
// SPDX identifier will simply be a string
idStr := string(data)
idStr = strings.Trim(idStr, "\"")
// handle special cases
if idStr == "NONE" || idStr == "NOASSERTION" {
d.SpecialID = idStr
return nil
}
var idFields []string
// handle DocumentRef- if present
if strings.HasPrefix(idStr, documentRefPrefix) {
// strip out the "DocumentRef-" so we can get the value
idFields = strings.SplitN(idStr, documentRefPrefix, 2)
idStr = idFields[1]
// an SPDXRef can appear after a DocumentRef, separated by a colon
idFields = strings.SplitN(idStr, ":", 2)
d.DocumentRefID = idFields[0]
if len(idFields) == 2 {
idStr = idFields[1]
} else {
return nil
}
}
d.ElementRefID, err = trimElementIdPrefix(idStr)
return err
}
// TODO: add equivalents for LicenseRef- identifiers
// MakeDocElementID takes strings (without prefixes) for the DocumentRef-
// and SPDXRef- identifiers, and returns a DocElementID. An empty string
// should be used for the DocumentRef- portion if it is referring to the
// present document.
func MakeDocElementID(docRef string, eltRef string) DocElementID {
return DocElementID{
DocumentRefID: docRef,
ElementRefID: ElementID(eltRef),
}
}
// MakeDocElementSpecial takes a "special" string (e.g. "NONE" or
// "NOASSERTION" for the right side of a Relationship), and returns
// a DocElementID with it in the SpecialID field. Other fields will
// be empty.
func MakeDocElementSpecial(specialID string) DocElementID {
return DocElementID{SpecialID: specialID}
}
// RenderElementID takes an ElementID and returns the string equivalent,
// with the SPDXRef- prefix reinserted.
func RenderElementID(eID ElementID) string {
return spdxRefPrefix + string(eID)
}
// RenderDocElementID takes a DocElementID and returns the string equivalent,
// with the SPDXRef- prefix (and, if applicable, the DocumentRef- prefix)
// reinserted. If a SpecialID is present, it will be rendered verbatim and
// DocumentRefID and ElementRefID will be ignored.
func RenderDocElementID(deID DocElementID) string {
if deID.SpecialID != "" {
return deID.SpecialID
}
prefix := ""
if deID.DocumentRefID != "" {
prefix = documentRefPrefix + deID.DocumentRefID + ":"
}
return prefix + spdxRefPrefix + string(deID.ElementRefID)
}
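A sketch of how the identifier helpers compose, using made-up element names:

```go
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
)

func main() {
	local := common.MakeDocElementID("", "Package-busybox")
	remote := common.MakeDocElementID("other-sbom", "File-init")
	special := common.MakeDocElementSpecial("NOASSERTION")

	fmt.Println(common.RenderDocElementID(local))   // SPDXRef-Package-busybox
	fmt.Println(common.RenderDocElementID(remote))  // DocumentRef-other-sbom:SPDXRef-File-init
	fmt.Println(common.RenderDocElementID(special)) // NOASSERTION
	fmt.Println(common.RenderElementID("DOCUMENT")) // SPDXRef-DOCUMENT
}
```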
View File
@ -0,0 +1,103 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
import (
"encoding/json"
"fmt"
"strings"
)
type Supplier struct {
// can be "NOASSERTION"
Supplier string
// SupplierType can be one of "Person", "Organization", or empty if Supplier is "NOASSERTION"
SupplierType string
}
// UnmarshalJSON takes a supplier in the typical one-line format and parses it into a Supplier struct.
// This function is also used when unmarshalling YAML
func (s *Supplier) UnmarshalJSON(data []byte) error {
// the value is just a string presented as a slice of bytes
supplierStr := string(data)
supplierStr = strings.Trim(supplierStr, "\"")
if supplierStr == "NOASSERTION" {
s.Supplier = supplierStr
return nil
}
supplierFields := strings.SplitN(supplierStr, ": ", 2)
if len(supplierFields) != 2 {
return fmt.Errorf("failed to parse Supplier '%s'", supplierStr)
}
s.SupplierType = supplierFields[0]
s.Supplier = supplierFields[1]
return nil
}
// MarshalJSON converts the receiver into a slice of bytes representing a Supplier in string form.
// This function is also used when marshalling to YAML
func (s Supplier) MarshalJSON() ([]byte, error) {
if s.Supplier == "NOASSERTION" {
return json.Marshal(s.Supplier)
} else if s.SupplierType != "" && s.Supplier != "" {
return json.Marshal(fmt.Sprintf("%s: %s", s.SupplierType, s.Supplier))
}
return []byte{}, fmt.Errorf("failed to marshal invalid Supplier: %+v", s)
}
type Originator struct {
// can be "NOASSERTION"
Originator string
// OriginatorType can be one of "Person", "Organization", or empty if Originator is "NOASSERTION"
OriginatorType string
}
// UnmarshalJSON takes an originator in the typical one-line format and parses it into an Originator struct.
// This function is also used when unmarshalling YAML
func (o *Originator) UnmarshalJSON(data []byte) error {
// the value is just a string presented as a slice of bytes
originatorStr := string(data)
originatorStr = strings.Trim(originatorStr, "\"")
if originatorStr == "NOASSERTION" {
o.Originator = originatorStr
return nil
}
originatorFields := strings.SplitN(originatorStr, ":", 2)
if len(originatorFields) != 2 {
return fmt.Errorf("failed to parse Originator '%s'", originatorStr)
}
o.OriginatorType = originatorFields[0]
o.Originator = strings.TrimLeft(originatorFields[1], " \t")
return nil
}
// MarshalJSON converts the receiver into a slice of bytes representing an Originator in string form.
// This function is also used when marshalling to YAML
func (o Originator) MarshalJSON() ([]byte, error) {
if o.Originator == "NOASSERTION" {
return json.Marshal(o.Originator)
} else if o.Originator != "" {
return json.Marshal(fmt.Sprintf("%s: %s", o.OriginatorType, o.Originator))
}
return []byte{}, nil
}
type PackageVerificationCode struct {
// Cardinality: mandatory, one if filesAnalyzed is true / omitted;
// zero (must be omitted) if filesAnalyzed is false
Value string `json:"packageVerificationCodeValue"`
// Spec also allows specifying files to exclude from the
// verification code algorithm; intended to enable exclusion of
// the SPDX document file itself.
ExcludedFiles []string `json:"packageVerificationCodeExcludedFiles,omitempty"`
}
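Supplier and Originator follow the same single-string convention, with NOASSERTION as a special case. A short sketch (the organization name is made up):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
)

func main() {
	s := common.Supplier{SupplierType: "Organization", Supplier: "Example Corp"}
	b, _ := json.Marshal(s)
	fmt.Println(string(b)) // "Organization: Example Corp"

	var o common.Originator
	_ = json.Unmarshal([]byte(`"NOASSERTION"`), &o)
	fmt.Println(o.Originator, o.OriginatorType == "") // NOASSERTION true
}
```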
View File
@ -0,0 +1,20 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package common
type SnippetRangePointer struct {
// 5.3: Snippet Byte Range: [start byte]:[end byte]
// Cardinality: mandatory, one
Offset int `json:"offset,omitempty"`
// 5.4: Snippet Line Range: [start line]:[end line]
// Cardinality: optional, one
LineNumber int `json:"lineNumber,omitempty"`
FileSPDXIdentifier ElementID `json:"reference"`
}
type SnippetRange struct {
StartPointer SnippetRangePointer `json:"startPointer"`
EndPointer SnippetRangePointer `json:"endPointer"`
}
View File
@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Annotation is an Annotation section of an SPDX Document for version 2.1 of the spec.
type Annotation struct {
// 8.1: Annotator
// Cardinality: conditional (mandatory, one) if there is an Annotation
Annotator common.Annotator `json:"annotator"`
// 8.2: Annotation Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationDate string `json:"annotationDate"`
// 8.3: Annotation Type: "REVIEW" or "OTHER"
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationType string `json:"annotationType"`
// 8.4: SPDX Identifier Reference
// Cardinality: conditional (mandatory, one) if there is an Annotation
// This field is not used in hierarchical data formats where the referenced element is clear, such as JSON or YAML.
AnnotationSPDXIdentifier common.DocElementID `json:"-"`
// 8.5: Annotation Comment
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationComment string `json:"comment"`
}
View File
@ -0,0 +1,28 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// CreationInfo is a Document Creation Information section of an
// SPDX Document for version 2.1 of the spec.
type CreationInfo struct {
// 2.7: License List Version
// Cardinality: optional, one
LicenseListVersion string `json:"licenseListVersion,omitempty"`
// 2.8: Creators: may have multiple keys for Person, Organization
// and/or Tool
// Cardinality: mandatory, one or many
Creators []common.Creator `json:"creators"`
// 2.9: Created: data format YYYY-MM-DDThh:mm:ssZ
// Cardinality: mandatory, one
Created string `json:"created"`
// 2.10: Creator Comment
// Cardinality: optional, one
CreatorComment string `json:"comment,omitempty"`
}
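A minimal CreationInfo as it might be populated in code; the tool name, organization, and timestamp are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
	"github.com/spdx/tools-golang/spdx/v2/v2_1"
)

func main() {
	ci := v2_1.CreationInfo{
		Creators: []common.Creator{
			{CreatorType: "Tool", Creator: "example-sbom-tool"},
			{CreatorType: "Organization", Creator: "Example Corp"},
		},
		Created: "2023-01-01T00:00:00Z",
	}

	out, _ := json.MarshalIndent(ci, "", "  ")
	fmt.Println(string(out)) // creators serialize as "Tool: example-sbom-tool", etc.
}
```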
View File
@ -0,0 +1,79 @@
// Package v2_1 contains the struct definition for an SPDX Document
// and its constituent parts.
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/anchore/go-struct-converter"
"github.com/spdx/tools-golang/spdx/v2/common"
)
const Version = "SPDX-2.1"
const DataLicense = "CC0-1.0"
// ExternalDocumentRef is a reference to an external SPDX document
// as defined in section 2.6 for version 2.1 of the spec.
type ExternalDocumentRef struct {
// DocumentRefID is the ID string defined in the start of the
// reference. It should _not_ contain the "DocumentRef-" part
// of the mandatory ID string.
DocumentRefID string `json:"externalDocumentId"`
// URI is the URI defined for the external document
URI string `json:"spdxDocument"`
// Checksum is the actual hash data
Checksum common.Checksum `json:"checksum"`
}
// Document is an SPDX Document for version 2.1 of the spec.
// See https://spdx.org/sites/cpstandard/files/pages/files/spdxversion2.1.pdf
type Document struct {
// 2.1: SPDX Version; should be in the format "SPDX-2.1"
// Cardinality: mandatory, one
SPDXVersion string `json:"spdxVersion"`
// 2.2: Data License; should be "CC0-1.0"
// Cardinality: mandatory, one
DataLicense string `json:"dataLicense"`
// 2.3: SPDX Identifier; should be "DOCUMENT" to represent
// mandatory identifier of SPDXRef-DOCUMENT
// Cardinality: mandatory, one
SPDXIdentifier common.ElementID `json:"SPDXID"`
// 2.4: Document Name
// Cardinality: mandatory, one
DocumentName string `json:"name"`
// 2.5: Document Namespace
// Cardinality: mandatory, one
DocumentNamespace string `json:"documentNamespace"`
// 2.6: External Document References
// Cardinality: optional, one or many
ExternalDocumentReferences []ExternalDocumentRef `json:"externalDocumentRefs,omitempty"`
// 2.11: Document Comment
// Cardinality: optional, one
DocumentComment string `json:"comment,omitempty"`
CreationInfo *CreationInfo `json:"creationInfo"`
Packages []*Package `json:"packages,omitempty"`
Files []*File `json:"files,omitempty"`
OtherLicenses []*OtherLicense `json:"hasExtractedLicensingInfos,omitempty"`
Relationships []*Relationship `json:"relationships,omitempty"`
Annotations []*Annotation `json:"annotations,omitempty"`
Snippets []Snippet `json:"snippets,omitempty"`
// DEPRECATED in version 2.0 of spec
Reviews []*Review `json:"-"`
}
func (d *Document) ConvertFrom(_ interface{}) error {
d.SPDXVersion = Version
return nil
}
var _ converter.ConvertFrom = (*Document)(nil)
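Because the Document implements the converter's ConvertFrom hook, it fixes up its own SPDXVersion whenever it is used as a conversion target. A sketch of downgrading a newer document (assuming the converter chain handles both directions, as the anchore struct-converter does for any two versions in the chain):

```go
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/convert"
	"github.com/spdx/tools-golang/spdx/v2/v2_1"
	"github.com/spdx/tools-golang/spdx/v2/v2_3"
)

func main() {
	newer := v2_3.Document{SPDXVersion: v2_3.Version, DocumentName: "demo"}

	var older v2_1.Document
	if err := convert.Document(newer, &older); err != nil {
		fmt.Println("convert failed:", err)
		return
	}

	// Common fields are copied generically; ConvertFrom then stamps the v2.1 version string.
	fmt.Println(older.SPDXVersion, older.DocumentName) // SPDX-2.1 demo
}
```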
View File
@ -0,0 +1,92 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// File is a File section of an SPDX Document for version 2.1 of the spec.
type File struct {
// 4.1: File Name
// Cardinality: mandatory, one
FileName string `json:"fileName"`
// 4.2: File SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
FileSPDXIdentifier common.ElementID `json:"SPDXID"`
// 4.3: File Types
// Cardinality: optional, multiple
FileTypes []string `json:"fileTypes,omitempty"`
// 4.4: File Checksum: may have keys for SHA1, SHA256 and/or MD5
// Cardinality: mandatory, one SHA1, others may be optionally provided
Checksums []common.Checksum `json:"checksums"`
// 4.5: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
LicenseConcluded string `json:"licenseConcluded"`
// 4.6: License Information in File: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one or many
LicenseInfoInFiles []string `json:"licenseInfoInFiles"`
// 4.7: Comments on License
// Cardinality: optional, one
LicenseComments string `json:"licenseComments,omitempty"`
// 4.8: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
FileCopyrightText string `json:"copyrightText"`
// DEPRECATED in version 2.1 of spec
// 4.9-4.11: Artifact of Project variables (defined below)
// Cardinality: optional, one or many
ArtifactOfProjects []*ArtifactOfProject `json:"-"`
// 4.12: File Comment
// Cardinality: optional, one
FileComment string `json:"comment,omitempty"`
// 4.13: File Notice
// Cardinality: optional, one
FileNotice string `json:"noticeText,omitempty"`
// 4.14: File Contributor
// Cardinality: optional, one or many
FileContributors []string `json:"fileContributors,omitempty"`
// DEPRECATED in version 2.0 of spec
// 4.15: File Dependencies
// Cardinality: optional, one or many
FileDependencies []string `json:"-"`
// Snippets contained in this File
// Note that Snippets could be defined in a different Document! However,
// the only ones that _THIS_ document can contain are the ones that are
// defined here -- so this should just be an ElementID.
Snippets map[common.ElementID]*Snippet `json:"-"`
Annotations []Annotation `json:"annotations,omitempty"`
}
// ArtifactOfProject is a DEPRECATED collection of data regarding
// a Package, as defined in sections 4.9-4.11 in version 2.1 of the spec.
type ArtifactOfProject struct {
// DEPRECATED in version 2.1 of spec
// 4.9: Artifact of Project Name
// Cardinality: conditional, required if present, one per AOP
Name string
// DEPRECATED in version 2.1 of spec
// 4.10: Artifact of Project Homepage: URL or "UNKNOWN"
// Cardinality: optional, one per AOP
HomePage string
// DEPRECATED in version 2.1 of spec
// 4.11: Artifact of Project Uniform Resource Identifier
// Cardinality: optional, one per AOP
URI string
}

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
// OtherLicense is an Other License Information section of an
// SPDX Document for version 2.1 of the spec.
type OtherLicense struct {
// 6.1: License Identifier: "LicenseRef-[idstring]"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseIdentifier string `json:"licenseId"`
// 6.2: Extracted Text
// Cardinality: conditional (mandatory, one) if there is a
// License Identifier assigned
ExtractedText string `json:"extractedText"`
// 6.3: License Name: single line of text or "NOASSERTION"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseName string `json:"name,omitempty"`
// 6.4: License Cross Reference
// Cardinality: conditional (optional, one or many) if license
// is not on SPDX License List
LicenseCrossReferences []string `json:"seeAlsos,omitempty"`
// 6.5: License Comment
// Cardinality: optional, one
LicenseComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,122 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Package is a Package section of an SPDX Document for version 2.1 of the spec.
type Package struct {
// 3.1: Package Name
// Cardinality: mandatory, one
PackageName string `json:"name"`
// 3.2: Package SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
PackageSPDXIdentifier common.ElementID `json:"SPDXID"`
// 3.3: Package Version
// Cardinality: optional, one
PackageVersion string `json:"versionInfo,omitempty"`
// 3.4: Package File Name
// Cardinality: optional, one
PackageFileName string `json:"packageFileName,omitempty"`
// 3.5: Package Supplier: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageSupplier *common.Supplier `json:"supplier,omitempty"`
// 3.6: Package Originator: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageOriginator *common.Originator `json:"originator,omitempty"`
// 3.7: Package Download Location
// Cardinality: mandatory, one
PackageDownloadLocation string `json:"downloadLocation"`
// 3.8: FilesAnalyzed
// Cardinality: optional, one; default value is "true" if omitted
FilesAnalyzed bool `json:"filesAnalyzed,omitempty"`
// NOT PART OF SPEC: did FilesAnalyzed tag appear?
IsFilesAnalyzedTagPresent bool `json:"-"`
// 3.9: Package Verification Code
PackageVerificationCode common.PackageVerificationCode `json:"packageVerificationCode,omitempty"`
// 3.10: Package Checksum: may have keys for SHA1, SHA256 and/or MD5
// Cardinality: optional, one or many
PackageChecksums []common.Checksum `json:"checksums,omitempty"`
// 3.11: Package Home Page
// Cardinality: optional, one
PackageHomePage string `json:"homepage,omitempty"`
// 3.12: Source Information
// Cardinality: optional, one
PackageSourceInfo string `json:"sourceInfo,omitempty"`
// 3.13: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageLicenseConcluded string `json:"licenseConcluded"`
// 3.14: All Licenses Info from Files: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one or many if filesAnalyzed is true / omitted;
// zero (must be omitted) if filesAnalyzed is false
PackageLicenseInfoFromFiles []string `json:"licenseInfoFromFiles"`
// 3.15: Declared License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageLicenseDeclared string `json:"licenseDeclared"`
// 3.16: Comments on License
// Cardinality: optional, one
PackageLicenseComments string `json:"licenseComments,omitempty"`
// 3.17: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageCopyrightText string `json:"copyrightText"`
// 3.18: Package Summary Description
// Cardinality: optional, one
PackageSummary string `json:"summary,omitempty"`
// 3.19: Package Detailed Description
// Cardinality: optional, one
PackageDescription string `json:"description,omitempty"`
// 3.20: Package Comment
// Cardinality: optional, one
PackageComment string `json:"comment,omitempty"`
// 3.21: Package External Reference
// Cardinality: optional, one or many
PackageExternalReferences []*PackageExternalReference `json:"externalRefs,omitempty"`
// Files contained in this Package
Files []*File `json:"files,omitempty"`
Annotations []Annotation `json:"annotations,omitempty"`
}
// PackageExternalReference is an External Reference to additional info
// about a Package, as defined in section 3.21 in version 2.1 of the spec.
type PackageExternalReference struct {
// category is "SECURITY", "PACKAGE-MANAGER" or "OTHER"
Category string `json:"referenceCategory"`
// type is an [idstring] as defined in Appendix VI;
// called RefType here due to "type" being a Golang keyword
RefType string `json:"referenceType"`
// locator is a unique string to access the package-specific
// info, metadata or content within the target location
Locator string `json:"referenceLocator"`
// 3.22: Package External Reference Comment
// Cardinality: conditional (optional, one) for each External Reference
ExternalRefComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Relationship is a Relationship section of an SPDX Document for
// version 2.1 of the spec.
type Relationship struct {
// 7.1: Relationship
// Cardinality: optional, one or more; one per Relationship
// one mandatory for SPDX Document with multiple packages
// RefA and RefB are first and second item
// Relationship is type from 7.1.1
RefA common.DocElementID `json:"spdxElementId"`
RefB common.DocElementID `json:"relatedSpdxElement"`
Relationship string `json:"relationshipType"`
// 7.2: Relationship Comment
// Cardinality: optional, one
RelationshipComment string `json:"comment,omitempty"`
}
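A short editorial sketch (not part of the diff) of how RefA and RefB are populated: DocElementID wraps an ElementID, and it can also point into an external document via its DocumentRefID field. The "busybox" identifier below is a placeholder.
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
	v2_1 "github.com/spdx/tools-golang/spdx/v2/v2_1"
)

func main() {
	// "The document DESCRIBES the busybox package", expressed as a Relationship.
	rel := v2_1.Relationship{
		RefA:         common.DocElementID{ElementRefID: common.ElementID("DOCUMENT")},
		RefB:         common.DocElementID{ElementRefID: common.ElementID("Package-busybox")},
		Relationship: "DESCRIBES",
	}
	fmt.Printf("%s %s %s\n", rel.RefA.ElementRefID, rel.Relationship, rel.RefB.ElementRefID)
}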

View File

@ -0,0 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
// Review is a Review section of an SPDX Document for version 2.1 of the spec.
// DEPRECATED in version 2.0 of spec; retained here for compatibility.
type Review struct {
// DEPRECATED in version 2.0 of spec
// 9.1: Reviewer
// Cardinality: optional, one
Reviewer string
// including AnnotatorType: one of "Person", "Organization" or "Tool"
ReviewerType string
// DEPRECATED in version 2.0 of spec
// 9.2: Review Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is a Reviewer
ReviewDate string
// DEPRECATED in version 2.0 of spec
// 9.3: Review Comment
// Cardinality: optional, one
ReviewComment string
}

View File

@ -0,0 +1,46 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_1
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Snippet is a Snippet section of an SPDX Document for version 2.1 of the spec.
type Snippet struct {
// 5.1: Snippet SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
SnippetSPDXIdentifier common.ElementID `json:"SPDXID"`
// 5.2: Snippet from File SPDX Identifier
// Cardinality: mandatory, one
SnippetFromFileSPDXIdentifier common.ElementID `json:"snippetFromFile"`
// Ranges denotes the start/end byte offsets or line numbers that the snippet is relevant to
Ranges []common.SnippetRange `json:"ranges"`
// 5.5: Snippet Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
SnippetLicenseConcluded string `json:"licenseConcluded"`
// 5.6: License Information in Snippet: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one or many
LicenseInfoInSnippet []string `json:"licenseInfoInSnippets,omitempty"`
// 5.7: Snippet Comments on License
// Cardinality: optional, one
SnippetLicenseComments string `json:"licenseComments,omitempty"`
// 5.8: Snippet Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
SnippetCopyrightText string `json:"copyrightText"`
// 5.9: Snippet Comment
// Cardinality: optional, one
SnippetComment string `json:"comment,omitempty"`
// 5.10: Snippet Name
// Cardinality: optional, one
SnippetName string `json:"name,omitempty"`
}

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Annotation is an Annotation section of an SPDX Document for version 2.2 of the spec.
type Annotation struct {
// 12.1: Annotator
// Cardinality: conditional (mandatory, one) if there is an Annotation
Annotator common.Annotator `json:"annotator"`
// 12.2: Annotation Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationDate string `json:"annotationDate"`
// 12.3: Annotation Type: "REVIEW" or "OTHER"
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationType string `json:"annotationType"`
// 12.4: SPDX Identifier Reference
// Cardinality: conditional (mandatory, one) if there is an Annotation
// This field is not used in hierarchical data formats where the referenced element is clear, such as JSON or YAML.
AnnotationSPDXIdentifier common.DocElementID `json:"-"`
// 12.5: Annotation Comment
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationComment string `json:"comment"`
}

View File

@ -0,0 +1,28 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// CreationInfo is a Document Creation Information section of an
// SPDX Document for version 2.2 of the spec.
type CreationInfo struct {
// 6.7: License List Version
// Cardinality: optional, one
LicenseListVersion string `json:"licenseListVersion,omitempty"`
// 6.8: Creators: may have multiple keys for Person, Organization
// and/or Tool
// Cardinality: mandatory, one or many
Creators []common.Creator `json:"creators"`
// 6.9: Created: data format YYYY-MM-DDThh:mm:ssZ
// Cardinality: mandatory, one
Created string `json:"created"`
// 6.10: Creator Comment
// Cardinality: optional, one
CreatorComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,166 @@
// Package v2_2 contains the struct definition for an SPDX Document
// and its constituent parts.
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"encoding/json"
"fmt"
converter "github.com/anchore/go-struct-converter"
"github.com/spdx/tools-golang/spdx/v2/common"
)
const Version = "SPDX-2.2"
const DataLicense = "CC0-1.0"
// ExternalDocumentRef is a reference to an external SPDX document
// as defined in section 6.6 for version 2.2 of the spec.
type ExternalDocumentRef struct {
// DocumentRefID is the ID string defined in the start of the
// reference. It should _not_ contain the "DocumentRef-" part
// of the mandatory ID string.
DocumentRefID string `json:"externalDocumentId"`
// URI is the URI defined for the external document
URI string `json:"spdxDocument"`
// Checksum is the actual hash data
Checksum common.Checksum `json:"checksum"`
}
// Document is an SPDX Document for version 2.2 of the spec.
// See https://spdx.github.io/spdx-spec/v2-draft/ (DRAFT)
type Document struct {
// 6.1: SPDX Version; should be in the format "SPDX-2.2"
// Cardinality: mandatory, one
SPDXVersion string `json:"spdxVersion"`
// 6.2: Data License; should be "CC0-1.0"
// Cardinality: mandatory, one
DataLicense string `json:"dataLicense"`
// 6.3: SPDX Identifier; should be "DOCUMENT" to represent
// mandatory identifier of SPDXRef-DOCUMENT
// Cardinality: mandatory, one
SPDXIdentifier common.ElementID `json:"SPDXID"`
// 6.4: Document Name
// Cardinality: mandatory, one
DocumentName string `json:"name"`
// 6.5: Document Namespace
// Cardinality: mandatory, one
DocumentNamespace string `json:"documentNamespace"`
// 6.6: External Document References
// Cardinality: optional, one or many
ExternalDocumentReferences []ExternalDocumentRef `json:"externalDocumentRefs,omitempty"`
// 6.11: Document Comment
// Cardinality: optional, one
DocumentComment string `json:"comment,omitempty"`
CreationInfo *CreationInfo `json:"creationInfo"`
Packages []*Package `json:"packages,omitempty"`
Files []*File `json:"files,omitempty"`
OtherLicenses []*OtherLicense `json:"hasExtractedLicensingInfos,omitempty"`
Relationships []*Relationship `json:"relationships,omitempty"`
Annotations []*Annotation `json:"annotations,omitempty"`
Snippets []Snippet `json:"snippets,omitempty"`
// DEPRECATED in version 2.0 of spec
Reviews []*Review `json:"-"`
}
func (d *Document) ConvertFrom(_ interface{}) error {
d.SPDXVersion = Version
return nil
}
var _ converter.ConvertFrom = (*Document)(nil)
func (d *Document) UnmarshalJSON(b []byte) error {
type doc Document
type extras struct {
DocumentDescribes []common.DocElementID `json:"documentDescribes"`
}
var d2 doc
if err := json.Unmarshal(b, &d2); err != nil {
return err
}
var e extras
if err := json.Unmarshal(b, &e); err != nil {
return err
}
*d = Document(d2)
relationshipExists := map[string]bool{}
serializeRel := func(r *Relationship) string {
refA := r.RefA
refB := r.RefB
rel := r.Relationship
// we need to serialize the opposite for CONTAINED_BY and DESCRIBED_BY
// so that it will match when we try to de-duplicate during deserialization.
switch r.Relationship {
case common.TypeRelationshipContainedBy:
rel = common.TypeRelationshipContains
refA = r.RefB
refB = r.RefA
case common.TypeRelationshipDescribeBy:
rel = common.TypeRelationshipDescribe
refA = r.RefB
refB = r.RefA
}
return fmt.Sprintf("%v-%v->%v", common.RenderDocElementID(refA), rel, common.RenderDocElementID(refB))
}
// index current list of relationships to ensure no duplication
for _, r := range d.Relationships {
relationshipExists[serializeRel(r)] = true
}
// build relationships for documentDescribes field
for _, id := range e.DocumentDescribes {
r := &Relationship{
RefA: common.DocElementID{
ElementRefID: d.SPDXIdentifier,
},
RefB: id,
Relationship: common.TypeRelationshipDescribe,
}
if !relationshipExists[serializeRel(r)] {
d.Relationships = append(d.Relationships, r)
relationshipExists[serializeRel(r)] = true
}
}
// build relationships for package hasFiles field
for _, p := range d.Packages {
for _, f := range p.hasFiles {
r := &Relationship{
RefA: common.DocElementID{
ElementRefID: p.PackageSPDXIdentifier,
},
RefB: f,
Relationship: common.TypeRelationshipContains,
}
if !relationshipExists[serializeRel(r)] {
d.Relationships = append(d.Relationships, r)
relationshipExists[serializeRel(r)] = true
}
}
p.hasFiles = nil
}
return nil
}
var _ json.Unmarshaler = (*Document)(nil)
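To make the custom decoding above concrete, a short editorial sketch (not part of the diff): a document whose JSON carries only documentDescribes comes out of UnmarshalJSON with an equivalent DESCRIBES relationship, and duplicates are suppressed via the serialized-relationship index. The package name and namespace below are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	v2_2 "github.com/spdx/tools-golang/spdx/v2/v2_2"
)

func main() {
	raw := []byte(`{
	  "spdxVersion": "SPDX-2.2",
	  "dataLicense": "CC0-1.0",
	  "SPDXID": "SPDXRef-DOCUMENT",
	  "name": "example",
	  "documentNamespace": "https://example.com/spdx/example-1.0",
	  "documentDescribes": ["SPDXRef-Package-busybox"],
	  "packages": [
	    {"name": "busybox", "SPDXID": "SPDXRef-Package-busybox", "downloadLocation": "NOASSERTION"}
	  ]
	}`)

	var doc v2_2.Document
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// The documentDescribes entry has been translated into a relationship.
	for _, r := range doc.Relationships {
		fmt.Printf("%s %s %s\n", r.RefA.ElementRefID, r.Relationship, r.RefB.ElementRefID)
	}
}
The same de-duplication path handles a package-level hasFiles array, which is translated into CONTAINS relationships.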

View File

@ -0,0 +1,96 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// File is a File section of an SPDX Document for version 2.2 of the spec.
type File struct {
// 8.1: File Name
// Cardinality: mandatory, one
FileName string `json:"fileName"`
// 8.2: File SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
FileSPDXIdentifier common.ElementID `json:"SPDXID"`
// 8.3: File Types
// Cardinality: optional, multiple
FileTypes []string `json:"fileTypes,omitempty"`
// 8.4: File Checksum: may have keys for SHA1, SHA256 and/or MD5
// Cardinality: mandatory, one SHA1, others may be optionally provided
Checksums []common.Checksum `json:"checksums"`
// 8.5: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
LicenseConcluded string `json:"licenseConcluded"`
// 8.6: License Information in File: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one or many
LicenseInfoInFiles []string `json:"licenseInfoInFiles"`
// 8.7: Comments on License
// Cardinality: optional, one
LicenseComments string `json:"licenseComments,omitempty"`
// 8.8: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
FileCopyrightText string `json:"copyrightText"`
// DEPRECATED in version 2.1 of spec
// 8.9-8.11: Artifact of Project variables (defined below)
// Cardinality: optional, one or many
ArtifactOfProjects []*ArtifactOfProject `json:"-"`
// 8.12: File Comment
// Cardinality: optional, one
FileComment string `json:"comment,omitempty"`
// 8.13: File Notice
// Cardinality: optional, one
FileNotice string `json:"noticeText,omitempty"`
// 8.14: File Contributor
// Cardinality: optional, one or many
FileContributors []string `json:"fileContributors,omitempty"`
// 8.15: File Attribution Text
// Cardinality: optional, one or many
FileAttributionTexts []string `json:"attributionTexts,omitempty"`
// DEPRECATED in version 2.0 of spec
// 8.16: File Dependencies
// Cardinality: optional, one or many
FileDependencies []string `json:"-"`
// Snippets contained in this File
// Note that Snippets could be defined in a different Document! However,
// the only ones that _THIS_ document can contain are the ones that are
// defined here -- so this should just be an ElementID.
Snippets map[common.ElementID]*Snippet `json:"-"`
Annotations []Annotation `json:"annotations,omitempty"`
}
// ArtifactOfProject is a DEPRECATED collection of data regarding
// a Package, as defined in sections 8.9-8.11 in version 2.2 of the spec.
type ArtifactOfProject struct {
// DEPRECATED in version 2.1 of spec
// 8.9: Artifact of Project Name
// Cardinality: conditional, required if present, one per AOP
Name string
// DEPRECATED in version 2.1 of spec
// 8.10: Artifact of Project Homepage: URL or "UNKNOWN"
// Cardinality: optional, one per AOP
HomePage string
// DEPRECATED in version 2.1 of spec
// 8.11: Artifact of Project Uniform Resource Identifier
// Cardinality: optional, one per AOP
URI string
}

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
// OtherLicense is an Other License Information section of an
// SPDX Document for version 2.2 of the spec.
type OtherLicense struct {
// 10.1: License Identifier: "LicenseRef-[idstring]"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseIdentifier string `json:"licenseId"`
// 10.2: Extracted Text
// Cardinality: conditional (mandatory, one) if there is a
// License Identifier assigned
ExtractedText string `json:"extractedText"`
// 10.3: License Name: single line of text or "NOASSERTION"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseName string `json:"name,omitempty"`
// 10.4: License Cross Reference
// Cardinality: conditional (optional, one or many) if license
// is not on SPDX License List
LicenseCrossReferences []string `json:"seeAlsos,omitempty"`
// 10.5: License Comment
// Cardinality: optional, one
LicenseComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,203 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"encoding/json"
"strings"
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Package is a Package section of an SPDX Document for version 2.2 of the spec.
type Package struct {
// NOT PART OF SPEC
// flag: does this "package" contain files that were in fact "unpackaged",
// e.g. included directly in the Document without being in a Package?
IsUnpackaged bool `json:"-"`
// 7.1: Package Name
// Cardinality: mandatory, one
PackageName string `json:"name"`
// 7.2: Package SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
PackageSPDXIdentifier common.ElementID `json:"SPDXID"`
// 7.3: Package Version
// Cardinality: optional, one
PackageVersion string `json:"versionInfo,omitempty"`
// 7.4: Package File Name
// Cardinality: optional, one
PackageFileName string `json:"packageFileName,omitempty"`
// 7.5: Package Supplier: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageSupplier *common.Supplier `json:"supplier,omitempty"`
// 7.6: Package Originator: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageOriginator *common.Originator `json:"originator,omitempty"`
// 7.7: Package Download Location
// Cardinality: mandatory, one
PackageDownloadLocation string `json:"downloadLocation"`
// 7.8: FilesAnalyzed
// Cardinality: optional, one; default value is "true" if omitted
FilesAnalyzed bool `json:"filesAnalyzed"`
// NOT PART OF SPEC: did FilesAnalyzed tag appear?
IsFilesAnalyzedTagPresent bool `json:"-"`
// 7.9: Package Verification Code
PackageVerificationCode common.PackageVerificationCode `json:"packageVerificationCode,omitempty"`
// 7.10: Package Checksum: may have keys for SHA1, SHA256, SHA512 and/or MD5
// Cardinality: optional, one or many
PackageChecksums []common.Checksum `json:"checksums,omitempty"`
// 7.11: Package Home Page
// Cardinality: optional, one
PackageHomePage string `json:"homepage,omitempty"`
// 7.12: Source Information
// Cardinality: optional, one
PackageSourceInfo string `json:"sourceInfo,omitempty"`
// 7.13: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageLicenseConcluded string `json:"licenseConcluded"`
// 7.14: All Licenses Info from Files: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one or many if filesAnalyzed is true / omitted;
// zero (must be omitted) if filesAnalyzed is false
PackageLicenseInfoFromFiles []string `json:"licenseInfoFromFiles"`
// 7.15: Declared License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageLicenseDeclared string `json:"licenseDeclared"`
// 7.16: Comments on License
// Cardinality: optional, one
PackageLicenseComments string `json:"licenseComments,omitempty"`
// 7.17: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
PackageCopyrightText string `json:"copyrightText"`
// 7.18: Package Summary Description
// Cardinality: optional, one
PackageSummary string `json:"summary,omitempty"`
// 7.19: Package Detailed Description
// Cardinality: optional, one
PackageDescription string `json:"description,omitempty"`
// 7.20: Package Comment
// Cardinality: optional, one
PackageComment string `json:"comment,omitempty"`
// 7.21: Package External Reference
// Cardinality: optional, one or many
PackageExternalReferences []*PackageExternalReference `json:"externalRefs,omitempty"`
// 7.22: Package External Reference Comment
// Cardinality: conditional (optional, one) for each External Reference
// contained within PackageExternalReference struct, if present
// 7.23: Package Attribution Text
// Cardinality: optional, one or many
PackageAttributionTexts []string `json:"attributionTexts,omitempty"`
// Files contained in this Package
Files []*File `json:"files,omitempty"`
Annotations []Annotation `json:"annotations,omitempty"`
// this field is only used when decoding JSON to translate the hasFiles
// property to relationships
hasFiles []common.DocElementID
}
func (p *Package) UnmarshalJSON(b []byte) error {
type pkg Package
type extras struct {
HasFiles []common.DocElementID `json:"hasFiles"`
FilesAnalyzed *bool `json:"filesAnalyzed"`
}
var p2 pkg
if err := json.Unmarshal(b, &p2); err != nil {
return err
}
var e extras
if err := json.Unmarshal(b, &e); err != nil {
return err
}
*p = Package(p2)
p.hasFiles = e.HasFiles
// FilesAnalyzed defaults to true if omitted
if e.FilesAnalyzed == nil {
p.FilesAnalyzed = true
} else {
p.IsFilesAnalyzedTagPresent = true
}
return nil
}
var _ json.Unmarshaler = (*Package)(nil)
// PackageExternalReference is an External Reference to additional info
// about a Package, as defined in section 7.21 in version 2.2 of the spec.
type PackageExternalReference struct {
// category is "SECURITY", "PACKAGE-MANAGER" or "OTHER"
Category string `json:"referenceCategory"`
// type is an [idstring] as defined in Appendix VI;
// called RefType here due to "type" being a Golang keyword
RefType string `json:"referenceType"`
// locator is a unique string to access the package-specific
// info, metadata or content within the target location
Locator string `json:"referenceLocator"`
// 7.22: Package External Reference Comment
// Cardinality: conditional (optional, one) for each External Reference
ExternalRefComment string `json:"comment,omitempty"`
}
var _ json.Unmarshaler = (*PackageExternalReference)(nil)
func (r *PackageExternalReference) UnmarshalJSON(b []byte) error {
type ref PackageExternalReference
var rr ref
if err := json.Unmarshal(b, &rr); err != nil {
return err
}
*r = PackageExternalReference(rr)
r.Category = strings.ReplaceAll(r.Category, "_", "-")
return nil
}
var _ json.Marshaler = (*PackageExternalReference)(nil)
// We output as the JSON type enums since in v2.2.0 the JSON schema
// spec only had enums with _ (e.g. PACKAGE_MANAGER)
func (r *PackageExternalReference) MarshalJSON() ([]byte, error) {
type ref PackageExternalReference
var rr ref
rr = ref(*r)
rr.Category = strings.ReplaceAll(rr.Category, "-", "_")
return json.Marshal(&rr)
}
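An editorial sketch (not part of the diff) of the round trip implemented by the two JSON hooks above: the category is normalized to the hyphenated in-memory form on decode and written back in the underscored enum form used by the 2.2.0 JSON schema on encode. The purl locator is a placeholder value.
package main

import (
	"encoding/json"
	"fmt"

	v2_2 "github.com/spdx/tools-golang/spdx/v2/v2_2"
)

func main() {
	in := []byte(`{"referenceCategory":"PACKAGE_MANAGER","referenceType":"purl","referenceLocator":"pkg:apk/alpine/busybox@1.36.1"}`)

	var ref v2_2.PackageExternalReference
	if err := json.Unmarshal(in, &ref); err != nil {
		panic(err)
	}
	fmt.Println(ref.Category) // in-memory form: PACKAGE-MANAGER

	out, err := json.Marshal(&ref)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // on the wire: PACKAGE_MANAGER again
}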

View File

@ -0,0 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Relationship is a Relationship section of an SPDX Document for
// version 2.2 of the spec.
type Relationship struct {
// 11.1: Relationship
// Cardinality: optional, one or more; one per Relationship
// one mandatory for SPDX Document with multiple packages
// RefA and RefB are first and second item
// Relationship is type from 11.1.1
RefA common.DocElementID `json:"spdxElementId"`
RefB common.DocElementID `json:"relatedSpdxElement"`
Relationship string `json:"relationshipType"`
// 11.2: Relationship Comment
// Cardinality: optional, one
RelationshipComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
// Review is a Review section of an SPDX Document for version 2.2 of the spec.
// DEPRECATED in version 2.0 of spec; retained here for compatibility.
type Review struct {
// DEPRECATED in version 2.0 of spec
// 13.1: Reviewer
// Cardinality: optional, one
Reviewer string
// including AnnotatorType: one of "Person", "Organization" or "Tool"
ReviewerType string
// DEPRECATED in version 2.0 of spec
// 13.2: Review Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is a Reviewer
ReviewDate string
// DEPRECATED in version 2.0 of spec
// 13.3: Review Comment
// Cardinality: optional, one
ReviewComment string
}

View File

@ -0,0 +1,50 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_2
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Snippet is a Snippet section of an SPDX Document for version 2.2 of the spec.
type Snippet struct {
// 9.1: Snippet SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
SnippetSPDXIdentifier common.ElementID `json:"SPDXID"`
// 9.2: Snippet from File SPDX Identifier
// Cardinality: mandatory, one
SnippetFromFileSPDXIdentifier common.ElementID `json:"snippetFromFile"`
// Ranges denotes the start/end byte offsets or line numbers that the snippet is relevant to
Ranges []common.SnippetRange `json:"ranges"`
// 9.5: Snippet Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
SnippetLicenseConcluded string `json:"licenseConcluded"`
// 9.6: License Information in Snippet: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one or many
LicenseInfoInSnippet []string `json:"licenseInfoInSnippets,omitempty"`
// 9.7: Snippet Comments on License
// Cardinality: optional, one
SnippetLicenseComments string `json:"licenseComments,omitempty"`
// 9.8: Snippet Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
SnippetCopyrightText string `json:"copyrightText"`
// 9.9: Snippet Comment
// Cardinality: optional, one
SnippetComment string `json:"comment,omitempty"`
// 9.10: Snippet Name
// Cardinality: optional, one
SnippetName string `json:"name,omitempty"`
// 9.11: Snippet Attribution Text
// Cardinality: optional, one or many
SnippetAttributionTexts []string `json:"-"`
}

View File

@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Annotation is an Annotation section of an SPDX Document
type Annotation struct {
// 12.1: Annotator
// Cardinality: conditional (mandatory, one) if there is an Annotation
Annotator common.Annotator `json:"annotator"`
// 12.2: Annotation Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationDate string `json:"annotationDate"`
// 12.3: Annotation Type: "REVIEW" or "OTHER"
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationType string `json:"annotationType"`
// 12.4: SPDX Identifier Reference
// Cardinality: conditional (mandatory, one) if there is an Annotation
// This field is not used in hierarchical data formats where the referenced element is clear, such as JSON or YAML.
AnnotationSPDXIdentifier common.DocElementID `json:"-" yaml:"-"`
// 12.5: Annotation Comment
// Cardinality: conditional (mandatory, one) if there is an Annotation
AnnotationComment string `json:"comment"`
}

View File

@ -0,0 +1,27 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// CreationInfo is a Document Creation Information section of an SPDX Document
type CreationInfo struct {
// 6.7: License List Version
// Cardinality: optional, one
LicenseListVersion string `json:"licenseListVersion,omitempty"`
// 6.8: Creators: may have multiple keys for Person, Organization
// and/or Tool
// Cardinality: mandatory, one or many
Creators []common.Creator `json:"creators"`
// 6.9: Created: data format YYYY-MM-DDThh:mm:ssZ
// Cardinality: mandatory, one
Created string `json:"created"`
// 6.10: Creator Comment
// Cardinality: optional, one
CreatorComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,166 @@
// Package v2_3 contains the struct definition for an SPDX Document
// and its constituent parts.
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"encoding/json"
"fmt"
converter "github.com/anchore/go-struct-converter"
"github.com/spdx/tools-golang/spdx/v2/common"
)
const Version = "SPDX-2.3"
const DataLicense = "CC0-1.0"
// ExternalDocumentRef is a reference to an external SPDX document as defined in section 6.6
type ExternalDocumentRef struct {
// DocumentRefID is the ID string defined in the start of the
// reference. It should _not_ contain the "DocumentRef-" part
// of the mandatory ID string.
DocumentRefID string `json:"externalDocumentId"`
// URI is the URI defined for the external document
URI string `json:"spdxDocument"`
// Checksum is the actual hash data
Checksum common.Checksum `json:"checksum"`
}
// Document is an SPDX Document:
// See https://spdx.github.io/spdx-spec/v2.3/document-creation-information
type Document struct {
// 6.1: SPDX Version; should be in the format "SPDX-<version>"
// Cardinality: mandatory, one
SPDXVersion string `json:"spdxVersion"`
// 6.2: Data License; should be "CC0-1.0"
// Cardinality: mandatory, one
DataLicense string `json:"dataLicense"`
// 6.3: SPDX Identifier; should be "DOCUMENT" to represent
// mandatory identifier of SPDXRef-DOCUMENT
// Cardinality: mandatory, one
SPDXIdentifier common.ElementID `json:"SPDXID"`
// 6.4: Document Name
// Cardinality: mandatory, one
DocumentName string `json:"name"`
// 6.5: Document Namespace
// Cardinality: mandatory, one
DocumentNamespace string `json:"documentNamespace"`
// 6.6: External Document References
// Cardinality: optional, one or many
ExternalDocumentReferences []ExternalDocumentRef `json:"externalDocumentRefs,omitempty"`
// 6.11: Document Comment
// Cardinality: optional, one
DocumentComment string `json:"comment,omitempty"`
CreationInfo *CreationInfo `json:"creationInfo"`
Packages []*Package `json:"packages,omitempty"`
Files []*File `json:"files,omitempty"`
OtherLicenses []*OtherLicense `json:"hasExtractedLicensingInfos,omitempty"`
Relationships []*Relationship `json:"relationships,omitempty"`
Annotations []*Annotation `json:"annotations,omitempty"`
Snippets []Snippet `json:"snippets,omitempty"`
// DEPRECATED in version 2.0 of spec
Reviews []*Review `json:"-" yaml:"-"`
}
func (d *Document) ConvertFrom(_ interface{}) error {
d.SPDXVersion = Version
return nil
}
var _ converter.ConvertFrom = (*Document)(nil)
func (d *Document) UnmarshalJSON(b []byte) error {
type doc Document
type extras struct {
DocumentDescribes []common.DocElementID `json:"documentDescribes"`
}
var d2 doc
if err := json.Unmarshal(b, &d2); err != nil {
return err
}
var e extras
if err := json.Unmarshal(b, &e); err != nil {
return err
}
*d = Document(d2)
relationshipExists := map[string]bool{}
serializeRel := func(r *Relationship) string {
refA := r.RefA
refB := r.RefB
rel := r.Relationship
// we need to serialize the opposite for CONTAINED_BY and DESCRIBED_BY
// so that it will match when we try to de-duplicate during deserialization.
switch r.Relationship {
case common.TypeRelationshipContainedBy:
rel = common.TypeRelationshipContains
refA = r.RefB
refB = r.RefA
case common.TypeRelationshipDescribeBy:
rel = common.TypeRelationshipDescribe
refA = r.RefB
refB = r.RefA
}
return fmt.Sprintf("%v-%v->%v", common.RenderDocElementID(refA), rel, common.RenderDocElementID(refB))
}
// index current list of relationships to ensure no duplication
for _, r := range d.Relationships {
relationshipExists[serializeRel(r)] = true
}
// build relationships for documentDescribes field
for _, id := range e.DocumentDescribes {
r := &Relationship{
RefA: common.DocElementID{
ElementRefID: d.SPDXIdentifier,
},
RefB: id,
Relationship: common.TypeRelationshipDescribe,
}
if !relationshipExists[serializeRel(r)] {
d.Relationships = append(d.Relationships, r)
relationshipExists[serializeRel(r)] = true
}
}
// build relationships for package hasFiles field
for _, p := range d.Packages {
for _, f := range p.hasFiles {
r := &Relationship{
RefA: common.DocElementID{
ElementRefID: p.PackageSPDXIdentifier,
},
RefB: f,
Relationship: common.TypeRelationshipContains,
}
if !relationshipExists[serializeRel(r)] {
d.Relationships = append(d.Relationships, r)
relationshipExists[serializeRel(r)] = true
}
}
p.hasFiles = nil
}
return nil
}
var _ json.Unmarshaler = (*Document)(nil)

View File

@ -0,0 +1,98 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// File is a File section of an SPDX Document
type File struct {
// 8.1: File Name
// Cardinality: mandatory, one
FileName string `json:"fileName"`
// 8.2: File SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
FileSPDXIdentifier common.ElementID `json:"SPDXID"`
// 8.3: File Types
// Cardinality: optional, multiple
FileTypes []string `json:"fileTypes,omitempty"`
// 8.4: File Checksum: may have keys for SHA1, SHA256, MD5, SHA3-256, SHA3-384, SHA3-512, BLAKE2b-256, BLAKE2b-384, BLAKE2b-512, BLAKE3, ADLER32
// Cardinality: mandatory, one SHA1, others may be optionally provided
Checksums []common.Checksum `json:"checksums"`
// 8.5: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one
LicenseConcluded string `json:"licenseConcluded,omitempty"`
// 8.6: License Information in File: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one or many
LicenseInfoInFiles []string `json:"licenseInfoInFiles,omitempty"`
// 8.7: Comments on License
// Cardinality: optional, one
LicenseComments string `json:"licenseComments,omitempty"`
// 8.8: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
FileCopyrightText string `json:"copyrightText"`
// DEPRECATED in version 2.1 of spec
// 8.9-8.11: Artifact of Project variables (defined below)
// Cardinality: optional, one or many
ArtifactOfProjects []*ArtifactOfProject `json:"artifactOfs,omitempty"`
// 8.12: File Comment
// Cardinality: optional, one
FileComment string `json:"comment,omitempty"`
// 8.13: File Notice
// Cardinality: optional, one
FileNotice string `json:"noticeText,omitempty"`
// 8.14: File Contributor
// Cardinality: optional, one or many
FileContributors []string `json:"fileContributors,omitempty"`
// 8.15: File Attribution Text
// Cardinality: optional, one or many
FileAttributionTexts []string `json:"attributionTexts,omitempty"`
// DEPRECATED in version 2.0 of spec
// 8.16: File Dependencies
// Cardinality: optional, one or many
FileDependencies []string `json:"fileDependencies,omitempty"`
// Snippets contained in this File
// Note that Snippets could be defined in a different Document! However,
// the only ones that _THIS_ document can contain are the ones that are
// defined here -- so this should just be an ElementID.
Snippets map[common.ElementID]*Snippet `json:"-" yaml:"-"`
Annotations []Annotation `json:"annotations,omitempty"`
}
// ArtifactOfProject is a DEPRECATED collection of data regarding
// a Package, as defined in sections 8.9-8.11.
// NOTE: the JSON schema does not define the structure of this object:
// https://github.com/spdx/spdx-spec/blob/development/v2.3.1/schemas/spdx-schema.json#L480
type ArtifactOfProject struct {
// DEPRECATED in version 2.1 of spec
// 8.9: Artifact of Project Name
// Cardinality: conditional, required if present, one per AOP
Name string `json:"name"`
// DEPRECATED in version 2.1 of spec
// 8.10: Artifact of Project Homepage: URL or "UNKNOWN"
// Cardinality: optional, one per AOP
HomePage string `json:"homePage"`
// DEPRECATED in version 2.1 of spec
// 8.11: Artifact of Project Uniform Resource Identifier
// Cardinality: optional, one per AOP
URI string `json:"URI"`
}
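For illustration, a small editorial sketch (not part of the diff) of a v2_3 File entry carrying the mandatory SHA1 checksum plus an optional SHA256. The Algorithm and Value field names on common.Checksum are assumed from the vendored common package, and the hash values are placeholders.
package main

import (
	"fmt"

	"github.com/spdx/tools-golang/spdx/v2/common"
	v2_3 "github.com/spdx/tools-golang/spdx/v2/v2_3"
)

func main() {
	f := v2_3.File{
		FileName:           "./usr/bin/busybox",
		FileSPDXIdentifier: common.ElementID("File-busybox"),
		Checksums: []common.Checksum{
			// Field names assumed; values are placeholders, not real digests.
			{Algorithm: "SHA1", Value: "da39a3ee5e6b4b0d3255bfef95601890afd80709"},
			{Algorithm: "SHA256", Value: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
		},
		FileCopyrightText: "NOASSERTION",
	}
	fmt.Printf("%+v\n", f)
}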

View File

@ -0,0 +1,30 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
// OtherLicense is an Other License Information section of an SPDX Document
type OtherLicense struct {
// 10.1: License Identifier: "LicenseRef-[idstring]"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseIdentifier string `json:"licenseId"`
// 10.2: Extracted Text
// Cardinality: conditional (mandatory, one) if there is a
// License Identifier assigned
ExtractedText string `json:"extractedText"`
// 10.3: License Name: single line of text or "NOASSERTION"
// Cardinality: conditional (mandatory, one) if license is not
// on SPDX License List
LicenseName string `json:"name,omitempty"`
// 10.4: License Cross Reference
// Cardinality: conditional (optional, one or many) if license
// is not on SPDX License List
LicenseCrossReferences []string `json:"seeAlsos,omitempty"`
// 10.5: License Comment
// Cardinality: optional, one
LicenseComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,221 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"encoding/json"
"strings"
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Package is a Package section of an SPDX Document
type Package struct {
// NOT PART OF SPEC
// flag: does this "package" contain files that were in fact "unpackaged",
// e.g. included directly in the Document without being in a Package?
IsUnpackaged bool `json:"-" yaml:"-"`
// 7.1: Package Name
// Cardinality: mandatory, one
PackageName string `json:"name"`
// 7.2: Package SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
PackageSPDXIdentifier common.ElementID `json:"SPDXID"`
// 7.3: Package Version
// Cardinality: optional, one
PackageVersion string `json:"versionInfo,omitempty"`
// 7.4: Package File Name
// Cardinality: optional, one
PackageFileName string `json:"packageFileName,omitempty"`
// 7.5: Package Supplier: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageSupplier *common.Supplier `json:"supplier,omitempty"`
// 7.6: Package Originator: may have single result for either Person or Organization,
// or NOASSERTION
// Cardinality: optional, one
PackageOriginator *common.Originator `json:"originator,omitempty"`
// 7.7: Package Download Location
// Cardinality: mandatory, one
PackageDownloadLocation string `json:"downloadLocation"`
// 7.8: FilesAnalyzed
// Cardinality: optional, one; default value is "true" if omitted
FilesAnalyzed bool `json:"filesAnalyzed"`
// NOT PART OF SPEC: did FilesAnalyzed tag appear?
IsFilesAnalyzedTagPresent bool `json:"-" yaml:"-"`
// 7.9: Package Verification Code
// Cardinality: if FilesAnalyzed == true must be present, if FilesAnalyzed == false must be omitted
PackageVerificationCode *common.PackageVerificationCode `json:"packageVerificationCode,omitempty"`
// 7.10: Package Checksum: may have keys for SHA1, SHA256, SHA512, MD5, SHA3-256, SHA3-384, SHA3-512, BLAKE2b-256, BLAKE2b-384, BLAKE2b-512, BLAKE3, ADLER32
// Cardinality: optional, one or many
PackageChecksums []common.Checksum `json:"checksums,omitempty"`
// 7.11: Package Home Page
// Cardinality: optional, one
PackageHomePage string `json:"homepage,omitempty"`
// 7.12: Source Information
// Cardinality: optional, one
PackageSourceInfo string `json:"sourceInfo,omitempty"`
// 7.13: Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one
PackageLicenseConcluded string `json:"licenseConcluded,omitempty"`
// 7.14: All Licenses Info from Files: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one or many if filesAnalyzed is true / omitted;
// zero (must be omitted) if filesAnalyzed is false
PackageLicenseInfoFromFiles []string `json:"licenseInfoFromFiles,omitempty"`
// 7.15: Declared License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one
PackageLicenseDeclared string `json:"licenseDeclared,omitempty"`
// 7.16: Comments on License
// Cardinality: optional, one
PackageLicenseComments string `json:"licenseComments,omitempty"`
// 7.17: Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: optional, zero or one
PackageCopyrightText string `json:"copyrightText,omitempty"`
// 7.18: Package Summary Description
// Cardinality: optional, one
PackageSummary string `json:"summary,omitempty"`
// 7.19: Package Detailed Description
// Cardinality: optional, one
PackageDescription string `json:"description,omitempty"`
// 7.20: Package Comment
// Cardinality: optional, one
PackageComment string `json:"comment,omitempty"`
// 7.21: Package External Reference
// Cardinality: optional, one or many
PackageExternalReferences []*PackageExternalReference `json:"externalRefs,omitempty"`
// 7.22: Package External Reference Comment
// Cardinality: conditional (optional, one) for each External Reference
// contained within PackageExternalReference struct, if present
// 7.23: Package Attribution Text
// Cardinality: optional, one or many
PackageAttributionTexts []string `json:"attributionTexts,omitempty"`
// 7.24: Primary Package Purpose
// Cardinality: optional, one or many
// Allowed values: APPLICATION, FRAMEWORK, LIBRARY, CONTAINER, OPERATING-SYSTEM, DEVICE, FIRMWARE, SOURCE, ARCHIVE, FILE, INSTALL, OTHER
PrimaryPackagePurpose string `json:"primaryPackagePurpose,omitempty"`
// 7.25: Release Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: optional, one
ReleaseDate string `json:"releaseDate,omitempty"`
// 7.26: Build Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: optional, one
BuiltDate string `json:"builtDate,omitempty"`
// 7.27: Valid Until Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: optional, one
ValidUntilDate string `json:"validUntilDate,omitempty"`
// Files contained in this Package
Files []*File `json:"files,omitempty"`
Annotations []Annotation `json:"annotations,omitempty"`
// this field is only used when decoding JSON to translate the hasFiles
// property to relationships
hasFiles []common.DocElementID
}
func (p *Package) UnmarshalJSON(b []byte) error {
type pkg Package
type extras struct {
HasFiles []common.DocElementID `json:"hasFiles"`
FilesAnalyzed *bool `json:"filesAnalyzed"`
}
var p2 pkg
if err := json.Unmarshal(b, &p2); err != nil {
return err
}
var e extras
if err := json.Unmarshal(b, &e); err != nil {
return err
}
*p = Package(p2)
p.hasFiles = e.HasFiles
// FilesAnalyzed defaults to true if omitted
if e.FilesAnalyzed == nil {
p.FilesAnalyzed = true
} else {
p.IsFilesAnalyzedTagPresent = true
}
return nil
}
var _ json.Unmarshaler = (*Package)(nil)
// PackageExternalReference is an External Reference to additional info
// about a Package, as defined in section 7.21
type PackageExternalReference struct {
// category is "SECURITY", "PACKAGE-MANAGER" or "OTHER"
Category string `json:"referenceCategory"`
// type is an [idstring] as defined in Appendix VI;
// called RefType here due to "type" being a Golang keyword
RefType string `json:"referenceType"`
// locator is a unique string to access the package-specific
// info, metadata or content within the target location
Locator string `json:"referenceLocator"`
// 7.22: Package External Reference Comment
// Cardinality: conditional (optional, one) for each External Reference
ExternalRefComment string `json:"comment,omitempty"`
}
var _ json.Unmarshaler = (*PackageExternalReference)(nil)
func (r *PackageExternalReference) UnmarshalJSON(b []byte) error {
type ref PackageExternalReference
var rr ref
if err := json.Unmarshal(b, &rr); err != nil {
return err
}
rr.Category = strings.ReplaceAll(rr.Category, "_", "-")
*r = PackageExternalReference(rr)
return nil
}
var _ json.Marshaler = (*PackageExternalReference)(nil)
func (r *PackageExternalReference) MarshalJSON() ([]byte, error) {
type ref PackageExternalReference
var rr ref
rr = ref(*r)
rr.Category = strings.ReplaceAll(rr.Category, "_", "-")
return json.Marshal(&rr)
}
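A brief editorial sketch (not part of the diff) of the FilesAnalyzed handling in Package.UnmarshalJSON above: when the filesAnalyzed key is absent, the field defaults to true, and IsFilesAnalyzedTagPresent records whether the tag appeared at all. The package identifiers below are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	v2_3 "github.com/spdx/tools-golang/spdx/v2/v2_3"
)

func main() {
	withTag := []byte(`{"name":"busybox","SPDXID":"SPDXRef-Package-busybox","downloadLocation":"NOASSERTION","filesAnalyzed":false}`)
	withoutTag := []byte(`{"name":"busybox","SPDXID":"SPDXRef-Package-busybox","downloadLocation":"NOASSERTION"}`)

	var a, b v2_3.Package
	if err := json.Unmarshal(withTag, &a); err != nil {
		panic(err)
	}
	if err := json.Unmarshal(withoutTag, &b); err != nil {
		panic(err)
	}
	fmt.Println(a.FilesAnalyzed, a.IsFilesAnalyzedTagPresent) // false true: explicit tag
	fmt.Println(b.FilesAnalyzed, b.IsFilesAnalyzedTagPresent) // true false: spec default applied
}
Keeping the tag-presence flag separate lets encoders distinguish an explicit "filesAnalyzed": true from the implied default when round-tripping a document.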

View File

@ -0,0 +1,24 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Relationship is a Relationship section of an SPDX Document
type Relationship struct {
// 11.1: Relationship
// Cardinality: optional, one or more; one per Relationship
// one mandatory for SPDX Document with multiple packages
// RefA and RefB are first and second item
// Relationship is type from 11.1.1
RefA common.DocElementID `json:"spdxElementId"`
RefB common.DocElementID `json:"relatedSpdxElement"`
Relationship string `json:"relationshipType"`
// 11.2: Relationship Comment
// Cardinality: optional, one
RelationshipComment string `json:"comment,omitempty"`
}

View File

@ -0,0 +1,25 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
// Review is a Review section of an SPDX Document.
// DEPRECATED in version 2.0 of spec; retained here for compatibility.
type Review struct {
// DEPRECATED in version 2.0 of spec
// 13.1: Reviewer
// Cardinality: optional, one
Reviewer string
// including AnnotatorType: one of "Person", "Organization" or "Tool"
ReviewerType string
// DEPRECATED in version 2.0 of spec
// 13.2: Review Date: YYYY-MM-DDThh:mm:ssZ
// Cardinality: conditional (mandatory, one) if there is a Reviewer
ReviewDate string
// DEPRECATED in version 2.0 of spec
// 13.3: Review Comment
// Cardinality: optional, one
ReviewComment string
}

View File

@ -0,0 +1,50 @@
// SPDX-License-Identifier: Apache-2.0 OR GPL-2.0-or-later
package v2_3
import (
"github.com/spdx/tools-golang/spdx/v2/common"
)
// Snippet is a Snippet section of an SPDX Document
type Snippet struct {
// 9.1: Snippet SPDX Identifier: "SPDXRef-[idstring]"
// Cardinality: mandatory, one
SnippetSPDXIdentifier common.ElementID `json:"SPDXID"`
// 9.2: Snippet from File SPDX Identifier
// Cardinality: mandatory, one
SnippetFromFileSPDXIdentifier common.ElementID `json:"snippetFromFile"`
// Ranges denotes the start/end byte offsets or line numbers that the snippet is relevant to
Ranges []common.SnippetRange `json:"ranges"`
// 9.5: Snippet Concluded License: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one
SnippetLicenseConcluded string `json:"licenseConcluded,omitempty"`
// 9.6: License Information in Snippet: SPDX License Expression, "NONE" or "NOASSERTION"
// Cardinality: optional, one or many
LicenseInfoInSnippet []string `json:"licenseInfoInSnippets,omitempty"`
// 9.7: Snippet Comments on License
// Cardinality: optional, one
SnippetLicenseComments string `json:"licenseComments,omitempty"`
// 9.8: Snippet Copyright Text: copyright notice(s) text, "NONE" or "NOASSERTION"
// Cardinality: mandatory, one
SnippetCopyrightText string `json:"copyrightText"`
// 9.9: Snippet Comment
// Cardinality: optional, one
SnippetComment string `json:"comment,omitempty"`
// 9.10: Snippet Name
// Cardinality: optional, one
SnippetName string `json:"name,omitempty"`
// 9.11: Snippet Attribution Text
// Cardinality: optional, one or many
SnippetAttributionTexts []string `json:"-" yaml:"-"`
}

View File

@ -352,9 +352,9 @@ func compare(obj1, obj2 interface{}, kind reflect.Kind) (CompareType, bool) {
// Greater asserts that the first element is greater than the second
//
// assert.Greater(t, 2, 1)
// assert.Greater(t, float64(2), float64(1))
// assert.Greater(t, "b", "a")
func Greater(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -364,10 +364,10 @@ func Greater(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface
// GreaterOrEqual asserts that the first element is greater than or equal to the second
//
// assert.GreaterOrEqual(t, 2, 1)
// assert.GreaterOrEqual(t, 2, 2)
// assert.GreaterOrEqual(t, "b", "a")
// assert.GreaterOrEqual(t, "b", "b")
func GreaterOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -377,9 +377,9 @@ func GreaterOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...in
// Less asserts that the first element is less than the second
//
// assert.Less(t, 1, 2)
// assert.Less(t, float64(1), float64(2))
// assert.Less(t, "a", "b")
func Less(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -389,10 +389,10 @@ func Less(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{})
// LessOrEqual asserts that the first element is less than or equal to the second
//
// assert.LessOrEqual(t, 1, 2)
// assert.LessOrEqual(t, 2, 2)
// assert.LessOrEqual(t, "a", "b")
// assert.LessOrEqual(t, "b", "b")
func LessOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -402,8 +402,8 @@ func LessOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...inter
// Positive asserts that the specified element is positive
//
// assert.Positive(t, 1)
// assert.Positive(t, 1.23)
func Positive(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -414,8 +414,8 @@ func Positive(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {
// Negative asserts that the specified element is negative
//
// assert.Negative(t, -1)
// assert.Negative(t, -1.23)
func Negative(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()

View File

@ -22,9 +22,9 @@ func Conditionf(t TestingT, comp Comparison, msg string, args ...interface{}) bo
// Containsf asserts that the specified string, list(array, slice...) or map contains the
// specified substring or element.
//
// assert.Containsf(t, "Hello World", "World", "error message %s", "formatted")
// assert.Containsf(t, ["Hello", "World"], "World", "error message %s", "formatted")
// assert.Containsf(t, {"Hello": "World"}, "Hello", "error message %s", "formatted")
func Containsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -56,7 +56,7 @@ func ElementsMatchf(t TestingT, listA interface{}, listB interface{}, msg string
// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// assert.Emptyf(t, obj, "error message %s", "formatted")
func Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -66,7 +66,7 @@ func Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) boo
// Equalf asserts that two objects are equal.
//
// assert.Equalf(t, 123, 123, "error message %s", "formatted")
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses). Function equality
@ -81,8 +81,8 @@ func Equalf(t TestingT, expected interface{}, actual interface{}, msg string, ar
// EqualErrorf asserts that a function returned an error (i.e. not `nil`)
// and that it is equal to the provided error.
//
// actualObj, err := SomeFunction()
// assert.EqualErrorf(t, err, expectedErrorString, "error message %s", "formatted")
func EqualErrorf(t TestingT, theError error, errString string, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -90,10 +90,27 @@ func EqualErrorf(t TestingT, theError error, errString string, msg string, args
return EqualError(t, theError, errString, append([]interface{}{msg}, args...)...)
}
// EqualExportedValuesf asserts that the types of two objects are equal and their public
// fields are also equal. This is useful for comparing structs that have private fields
// that could potentially differ.
//
// type S struct {
// Exported int
// notExported int
// }
// assert.EqualExportedValuesf(t, S{1, 2}, S{1, 3}, "error message %s", "formatted") => true
// assert.EqualExportedValuesf(t, S{1, 2}, S{2, 3}, "error message %s", "formatted") => false
func EqualExportedValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
return EqualExportedValues(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// EqualValuesf asserts that two objects are equal or convertable to the same types
// and equal.
//
// assert.EqualValuesf(t, uint32(123), int32(123), "error message %s", "formatted")
func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -103,10 +120,10 @@ func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg stri
// Errorf asserts that a function returned an error (i.e. not `nil`).
//
// actualObj, err := SomeFunction()
// if assert.Errorf(t, err, "error message %s", "formatted") {
// assert.Equal(t, expectedErrorf, err)
// }
func Errorf(t TestingT, err error, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -126,8 +143,8 @@ func ErrorAsf(t TestingT, err error, target interface{}, msg string, args ...int
// ErrorContainsf asserts that a function returned an error (i.e. not `nil`)
// and that the error contains the specified substring.
//
// actualObj, err := SomeFunction()
// assert.ErrorContainsf(t, err, expectedErrorSubString, "error message %s", "formatted")
func ErrorContainsf(t TestingT, theError error, contains string, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -147,7 +164,7 @@ func ErrorIsf(t TestingT, err error, target error, msg string, args ...interface
// Eventuallyf asserts that given condition will be met in waitFor time,
// periodically checking target function each tick.
//
// assert.Eventuallyf(t, func() bool { return true; }, time.Second, 10*time.Millisecond, "error message %s", "formatted")
func Eventuallyf(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -155,9 +172,34 @@ func Eventuallyf(t TestingT, condition func() bool, waitFor time.Duration, tick
return Eventually(t, condition, waitFor, tick, append([]interface{}{msg}, args...)...)
}
// EventuallyWithTf asserts that given condition will be met in waitFor time,
// periodically checking target function each tick. In contrast to Eventually,
// it supplies a CollectT to the condition function, so that the condition
// function can use the CollectT to call other assertions.
// The condition is considered "met" if no errors are raised in a tick.
// The supplied CollectT collects all errors from one tick (if there are any).
// If the condition is not met before waitFor, the collected errors of
// the last tick are copied to t.
//
// externalValue := false
// go func() {
// time.Sleep(8*time.Second)
// externalValue = true
// }()
// assert.EventuallyWithTf(t, func(c *assert.CollectT, "error message %s", "formatted") {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
func EventuallyWithTf(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
return EventuallyWithT(t, condition, waitFor, tick, append([]interface{}{msg}, args...)...)
}
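// ---------------------------------------------------------------------------
// Editor's illustrative sketch (not part of the vendored testify diff): a
// hypothetical test showing how the EventuallyWithTf helper added above is
// typically consumed. The package, test name, and atomic flag are assumptions
// made purely for this example.
// ---------------------------------------------------------------------------
package example_test

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestEventuallyWithTfSketch(t *testing.T) {
	var ready atomic.Bool
	go func() {
		time.Sleep(50 * time.Millisecond)
		ready.Store(true)
	}()
	assert.EventuallyWithTf(t, func(c *assert.CollectT) {
		// Failures recorded on c only fail the current tick; if waitFor
		// expires first, the failures of the last tick are copied to t.
		assert.True(c, ready.Load(), "expected ready to flip to true")
	}, time.Second, 10*time.Millisecond, "flag never became %s", "true")
}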
// Exactlyf asserts that two objects are equal in value and type.
//
// assert.Exactlyf(t, int32(123), int64(123), "error message %s", "formatted")
func Exactlyf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -183,7 +225,7 @@ func FailNowf(t TestingT, failureMessage string, msg string, args ...interface{}
// Falsef asserts that the specified value is false.
//
// assert.Falsef(t, myBool, "error message %s", "formatted")
func Falsef(t TestingT, value bool, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -202,9 +244,9 @@ func FileExistsf(t TestingT, path string, msg string, args ...interface{}) bool
// Greaterf asserts that the first element is greater than the second
//
// assert.Greaterf(t, 2, 1, "error message %s", "formatted")
// assert.Greaterf(t, float64(2), float64(1), "error message %s", "formatted")
// assert.Greaterf(t, "b", "a", "error message %s", "formatted")
func Greaterf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -214,10 +256,10 @@ func Greaterf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...in
// GreaterOrEqualf asserts that the first element is greater than or equal to the second
//
// assert.GreaterOrEqualf(t, 2, 1, "error message %s", "formatted")
// assert.GreaterOrEqualf(t, 2, 2, "error message %s", "formatted")
// assert.GreaterOrEqualf(t, "b", "a", "error message %s", "formatted")
// assert.GreaterOrEqualf(t, "b", "b", "error message %s", "formatted")
func GreaterOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -228,7 +270,7 @@ func GreaterOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, arg
// HTTPBodyContainsf asserts that a specified handler returns a
// body that contains a string.
//
// assert.HTTPBodyContainsf(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {
@@ -241,7 +283,7 @@ func HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url
// HTTPBodyNotContainsf asserts that a specified handler returns a
// body that does not contain a string.
//
// assert.HTTPBodyNotContainsf(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {
@@ -253,7 +295,7 @@ func HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, u
// HTTPErrorf asserts that a specified handler returns an error status code.
//
// assert.HTTPErrorf(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {
@@ -265,7 +307,7 @@ func HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string,
// HTTPRedirectf asserts that a specified handler returns a redirect status code.
//
// assert.HTTPRedirectf(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {
@@ -277,7 +319,7 @@ func HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url stri
// HTTPStatusCodef asserts that a specified handler returns a specified status code.
//
// assert.HTTPStatusCodef(t, myHandler, "GET", "/notImplemented", nil, 501, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPStatusCodef(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, statuscode int, msg string, args ...interface{}) bool {
@@ -289,7 +331,7 @@ func HTTPStatusCodef(t TestingT, handler http.HandlerFunc, method string, url st
// HTTPSuccessf asserts that a specified handler returns a success status code.
//
// assert.HTTPSuccessf(t, myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {
@@ -301,7 +343,7 @@ func HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url strin
// Implementsf asserts that an object is implemented by the specified interface.
//
// assert.Implementsf(t, (*MyInterface)(nil), new(MyObject), "error message %s", "formatted")
func Implementsf(t TestingT, interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -311,7 +353,7 @@ func Implementsf(t TestingT, interfaceObject interface{}, object interface{}, ms
// InDeltaf asserts that the two numerals are within delta of each other.
//
// assert.InDeltaf(t, math.Pi, 22/7.0, 0.01, "error message %s", "formatted")
func InDeltaf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -353,9 +395,9 @@ func InEpsilonSlicef(t TestingT, expected interface{}, actual interface{}, epsil
// IsDecreasingf asserts that the collection is decreasing
//
// assert.IsDecreasingf(t, []int{2, 1, 0}, "error message %s", "formatted")
// assert.IsDecreasingf(t, []float{2, 1}, "error message %s", "formatted")
// assert.IsDecreasingf(t, []string{"b", "a"}, "error message %s", "formatted")
func IsDecreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -365,9 +407,9 @@ func IsDecreasingf(t TestingT, object interface{}, msg string, args ...interface
// IsIncreasingf asserts that the collection is increasing
//
// assert.IsIncreasingf(t, []int{1, 2, 3}, "error message %s", "formatted")
// assert.IsIncreasingf(t, []float{1, 2}, "error message %s", "formatted")
// assert.IsIncreasingf(t, []string{"a", "b"}, "error message %s", "formatted")
func IsIncreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -377,9 +419,9 @@ func IsIncreasingf(t TestingT, object interface{}, msg string, args ...interface
// IsNonDecreasingf asserts that the collection is not decreasing
//
// assert.IsNonDecreasingf(t, []int{1, 1, 2}, "error message %s", "formatted")
// assert.IsNonDecreasingf(t, []float{1, 2}, "error message %s", "formatted")
// assert.IsNonDecreasingf(t, []string{"a", "b"}, "error message %s", "formatted")
func IsNonDecreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -389,9 +431,9 @@ func IsNonDecreasingf(t TestingT, object interface{}, msg string, args ...interf
// IsNonIncreasingf asserts that the collection is not increasing
//
// assert.IsNonIncreasingf(t, []int{2, 1, 1}, "error message %s", "formatted")
// assert.IsNonIncreasingf(t, []float{2, 1}, "error message %s", "formatted")
// assert.IsNonIncreasingf(t, []string{"b", "a"}, "error message %s", "formatted")
func IsNonIncreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -409,7 +451,7 @@ func IsTypef(t TestingT, expectedType interface{}, object interface{}, msg strin
// JSONEqf asserts that two JSON strings are equivalent.
//
// assert.JSONEqf(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted")
func JSONEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -420,7 +462,7 @@ func JSONEqf(t TestingT, expected string, actual string, msg string, args ...int
// Lenf asserts that the specified object has specific length.
// Lenf also fails if the object has a type that len() not accept.
//
// assert.Lenf(t, mySlice, 3, "error message %s", "formatted")
func Lenf(t TestingT, object interface{}, length int, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -430,9 +472,9 @@ func Lenf(t TestingT, object interface{}, length int, msg string, args ...interf
// Lessf asserts that the first element is less than the second
//
// assert.Lessf(t, 1, 2, "error message %s", "formatted")
// assert.Lessf(t, float64(1), float64(2), "error message %s", "formatted")
// assert.Lessf(t, "a", "b", "error message %s", "formatted")
func Lessf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -442,10 +484,10 @@ func Lessf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...inter
// LessOrEqualf asserts that the first element is less than or equal to the second
//
// assert.LessOrEqualf(t, 1, 2, "error message %s", "formatted")
// assert.LessOrEqualf(t, 2, 2, "error message %s", "formatted")
// assert.LessOrEqualf(t, "a", "b", "error message %s", "formatted")
// assert.LessOrEqualf(t, "b", "b", "error message %s", "formatted")
func LessOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -455,8 +497,8 @@ func LessOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, args .
// Negativef asserts that the specified element is negative
//
// assert.Negativef(t, -1, "error message %s", "formatted")
// assert.Negativef(t, -1.23, "error message %s", "formatted")
func Negativef(t TestingT, e interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -467,7 +509,7 @@ func Negativef(t TestingT, e interface{}, msg string, args ...interface{}) bool
// Neverf asserts that the given condition doesn't satisfy in waitFor time,
// periodically checking the target function each tick.
//
// assert.Neverf(t, func() bool { return false; }, time.Second, 10*time.Millisecond, "error message %s", "formatted")
func Neverf(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -477,7 +519,7 @@ func Neverf(t TestingT, condition func() bool, waitFor time.Duration, tick time.
// Nilf asserts that the specified object is nil.
//
// assert.Nilf(t, err, "error message %s", "formatted")
func Nilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -496,10 +538,10 @@ func NoDirExistsf(t TestingT, path string, msg string, args ...interface{}) bool
// NoErrorf asserts that a function returned no error (i.e. `nil`).
//
// actualObj, err := SomeFunction()
// if assert.NoErrorf(t, err, "error message %s", "formatted") {
// assert.Equal(t, expectedObj, actualObj)
// }
func NoErrorf(t TestingT, err error, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -519,9 +561,9 @@ func NoFileExistsf(t TestingT, path string, msg string, args ...interface{}) boo
// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the
// specified substring or element.
//
// assert.NotContainsf(t, "Hello World", "Earth", "error message %s", "formatted")
// assert.NotContainsf(t, ["Hello", "World"], "Earth", "error message %s", "formatted")
// assert.NotContainsf(t, {"Hello": "World"}, "Earth", "error message %s", "formatted")
func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -532,9 +574,9 @@ func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, a
// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// if assert.NotEmptyf(t, obj, "error message %s", "formatted") {
// assert.Equal(t, "two", obj[1])
// }
func NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -544,7 +586,7 @@ func NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{})
// NotEqualf asserts that the specified values are NOT equal.
//
// assert.NotEqualf(t, obj1, obj2, "error message %s", "formatted")
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses).
@@ -557,7 +599,7 @@ func NotEqualf(t TestingT, expected interface{}, actual interface{}, msg string,
// NotEqualValuesf asserts that two objects are not equal even when converted to the same type
//
// assert.NotEqualValuesf(t, obj1, obj2, "error message %s", "formatted")
func NotEqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -576,7 +618,7 @@ func NotErrorIsf(t TestingT, err error, target error, msg string, args ...interf
// NotNilf asserts that the specified object is not nil.
//
// assert.NotNilf(t, err, "error message %s", "formatted")
func NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -586,7 +628,7 @@ func NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bo
// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic.
//
// assert.NotPanicsf(t, func(){ RemainCalm() }, "error message %s", "formatted")
func NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -596,8 +638,8 @@ func NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bo
// NotRegexpf asserts that a specified regexp does not match a string.
//
// assert.NotRegexpf(t, regexp.MustCompile("starts"), "it's starting", "error message %s", "formatted")
// assert.NotRegexpf(t, "^start", "it's not starting", "error message %s", "formatted")
func NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -607,7 +649,7 @@ func NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ..
// NotSamef asserts that two pointers do not reference the same object.
//
// assert.NotSamef(t, ptr1, ptr2, "error message %s", "formatted")
//
// Both arguments must be pointer variables. Pointer variable sameness is
// determined based on the equality of both type and value.
@@ -621,7 +663,7 @@ func NotSamef(t TestingT, expected interface{}, actual interface{}, msg string,
// NotSubsetf asserts that the specified list(array, slice...) contains not all
// elements given in the specified subset(array, slice...).
//
// assert.NotSubsetf(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted")
func NotSubsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -639,7 +681,7 @@ func NotZerof(t TestingT, i interface{}, msg string, args ...interface{}) bool {
// Panicsf asserts that the code inside the specified PanicTestFunc panics.
//
// assert.Panicsf(t, func(){ GoCrazy() }, "error message %s", "formatted")
func Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -651,7 +693,7 @@ func Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool
// panics, and that the recovered panic value is an error that satisfies the
// EqualError comparison.
//
// assert.PanicsWithErrorf(t, "crazy error", func(){ GoCrazy() }, "error message %s", "formatted")
func PanicsWithErrorf(t TestingT, errString string, f PanicTestFunc, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -662,7 +704,7 @@ func PanicsWithErrorf(t TestingT, errString string, f PanicTestFunc, msg string,
// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that
// the recovered panic value equals the expected panic value.
//
// assert.PanicsWithValuef(t, "crazy error", func(){ GoCrazy() }, "error message %s", "formatted")
func PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -672,8 +714,8 @@ func PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg str
// Positivef asserts that the specified element is positive
//
// assert.Positivef(t, 1, "error message %s", "formatted")
// assert.Positivef(t, 1.23, "error message %s", "formatted")
func Positivef(t TestingT, e interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -683,8 +725,8 @@ func Positivef(t TestingT, e interface{}, msg string, args ...interface{}) bool
// Regexpf asserts that a specified regexp matches a string.
//
// assert.Regexpf(t, regexp.MustCompile("start"), "it's starting", "error message %s", "formatted")
// assert.Regexpf(t, "start...$", "it's not starting", "error message %s", "formatted")
func Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -694,7 +736,7 @@ func Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...in
// Samef asserts that two pointers reference the same object.
//
// assert.Samef(t, ptr1, ptr2, "error message %s", "formatted")
//
// Both arguments must be pointer variables. Pointer variable sameness is
// determined based on the equality of both type and value.
@@ -708,7 +750,7 @@ func Samef(t TestingT, expected interface{}, actual interface{}, msg string, arg
// Subsetf asserts that the specified list(array, slice...) contains all
// elements given in the specified subset(array, slice...).
//
// assert.Subsetf(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted")
func Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -718,7 +760,7 @@ func Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args
// Truef asserts that the specified value is true.
//
// assert.Truef(t, myBool, "error message %s", "formatted")
func Truef(t TestingT, value bool, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -728,7 +770,7 @@ func Truef(t TestingT, value bool, msg string, args ...interface{}) bool {
// WithinDurationf asserts that the two times are within duration delta of each other.
//
// assert.WithinDurationf(t, time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted")
func WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -738,7 +780,7 @@ func WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta tim
// WithinRangef asserts that a time is within a time range (inclusive).
//
// assert.WithinRangef(t, time.Now(), time.Now().Add(-time.Second), time.Now().Add(time.Second), "error message %s", "formatted")
func WithinRangef(t TestingT, actual time.Time, start time.Time, end time.Time, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()

File diff suppressed because it is too large


@@ -46,36 +46,36 @@ func isOrdered(t TestingT, object interface{}, allowedComparesResults []CompareT
// IsIncreasing asserts that the collection is increasing
//
// assert.IsIncreasing(t, []int{1, 2, 3})
// assert.IsIncreasing(t, []float{1, 2})
// assert.IsIncreasing(t, []string{"a", "b"})
func IsIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareLess}, "\"%v\" is not less than \"%v\"", msgAndArgs...)
}
// IsNonIncreasing asserts that the collection is not increasing
//
// assert.IsNonIncreasing(t, []int{2, 1, 1})
// assert.IsNonIncreasing(t, []float{2, 1})
// assert.IsNonIncreasing(t, []string{"b", "a"})
func IsNonIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareEqual, compareGreater}, "\"%v\" is not greater than or equal to \"%v\"", msgAndArgs...)
}
// IsDecreasing asserts that the collection is decreasing
//
// assert.IsDecreasing(t, []int{2, 1, 0})
// assert.IsDecreasing(t, []float{2, 1})
// assert.IsDecreasing(t, []string{"b", "a"})
func IsDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareGreater}, "\"%v\" is not greater than \"%v\"", msgAndArgs...)
}
// IsNonDecreasing asserts that the collection is not decreasing
//
// assert.IsNonDecreasing(t, []int{1, 1, 2})
// assert.IsNonDecreasing(t, []float{1, 2})
// assert.IsNonDecreasing(t, []string{"a", "b"})
func IsNonDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareLess, compareEqual}, "\"%v\" is not less than or equal to \"%v\"", msgAndArgs...)
}


@@ -8,7 +8,6 @@ import (
"fmt"
"math"
"os"
"path/filepath"
"reflect"
"regexp"
"runtime"
@@ -76,6 +75,77 @@ func ObjectsAreEqual(expected, actual interface{}) bool {
return bytes.Equal(exp, act)
}
// copyExportedFields iterates downward through nested data structures and creates a copy
// that only contains the exported struct fields.
func copyExportedFields(expected interface{}) interface{} {
if isNil(expected) {
return expected
}
expectedType := reflect.TypeOf(expected)
expectedKind := expectedType.Kind()
expectedValue := reflect.ValueOf(expected)
switch expectedKind {
case reflect.Struct:
result := reflect.New(expectedType).Elem()
for i := 0; i < expectedType.NumField(); i++ {
field := expectedType.Field(i)
isExported := field.IsExported()
if isExported {
fieldValue := expectedValue.Field(i)
if isNil(fieldValue) || isNil(fieldValue.Interface()) {
continue
}
newValue := copyExportedFields(fieldValue.Interface())
result.Field(i).Set(reflect.ValueOf(newValue))
}
}
return result.Interface()
case reflect.Ptr:
result := reflect.New(expectedType.Elem())
unexportedRemoved := copyExportedFields(expectedValue.Elem().Interface())
result.Elem().Set(reflect.ValueOf(unexportedRemoved))
return result.Interface()
case reflect.Array, reflect.Slice:
result := reflect.MakeSlice(expectedType, expectedValue.Len(), expectedValue.Len())
for i := 0; i < expectedValue.Len(); i++ {
index := expectedValue.Index(i)
if isNil(index) {
continue
}
unexportedRemoved := copyExportedFields(index.Interface())
result.Index(i).Set(reflect.ValueOf(unexportedRemoved))
}
return result.Interface()
case reflect.Map:
result := reflect.MakeMap(expectedType)
for _, k := range expectedValue.MapKeys() {
index := expectedValue.MapIndex(k)
unexportedRemoved := copyExportedFields(index.Interface())
result.SetMapIndex(k, reflect.ValueOf(unexportedRemoved))
}
return result.Interface()
default:
return expected
}
}
// ObjectsExportedFieldsAreEqual determines if the exported (public) fields of two objects are
// considered equal. This comparison of only exported fields is applied recursively to nested data
// structures.
//
// This function does no assertion of any kind.
func ObjectsExportedFieldsAreEqual(expected, actual interface{}) bool {
expectedCleaned := copyExportedFields(expected)
actualCleaned := copyExportedFields(actual)
return ObjectsAreEqualValues(expectedCleaned, actualCleaned)
}
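// ---------------------------------------------------------------------------
// Editor's illustrative sketch (not part of the vendored testify diff): a
// small, hypothetical program showing what the new ObjectsExportedFieldsAreEqual
// helper considers equal: unexported fields are stripped before comparison.
// The user type and its values are assumptions made purely for this example.
// ---------------------------------------------------------------------------
package main

import (
	"fmt"

	"github.com/stretchr/testify/assert"
)

type user struct {
	Name string // exported: participates in the comparison
	id   int    // unexported: ignored by the comparison
}

func main() {
	a := user{Name: "linuxkit", id: 1}
	b := user{Name: "linuxkit", id: 2}
	// Prints "true": only the exported Name field is compared.
	fmt.Println(assert.ObjectsExportedFieldsAreEqual(a, b))
}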
// ObjectsAreEqualValues gets whether two objects are equal, or if their
// values are equal.
func ObjectsAreEqualValues(expected, actual interface{}) bool {
@@ -141,12 +211,11 @@ func CallerInfo() []string {
}
parts := strings.Split(file, "/")
file = parts[len(parts)-1]
if len(parts) > 1 {
filename := parts[len(parts)-1]
dir := parts[len(parts)-2]
if (dir != "assert" && dir != "mock" && dir != "require") || file == "mock_test.go" {
if (dir != "assert" && dir != "mock" && dir != "require") || filename == "mock_test.go" {
path, _ := filepath.Abs(file)
callers = append(callers, fmt.Sprintf("%s:%d", file, line))
callers = append(callers, fmt.Sprintf("%s:%d", path, line))
}
}
@@ -273,7 +342,7 @@ type labeledContent struct {
// labeledOutput returns a string consisting of the provided labeledContent. Each labeled output is appended in the following manner:
//
// \t{{label}}:{{align_spaces}}\t{{content}}\n
//
// The initial carriage return is required to undo/erase any padding added by testing.T.Errorf. The "\t{{label}}:" is for the label.
// If a label is shorter than the longest label provided, padding spaces are added to make all the labels match in length. Once this
@@ -296,7 +365,7 @@ func labeledOutput(content ...labeledContent) string {
// Implements asserts that an object is implemented by the specified interface.
//
// assert.Implements(t, (*MyInterface)(nil), new(MyObject))
func Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -328,7 +397,7 @@ func IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs
// Equal asserts that two objects are equal.
//
// assert.Equal(t, 123, 123)
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses). Function equality
@@ -369,7 +438,7 @@ func validateEqualArgs(expected, actual interface{}) error {
// Same asserts that two pointers reference the same object.
//
// assert.Same(t, ptr1, ptr2)
//
// Both arguments must be pointer variables. Pointer variable sameness is
// determined based on the equality of both type and value.
@@ -389,7 +458,7 @@ func Same(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) b
// NotSame asserts that two pointers do not reference the same object.
//
// assert.NotSame(t, ptr1, ptr2)
//
// Both arguments must be pointer variables. Pointer variable sameness is
// determined based on the equality of both type and value.
@@ -457,7 +526,7 @@ func truncatingFormat(data interface{}) string {
// EqualValues asserts that two objects are equal or convertable to the same types
// and equal.
//
// assert.EqualValues(t, uint32(123), int32(123))
func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -475,9 +544,53 @@ func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interfa
}
// EqualExportedValues asserts that the types of two objects are equal and their public
// fields are also equal. This is useful for comparing structs that have private fields
// that could potentially differ.
//
// type S struct {
// Exported int
// notExported int
// }
// assert.EqualExportedValues(t, S{1, 2}, S{1, 3}) => true
// assert.EqualExportedValues(t, S{1, 2}, S{2, 3}) => false
func EqualExportedValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
aType := reflect.TypeOf(expected)
bType := reflect.TypeOf(actual)
if aType != bType {
return Fail(t, fmt.Sprintf("Types expected to match exactly\n\t%v != %v", aType, bType), msgAndArgs...)
}
if aType.Kind() != reflect.Struct {
return Fail(t, fmt.Sprintf("Types expected to both be struct \n\t%v != %v", aType.Kind(), reflect.Struct), msgAndArgs...)
}
if bType.Kind() != reflect.Struct {
return Fail(t, fmt.Sprintf("Types expected to both be struct \n\t%v != %v", bType.Kind(), reflect.Struct), msgAndArgs...)
}
expected = copyExportedFields(expected)
actual = copyExportedFields(actual)
if !ObjectsAreEqualValues(expected, actual) {
diff := diff(expected, actual)
expected, actual = formatUnequalValues(expected, actual)
return Fail(t, fmt.Sprintf("Not equal (comparing only exported fields): \n"+
"expected: %s\n"+
"actual : %s%s", expected, actual, diff), msgAndArgs...)
}
return true
}
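// ---------------------------------------------------------------------------
// Editor's illustrative sketch (not part of the vendored testify diff): a
// hypothetical test exercising the EqualExportedValues assertion defined
// above. The config type, its values, and the test name are assumptions made
// purely for this example.
// ---------------------------------------------------------------------------
package example_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

type config struct {
	Image   string // exported: compared
	retries int    // unexported: ignored by the assertion
}

func TestEqualExportedValuesSketch(t *testing.T) {
	want := config{Image: "docker.io/library/alpine:3.18", retries: 1}
	got := config{Image: "docker.io/library/alpine:3.18", retries: 5}
	// Passes: only the exported Image field is compared, so the differing
	// unexported retries field does not fail the assertion.
	assert.EqualExportedValues(t, want, got)
}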
// Exactly asserts that two objects are equal in value and type.
//
// assert.Exactly(t, int32(123), int64(123))
func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -496,7 +609,7 @@ func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}
// NotNil asserts that the specified object is not nil.
//
// assert.NotNil(t, err)
func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
if !isNil(object) {
return true
@@ -530,7 +643,7 @@ func isNil(object interface{}) bool {
[]reflect.Kind{
reflect.Chan, reflect.Func,
reflect.Interface, reflect.Map,
reflect.Ptr, reflect.Slice},
reflect.Ptr, reflect.Slice, reflect.UnsafePointer},
kind)
if isNilableKind && value.IsNil() {
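// ---------------------------------------------------------------------------
// Editor's illustrative sketch (not part of the vendored testify diff): the
// reflect.UnsafePointer case added above means a nil unsafe.Pointer is now
// recognised as nilable, so assert.Nil passes for it. The test name is an
// assumption made purely for this example.
// ---------------------------------------------------------------------------
package example_test

import (
	"testing"
	"unsafe"

	"github.com/stretchr/testify/assert"
)

func TestNilUnsafePointerSketch(t *testing.T) {
	var p unsafe.Pointer // zero value is nil
	assert.Nil(t, p)     // passes now that isNil treats UnsafePointer as nilable
}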
@ -542,7 +655,7 @@ func isNil(object interface{}) bool {
// Nil asserts that the specified object is nil. // Nil asserts that the specified object is nil.
// //
// assert.Nil(t, err) // assert.Nil(t, err)
func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
if isNil(object) { if isNil(object) {
return true return true
@ -585,7 +698,7 @@ func isEmpty(object interface{}) bool {
// Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// assert.Empty(t, obj)
func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
pass := isEmpty(object)
if !pass {
@ -602,9 +715,9 @@ func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
// if assert.NotEmpty(t, obj) {
// assert.Equal(t, "two", obj[1])
// }
func NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
pass := !isEmpty(object)
if !pass {
@ -633,7 +746,7 @@ func getLen(x interface{}) (ok bool, length int) {
// Len asserts that the specified object has specific length.
// Len also fails if the object has a type that len() not accept.
//
// assert.Len(t, mySlice, 3)
func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -651,7 +764,7 @@ func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{})
// True asserts that the specified value is true.
//
// assert.True(t, myBool)
func True(t TestingT, value bool, msgAndArgs ...interface{}) bool {
if !value {
if h, ok := t.(tHelper); ok {
@ -666,7 +779,7 @@ func True(t TestingT, value bool, msgAndArgs ...interface{}) bool {
// False asserts that the specified value is false.
//
// assert.False(t, myBool)
func False(t TestingT, value bool, msgAndArgs ...interface{}) bool {
if value {
if h, ok := t.(tHelper); ok {
@ -681,7 +794,7 @@ func False(t TestingT, value bool, msgAndArgs ...interface{}) bool {
// NotEqual asserts that the specified values are NOT equal.
//
// assert.NotEqual(t, obj1, obj2)
//
// Pointer variable equality is determined based on the equality of the
// referenced values (as opposed to the memory addresses).
@ -704,7 +817,7 @@ func NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{
// NotEqualValues asserts that two objects are not equal even when converted to the same type
//
// assert.NotEqualValues(t, obj1, obj2)
func NotEqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -763,9 +876,9 @@ func containsElement(list interface{}, element interface{}) (ok, found bool) {
// Contains asserts that the specified string, list(array, slice...) or map contains the
// specified substring or element.
//
// assert.Contains(t, "Hello World", "World")
// assert.Contains(t, ["Hello", "World"], "World")
// assert.Contains(t, {"Hello": "World"}, "Hello")
func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -786,9 +899,9 @@ func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bo
// NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the
// specified substring or element.
//
// assert.NotContains(t, "Hello World", "Earth")
// assert.NotContains(t, ["Hello", "World"], "Earth")
// assert.NotContains(t, {"Hello": "World"}, "Earth")
func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -796,10 +909,10 @@ func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{})
ok, found := containsElement(s, contains)
if !ok {
-return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", s), msgAndArgs...)
+return Fail(t, fmt.Sprintf("%#v could not be applied builtin len()", s), msgAndArgs...)
}
if found {
-return Fail(t, fmt.Sprintf("\"%s\" should not contain \"%s\"", s, contains), msgAndArgs...)
+return Fail(t, fmt.Sprintf("%#v should not contain %#v", s, contains), msgAndArgs...)
}
return true
@ -809,7 +922,7 @@ func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{})
// Subset asserts that the specified list(array, slice...) contains all
// elements given in the specified subset(array, slice...).
//
// assert.Subset(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]")
func Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -818,49 +931,44 @@ func Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok
return true // we consider nil to be equal to the nil set
}
-defer func() {
-if e := recover(); e != nil {
-ok = false
-}
-}()
listKind := reflect.TypeOf(list).Kind()
-subsetKind := reflect.TypeOf(subset).Kind()
if listKind != reflect.Array && listKind != reflect.Slice && listKind != reflect.Map {
return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...)
}
+subsetKind := reflect.TypeOf(subset).Kind()
if subsetKind != reflect.Array && subsetKind != reflect.Slice && listKind != reflect.Map {
return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...)
}
-subsetValue := reflect.ValueOf(subset)
if subsetKind == reflect.Map && listKind == reflect.Map {
-listValue := reflect.ValueOf(list)
-subsetKeys := subsetValue.MapKeys()
-for i := 0; i < len(subsetKeys); i++ {
-subsetKey := subsetKeys[i]
-subsetElement := subsetValue.MapIndex(subsetKey).Interface()
-listElement := listValue.MapIndex(subsetKey).Interface()
-if !ObjectsAreEqual(subsetElement, listElement) {
-return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", list, subsetElement), msgAndArgs...)
+subsetMap := reflect.ValueOf(subset)
+actualMap := reflect.ValueOf(list)
+for _, k := range subsetMap.MapKeys() {
+ev := subsetMap.MapIndex(k)
+av := actualMap.MapIndex(k)
+if !av.IsValid() {
+return Fail(t, fmt.Sprintf("%#v does not contain %#v", list, subset), msgAndArgs...)
+}
+if !ObjectsAreEqual(ev.Interface(), av.Interface()) {
+return Fail(t, fmt.Sprintf("%#v does not contain %#v", list, subset), msgAndArgs...)
}
}
return true
}
-for i := 0; i < subsetValue.Len(); i++ {
-element := subsetValue.Index(i).Interface()
+subsetList := reflect.ValueOf(subset)
+for i := 0; i < subsetList.Len(); i++ {
+element := subsetList.Index(i).Interface()
ok, found := containsElement(list, element)
if !ok {
-return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...)
+return Fail(t, fmt.Sprintf("%#v could not be applied builtin len()", list), msgAndArgs...)
}
if !found {
-return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", list, element), msgAndArgs...)
+return Fail(t, fmt.Sprintf("%#v does not contain %#v", list, element), msgAndArgs...)
}
}
@ -870,7 +978,7 @@ func Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok
// NotSubset asserts that the specified list(array, slice...) contains not all
// elements given in the specified subset(array, slice...).
//
// assert.NotSubset(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]")
func NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -879,34 +987,28 @@ func NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{})
return Fail(t, "nil is the empty set which is a subset of every set", msgAndArgs...) return Fail(t, "nil is the empty set which is a subset of every set", msgAndArgs...)
} }
defer func() {
if e := recover(); e != nil {
ok = false
}
}()
listKind := reflect.TypeOf(list).Kind() listKind := reflect.TypeOf(list).Kind()
subsetKind := reflect.TypeOf(subset).Kind()
if listKind != reflect.Array && listKind != reflect.Slice && listKind != reflect.Map { if listKind != reflect.Array && listKind != reflect.Slice && listKind != reflect.Map {
return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...) return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...)
} }
subsetKind := reflect.TypeOf(subset).Kind()
if subsetKind != reflect.Array && subsetKind != reflect.Slice && listKind != reflect.Map { if subsetKind != reflect.Array && subsetKind != reflect.Slice && listKind != reflect.Map {
return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...) return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...)
} }
subsetValue := reflect.ValueOf(subset)
if subsetKind == reflect.Map && listKind == reflect.Map { if subsetKind == reflect.Map && listKind == reflect.Map {
listValue := reflect.ValueOf(list) subsetMap := reflect.ValueOf(subset)
subsetKeys := subsetValue.MapKeys() actualMap := reflect.ValueOf(list)
for i := 0; i < len(subsetKeys); i++ { for _, k := range subsetMap.MapKeys() {
subsetKey := subsetKeys[i] ev := subsetMap.MapIndex(k)
subsetElement := subsetValue.MapIndex(subsetKey).Interface() av := actualMap.MapIndex(k)
listElement := listValue.MapIndex(subsetKey).Interface()
if !ObjectsAreEqual(subsetElement, listElement) { if !av.IsValid() {
return true
}
if !ObjectsAreEqual(ev.Interface(), av.Interface()) {
return true return true
} }
} }
@ -914,8 +1016,9 @@ func NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{})
return Fail(t, fmt.Sprintf("%q is a subset of %q", subset, list), msgAndArgs...) return Fail(t, fmt.Sprintf("%q is a subset of %q", subset, list), msgAndArgs...)
} }
for i := 0; i < subsetValue.Len(); i++ { subsetList := reflect.ValueOf(subset)
element := subsetValue.Index(i).Interface() for i := 0; i < subsetList.Len(); i++ {
element := subsetList.Index(i).Interface()
ok, found := containsElement(list, element) ok, found := containsElement(list, element)
if !ok { if !ok {
return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...) return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...)
@ -1060,7 +1163,7 @@ func didPanic(f PanicTestFunc) (didPanic bool, message interface{}, stack string
// Panics asserts that the code inside the specified PanicTestFunc panics.
//
// assert.Panics(t, func(){ GoCrazy() })
func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1076,7 +1179,7 @@ func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that
// the recovered panic value equals the expected panic value.
//
// assert.PanicsWithValue(t, "crazy error", func(){ GoCrazy() })
func PanicsWithValue(t TestingT, expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1097,7 +1200,7 @@ func PanicsWithValue(t TestingT, expected interface{}, f PanicTestFunc, msgAndAr
// panics, and that the recovered panic value is an error that satisfies the
// EqualError comparison.
//
// assert.PanicsWithError(t, "crazy error", func(){ GoCrazy() })
func PanicsWithError(t TestingT, errString string, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1117,7 +1220,7 @@ func PanicsWithError(t TestingT, errString string, f PanicTestFunc, msgAndArgs .
// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.
//
// assert.NotPanics(t, func(){ RemainCalm() })
func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1132,7 +1235,7 @@ func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {
// WithinDuration asserts that the two times are within duration delta of each other.
//
// assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second)
func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1148,7 +1251,7 @@ func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration,
// WithinRange asserts that a time is within a time range (inclusive).
//
// assert.WithinRange(t, time.Now(), time.Now().Add(-time.Second), time.Now().Add(time.Second))
func WithinRange(t TestingT, actual, start, end time.Time, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1207,7 +1310,7 @@ func toFloat(x interface{}) (float64, bool) {
// InDelta asserts that the two numerals are within delta of each other.
//
// assert.InDelta(t, math.Pi, 22/7.0, 0.01)
func InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1380,10 +1483,10 @@ func InEpsilonSlice(t TestingT, expected, actual interface{}, epsilon float64, m
// NoError asserts that a function returned no error (i.e. `nil`).
//
// actualObj, err := SomeFunction()
// if assert.NoError(t, err) {
// assert.Equal(t, expectedObj, actualObj)
// }
func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool {
if err != nil {
if h, ok := t.(tHelper); ok {
@ -1397,10 +1500,10 @@ func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool {
// Error asserts that a function returned an error (i.e. not `nil`).
//
// actualObj, err := SomeFunction()
// if assert.Error(t, err) {
// assert.Equal(t, expectedError, err)
// }
func Error(t TestingT, err error, msgAndArgs ...interface{}) bool {
if err == nil {
if h, ok := t.(tHelper); ok {
@ -1415,8 +1518,8 @@ func Error(t TestingT, err error, msgAndArgs ...interface{}) bool {
// EqualError asserts that a function returned an error (i.e. not `nil`)
// and that it is equal to the provided error.
//
// actualObj, err := SomeFunction()
// assert.EqualError(t, err, expectedErrorString)
func EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1438,8 +1541,8 @@ func EqualError(t TestingT, theError error, errString string, msgAndArgs ...inte
// ErrorContains asserts that a function returned an error (i.e. not `nil`)
// and that the error contains the specified substring.
//
// actualObj, err := SomeFunction()
// assert.ErrorContains(t, err, expectedErrorSubString)
func ErrorContains(t TestingT, theError error, contains string, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1472,8 +1575,8 @@ func matchRegexp(rx interface{}, str interface{}) bool {
// Regexp asserts that a specified regexp matches a string.
//
// assert.Regexp(t, regexp.MustCompile("start"), "it's starting")
// assert.Regexp(t, "start...$", "it's not starting")
func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1490,8 +1593,8 @@ func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface
// NotRegexp asserts that a specified regexp does not match a string.
//
// assert.NotRegexp(t, regexp.MustCompile("starts"), "it's starting")
// assert.NotRegexp(t, "^start", "it's not starting")
func NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1603,7 +1706,7 @@ func NoDirExists(t TestingT, path string, msgAndArgs ...interface{}) bool {
// JSONEq asserts that two JSON strings are equivalent.
//
// assert.JSONEq(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`)
func JSONEq(t TestingT, expected string, actual string, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1726,7 +1829,7 @@ type tHelper interface {
// Eventually asserts that given condition will be met in waitFor time,
// periodically checking target function each tick.
//
// assert.Eventually(t, func() bool { return true; }, time.Second, 10*time.Millisecond)
func Eventually(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@ -1756,10 +1859,93 @@ func Eventually(t TestingT, condition func() bool, waitFor time.Duration, tick t
}
}
// CollectT implements the TestingT interface and collects all errors.
type CollectT struct {
errors []error
}
// Errorf collects the error.
func (c *CollectT) Errorf(format string, args ...interface{}) {
c.errors = append(c.errors, fmt.Errorf(format, args...))
}
// FailNow panics.
func (c *CollectT) FailNow() {
panic("Assertion failed")
}
// Reset clears the collected errors.
func (c *CollectT) Reset() {
c.errors = nil
}
// Copy copies the collected errors to the supplied t.
func (c *CollectT) Copy(t TestingT) {
if tt, ok := t.(tHelper); ok {
tt.Helper()
}
for _, err := range c.errors {
t.Errorf("%v", err)
}
}
// EventuallyWithT asserts that given condition will be met in waitFor time,
// periodically checking target function each tick. In contrast to Eventually,
// it supplies a CollectT to the condition function, so that the condition
// function can use the CollectT to call other assertions.
// The condition is considered "met" if no errors are raised in a tick.
// The supplied CollectT collects all errors from one tick (if there are any).
// If the condition is not met before waitFor, the collected errors of
// the last tick are copied to t.
//
// externalValue := false
// go func() {
// time.Sleep(8*time.Second)
// externalValue = true
// }()
// assert.EventuallyWithT(t, func(c *assert.CollectT) {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
func EventuallyWithT(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
collect := new(CollectT)
ch := make(chan bool, 1)
timer := time.NewTimer(waitFor)
defer timer.Stop()
ticker := time.NewTicker(tick)
defer ticker.Stop()
for tick := ticker.C; ; {
select {
case <-timer.C:
collect.Copy(t)
return Fail(t, "Condition never satisfied", msgAndArgs...)
case <-tick:
tick = nil
collect.Reset()
go func() {
condition(collect)
ch <- len(collect.errors) == 0
}()
case v := <-ch:
if v {
return true
}
tick = ticker.C
}
}
}
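
A quick illustration of the intended flow (the background-work scenario, names, and durations below are made up for the example):

```go
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestBackgroundWorkEventuallyDone(t *testing.T) {
	done := make(chan struct{})
	go func() {
		time.Sleep(200 * time.Millisecond) // simulated slow background work
		close(done)
	}()

	// Each tick re-runs the callback with a reset CollectT; a tick that
	// collects no errors means the condition is met.
	assert.EventuallyWithT(t, func(c *assert.CollectT) {
		select {
		case <-done:
			// no errors collected this tick, so the condition is satisfied
		default:
			c.Errorf("background work not finished yet")
		}
	}, 2*time.Second, 50*time.Millisecond)
}
```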
// Never asserts that the given condition doesn't satisfy in waitFor time,
// periodically checking the target function each tick.
//
// assert.Never(t, func() bool { return false; }, time.Second, 10*time.Millisecond)
func Never(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()

View File

@ -1,39 +1,40 @@
// Package assert provides a set of comprehensive testing tools for use with the normal Go testing system.
//
-// Example Usage
+// # Example Usage
//
// The following is a complete example using assert in a standard test function:
//
// import (
// "testing"
// "github.com/stretchr/testify/assert"
// )
//
// func TestSomething(t *testing.T) {
//
// var a string = "Hello"
// var b string = "Hello"
//
// assert.Equal(t, a, b, "The two words should be the same.")
//
// }
//
// if you assert many times, use the format below:
//
// import (
// "testing"
// "github.com/stretchr/testify/assert"
// )
//
// func TestSomething(t *testing.T) {
// assert := assert.New(t)
//
// var a string = "Hello"
// var b string = "Hello"
//
// assert.Equal(a, b, "The two words should be the same.")
// }
//
-// Assertions
+// # Assertions
//
// Assertions allow you to easily write test code, and are global funcs in the `assert` package.
// All assertion functions take, as the first argument, the `*testing.T` object provided by the

View File

@ -23,7 +23,7 @@ func httpCode(handler http.HandlerFunc, method, url string, values url.Values) (
// HTTPSuccess asserts that a specified handler returns a success status code.
//
// assert.HTTPSuccess(t, myHandler, "POST", "http://www.google.com", nil)
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {
@ -45,7 +45,7 @@ func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, value
// HTTPRedirect asserts that a specified handler returns a redirect status code.
//
// assert.HTTPRedirect(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {
@ -67,7 +67,7 @@ func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, valu
// HTTPError asserts that a specified handler returns an error status code.
//
// assert.HTTPError(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}}
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {
@ -89,7 +89,7 @@ func HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values
// HTTPStatusCode asserts that a specified handler returns a specified status code.
//
// assert.HTTPStatusCode(t, myHandler, "GET", "/notImplemented", nil, 501)
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPStatusCode(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, statuscode int, msgAndArgs ...interface{}) bool {
@ -124,7 +124,7 @@ func HTTPBody(handler http.HandlerFunc, method, url string, values url.Values) s
// HTTPBodyContains asserts that a specified handler returns a
// body that contains a string.
//
// assert.HTTPBodyContains(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {
@ -144,7 +144,7 @@ func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string,
// HTTPBodyNotContains asserts that a specified handler returns a
// body that does not contain a string.
//
// assert.HTTPBodyNotContains(t, myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky")
//
// Returns whether the assertion was successful (true) or not (false).
func HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {

View File

@ -1,24 +1,25 @@
// Package require implements the same assertions as the `assert` package but
// stops test execution when a test fails.
//
-// Example Usage
+// # Example Usage
//
// The following is a complete example using require in a standard test function:
//
// import (
// "testing"
// "github.com/stretchr/testify/require"
// )
//
// func TestSomething(t *testing.T) {
//
// var a string = "Hello"
// var b string = "Hello"
//
// require.Equal(t, a, b, "The two words should be the same.")
//
// }
//
-// Assertions
+// # Assertions
//
// The `require` package have same global functions as in the `assert` package,
// but instead of returning a boolean result they call `t.FailNow()`.

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -84,6 +84,9 @@ github.com/ScaleFT/sshkeys
# github.com/agext/levenshtein v1.2.3
## explicit
github.com/agext/levenshtein
# github.com/anchore/go-struct-converter v0.0.0-20221118182256-c68fdcfa2092
## explicit; go 1.18
github.com/anchore/go-struct-converter
# github.com/aws/aws-sdk-go v1.44.82
## explicit; go 1.11
github.com/aws/aws-sdk-go/aws
@ -582,14 +585,24 @@ github.com/shibumi/go-pathspec
# github.com/sirupsen/logrus v1.9.0
## explicit; go 1.13
github.com/sirupsen/logrus
# github.com/spdx/tools-golang v0.5.3
## explicit; go 1.13
github.com/spdx/tools-golang/convert
github.com/spdx/tools-golang/json
github.com/spdx/tools-golang/spdx
github.com/spdx/tools-golang/spdx/common
github.com/spdx/tools-golang/spdx/v2/common
github.com/spdx/tools-golang/spdx/v2/v2_1
github.com/spdx/tools-golang/spdx/v2/v2_2
github.com/spdx/tools-golang/spdx/v2/v2_3
# github.com/spf13/cobra v1.6.1
## explicit; go 1.15
github.com/spf13/cobra
# github.com/spf13/pflag v1.0.5
## explicit; go 1.12
github.com/spf13/pflag
-# github.com/stretchr/testify v1.8.0
-## explicit; go 1.13
+# github.com/stretchr/testify v1.8.4
+## explicit; go 1.20
github.com/stretchr/testify/assert
github.com/stretchr/testify/require
# github.com/surma/gocpio v1.0.2-0.20160926205914-fcb68777e7dc

View File

@ -16,8 +16,10 @@ clean_up() {
trap clean_up EXIT
-linuxkit build --format tar --name "${NAME}-1" ../test.yml
-linuxkit build --format tar --name "${NAME}-2" ../test.yml
+# do not include the sbom, because the SBoM unique IDs per file/package are *not* deterministic,
+# (currently based upon syft), and thus will make the file non-reproducible
+linuxkit build --no-sbom --format tar --name "${NAME}-2" ../test.yml
+linuxkit build --no-sbom --format tar --name "${NAME}-1" ../test.yml
diff -q "${NAME}-1.tar" "${NAME}-2.tar" || exit 1

View File

@ -16,8 +16,8 @@ clean_up() {
trap clean_up EXIT
-linuxkit build --format kernel+initrd --name "${NAME}-1" ../test.yml
-linuxkit build --format kernel+initrd --name "${NAME}-2" ../test.yml
+linuxkit build --no-sbom --format kernel+initrd --name "${NAME}-1" ../test.yml
+linuxkit build --no-sbom --format kernel+initrd --name "${NAME}-2" ../test.yml
diff -q "${NAME}-1-cmdline" "${NAME}-2-cmdline" || exit 1
diff -q "${NAME}-1-kernel" "${NAME}-2-kernel" || exit 1

View File

@ -0,0 +1,24 @@
# SBoM Test
Test that SBoM gets generated and unified.
This test does not launch the image, so it does not matter much whether its contents are runnable,
only that it gets built.
This test uses local packages inside the directory, to ensure that we get a known and controlled
SBoM.
How it works:
1. Builds the packages in [./package1](./package1) and [./package2](./package2)
1. Builds the image in [./test.yml](./test.yml)
1. Checks that the image contains an SBoM in the expected location
1. Checks that the SBoM contains at least some expected packages
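
The last check could, for example, be scripted along the lines below, using the `github.com/spdx/tools-golang` parser that this change vendors; the file path and the package name searched for are illustrative only and depend on what the scanner actually emits:

```go
package main

import (
	"fmt"
	"log"
	"os"

	spdxjson "github.com/spdx/tools-golang/json"
)

func main() {
	// sbom.spdx.json is assumed to have already been extracted from the built image tar.
	f, err := os.Open("sbom.spdx.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	doc, err := spdxjson.Read(f)
	if err != nil {
		log.Fatalf("parsing SBoM: %v", err)
	}

	// Look for a package the scanner is expected to have found, e.g. something
	// from the alpine-based package2 image.
	const want = "alpine-baselayout"
	for _, p := range doc.Packages {
		if p.PackageName == want {
			fmt.Printf("found %s %s\n", p.PackageName, p.PackageVersion)
			return
		}
	}
	log.Fatalf("package %q not found in SBoM", want)
}
```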
## To update
If you change the packages in [./package1](./package1) or [./package2](./package2), you will need
to update the [./test.yml](./test.yml) file to reflect the new versions.
1. `linuxkit pkg show-tag ./package1`
1. `linuxkit pkg show-tag ./package2`
1. Update the `onboot` section of [./test.yml](./test.yml) with the new versions

View File

@ -0,0 +1,2 @@
# just something to let the SBoM scanner run
FROM registry:2

View File

@ -0,0 +1,5 @@
image: sbom_package1
network: true
arches:
- arm64
- amd64

View File

@ -0,0 +1,2 @@
# just something to let the SBoM scanner run
FROM alpine:3.18

View File

@ -0,0 +1,5 @@
image: sbom_package2
network: true
arches:
- arm64
- amd64

View File

@ -0,0 +1,28 @@
#!/bin/sh
# SUMMARY: Check that an SBoM is generated and included in the built image
# LABELS:
set -e
# Source libraries. Uncomment if needed/defined
#. "${RT_LIB}"
. "${RT_PROJECT_ROOT}/_lib/lib.sh"
NAME=sbom
clean_up() {
rm -f ${NAME}*
}
trap clean_up EXIT
# build the packages we need
linuxkit pkg build ./package1 ./package2
# build the image we need
linuxkit build --format tar --name "${NAME}" ./test.yml
# check that we got the SBoM
tar -tvf ${NAME}.tar sbom.spdx.json
exit 0

View File

@ -0,0 +1,18 @@
# NOTE: Images built from this file likely do not run
kernel:
image: linuxkit/kernel:5.10.104
cmdline: "console=ttyS0"
init:
- linuxkit/init:b7a8f94dfb72f738318cc25daf05451ed85ba194
- linuxkit/runc:436357ce16dd663e24f595bcec26d5ae476c998e
- linuxkit/containerd:d445de33c7f08470187b068d247b1c0dea240f0a
onboot:
- name: package1
image: linuxkit/sbom_package1:68f9fad3d53156e014f1b79e7417e345daab3fd9
services:
- name: package2
image: linuxkit/sbom_package2:70ebd08dfd61080d3b7efb9475007f316e3b4727
files:
- path: etc/linuxkit-config
metadata: yaml