mirror of https://github.com/rancher/steve.git synced 2025-09-05 09:21:12 +00:00

Index arbitrary labels (#317)

* Add more fields to index when sql-caching is on.

* Restore the gvkKey helper, add event fields.

The UI team wasn't sure whether the event fields should go in the empty-string group or in 'events.k8s.io',
so let's go with both until/unless specified otherwise.

* More fixes to the fields to index:

- Remove the erroneously added management.cattle.io.nodes fields
- Use the builtin Event class, not events.k8s.io (by looking at the dashboard client code)

* Start on the virtual-field work.

* Map `Event.type` to `Event._type` for indexing.

* Add a unit test for field replacement for Event.type

* Add label processing.

* Don't test for transformation of event objects in the common module.

* Parse metadata.label queries differently.

* Improve a variable name that turned out to not be temporary.

* No need to specifically cache certain labels, as all are now cached.

* Add a test to verify simple label (m.labels.foo=blah) queries work.

* 'addLabelFields' never returns an error.

* Delete superseded function.

* Was calling 'addLabelFields' one time too many.

* Start using k8s ParseToRequirements

* Pull in the k8s parser.

* Successfully test for quotation marks.

* Add quoted strings to the lexer.

* Move to a forked k8s label lexer to include non-label tests.

* Improve and test the way quoted strings in the query are detected.

* Reinstate the original Apache license in the derived code.

Following clause 4.3 of the Apache license:
"You must cause any modified files to carry prominent notices stating that You changed the files..."

* Ignore case for operators.

* Test IN multiple-target-values

* Test the not-in operator.

* Ignore case for operators.

SQL is case-insensitive on field names and values, so this just adds consistency.

* Added tests for parsing EXISTS and NOT-EXISTS queries.

* Parse less-than and greater-than ops

* Lasso's `CacheFor` now takes a `watchable` argument.

* Support 'gt' and 'lt' as synonyms for '<' and '>'.

I see both types of operators being bandied about -- it's easy to support the aliases.
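
The alias handling described above can be sketched as a small Go helper. This is a hypothetical illustration (`opAliases`/`normalizeOp` are not names from the steve code), showing case-insensitive `gt`/`lt` mapping onto the canonical comparison operators:

```go
package main

import (
	"fmt"
	"strings"
)

// opAliases maps the word-style operator synonyms onto their
// canonical single-character forms.
var opAliases = map[string]string{
	"gt": ">",
	"lt": "<",
}

// normalizeOp returns the canonical operator for an alias,
// passing through anything that is not an alias.
func normalizeOp(op string) string {
	if canonical, ok := opAliases[strings.ToLower(op)]; ok {
		return canonical
	}
	return op
}

func main() {
	fmt.Println(normalizeOp("GT")) // >
	fmt.Println(normalizeOp("lt")) // <
	fmt.Println(normalizeOp("="))  // =
}
```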

* typo fix

* Have the filter parser allow exist tests only on labels.

Also reduce the case where there's no namespace function.

* Specify hard-wired fields to index alphabetically.

* Remove unused variable.

* Parser: 'metadata.labels[FIELD]' is valid

* Pull in new gvk fields from main (and keep in alpha order).

* Fixed a couple of drops done during the last rebase.

* Add a reminder to keep the entries in alpha order.

* Test TransformLabels

* Remove TransformLabels

* Remove unused/unneeded code.

* Describe diffs between our label-selector parser and upstream's.

* Use the merged lasso 46333 work.

* Drop unused field.

* Tighten up the code.

* Specify which commit the label selector parser is based on.

* Allow both single-quoted and double-quoted value matching, doc difference.

* More review-driven changes:

- Stricter processing of m.l.name keys:
  Require ending close-bracket for a start-bracket

- Comment fix

- Moving sql processing from lasso to steve: some changes missed in rebase

* Drop support for double-quotes for string values.

From now on, only single quotes (or none, where possible) are allowed.

* Renaming and dropping an init block.

* Quoted strings are dropped from the filter queries

In particular, label values have a specific syntax: they must
start and end with an alphanumeric character, and their interior may
contain only alphanumerics, '.', '-' and '_'. So there's no need for quoting.

And that means now that `=` and `==` do exact matches,
and the `~` operator does a partial match.

`!=` and `!~` negate -- note that `!~` is a stricter operation than `!=`,
in that given a set of possible string values, `!=` will accept more
of them than `!~`. Maybe I shouldn't have gone here, but these operators
reminded me of learning about `nicht dürfen` and `nicht sollen` in German,
or something like that.
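
The operator semantics described above can be sketched as a small Go helper. This is a hypothetical `matchFilter` for illustration only, not the actual steve implementation (which translates these operators into SQL): `=`/`==` match exactly, `~` matches a substring, and `!=`/`!~` negate them.

```go
package main

import (
	"fmt"
	"strings"
)

// matchFilter sketches the matching semantics: "=" and "==" are exact
// matches, "~" is a partial (substring) match, and "!=" / "!~" are
// their negations.
func matchFilter(value, target, op string) bool {
	switch op {
	case "=", "==":
		return value == target
	case "~":
		return strings.Contains(value, target)
	case "!=":
		return value != target
	case "!~":
		return !strings.Contains(value, target)
	}
	return false
}

func main() {
	// "!~" rejects any value that merely contains the target,
	// so it accepts fewer values than "!=" does.
	fmt.Println(matchFilter("match-this-exactly", "match-this", "="))  // false
	fmt.Println(matchFilter("match-this-exactly", "match-this", "~"))  // true
	fmt.Println(matchFilter("match-this-exactly", "match-this", "!=")) // true
	fmt.Println(matchFilter("match-this-exactly", "match-this", "!~")) // false
}
```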

* Move a constant definition to the module level.

* Remove commented-out code.

* Remove unused func and adjacent redundant comment.
Author: Eric Promislow
Date: 2025-01-30 11:57:23 -08:00
Committed by: GitHub
Parent: 7da1377c89
Commit: d794bfe4e8
12 changed files with 2103 additions and 107 deletions


@@ -108,7 +108,7 @@ is empty.
#### `filter`
Filter results by a designated field. Filter keys use dot notation to denote
-the subfield of an object to filter on. The filter value is matched as a
+the subfield of an object to filter on. The filter value is normally matched as a
substring.
Example, filtering by object name:
@@ -117,6 +117,23 @@ Example, filtering by object name:
/v1/{type}?filter=metadata.name=foo
```
if a target value is surrounded by single-quotes, it succeeds only on an exact match:
Example, filtering by object name:
```
/v1/{type}?filter=metadata.name='match-this-exactly'
```
A target value can be delimited by double-quotes, but this will succeed on a partial match:
Example, filtering by object name:
```
/v1/{type}?filter=metadata.name="can-be-a-substri"
```
One filter can list multiple possible fields to match, these are ORed together:
```

go.mod

@@ -14,6 +14,7 @@ require (
github.com/adrg/xdg v0.5.0
github.com/golang/protobuf v1.5.4
github.com/google/gnostic-models v0.6.8
+github.com/google/go-cmp v0.6.0
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.1
github.com/pborman/uuid v1.2.1
@@ -78,7 +79,6 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/cel-go v0.20.1 // indirect
-github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect


@@ -2,17 +2,17 @@ package virtual_test
import (
"fmt"
-"github.com/rancher/steve/pkg/resources/virtual"
-"k8s.io/apimachinery/pkg/runtime/schema"
"strings"
"testing"
+"github.com/rancher/steve/pkg/resources/virtual"
"github.com/rancher/steve/pkg/resources/virtual/common"
"github.com/rancher/steve/pkg/summarycache"
"github.com/rancher/wrangler/v3/pkg/summary"
"github.com/stretchr/testify/require"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+"k8s.io/apimachinery/pkg/runtime/schema"
)
func TestTransformChain(t *testing.T) {


@@ -9,7 +9,6 @@ import (
"io/ioutil"
"net/http"
"os"
-"regexp"
"strconv"
"github.com/pkg/errors"
@@ -46,7 +45,6 @@ const (
)
var (
-lowerChars = regexp.MustCompile("[a-z]+")
paramScheme = runtime.NewScheme()
paramCodec = runtime.NewParameterCodec(paramScheme)
)


@@ -13,6 +13,8 @@ import (
"github.com/rancher/steve/pkg/sqlcache/informer"
"github.com/rancher/steve/pkg/sqlcache/partition"
"github.com/rancher/steve/pkg/stores/queryhelper"
"github.com/rancher/steve/pkg/stores/sqlpartition/queryparser"
"github.com/rancher/steve/pkg/stores/sqlpartition/selection"
"github.com/rancher/wrangler/v3/pkg/schemas/validation"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
@@ -33,7 +35,20 @@ const (
notOp = "!"
)
-var opReg = regexp.MustCompile(`[!]?=`)
+var labelsRegex = regexp.MustCompile(`^(metadata)\.(labels)\[(.+)\]$`)
var mapK8sOpToRancherOp = map[selection.Operator]informer.Op{
selection.Equals: informer.Eq,
selection.DoubleEquals: informer.Eq,
selection.PartialEquals: informer.Eq,
selection.NotEquals: informer.NotEq,
selection.NotPartialEquals: informer.NotEq,
selection.In: informer.In,
selection.NotIn: informer.NotIn,
selection.Exists: informer.Exists,
selection.DoesNotExist: informer.NotExists,
selection.LessThan: informer.Lt,
selection.GreaterThan: informer.Gt,
}
// ListOptions represents the query parameters that may be included in a list request.
type ListOptions struct {
@@ -54,6 +69,26 @@ type Cache interface {
ListByOptions(ctx context.Context, lo informer.ListOptions, partitions []partition.Partition, namespace string) (*unstructured.UnstructuredList, int, string, error)
}
func k8sOpToRancherOp(k8sOp selection.Operator) (informer.Op, bool, error) {
v, ok := mapK8sOpToRancherOp[k8sOp]
if ok {
return v, k8sOp == selection.PartialEquals || k8sOp == selection.NotPartialEquals, nil
}
return "", false, fmt.Errorf("unknown k8sOp: %s", k8sOp)
}
func k8sRequirementToOrFilter(requirement queryparser.Requirement) (informer.Filter, error) {
values := requirement.Values()
queryFields := splitQuery(requirement.Key())
op, usePartialMatch, err := k8sOpToRancherOp(requirement.Operator())
return informer.Filter{
Field: queryFields,
Matches: values,
Op: op,
Partial: usePartialMatch,
}, err
}
// ParseQuery parses the query params of a request and returns a ListOptions.
func ParseQuery(apiOp *types.APIRequest, namespaceCache Cache) (informer.ListOptions, error) {
opts := informer.ListOptions{}
@@ -67,20 +102,17 @@ func ParseQuery(apiOp *types.APIRequest, namespaceCache Cache) (informer.ListOpt
filterParams := q[filterParam]
filterOpts := []informer.OrFilter{}
for _, filters := range filterParams {
-orFilters := strings.Split(filters, orOp)
+requirements, err := queryparser.ParseToRequirements(filters)
+if err != nil {
+return informer.ListOptions{}, err
+}
orFilter := informer.OrFilter{}
-for _, filter := range orFilters {
-var op informer.Op
-if strings.Contains(filter, "!=") {
-op = "!="
-}
-filter := opReg.Split(filter, -1)
-if len(filter) != 2 {
-continue
-}
-usePartialMatch := !(strings.HasPrefix(filter[1], `'`) && strings.HasSuffix(filter[1], `'`))
-value := strings.TrimSuffix(strings.TrimPrefix(filter[1], "'"), "'")
-orFilter.Filters = append(orFilter.Filters, informer.Filter{Field: strings.Split(filter[0], "."), Matches: []string{value}, Op: op, Partial: usePartialMatch})
+for _, requirement := range requirements {
+filter, err := k8sRequirementToOrFilter(requirement)
+if err != nil {
+return opts, err
+}
+orFilter.Filters = append(orFilter.Filters, filter)
}
filterOpts = append(filterOpts, orFilter)
}
@@ -119,7 +151,7 @@ func ParseQuery(apiOp *types.APIRequest, namespaceCache Cache) (informer.ListOpt
}
opts.Pagination = pagination
-var op informer.Op
+op := informer.Eq
projectsOrNamespaces := q.Get(projectsOrNamespacesVar)
if projectsOrNamespaces == "" {
projectsOrNamespaces = q.Get(projectsOrNamespacesVar + notOp)
@@ -133,7 +165,7 @@ func ParseQuery(apiOp *types.APIRequest, namespaceCache Cache) (informer.ListOpt
return opts, err
}
if projOrNSFilters == nil {
-return opts, apierror.NewAPIError(validation.NotFound, fmt.Sprintf("could not find any namespacess named [%s] or namespaces belonging to project named [%s]", projectsOrNamespaces, projectsOrNamespaces))
+return opts, apierror.NewAPIError(validation.NotFound, fmt.Sprintf("could not find any namespaces named [%s] or namespaces belonging to project named [%s]", projectsOrNamespaces, projectsOrNamespaces))
}
if op == informer.NotEq {
for _, filter := range projOrNSFilters {
@@ -159,6 +191,21 @@ func getLimit(apiOp *types.APIRequest) int {
return limit
}
// splitQuery takes a single-string metadata-labels filter and converts it into an array of 3 accessor strings,
// where the first two strings are always "metadata" and "labels", and the third is the label name.
// This is more complex than doing something like `strings.Split("metadata.labels.fieldName", ".")`
// because the fieldName can be more complex - in particular it can contain "."s and needs to be
// bracketed, as in `metadata.labels[rancher.io/cattle.and.beef]`.
// The `labelsRegex` looks for the bracketed form.
func splitQuery(query string) []string {
m := labelsRegex.FindStringSubmatch(query)
if m != nil && len(m) == 4 {
// m[0] contains the entire string, so just return all but that first item in `m`
return m[1:]
}
return strings.Split(query, ".")
}
func parseNamespaceOrProjectFilters(ctx context.Context, projOrNS string, op informer.Op, namespaceInformer Cache) ([]informer.Filter, error) {
var filters []informer.Filter
for _, pn := range strings.Split(projOrNS, ",") {


@@ -25,6 +25,7 @@ func TestParseQuery(t *testing.T) {
req *types.APIRequest
expectedLO informer.ListOptions
errExpected bool
errorText string
}
var tests []testCase
tests = append(tests, testCase{
@@ -41,9 +42,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If projectsornamespaces is not empty" +
@@ -61,7 +59,7 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"metadata", "namespace"},
Matches: []string{"ns1"},
-Op: "",
+Op: informer.Eq,
Partial: false,
},
},
@@ -121,7 +119,7 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"metadata", "namespace"},
Matches: []string{"ns1"},
-Op: "",
+Op: informer.Eq,
Partial: false,
},
},
@@ -171,7 +169,7 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"metadata", "namespace"},
Matches: []string{"ns1"},
-Op: "",
+Op: informer.Eq,
Partial: false,
},
},
@@ -210,6 +208,32 @@ func TestParseQuery(t *testing.T) {
})
tests = append(tests, testCase{
description: "ParseQuery() with filter param set should include filter with partial set to true in list options.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a~c"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a"},
Matches: []string{"c"},
Op: informer.Eq,
Partial: true,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() with filter param set, should include filter with partial set to false in list options.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a=c"},
@@ -223,36 +247,7 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"a"},
Matches: []string{"c"},
-Op: "",
-Partial: true,
-},
-},
-},
-},
-Pagination: informer.Pagination{
-Page: 1,
-},
-},
-setupNSCache: func() Cache {
-return nil
-},
-})
-tests = append(tests, testCase{
-description: "ParseQuery() with filter param set, with value in single quotes, should include filter with partial set to false in list options.",
-req: &types.APIRequest{
-Request: &http.Request{
-URL: &url.URL{RawQuery: "filter=a='c'"},
-},
-},
-expectedLO: informer.ListOptions{
-ChunkSize: defaultLimit,
-Filters: []informer.OrFilter{
-{
-Filters: []informer.Filter{
-{
-Field: []string{"a"},
-Matches: []string{"c"},
-Op: "",
+Op: informer.Eq,
Partial: false,
},
},
@@ -262,8 +257,40 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with filter param set, with value in double quotes should return an error.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: `filter=a1="c1"`},
},
},
errExpected: true,
})
tests = append(tests, testCase{
description: "ParseQuery() with a labels filter param should create a labels-specific filter.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=metadata.labels[grover.example.com/fish]~heads"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"metadata", "labels", "grover.example.com/fish"},
Matches: []string{"heads"},
Op: informer.Eq,
Partial: true,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
@@ -281,8 +308,8 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"a"},
Matches: []string{"c"},
-Op: "",
-Partial: true,
+Op: informer.Eq,
+Partial: false,
},
},
},
@@ -291,8 +318,8 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"b"},
Matches: []string{"d"},
-Op: "",
-Partial: true,
+Op: informer.Eq,
+Partial: false,
},
},
},
@@ -301,16 +328,12 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
-description: "ParseQuery() with a filter param with a comma separate value, should include a single or filter with" +
-" multiple filters.",
+description: "ParseQuery() with multiple filter params, should include multiple or filters.",
req: &types.APIRequest{
Request: &http.Request{
-URL: &url.URL{RawQuery: "filter=a=c,b=d"},
+URL: &url.URL{RawQuery: "filter=a=c&filter=b=d"},
},
},
expectedLO: informer.ListOptions{
@@ -321,13 +344,49 @@ func TestParseQuery(t *testing.T) {
{
Field: []string{"a"},
Matches: []string{"c"},
-Op: "",
-Partial: true,
+Op: informer.Eq,
+Partial: false,
},
},
},
{
Filters: []informer.Filter{
{
Field: []string{"b"},
Matches: []string{"d"},
-Op: "",
-Partial: true,
+Op: informer.Eq,
+Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle comma-separated standard and labels filters.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=beer=pabst,metadata.labels[beer2.io/ale] ~schlitz"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"beer"},
Matches: []string{"pabst"},
Op: informer.Eq,
Partial: false,
},
{
Field: []string{"metadata", "labels", "beer2.io/ale"},
Matches: []string{"schlitz"},
Op: informer.Eq,
Partial: true,
},
},
@@ -337,8 +396,265 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle simple dot-separated label filters.",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=beer=natty-bo,metadata.labels.beer3~rainier"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"beer"},
Matches: []string{"natty-bo"},
Op: informer.Eq,
Partial: false,
},
{
Field: []string{"metadata", "labels", "beer3"},
Matches: []string{"rainier"},
Op: informer.Eq,
Partial: true,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle 'in' and 'IN' with one arg",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a1In in (x1),a2In IN (x2)"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a1In"},
Matches: []string{"x1"},
Op: informer.In,
Partial: false,
},
{
Field: []string{"a2In"},
Matches: []string{"x2"},
Op: informer.In,
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle 'in' with multiple args",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a2In in (x2a, x2b)"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a2In"},
Matches: []string{"x2a", "x2b"},
Op: informer.In,
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle 'notin' and 'NOTIN' with one arg",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a1NotIn notin (x1),a2NotIn NOTIN (x2)"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a1NotIn"},
Matches: []string{"x1"},
Op: informer.NotIn,
Partial: false,
},
{
Field: []string{"a2NotIn"},
Matches: []string{"x2"},
Op: informer.NotIn,
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle 'in' with multiple args",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a3NotIn in (x3a, x3b)"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a3NotIn"},
Matches: []string{"x3a", "x3b"},
Op: informer.In,
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle 'in' and 'notin' in mixed case",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a4In iN (x4a),a4NotIn nOtIn (x4b)"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a4In"},
Matches: []string{"x4a"},
Op: informer.In,
Partial: false,
},
{
Field: []string{"a4NotIn"},
Matches: []string{"x4b"},
Op: informer.NotIn,
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should complain on non-label exists tests",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a5In1,!a5In2, ! a5In3"},
},
},
errExpected: true,
errorText: "unable to parse requirement: existence tests are valid only for labels; not valid for field 'a5In1'",
})
tests = append(tests, testCase{
description: "ParseQuery() should allow label exists tests",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=metadata.labels.a5In1,!metadata.labels.a5In2, ! metadata.labels.a5In3"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"metadata", "labels", "a5In1"},
Op: informer.Exists,
Matches: []string{},
Partial: false,
},
{
Field: []string{"metadata", "labels", "a5In2"},
Op: informer.NotExists,
Matches: []string{},
Partial: false,
},
{
Field: []string{"metadata", "labels", "a5In3"},
Op: informer.NotExists,
Matches: []string{},
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
description: "ParseQuery() should handle numeric comparisons",
req: &types.APIRequest{
Request: &http.Request{
URL: &url.URL{RawQuery: "filter=a<1,b>2"},
},
},
expectedLO: informer.ListOptions{
ChunkSize: defaultLimit,
Filters: []informer.OrFilter{
{
Filters: []informer.Filter{
{
Field: []string{"a"},
Op: informer.Lt,
Matches: []string{"1"},
Partial: false,
},
{
Field: []string{"b"},
Op: informer.Gt,
Matches: []string{"2"},
Partial: false,
},
},
},
},
Pagination: informer.Pagination{
Page: 1,
},
},
})
tests = append(tests, testCase{
@@ -361,9 +677,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If one sort param is given primary field " +
@@ -384,9 +697,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If two sort params are given, sort " +
@@ -413,9 +723,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
@@ -459,9 +766,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If continue param is given, resume" +
@@ -479,9 +783,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If limit param is given, chunksize" +
@@ -498,9 +799,6 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If page param is given, page" +
@@ -517,9 +815,6 @@ func TestParseQuery(t *testing.T) {
Page: 3,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
tests = append(tests, testCase{
description: "ParseQuery() with no errors returned should returned no errors. If pagesize param is given, pageSize" +
@@ -537,21 +832,24 @@ func TestParseQuery(t *testing.T) {
Page: 1,
},
},
-setupNSCache: func() Cache {
-return nil
-},
})
t.Parallel()
for _, test := range tests {
t.Run(test.description, func(t *testing.T) {
-test.nsc = test.setupNSCache()
-if test.description == "sorting can parse bracketed field names correctly" {
-fmt.Printf("stop here")
-}
+if test.setupNSCache == nil {
+test.nsc = nil
+} else {
+test.nsc = test.setupNSCache()
+}
lo, err := ParseQuery(test.req, test.nsc)
if test.errExpected {
assert.NotNil(t, err)
+if test.errorText != "" {
+assert.Contains(t, test.errorText, err.Error())
+}
return
+} else {
+assert.Nil(t, err)
}
assert.Equal(t, test.expectedLO, lo)
})


@@ -0,0 +1,166 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
This file is derived from
https://github.com/kubernetes/apimachinery/blob/90df4d1d2d40ea9b3a522bec6e3577237358de00/pkg/labels/labels.go
- FormatLabels was dropped
- validateLabelKey calls from ConvertSelectorToLabelsMap were dropped
*/
package queryparser
import (
"fmt"
"sort"
"strings"
"k8s.io/apimachinery/pkg/util/validation/field"
)
// Labels allows you to present labels independently from their storage.
type Labels interface {
// Has returns whether the provided label exists.
Has(label string) (exists bool)
// Get returns the value for the provided label.
Get(label string) (value string)
}
// Set is a map of label:value. It implements Labels.
type Set map[string]string
// String returns all labels listed as a human readable string.
// Conveniently, exactly the format that ParseSelector takes.
func (ls Set) String() string {
selector := make([]string, 0, len(ls))
for key, value := range ls {
selector = append(selector, key+"="+value)
}
// Sort for determinism.
sort.StringSlice(selector).Sort()
return strings.Join(selector, ",")
}
// Has returns whether the provided label exists in the map.
func (ls Set) Has(label string) bool {
_, exists := ls[label]
return exists
}
// Get returns the value in the map for the provided label.
func (ls Set) Get(label string) string {
return ls[label]
}
// AsSelector converts labels into a selector. It does not
// perform any validation, which means the server will reject
// the request if the Set contains invalid values.
func (ls Set) AsSelector() Selector {
return SelectorFromSet(ls)
}
// AsValidatedSelector converts labels into a selector.
// The Set is validated client-side, which allows errors to be caught early.
func (ls Set) AsValidatedSelector() (Selector, error) {
return ValidatedSelectorFromSet(ls)
}
// AsSelectorPreValidated converts labels into a selector, but
// assumes that labels are already validated and thus doesn't
// perform any validation.
// According to our measurements this is significantly faster
// in codepaths that matter at high scale.
// Note: this method copies the Set; if the Set is immutable, consider wrapping it with ValidatedSetSelector
// instead, which does not copy.
func (ls Set) AsSelectorPreValidated() Selector {
return SelectorFromValidatedSet(ls)
}
// Conflicts takes 2 maps and returns true if there is a key match between
// the maps but the values don't match; it returns false in all other cases
func Conflicts(labels1, labels2 Set) bool {
small := labels1
big := labels2
if len(labels2) < len(labels1) {
small = labels2
big = labels1
}
for k, v := range small {
if val, match := big[k]; match {
if val != v {
return true
}
}
}
return false
}
// Merge combines given maps, and does not check for any conflicts
// between the maps. In case of conflicts, second map (labels2) wins
func Merge(labels1, labels2 Set) Set {
mergedMap := Set{}
for k, v := range labels1 {
mergedMap[k] = v
}
for k, v := range labels2 {
mergedMap[k] = v
}
return mergedMap
}
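As a standalone illustration of the two map helpers above (re-implemented here with a local `set` type so the sketch runs outside this package), `Conflicts` only fires when a shared key maps to different values, and `Merge` lets the second map win on overlap:

```go
package main

import "fmt"

// set stands in for the package's Set type in this sketch.
type set map[string]string

// conflicts mirrors Conflicts above: iterate over the smaller map and
// report true only when a shared key has differing values.
func conflicts(a, b set) bool {
	small, big := a, b
	if len(b) < len(a) {
		small, big = b, a
	}
	for k, v := range small {
		if val, ok := big[k]; ok && val != v {
			return true
		}
	}
	return false
}

// merge mirrors Merge above: copy the first map, then overwrite with the second.
func merge(a, b set) set {
	out := set{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		out[k] = v
	}
	return out
}

func main() {
	a := set{"env": "prod", "app": "steve"}
	b := set{"env": "dev"}
	fmt.Println(conflicts(a, b))    // true: "env" differs
	fmt.Println(merge(a, b)["env"]) // dev: second map wins
}
```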
// Equals returns true if the given maps are equal
func Equals(labels1, labels2 Set) bool {
if len(labels1) != len(labels2) {
return false
}
for k, v := range labels1 {
value, ok := labels2[k]
if !ok {
return false
}
if value != v {
return false
}
}
return true
}
// ConvertSelectorToLabelsMap converts selector string to labels map
// and validates keys and values
func ConvertSelectorToLabelsMap(selector string, opts ...field.PathOption) (Set, error) {
labelsMap := Set{}
if len(selector) == 0 {
return labelsMap, nil
}
labels := strings.Split(selector, ",")
for _, label := range labels {
l := strings.Split(label, "=")
if len(l) != 2 {
return labelsMap, fmt.Errorf("invalid selector: %s", l)
}
key := strings.TrimSpace(l[0])
value := strings.TrimSpace(l[1])
labelsMap[key] = value
}
return labelsMap, nil
}
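The parsing loop above only understands comma-separated `key=value` pairs; anything else, including set-based expressions, is rejected. A standalone sketch of the same logic (hypothetical `convertSelectorToLabelsMap`, mirroring the function above without the `field.PathOption` plumbing):

```go
package main

import (
	"fmt"
	"strings"
)

// convertSelectorToLabelsMap splits a selector on commas, then each piece
// on "=", exactly as ConvertSelectorToLabelsMap does; pieces that are not
// a single key=value pair produce an error.
func convertSelectorToLabelsMap(selector string) (map[string]string, error) {
	labelsMap := map[string]string{}
	if len(selector) == 0 {
		return labelsMap, nil
	}
	for _, label := range strings.Split(selector, ",") {
		parts := strings.Split(label, "=")
		if len(parts) != 2 {
			return labelsMap, fmt.Errorf("invalid selector: %s", parts)
		}
		labelsMap[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
	}
	return labelsMap, nil
}

func main() {
	m, err := convertSelectorToLabelsMap("app=steve, tier = backend")
	if err != nil || m["app"] != "steve" || m["tier"] != "backend" {
		panic("unexpected parse result")
	}
	// Set-based syntax needs the full parser in selector.go, not this helper.
	if _, err := convertSelectorToLabelsMap("x in (a,b)"); err == nil {
		panic("expected error for set-based expression")
	}
	fmt.Println("ok")
}
```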


@@ -0,0 +1,957 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
This file is derived from
https://github.com/kubernetes/apimachinery/blob/90df4d1d2d40ea9b3a522bec6e3577237358de00/pkg/labels/selector.go
*/
/**
Main changes:
1. The upstream `selector.go` file does both parsing and applying selectors to the objects being tested.
We only care about the parser, so the selection part is dropped.
2. I dropped label value validation in the parser
3. Multiple values are returned as an array rather than a `k8s.io/utils/sets.String` object
to avoid having to pull in that dependency as well (it isn't needed anyway: we convert
the array into a SQL statement, so the set gives us no benefit apart from removing duplicate target values).
4. Our filter language ignores case for `in` and `notin`. These must be lower-case in kubectl filter expressions.
5. The `Lexer.Lex` function names the return parameters in its header but has no argument-less
return statement, so I dropped the names.
6. We allow `lt` and `gt` as aliases for `<` and `>`.
7. We added the '~' and '!~' operators to indicate partial match and non-match
*/
package queryparser
import (
"errors"
"fmt"
"regexp"
"slices"
"sort"
"strconv"
"strings"
"unicode"
"github.com/rancher/steve/pkg/stores/sqlpartition/selection"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/validation/field"
)
type Operator string
var (
unaryOperators = []string{
string(selection.Exists), string(selection.DoesNotExist),
}
binaryOperators = []string{
string(selection.In), string(selection.NotIn),
string(selection.Equals), string(selection.DoubleEquals), string(selection.NotEquals),
string(selection.PartialEquals), string(selection.NotPartialEquals),
string(selection.GreaterThan), string(selection.LessThan),
}
validRequirementOperators = append(binaryOperators, unaryOperators...)
labelSelectorRegex = regexp.MustCompile(`^metadata.labels(?:\.\w[-a-zA-Z0-9_./]*|\[.*])$`)
)
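`labelSelectorRegex` is what later restricts existence tests to label keys, which may be written in dotted or bracketed form. A quick standalone check of which keys it accepts (the pattern is copied verbatim from above):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as labelSelectorRegex above: a "metadata.labels" prefix
// followed by either a dotted key or a bracketed key.
var labelSelectorRegex = regexp.MustCompile(`^metadata.labels(?:\.\w[-a-zA-Z0-9_./]*|\[.*])$`)

func main() {
	for _, k := range []string{
		"metadata.labels.app",          // dotted form: matches
		"metadata.labels[k8s.io/meta]", // bracketed form: matches
		"spec.nodeName",                // ordinary field: no match
	} {
		fmt.Println(k, labelSelectorRegex.MatchString(k))
	}
}
```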
// Requirements is AND of all requirements.
type Requirements []Requirement
func (r Requirements) String() string {
var sb strings.Builder
for i, requirement := range r {
if i > 0 {
sb.WriteString(", ")
}
sb.WriteString(requirement.String())
}
return sb.String()
}
// Selector represents a label selector.
type Selector interface {
// String returns a human readable string that represents this selector.
String() string
// Requirements converts this interface into Requirements to expose
// more detailed selection information.
// If there are querying parameters, it will return converted requirements and selectable=true.
// If this selector doesn't want to select anything, it will return selectable=false.
Requirements() (requirements Requirements, selectable bool)
// Make a deep copy of the selector.
DeepCopySelector() Selector
}
type internalSelector []Requirement
func (s internalSelector) DeepCopy() internalSelector {
if s == nil {
return nil
}
result := make([]Requirement, len(s))
for i := range s {
s[i].DeepCopyInto(&result[i])
}
return result
}
func (s internalSelector) DeepCopySelector() Selector {
return s.DeepCopy()
}
// Requirement contains values, a key, and an operator that relates the key and values.
// The zero value of Requirement is invalid.
// Requirement implements both set based match and exact match
// Requirement should be initialized via NewRequirement constructor for creating a valid Requirement.
// +k8s:deepcopy-gen=true
type Requirement struct {
key string
operator selection.Operator
// In huge majority of cases we have at most one value here.
// It is generally faster to operate on a single-element slice
// than on a single-element map, so we have a slice here.
strValues []string
}
// NewRequirement is the constructor for a Requirement.
// If any of these rules is violated, an error is returned:
// 1. The operator can only be In, NotIn, Equals, DoubleEquals, Gt, Lt, NotEquals, Exists, or DoesNotExist.
// 2. If the operator is In or NotIn, the values set must be non-empty.
// 3. If the operator is Equals, DoubleEquals, or NotEquals, the values set must contain one value.
// 4. If the operator is Exists or DoesNotExist, the value set must be empty.
// 5. If the operator is Gt or Lt, the values set must contain only one value, which will be interpreted as an integer.
// 6. The key is invalid due to its length, or sequence of characters. See validateLabelKey for more details.
//
// The empty string is a valid value in the input values set.
// Returned error, if not nil, is guaranteed to be an aggregated field.ErrorList
func NewRequirement(key string, op selection.Operator, vals []string, opts ...field.PathOption) (*Requirement, error) {
var allErrs field.ErrorList
path := field.ToPath(opts...)
valuePath := path.Child("values")
switch op {
case selection.In, selection.NotIn:
if len(vals) == 0 {
allErrs = append(allErrs, field.Invalid(valuePath, vals, "for 'in', 'notin' operators, values set can't be empty"))
}
case selection.Equals, selection.DoubleEquals, selection.NotEquals:
if len(vals) != 1 {
allErrs = append(allErrs, field.Invalid(valuePath, vals, "exact-match compatibility requires one single value"))
}
case selection.PartialEquals, selection.NotPartialEquals:
if len(vals) != 1 {
allErrs = append(allErrs, field.Invalid(valuePath, vals, "partial-match compatibility requires one single value"))
}
case selection.Exists, selection.DoesNotExist:
if len(vals) != 0 {
allErrs = append(allErrs, field.Invalid(valuePath, vals, "values set must be empty for exists and does not exist"))
}
case selection.GreaterThan, selection.LessThan:
if len(vals) != 1 {
allErrs = append(allErrs, field.Invalid(valuePath, vals, "for 'Gt', 'Lt' operators, exactly one value is required"))
}
for i := range vals {
if _, err := strconv.ParseInt(vals[i], 10, 64); err != nil {
allErrs = append(allErrs, field.Invalid(valuePath.Index(i), vals[i], "for 'Gt', 'Lt' operators, the value must be an integer"))
}
}
default:
allErrs = append(allErrs, field.NotSupported(path.Child("operator"), op, validRequirementOperators))
}
return &Requirement{key: key, operator: op, strValues: vals}, allErrs.ToAggregate()
}
func (r *Requirement) hasValue(value string) bool {
for i := range r.strValues {
if r.strValues[i] == value {
return true
}
}
return false
}
// Key returns requirement key
func (r *Requirement) Key() string {
return r.key
}
// Operator returns requirement operator
func (r *Requirement) Operator() selection.Operator {
return r.operator
}
// Values returns requirement values
func (r *Requirement) Values() []string {
ret := sets.String{}
for i := range r.strValues {
ret.Insert(r.strValues[i])
}
return ret.List()
}
// Equal checks the equality of requirement.
func (r Requirement) Equal(x Requirement) bool {
if r.key != x.key {
return false
}
if r.operator != x.operator {
return false
}
return slices.Equal(r.strValues, x.strValues)
}
// Empty returns true if the internalSelector doesn't restrict selection space
func (s internalSelector) Empty() bool {
if s == nil {
return true
}
return len(s) == 0
}
// String returns a human-readable string that represents this
// Requirement. If called on an invalid Requirement, an error is
// returned. See NewRequirement for creating a valid Requirement.
func (r *Requirement) String() string {
var sb strings.Builder
sb.Grow(
// length of r.key
len(r.key) +
// length of 'r.operator' + 2 spaces for the worst case ('in' and 'notin')
len(string(r.operator)) + 2 +
// length of 'r.strValues' slice times. Heuristically 5 chars per word
+5*len(r.strValues))
if r.operator == selection.DoesNotExist {
sb.WriteString("!")
}
sb.WriteString(r.key)
switch r.operator {
case selection.Equals:
sb.WriteString("=")
case selection.DoubleEquals:
sb.WriteString("==")
case selection.NotEquals:
sb.WriteString("!=")
case selection.PartialEquals:
sb.WriteString("~")
case selection.NotPartialEquals:
sb.WriteString("!~")
case selection.In:
sb.WriteString(" in ")
case selection.NotIn:
sb.WriteString(" notin ")
case selection.GreaterThan:
sb.WriteString(">")
case selection.LessThan:
sb.WriteString("<")
case selection.Exists, selection.DoesNotExist:
return sb.String()
}
switch r.operator {
case selection.In, selection.NotIn:
sb.WriteString("(")
}
if len(r.strValues) == 1 {
sb.WriteString(r.strValues[0])
} else { // only > 1 since == 0 prohibited by NewRequirement
// normalizes value order on output, without mutating the in-memory selector representation
// also avoids normalization when it is not required, and ensures we do not mutate shared data
sb.WriteString(strings.Join(safeSort(r.strValues), ","))
}
switch r.operator {
case selection.In, selection.NotIn:
sb.WriteString(")")
}
return sb.String()
}
// safeSort sorts input strings without modification
func safeSort(in []string) []string {
if sort.StringsAreSorted(in) {
return in
}
out := make([]string, len(in))
copy(out, in)
sort.Strings(out)
return out
}
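The copy-on-write behavior of `safeSort` can be shown in isolation (same function body as above): the caller's slice is never mutated, and already-sorted input is returned without a copy.

```go
package main

import (
	"fmt"
	"sort"
)

// safeSort returns the input slice unchanged when it is already sorted,
// and otherwise sorts a copy, leaving the caller's slice untouched.
func safeSort(in []string) []string {
	if sort.StringsAreSorted(in) {
		return in
	}
	out := make([]string, len(in))
	copy(out, in)
	sort.Strings(out)
	return out
}

func main() {
	vals := []string{"b", "a"}
	sorted := safeSort(vals)
	fmt.Println(sorted[0], vals[0]) // a b: the original slice is untouched
}
```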
// Add adds requirements to the selector. It copies the current selector returning a new one
func (s internalSelector) Add(reqs ...Requirement) Selector {
ret := make(internalSelector, 0, len(s)+len(reqs))
ret = append(ret, s...)
ret = append(ret, reqs...)
return ret
}
func (s internalSelector) Requirements() (Requirements, bool) { return Requirements(s), true }
// String returns a comma-separated string of all
// the internalSelector Requirements' human-readable strings.
func (s internalSelector) String() string {
var reqs []string
for ix := range s {
reqs = append(reqs, s[ix].String())
}
return strings.Join(reqs, ",")
}
// RequiresExactMatch introspects whether a given selector requires a single specific field
// to be set, and if so returns the value it requires.
func (s internalSelector) RequiresExactMatch(label string) (value string, found bool) {
for ix := range s {
if s[ix].key == label {
switch s[ix].operator {
case selection.Equals, selection.DoubleEquals, selection.In:
if len(s[ix].strValues) == 1 {
return s[ix].strValues[0], true
}
}
return "", false
}
}
return "", false
}
// Token represents constant definition for lexer token
type Token int
const (
// ErrorToken represents scan error
ErrorToken Token = iota
// EndOfStringToken represents end of string
EndOfStringToken
// ClosedParToken represents close parenthesis
ClosedParToken
// CommaToken represents the comma
CommaToken
// DoesNotExistToken represents logic not
DoesNotExistToken
// DoubleEqualsToken represents double equals
DoubleEqualsToken
// EqualsToken represents equal
EqualsToken
// PartialEqualsToken does a partial match
PartialEqualsToken
// GreaterThanToken represents greater than
GreaterThanToken
// IdentifierToken represents identifier, e.g. keys and values
IdentifierToken
// QuotedStringToken represents a quoted string
QuotedStringToken
// InToken represents in
InToken
// LessThanToken represents less than
LessThanToken
// NotEqualsToken represents not equal
NotEqualsToken
// NotInToken represents not in
NotInToken
// NotPartialEqualsToken does a partial non-match
NotPartialEqualsToken
// OpenParToken represents open parenthesis
OpenParToken
)
// string2token contains the mapping between lexer Token and token literal
// (except IdentifierToken, EndOfStringToken and ErrorToken, which have no fixed literal)
var string2token = map[string]Token{
")": ClosedParToken,
",": CommaToken,
"!": DoesNotExistToken,
"==": DoubleEqualsToken,
"=": EqualsToken,
"~": PartialEqualsToken,
">": GreaterThanToken,
"in": InToken,
"<": LessThanToken,
"!=": NotEqualsToken,
"!~": NotPartialEqualsToken,
"notin": NotInToken,
"(": OpenParToken,
}
// ScannedItem contains the Token and the literal produced by the lexer.
type ScannedItem struct {
tok Token
literal string
}
func isIdentifierStartChar(ch byte) bool {
r := rune(ch)
return unicode.IsLetter(r) || unicode.IsDigit(r) || ch == '_'
}
// isWhitespace returns true if the rune is a space, tab, or newline.
func isWhitespace(ch byte) bool {
return ch == ' ' || ch == '\t' || ch == '\r' || ch == '\n'
}
// isSpecialSymbol detects if the character ch can be an operator
func isSpecialSymbol(ch byte) bool {
switch ch {
case '=', '!', '(', ')', ',', '>', '<', '~':
return true
}
return false
}
// Lexer represents the Lexer struct for label selector.
// It contains the necessary information to tokenize the input string
type Lexer struct {
// s stores the string to be tokenized
s string
// pos is the position currently tokenized
pos int
}
// read returns the character at the current position and advances it,
// returning 0 once the end of the buffer is reached
func (l *Lexer) read() (b byte) {
b = 0
if l.pos < len(l.s) {
b = l.s[l.pos]
l.pos++
}
return b
}
// unread 'undoes' the last read character
func (l *Lexer) unread() {
l.pos--
}
// scanIDOrKeyword scans string to recognize literal token (for example 'in'), an identifier, or a quoted string.
func (l *Lexer) scanIDOrKeyword() (tok Token, lit string) {
var buffer []byte
IdentifierLoop:
for {
switch ch := l.read(); {
case ch == 0:
break IdentifierLoop
case isSpecialSymbol(ch) || isWhitespace(ch):
l.unread()
break IdentifierLoop
default:
buffer = append(buffer, ch)
}
}
s := string(buffer)
if val, ok := string2token[strings.ToLower(s)]; ok { // is a literal token?
return val, s
}
return IdentifierToken, s // otherwise is an identifier
}
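The `strings.ToLower` lookup at the end of `scanIDOrKeyword` is what makes operator keywords case-insensitive (change 4 in the header comment). A standalone sketch with a hypothetical two-entry token table standing in for `string2token`:

```go
package main

import (
	"fmt"
	"strings"
)

// keywordTokens stands in for string2token; the values are token names
// rather than Token constants to keep the sketch self-contained.
var keywordTokens = map[string]string{
	"in":    "InToken",
	"notin": "NotInToken",
}

// classify lower-cases the scanned word before the table lookup, so
// "IN", "In", and "in" all resolve to the same token; anything else
// falls through to an identifier.
func classify(word string) string {
	if tok, ok := keywordTokens[strings.ToLower(word)]; ok {
		return tok
	}
	return "IdentifierToken"
}

func main() {
	fmt.Println(classify("IN"))     // InToken
	fmt.Println(classify("NotIn"))  // NotInToken
	fmt.Println(classify("banana")) // IdentifierToken
}
```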
// scanSpecialSymbol scans a string starting with a special symbol.
// Special symbols identify non-literal operators: "!=", "==", "=", "!~"
func (l *Lexer) scanSpecialSymbol() (Token, string) {
lastScannedItem := ScannedItem{}
var buffer []byte
SpecialSymbolLoop:
for {
switch ch := l.read(); {
case ch == 0:
break SpecialSymbolLoop
case isSpecialSymbol(ch):
buffer = append(buffer, ch)
if token, ok := string2token[string(buffer)]; ok {
lastScannedItem = ScannedItem{tok: token, literal: string(buffer)}
} else if lastScannedItem.tok != 0 {
l.unread()
break SpecialSymbolLoop
}
default:
l.unread()
break SpecialSymbolLoop
}
}
if lastScannedItem.tok == 0 {
return ErrorToken, fmt.Sprintf("error expected: keyword found '%s'", buffer)
}
return lastScannedItem.tok, lastScannedItem.literal
}
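`scanSpecialSymbol` keeps extending its buffer while it still forms a valid operator, so `!~` lexes as one token rather than `!` followed by `~`. A standalone sketch of that longest-match rule (hypothetical `scanSymbol`, simplified from the loop above):

```go
package main

import "fmt"

// symbols stands in for the operator entries of string2token.
var symbols = map[string]bool{
	"!": true, "!=": true, "!~": true,
	"=": true, "==": true, "~": true,
}

// scanSymbol returns the longest prefix of s that is a valid operator,
// stopping once extending the prefix no longer matches anything.
func scanSymbol(s string) string {
	best := ""
	for i := 1; i <= len(s); i++ {
		if symbols[s[:i]] {
			best = s[:i]
		} else if best != "" {
			break
		}
	}
	return best
}

func main() {
	fmt.Println(scanSymbol("!~x")) // !~
	fmt.Println(scanSymbol("==y")) // ==
	fmt.Println(scanSymbol("!x"))  // !
}
```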
// skipWhiteSpaces consumes all blank characters
// returning the first non-blank character
func (l *Lexer) skipWhiteSpaces(ch byte) byte {
for {
if !isWhitespace(ch) {
return ch
}
ch = l.read()
}
}
// Lex returns a pair of Token and the literal
// literal is meaningful only for IdentifierToken and QuotedStringToken
func (l *Lexer) Lex() (Token, string) {
switch ch := l.skipWhiteSpaces(l.read()); {
case ch == 0:
return EndOfStringToken, ""
case isSpecialSymbol(ch):
l.unread()
return l.scanSpecialSymbol()
case isIdentifierStartChar(ch):
l.unread()
return l.scanIDOrKeyword()
default:
return ErrorToken, fmt.Sprintf("unexpected character '%c'", ch)
}
}
// Parser data structure contains the label selector parser data structure
type Parser struct {
l *Lexer
scannedItems []ScannedItem
position int
path *field.Path
}
// ParserContext represents context during parsing:
// some literals, for example 'in' and 'notin', can be
// recognized as an operator (as in 'x in (a)') or
// as a value (as in 'value in (in)')
type ParserContext int
const (
// KeyAndOperator represents key and operator
KeyAndOperator ParserContext = iota
// Values represents values
Values
)
// lookahead returns the current token and literal without advancing the position
func (p *Parser) lookahead(context ParserContext) (Token, string) {
tok, lit := p.scannedItems[p.position].tok, p.scannedItems[p.position].literal
if context == Values {
switch tok {
case InToken, NotInToken:
tok = IdentifierToken
}
}
return tok, lit
}
// consume returns current token and string. Increments the position
func (p *Parser) consume(context ParserContext) (Token, string) {
p.position++
tok, lit := p.scannedItems[p.position-1].tok, p.scannedItems[p.position-1].literal
if context == Values {
switch tok {
case InToken, NotInToken:
tok = IdentifierToken
}
}
return tok, lit
}
// scan runs through the input string and stores the ScannedItem in an array
// Parser can now lookahead and consume the tokens
func (p *Parser) scan() {
for {
token, literal := p.l.Lex()
p.scannedItems = append(p.scannedItems, ScannedItem{token, literal})
if token == EndOfStringToken {
break
}
}
}
// parse runs the left recursive descending algorithm
// on input string. It returns a list of Requirement objects.
func (p *Parser) parse() (internalSelector, error) {
p.scan() // init scannedItems
var requirements internalSelector
for {
tok, lit := p.lookahead(Values)
switch tok {
case IdentifierToken, DoesNotExistToken:
r, err := p.parseRequirement()
if err != nil {
return nil, err
}
requirements = append(requirements, *r)
t, l := p.consume(Values)
switch t {
case EndOfStringToken:
return requirements, nil
case CommaToken:
t2, l2 := p.lookahead(Values)
if t2 != IdentifierToken && t2 != DoesNotExistToken {
return nil, fmt.Errorf("found '%s', expected: identifier after ','", l2)
}
default:
return nil, fmt.Errorf("found '%s', expected: ',' or 'end of string'", l)
}
case EndOfStringToken:
return requirements, nil
default:
return nil, fmt.Errorf("found '%s', expected: !, identifier, or 'end of string'", lit)
}
}
}
func (p *Parser) parseRequirement() (*Requirement, error) {
key, operator, err := p.parseKeyAndInferOperator()
if err != nil {
return nil, err
}
if operator == selection.Exists || operator == selection.DoesNotExist { // operator found lookahead set checked
if !labelSelectorRegex.MatchString(key) {
return nil, fmt.Errorf("existence tests are valid only for labels; not valid for field '%s'", key)
}
return NewRequirement(key, operator, []string{}, field.WithPath(p.path))
}
operator, err = p.parseOperator()
if err != nil {
return nil, err
}
var values sets.String
switch operator {
case selection.In, selection.NotIn:
values, err = p.parseValues()
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.GreaterThan, selection.LessThan, selection.PartialEquals, selection.NotPartialEquals:
values, err = p.parseSingleValue()
}
if err != nil {
return nil, err
}
return NewRequirement(key, operator, values.List(), field.WithPath(p.path))
}
// parseKeyAndInferOperator parses literals.
// If no operator ('!', 'in', 'notin', '==', '=', '!=') is found,
// the 'exists' operator is inferred
func (p *Parser) parseKeyAndInferOperator() (string, selection.Operator, error) {
var operator selection.Operator
tok, literal := p.consume(Values)
if tok == DoesNotExistToken {
operator = selection.DoesNotExist
tok, literal = p.consume(Values)
}
if tok != IdentifierToken {
err := fmt.Errorf("found '%s', expected: identifier", literal)
return "", "", err
}
if t, _ := p.lookahead(Values); t == EndOfStringToken || t == CommaToken {
if operator != selection.DoesNotExist {
operator = selection.Exists
}
}
return literal, operator, nil
}
// parseOperator returns the binary operator following the key,
// accepting 'lt' and 'gt' as aliases for '<' and '>'
func (p *Parser) parseOperator() (op selection.Operator, err error) {
tok, lit := p.consume(KeyAndOperator)
switch tok {
// DoesNotExistToken shouldn't be here because it's a unary operator, not a binary operator
case InToken:
op = selection.In
case EqualsToken:
op = selection.Equals
case PartialEqualsToken:
op = selection.PartialEquals
case DoubleEqualsToken:
op = selection.DoubleEquals
case GreaterThanToken:
op = selection.GreaterThan
case LessThanToken:
op = selection.LessThan
case NotInToken:
op = selection.NotIn
case NotEqualsToken:
op = selection.NotEquals
case NotPartialEqualsToken:
op = selection.NotPartialEquals
default:
if lit == "lt" {
op = selection.LessThan
} else if lit == "gt" {
op = selection.GreaterThan
} else {
return "", fmt.Errorf("found '%s', expected: %v", lit, strings.Join(binaryOperators, ", "))
}
}
return op, nil
}
// parseValues parses the values for set based matching (x,y,z)
func (p *Parser) parseValues() (sets.String, error) {
tok, lit := p.consume(Values)
if tok != OpenParToken {
return nil, fmt.Errorf("found '%s' expected: '('", lit)
}
tok, lit = p.lookahead(Values)
switch tok {
case IdentifierToken, CommaToken:
s, err := p.parseIdentifiersList() // handles general cases
if err != nil {
return s, err
}
if tok, _ = p.consume(Values); tok != ClosedParToken {
return nil, fmt.Errorf("found '%s', expected: ')'", lit)
}
return s, nil
case ClosedParToken: // handles "()"
p.consume(Values)
return sets.NewString(""), nil
default:
return nil, fmt.Errorf("found '%s', expected: ',', ')' or identifier", lit)
}
}
// parseIdentifiersList parses a (possibly empty) list of
// comma-separated (possibly empty) identifiers
func (p *Parser) parseIdentifiersList() (sets.String, error) {
s := sets.NewString()
for {
tok, lit := p.consume(Values)
switch tok {
case IdentifierToken:
s.Insert(lit)
tok2, lit2 := p.lookahead(Values)
switch tok2 {
case CommaToken:
continue
case ClosedParToken:
return s, nil
default:
return nil, fmt.Errorf("found '%s', expected: ',' or ')'", lit2)
}
case CommaToken: // handled here since we can have "(,"
if s.Len() == 0 {
s.Insert("") // to handle (,
}
tok2, _ := p.lookahead(Values)
if tok2 == ClosedParToken {
s.Insert("") // to handle ,) Double "" removed by StringSet
return s, nil
}
if tok2 == CommaToken {
p.consume(Values)
s.Insert("") // to handle ,, Double "" removed by StringSet
}
default: // it can be operator
return s, fmt.Errorf("found '%s', expected: ',', or identifier", lit)
}
}
}
// parseSingleValue parses the only value for exact match style
func (p *Parser) parseSingleValue() (sets.String, error) {
s := sets.NewString()
tok, _ := p.lookahead(Values)
if tok == EndOfStringToken || tok == CommaToken {
return s, errors.New("found end of a query string, expected: a comparison value")
}
tok, lit := p.consume(Values)
if tok == IdentifierToken || tok == QuotedStringToken {
s.Insert(lit)
return s, nil
}
return nil, fmt.Errorf("found '%s', expected: identifier", lit)
}
// Parse takes a string representing a selector and returns a selector
// object, or an error. This parsing function differs from ParseSelector
// as they parse different selectors with different syntaxes.
// The input will cause an error if it does not follow this form:
//
// <selector-syntax> ::= <requirement> | <requirement> "," <selector-syntax>
// <requirement> ::= [!] KEY [ <set-based-restriction> | <exact-match-restriction> ]
// <set-based-restriction> ::= "" | <inclusion-exclusion> <value-set>
// <inclusion-exclusion> ::= <inclusion> | <exclusion>
// <exclusion> ::= "notin"
// <inclusion> ::= "in"
// <value-set> ::= "(" <values> ")"
// <values> ::= VALUE | VALUE "," <values>
// <exact-match-restriction> ::= ["="|"=="|"!="] VALUE
//
// KEY is a sequence of one or more characters following [ DNS_SUBDOMAIN "/" ] DNS_LABEL. Max length is 63 characters.
// VALUE is a sequence of zero or more characters "([A-Za-z0-9_-\.])". Max length is 63 characters.
// Delimiter is white space: (' ', '\t')
// Example of valid syntax:
//
// "x in (foo,,baz),y,z notin ()"
//
// Note:
// 1. Inclusion - " in " - denotes that the KEY exists and is equal to any of the
// VALUEs in its requirement
// 2. Exclusion - " notin " - denotes that the KEY is not equal to any
// of the VALUEs in its requirement or does not exist
// 3. The empty string is a valid VALUE
// 4. A requirement with just a KEY - as in "y" above - denotes that
// the KEY exists and can be any VALUE.
// 5. A requirement with just !KEY requires that the KEY not exist.
func Parse(selector string, opts ...field.PathOption) (Selector, error) {
pathThing := field.ToPath(opts...)
parsedSelector, err := parse(selector, pathThing)
if err == nil {
return parsedSelector, nil
}
return nil, err
}
// parse parses the string representation of the selector and returns the internalSelector struct.
// The callers of this method can then decide how to return the internalSelector struct to their
// callers. This function has two callers now, one returns a Selector interface and the other
// returns a list of requirements.
func parse(selector string, path *field.Path) (internalSelector, error) {
p := &Parser{l: &Lexer{s: selector, pos: 0}, path: path}
items, err := p.parse()
if err != nil {
return nil, err
}
return internalSelector(items), err
}
// SelectorFromSet returns a Selector which will match exactly the given Set. A
// nil and empty Sets are considered equivalent to Everything().
// It does not perform any validation, which means the server will reject
// the request if the Set contains invalid values.
func SelectorFromSet(ls Set) Selector {
return SelectorFromValidatedSet(ls)
}
// ValidatedSelectorFromSet returns a Selector which will match exactly the given Set. A
// nil and empty Sets are considered equivalent to Everything().
// The Set is validated client-side, which allows to catch errors early.
func ValidatedSelectorFromSet(ls Set) (Selector, error) {
if ls == nil || len(ls) == 0 {
return internalSelector{}, nil
}
requirements := make([]Requirement, 0, len(ls))
for label, value := range ls {
r, err := NewRequirement(label, selection.Equals, []string{value})
if err != nil {
return nil, err
}
requirements = append(requirements, *r)
}
return internalSelector(requirements), nil
}
// SelectorFromValidatedSet returns a Selector which will match exactly the given Set.
// A nil and empty Sets are considered equivalent to Everything().
// It assumes that Set is already validated and doesn't do any validation.
// Note: this method copies the Set; if the Set is immutable, consider wrapping it with ValidatedSetSelector
// instead, which does not copy.
func SelectorFromValidatedSet(ls Set) Selector {
if ls == nil || len(ls) == 0 {
return internalSelector{}
}
requirements := make([]Requirement, 0, len(ls))
for label, value := range ls {
requirements = append(requirements, Requirement{key: label, operator: selection.Equals, strValues: []string{value}})
}
return internalSelector(requirements)
}
// ParseToRequirements takes a string representing a selector and returns a list of
// requirements. This function is suitable for those callers that perform additional
// processing on selector requirements.
// See the documentation for Parse() function for more details.
// TODO: Consider exporting the internalSelector type instead.
func ParseToRequirements(selector string, opts ...field.PathOption) ([]Requirement, error) {
return parse(selector, field.ToPath(opts...))
}
// ValidatedSetSelector wraps a Set, allowing it to implement the Selector interface. Unlike
// Set.AsSelectorPreValidated (which copies the input Set), this type simply wraps the underlying
// Set. As a result, it is substantially more efficient. A nil and empty Sets are considered
// equivalent to Everything().
//
// Callers MUST ensure the underlying Set is not mutated, and that it is already validated. If these
// constraints are not met, Set.AsValidatedSelector should be preferred
//
// None of the Selector methods mutate the underlying Set, but Add() and Requirements() convert to
// the less optimized version.
type ValidatedSetSelector Set
func (s ValidatedSetSelector) Matches(labels Labels) bool {
for k, v := range s {
if !labels.Has(k) || v != labels.Get(k) {
return false
}
}
return true
}
func (s ValidatedSetSelector) Empty() bool {
return len(s) == 0
}
func (s ValidatedSetSelector) String() string {
keys := make([]string, 0, len(s))
for k := range s {
keys = append(keys, k)
}
// Ensure deterministic output
sort.Strings(keys)
b := strings.Builder{}
for i, key := range keys {
v := s[key]
b.Grow(len(key) + 2 + len(v))
if i != 0 {
b.WriteString(",")
}
b.WriteString(key)
b.WriteString("=")
b.WriteString(v)
}
return b.String()
}
func (s ValidatedSetSelector) Requirements() (requirements Requirements, selectable bool) {
return s.toFullSelector().Requirements()
}
func (s ValidatedSetSelector) DeepCopySelector() Selector {
res := make(ValidatedSetSelector, len(s))
for k, v := range s {
res[k] = v
}
return res
}
func (s ValidatedSetSelector) RequiresExactMatch(label string) (value string, found bool) {
v, f := s[label]
return v, f
}
func (s ValidatedSetSelector) toFullSelector() Selector {
return SelectorFromValidatedSet(Set(s))
}
var _ Selector = ValidatedSetSelector{}


@@ -0,0 +1,419 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
This file is derived from
https://github.com/kubernetes/apimachinery/blob/master/pkg/labels/selector_test.go
*/
package queryparser
import (
"fmt"
"reflect"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/rancher/steve/pkg/stores/sqlpartition/selection"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/validation/field"
)
var (
ignoreDetail = cmpopts.IgnoreFields(field.Error{}, "Detail")
)
func TestSelectorParse(t *testing.T) {
testGoodStrings := []string{
"x=a,y=b,z=c",
"",
"x!=a,y=b",
"close ~ value",
"notclose !~ value",
"x>1",
"x>1,z<5",
"x gt 1,z lt 5",
`y == def`,
"metadata.labels.im-here",
"!metadata.labels.im-not-here",
"metadata.labels[im.here]",
"!metadata.labels[im.not.here]",
"metadata.labels[k8s.io/meta-stuff] ~ has-dashes_underscores.dots.only",
}
testBadStrings := []string{
"!no-label-absence-test",
"no-label-presence-test",
"x=a||y=b",
"x==a==b",
"!x=a",
"x<a",
"x=",
"x= ",
"x=,z= ",
"x= ,z= ",
"x ~",
"x !~",
"~ val",
"!~ val",
"= val",
"== val",
"metadata.labels-im.here",
"metadata.labels[missing/close-bracket",
"!metadata.labels(im.not.here)",
`x="no double quotes allowed"`,
`x='no single quotes allowed'`,
}
for _, test := range testGoodStrings {
_, err := Parse(test)
if err != nil {
t.Errorf("%v: error %v (%#v)\n", test, err, err)
}
}
for _, test := range testBadStrings {
_, err := Parse(test)
if err == nil {
t.Errorf("%v: did not get expected error\n", test)
}
}
}
func TestLexer(t *testing.T) {
testcases := []struct {
s string
t Token
}{
{"", EndOfStringToken},
{",", CommaToken},
{"notin", NotInToken},
{"in", InToken},
{"=", EqualsToken},
{"==", DoubleEqualsToken},
{">", GreaterThanToken},
{"<", LessThanToken},
// Note that Lex returns the longest valid token found
{"!", DoesNotExistToken},
{"!=", NotEqualsToken},
{"(", OpenParToken},
{")", ClosedParToken},
{`'sq string''`, ErrorToken},
{`"dq string"`, ErrorToken},
{"~", PartialEqualsToken},
{"!~", NotPartialEqualsToken},
{"||", ErrorToken},
}
for _, v := range testcases {
l := &Lexer{s: v.s, pos: 0}
token, lit := l.Lex()
if token != v.t {
t.Errorf("Got %d it should be %d for '%s'", token, v.t, v.s)
}
if v.t != ErrorToken && lit != v.s {
t.Errorf("Got '%s' it should be '%s'", lit, v.s)
}
}
}
func min(l, r int) (m int) {
m = r
if l < r {
m = l
}
return m
}
func TestLexerSequence(t *testing.T) {
testcases := []struct {
s string
t []Token
}{
{"key in ( value )", []Token{IdentifierToken, InToken, OpenParToken, IdentifierToken, ClosedParToken}},
{"key notin ( value )", []Token{IdentifierToken, NotInToken, OpenParToken, IdentifierToken, ClosedParToken}},
{"key in ( value1, value2 )", []Token{IdentifierToken, InToken, OpenParToken, IdentifierToken, CommaToken, IdentifierToken, ClosedParToken}},
{"key", []Token{IdentifierToken}},
{"!key", []Token{DoesNotExistToken, IdentifierToken}},
{"()", []Token{OpenParToken, ClosedParToken}},
{"x in (),y", []Token{IdentifierToken, InToken, OpenParToken, ClosedParToken, CommaToken, IdentifierToken}},
{"== != (), = notin", []Token{DoubleEqualsToken, NotEqualsToken, OpenParToken, ClosedParToken, CommaToken, EqualsToken, NotInToken}},
{"key>2", []Token{IdentifierToken, GreaterThanToken, IdentifierToken}},
{"key<1", []Token{IdentifierToken, LessThanToken, IdentifierToken}},
{"key gt 3", []Token{IdentifierToken, IdentifierToken, IdentifierToken}},
{"key lt 4", []Token{IdentifierToken, IdentifierToken, IdentifierToken}},
{"key=value", []Token{IdentifierToken, EqualsToken, IdentifierToken}},
{"key == value", []Token{IdentifierToken, DoubleEqualsToken, IdentifierToken}},
{"key ~ value", []Token{IdentifierToken, PartialEqualsToken, IdentifierToken}},
{"key~ value", []Token{IdentifierToken, PartialEqualsToken, IdentifierToken}},
{"key ~value", []Token{IdentifierToken, PartialEqualsToken, IdentifierToken}},
{"key~value", []Token{IdentifierToken, PartialEqualsToken, IdentifierToken}},
{"key !~ value", []Token{IdentifierToken, NotPartialEqualsToken, IdentifierToken}},
{"key!~ value", []Token{IdentifierToken, NotPartialEqualsToken, IdentifierToken}},
{"key !~value", []Token{IdentifierToken, NotPartialEqualsToken, IdentifierToken}},
{"key!~value", []Token{IdentifierToken, NotPartialEqualsToken, IdentifierToken}},
}
for _, v := range testcases {
var tokens []Token
l := &Lexer{s: v.s, pos: 0}
for {
token, _ := l.Lex()
if token == EndOfStringToken {
break
}
tokens = append(tokens, token)
}
if len(tokens) != len(v.t) {
t.Errorf("Bad number of tokens for '%s': got %d, wanted %d (got %v)", v.s, len(tokens), len(v.t), tokens)
}
for i := 0; i < min(len(tokens), len(v.t)); i++ {
if tokens[i] != v.t[i] {
t.Errorf("Test '%s': Mismatching in token type found '%v' it should be '%v'", v.s, tokens[i], v.t[i])
}
}
}
}
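The `!` / `!=` / `!~` cases above depend on the lexer preferring the longest valid operator at the current position. A simplified, self-contained sketch of that longest-match rule (this is not the fork's actual `Lexer`, just an illustration of the principle):

```go
package main

import "fmt"

// longestOp scans s at pos and returns the longest matching
// operator, illustrating the "Lex returns the longest valid
// token found" rule noted in the tests: a two-character
// operator wins over its one-character prefix.
func longestOp(s string, pos int) (op string, next int) {
	if pos+2 <= len(s) {
		switch two := s[pos : pos+2]; two {
		case "!=", "!~", "==":
			return two, pos + 2
		}
	}
	if pos < len(s) {
		switch one := s[pos : pos+1]; one {
		case "!", "=", "~", "<", ">", "(", ")", ",":
			return one, pos + 1
		}
	}
	return "", pos
}

func main() {
	op, _ := longestOp("!~value", 0)
	fmt.Println(op) // "!~", not "!"
}
```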
func TestParserLookahead(t *testing.T) {
testcases := []struct {
s string
t []Token
}{
{"key in ( value )", []Token{IdentifierToken, InToken, OpenParToken, IdentifierToken, ClosedParToken, EndOfStringToken}},
{"key notin ( value )", []Token{IdentifierToken, NotInToken, OpenParToken, IdentifierToken, ClosedParToken, EndOfStringToken}},
{"key in ( value1, value2 )", []Token{IdentifierToken, InToken, OpenParToken, IdentifierToken, CommaToken, IdentifierToken, ClosedParToken, EndOfStringToken}},
{"key", []Token{IdentifierToken, EndOfStringToken}},
{"!key", []Token{DoesNotExistToken, IdentifierToken, EndOfStringToken}},
{"()", []Token{OpenParToken, ClosedParToken, EndOfStringToken}},
{"", []Token{EndOfStringToken}},
{"x in (),y", []Token{IdentifierToken, InToken, OpenParToken, ClosedParToken, CommaToken, IdentifierToken, EndOfStringToken}},
{"== != (), = notin", []Token{DoubleEqualsToken, NotEqualsToken, OpenParToken, ClosedParToken, CommaToken, EqualsToken, NotInToken, EndOfStringToken}},
{"key>2", []Token{IdentifierToken, GreaterThanToken, IdentifierToken, EndOfStringToken}},
{"key<1", []Token{IdentifierToken, LessThanToken, IdentifierToken, EndOfStringToken}},
{"key gt 3", []Token{IdentifierToken, GreaterThanToken, IdentifierToken, EndOfStringToken}},
{"key lt 4", []Token{IdentifierToken, LessThanToken, IdentifierToken, EndOfStringToken}},
{`key = multi-word-string`, []Token{IdentifierToken, EqualsToken, QuotedStringToken, EndOfStringToken}},
}
for _, v := range testcases {
p := &Parser{l: &Lexer{s: v.s, pos: 0}, position: 0}
p.scan()
if len(p.scannedItems) != len(v.t) {
t.Errorf("Expected %d items for test %s, found %d", len(v.t), v.s, len(p.scannedItems))
}
for {
token, lit := p.lookahead(KeyAndOperator)
token2, lit2 := p.consume(KeyAndOperator)
if token == EndOfStringToken {
break
}
if token != token2 || lit != lit2 {
t.Errorf("Bad values")
}
}
}
}
func TestParseOperator(t *testing.T) {
testcases := []struct {
token string
expectedError error
}{
{"in", nil},
{"=", nil},
{"==", nil},
{"~", nil},
{">", nil},
{"<", nil},
{"lt", nil},
{"gt", nil},
{"notin", nil},
{"!=", nil},
{"!~", nil},
{"!", fmt.Errorf("found '%s', expected: %v", selection.DoesNotExist, strings.Join(binaryOperators, ", "))},
{"exists", fmt.Errorf("found '%s', expected: %v", selection.Exists, strings.Join(binaryOperators, ", "))},
{"(", fmt.Errorf("found '%s', expected: %v", "(", strings.Join(binaryOperators, ", "))},
}
for _, testcase := range testcases {
p := &Parser{l: &Lexer{s: testcase.token, pos: 0}, position: 0}
p.scan()
_, err := p.parseOperator()
if ok := reflect.DeepEqual(testcase.expectedError, err); !ok {
t.Errorf("\nexpect err [%v], \nactual err [%v]", testcase.expectedError, err)
}
}
}
// Some error fields are commented out here because this fork no longer
// enforces k8s label expression lexical and length restrictions
func TestRequirementConstructor(t *testing.T) {
requirementConstructorTests := []struct {
Key string
Op selection.Operator
Vals sets.String
WantErr field.ErrorList
}{
{
Key: "x1",
Op: selection.In,
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values",
BadValue: []string{},
},
},
},
{
Key: "x2",
Op: selection.NotIn,
Vals: sets.NewString(),
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values",
BadValue: []string{},
},
},
},
{
Key: "x3",
Op: selection.In,
Vals: sets.NewString("foo"),
},
{
Key: "x4",
Op: selection.NotIn,
Vals: sets.NewString("foo"),
},
{
Key: "x5",
Op: selection.Equals,
Vals: sets.NewString("foo", "bar"),
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values",
BadValue: []string{"bar", "foo"},
},
},
},
{
Key: "x6",
Op: selection.Exists,
},
{
Key: "x7",
Op: selection.DoesNotExist,
},
{
Key: "x8",
Op: selection.Exists,
Vals: sets.NewString("foo"),
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values",
BadValue: []string{"foo"},
},
},
},
{
Key: "x9",
Op: selection.In,
Vals: sets.NewString("bar"),
},
{
Key: "x10",
Op: selection.In,
Vals: sets.NewString("bar"),
},
{
Key: "x11",
Op: selection.GreaterThan,
Vals: sets.NewString("1"),
},
{
Key: "x12",
Op: selection.LessThan,
Vals: sets.NewString("6"),
},
{
Key: "x13",
Op: selection.GreaterThan,
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values",
BadValue: []string{},
},
},
},
{
Key: "x14",
Op: selection.GreaterThan,
Vals: sets.NewString("bar"),
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values[0]",
BadValue: "bar",
},
},
},
{
Key: "x15",
Op: selection.LessThan,
Vals: sets.NewString("bar"),
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeInvalid,
Field: "values[0]",
BadValue: "bar",
},
},
},
{
Key: strings.Repeat("a", 254), // breaks DNS rule that len(key) <= 253
Op: selection.Exists,
},
{
Key: "x16",
Op: selection.Equals,
Vals: sets.NewString(strings.Repeat("a", 254)),
},
{
Key: "x17",
Op: selection.Equals,
Vals: sets.NewString("a b"),
},
{
Key: "x18",
Op: "unsupportedOp",
WantErr: field.ErrorList{
&field.Error{
Type: field.ErrorTypeNotSupported,
Field: "operator",
BadValue: selection.Operator("unsupportedOp"),
},
},
},
}
for _, rc := range requirementConstructorTests {
_, err := NewRequirement(rc.Key, rc.Op, rc.Vals.List())
if diff := cmp.Diff(rc.WantErr.ToAggregate(), err, ignoreDetail); diff != "" {
t.Errorf("NewRequirement test %v returned unexpected error (-want,+got):\n%s", rc.Key, diff)
}
}
}

@@ -0,0 +1,48 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
This file is derived from
https://github.com/kubernetes/apimachinery/blob/master/pkg/labels/zz_generated.deepcopy.go
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
package queryparser
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Requirement) DeepCopyInto(out *Requirement) {
*out = *in
if in.strValues != nil {
in, out := &in.strValues, &out.strValues
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Requirement.
func (in *Requirement) DeepCopy() *Requirement {
if in == nil {
return nil
}
out := new(Requirement)
in.DeepCopyInto(out)
return out
}

@@ -0,0 +1,41 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Adapted from k8s.io/apimachinery@v0.31.2/pkg/selection/operator.go
We're adding partial-match operators ~ and !~
*/
package selection
// Operator represents a key/field's relationship to value(s).
// See labels.Requirement and fields.Requirement for more details.
type Operator string
const (
DoesNotExist Operator = "!"
Equals Operator = "="
DoubleEquals Operator = "=="
PartialEquals Operator = "~"
In Operator = "in"
NotEquals Operator = "!="
NotPartialEquals Operator = "!~"
NotIn Operator = "notin"
Exists Operator = "exists"
GreaterThan Operator = "gt"
LessThan Operator = "lt"
)
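The `~` and `!~` partial-match operators are this fork's addition; on a SQL-backed cache they would naturally translate to `LIKE` / `NOT LIKE` predicates. The helper below is a hypothetical sketch of that mapping (with naive `%`-only escaping), not steve's actual query builder:

```go
package main

import (
	"fmt"
	"strings"
)

// toSQLCondition sketches how each selector operator might map
// to a SQL predicate. Hypothetical illustration only; the real
// translation lives in steve's sqlpartition store.
func toSQLCondition(field, op, value string) string {
	esc := strings.ReplaceAll(value, "%", `\%`) // naive: escape literal % only
	switch op {
	case "=", "==":
		return fmt.Sprintf("%s = %q", field, value)
	case "!=":
		return fmt.Sprintf("%s != %q", field, value)
	case "~": // partial match becomes a substring LIKE
		return fmt.Sprintf("%s LIKE '%%%s%%'", field, esc)
	case "!~":
		return fmt.Sprintf("%s NOT LIKE '%%%s%%'", field, esc)
	case "gt":
		return fmt.Sprintf("%s > %s", field, value)
	case "lt":
		return fmt.Sprintf("%s < %s", field, value)
	}
	return ""
}

func main() {
	fmt.Println(toSQLCondition("metadata.name", "~", "web"))
}
```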

@@ -57,8 +57,9 @@ const (
 )
 var (
 	paramScheme = runtime.NewScheme()
 	paramCodec = runtime.NewParameterCodec(paramScheme)
+	// Please keep the gvkKey entries in alphabetical order, on a field-by-field basis
 	typeSpecificIndexedFields = map[string][][]string{
 		gvkKey("", "v1", "ConfigMap"): {
 			{"metadata", "labels[harvesterhci.io/cloud-init-template]"}},
@@ -307,11 +308,13 @@ func (s *Store) initializeNamespaceCache() error {
 	// get any type-specific fields that steve is interested in
 	fields = append(fields, getFieldForGVK(gvk)...)
-	// get the type-specifc transform func
+	// get the type-specific transform func
 	transformFunc := s.transformBuilder.GetTransformFunc(gvk)
 	// get the ns informer
-	nsInformer, err := s.cacheFactory.CacheFor(fields, transformFunc, &tablelistconvert.Client{ResourceInterface: client}, attributes.GVK(&nsSchema), false, true)
+	tableClient := &tablelistconvert.Client{ResourceInterface: client}
+	attrs := attributes.GVK(&nsSchema)
+	nsInformer, err := s.cacheFactory.CacheFor(fields, transformFunc, tableClient, attrs, false, true)
 	if err != nil {
 		return err
 	}
@@ -750,8 +753,10 @@ func (s *Store) ListByPartitions(apiOp *types.APIRequest, schema *types.APISchem
 	fields := getFieldsFromSchema(schema)
 	fields = append(fields, getFieldForGVK(gvk)...)
 	transformFunc := s.transformBuilder.GetTransformFunc(gvk)
-	inf, err := s.cacheFactory.CacheFor(fields, transformFunc, &tablelistconvert.Client{ResourceInterface: client}, attributes.GVK(schema), attributes.Namespaced(schema), controllerschema.IsListWatchable(schema))
+	tableClient := &tablelistconvert.Client{ResourceInterface: client}
+	attrs := attributes.GVK(schema)
+	ns := attributes.Namespaced(schema)
+	inf, err := s.cacheFactory.CacheFor(fields, transformFunc, tableClient, attrs, ns, controllerschema.IsListWatchable(schema))
 	if err != nil {
 		return nil, 0, "", err
 	}