Mirror of https://github.com/minio/minio.git (synced 2025-11-20 01:50:24 -05:00)

Add new SQL parser to support S3 Select syntax (#7102)

- New parser written from scratch; it allows easier and more complete parsing of the full S3 Select SQL syntax. The parser definition is provided directly by the AST defined for the SQL grammar.
- Brings support for parsing and interpreting SQL involving JSON path expressions; evaluation of JSON path expressions will be added subsequently.
- Brings automatic type inference and conversion for untyped values (e.g. CSV data).

Committed by Harshavardhana
Parent: 0a28c28a8c
Commit: 2786055df4
New vendored file: vendor/github.com/alecthomas/participle/COPYING (19 lines, generated)
@@ -0,0 +1,19 @@
Copyright (C) 2017 Alec Thomas

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
New vendored file: vendor/github.com/alecthomas/participle/README.md (345 lines, generated)
@@ -0,0 +1,345 @@
|
||||
# A dead simple parser package for Go
|
||||
|
||||
[GoDoc](http://godoc.org/github.com/alecthomas/participle) [CircleCI](https://circleci.com/gh/alecthomas/participle)
[Go Report Card](https://goreportcard.com/report/github.com/alecthomas/participle) [Gitter](https://gitter.im/alecthomas/Lobby)
|
||||
|
||||
<!-- TOC -->
|
||||
|
||||
1. [Introduction](#introduction)
|
||||
2. [Limitations](#limitations)
|
||||
3. [Tutorial](#tutorial)
|
||||
4. [Overview](#overview)
|
||||
5. [Annotation syntax](#annotation-syntax)
|
||||
6. [Capturing](#capturing)
|
||||
7. [Streaming](#streaming)
|
||||
8. [Lexing](#lexing)
|
||||
9. [Options](#options)
|
||||
10. [Examples](#examples)
|
||||
11. [Performance](#performance)
|
||||
|
||||
<!-- /TOC -->
|
||||
|
||||
<a id="markdown-introduction" name="introduction"></a>
|
||||
## Introduction
|
||||
|
||||
The goal of this package is to provide a simple, idiomatic and elegant way of
|
||||
defining parsers in Go.
|
||||
|
||||
Participle's method of defining grammars should be familiar to any Go
|
||||
programmer who has used the `encoding/json` package: struct field tags define
|
||||
what and how input is mapped to those same fields. This is not unusual for Go
|
||||
encoders, but is unusual for a parser.
|
||||
|
||||
<a id="markdown-limitations" name="limitations"></a>
|
||||
## Limitations
|
||||
|
||||
Participle parsers are recursive descent. Among other things, this means that they do not support left recursion.
|
||||
|
||||
There is an experimental lookahead option for using precomputed lookahead
|
||||
tables for disambiguation. You can enable this with the parser option
|
||||
`participle.UseLookahead()`.
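
For example, a hedged sketch of enabling it at build time (`Grammar` stands in for your own root struct; error handling elided):

```go
parser, err := participle.Build(&Grammar{}, participle.UseLookahead())
```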
|
||||
|
||||
Left recursion must be eliminated by restructuring your grammar.
|
||||
|
||||
<a id="markdown-tutorial" name="tutorial"></a>
|
||||
## Tutorial
|
||||
|
||||
A [tutorial](TUTORIAL.md) is available, walking through the creation of an .ini parser.
|
||||
|
||||
<a id="markdown-overview" name="overview"></a>
|
||||
## Overview
|
||||
|
||||
A grammar is an annotated Go structure used to both define the parser grammar,
|
||||
and be the AST output by the parser. As an example, following is the final INI
|
||||
parser from the tutorial.
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property `{ @@ }`
|
||||
Sections []*Section `{ @@ }`
|
||||
}
|
||||
|
||||
type Section struct {
|
||||
Identifier string `"[" @Ident "]"`
|
||||
Properties []*Property `{ @@ }`
|
||||
}
|
||||
|
||||
type Property struct {
|
||||
Key string `@Ident "="`
|
||||
Value *Value `@@`
|
||||
}
|
||||
|
||||
type Value struct {
|
||||
String *string ` @String`
|
||||
Number *float64 `| @Float`
|
||||
}
|
||||
```
|
||||
|
||||
> **Note:** Participle also supports named struct tags (eg. <code>Hello string `parser:"@Ident"`</code>).
|
||||
|
||||
A parser is constructed from a grammar and a lexer:
|
||||
|
||||
```go
|
||||
parser, err := participle.Build(&INI{})
|
||||
```
|
||||
|
||||
Once constructed, the parser is applied to input to produce an AST:
|
||||
|
||||
```go
|
||||
ast := &INI{}
|
||||
err := parser.ParseString("size = 10", ast)
|
||||
// ast == &INI{
|
||||
// Properties: []*Property{
|
||||
// {Key: "size", Value: &Value{Number: &10}},
|
||||
// },
|
||||
// }
|
||||
```
|
||||
|
||||
<a id="markdown-annotation-syntax" name="annotation-syntax"></a>
|
||||
## Annotation syntax
|
||||
|
||||
- `@<expr>` Capture expression into the field.
|
||||
- `@@` Recursively capture using the field's own type.
|
||||
- `<identifier>` Match named lexer token.
|
||||
- `( ... )` Group.
|
||||
- `"..."` Match the literal (note that the lexer must emit tokens matching this literal exactly).
|
||||
- `"...":<identifier>` Match the literal, specifying the exact lexer token type to match.
|
||||
- `<expr> <expr> ...` Match expressions.
|
||||
- `<expr> | <expr>` Match one of the alternatives.
|
||||
|
||||
The following modifiers can be used after any expression:
|
||||
|
||||
- `*` Expression can match zero or more times.
|
||||
- `+` Expression must match one or more times.
|
||||
- `?` Expression can match zero or once.
|
||||
- `!` Require a non-empty match (this is useful with a sequence of optional matches eg. `("a"? "b"? "c"?)!`).
|
||||
|
||||
Supported but deprecated:
|
||||
- `{ ... }` Match 0 or more times (**DEPRECATED** - prefer `( ... )*`).
|
||||
- `[ ... ]` Optional (**DEPRECATED** - prefer `( ... )?`).
|
||||
|
||||
Notes:
|
||||
|
||||
- Each struct is a single production, with each field applied in sequence.
|
||||
- `@<expr>` is the mechanism for capturing matches into the field.
|
||||
- If a struct field is not keyed with "parser", the entire struct tag
  will be used as the grammar fragment. This allows the grammar syntax to remain
  clear and simple to maintain.
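
To tie the pieces above together, here is a small illustrative sketch (the `Import` and `ImportBlock` types are made up for this example, not part of the library):

```go
type Import struct {
	Alias string `(@Ident)?` // optional alias, captured only if present
	Path  string `@String`   // quoted import path
}

type ImportBlock struct {
	Imports []*Import `"import" "(" @@* ")"`
}
```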
|
||||
|
||||
<a id="markdown-capturing" name="capturing"></a>
|
||||
## Capturing
|
||||
|
||||
Prefixing any expression in the grammar with `@` will capture matching values
|
||||
for that expression into the corresponding field.
|
||||
|
||||
For example:
|
||||
|
||||
```go
|
||||
// The grammar definition.
|
||||
type Grammar struct {
|
||||
Hello string `@Ident`
|
||||
}
|
||||
|
||||
// The source text to parse.
|
||||
source := "world"
|
||||
|
||||
// After parsing, the resulting AST.
|
||||
result == &Grammar{
|
||||
Hello: "world",
|
||||
}
|
||||
```
|
||||
|
||||
For slice and string fields, each instance of `@` will accumulate into the
|
||||
field (including repeated patterns). Accumulation into other types is not
|
||||
supported.
|
||||
|
||||
A successful capture match into a boolean field will set the field to true.
|
||||
|
||||
For integer and floating point types, a successful capture will be parsed
with `strconv.ParseInt()` and `strconv.ParseFloat()` respectively.
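
A minimal sketch (the struct and literal names are illustrative only):

```go
type Settings struct {
	Verbose bool `@"verbose"?`         // set to true when the literal "verbose" is matched
	Level   int  `("level" "=" @Int)?` // parsed with strconv.ParseInt
}
```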
|
||||
|
||||
Custom control of how values are captured into fields can be achieved by a
|
||||
field type implementing the `Capture` interface (`Capture(values []string)
|
||||
error`).
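
For example, a hedged sketch of a custom capture (the `Severity` type and its token values are invented for illustration, and `fmt` is assumed to be imported):

```go
type Severity int

func (s *Severity) Capture(values []string) error {
	// values holds the raw text of the captured token(s).
	switch values[0] {
	case "info":
		*s = 0
	case "warning":
		*s = 1
	case "error":
		*s = 2
	default:
		return fmt.Errorf("unknown severity %q", values[0])
	}
	return nil
}

type Rule struct {
	Severity Severity `@("info" | "warning" | "error")`
	Message  string   `@String`
}
```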
|
||||
|
||||
<a id="markdown-streaming" name="streaming"></a>
|
||||
## Streaming
|
||||
|
||||
Participle supports streaming parsing. Simply pass a channel of your grammar into
|
||||
`Parse*()`. The grammar will be repeatedly parsed and sent to the channel. Note that
|
||||
the `Parse*()` call will not return until parsing completes, so it should generally be
|
||||
started in a goroutine.
|
||||
|
||||
```go
|
||||
type token struct {
|
||||
Str string ` @Ident`
|
||||
Num int `| @Int`
|
||||
}
|
||||
|
||||
parser, err := participle.Build(&token{})
|
||||
|
||||
tokens := make(chan *token, 128)
|
||||
err := parser.ParseString(`hello 10 11 12 world`, tokens)
|
||||
for token := range tokens {
|
||||
fmt.Printf("%#v\n", token)
|
||||
}
|
||||
```
|
||||
|
||||
<a id="markdown-lexing" name="lexing"></a>
|
||||
## Lexing
|
||||
|
||||
Participle operates on tokens and thus relies on a lexer to convert character
|
||||
streams to tokens.
|
||||
|
||||
Three lexers are provided, varying in speed and flexibility. The fastest lexer
|
||||
is based on the [text/scanner](https://golang.org/pkg/text/scanner/) package
|
||||
but only allows tokens provided by that package. Next fastest is the regexp
|
||||
lexer (`lexer.Regexp()`). The slowest is currently the EBNF based lexer, but it has a large potential for optimisation through code generation.
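
For example, a parser can be built over a regexp lexer roughly like this (a sketch: the pattern is illustrative and `INI` is the grammar from the tutorial):

```go
iniLexer := lexer.Must(lexer.Regexp(
	`(?P<Ident>[a-zA-Z_][a-zA-Z0-9_]*)` +
		`|(?P<String>"[^"]*")` +
		`|(?P<Float>\d+(\.\d+)?)` +
		`|(?P<Punct>[][=])` +
		`|(\s+)`))

parser, err := participle.Build(&INI{},
	participle.Lexer(iniLexer),
	participle.Unquote("String"),
)
```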
|
||||
|
||||
To use your own Lexer you will need to implement two interfaces:
|
||||
[Definition](https://godoc.org/github.com/alecthomas/participle/lexer#Definition)
|
||||
and [Lexer](https://godoc.org/github.com/alecthomas/participle/lexer#Lexer).
|
||||
|
||||
<a id="markdown-options" name="options"></a>
|
||||
## Options
|
||||
|
||||
The Parser's behaviour can be configured via [Options](https://godoc.org/github.com/alecthomas/participle#Option).
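
A hedged sketch of combining a few of them (`Grammar` and `myLexer` are placeholders, not names from the library):

```go
parser, err := participle.Build(&Grammar{},
	participle.Lexer(myLexer),    // lex with a custom lexer.Definition
	participle.Unquote("String"), // run strconv.Unquote over String tokens
	participle.Elide("Comment"),  // drop Comment tokens before parsing
)
```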
|
||||
|
||||
<a id="markdown-examples" name="examples"></a>
|
||||
## Examples
|
||||
|
||||
There are several [examples](https://github.com/alecthomas/participle/tree/master/_examples) included:
|
||||
|
||||
Example | Description
|
||||
--------|---------------
|
||||
[BASIC](https://github.com/alecthomas/participle/tree/master/_examples/basic) | A lexer, parser and interpreter for a [rudimentary dialect](https://caml.inria.fr/pub/docs/oreilly-book/html/book-ora058.html) of BASIC.
|
||||
[EBNF](https://github.com/alecthomas/participle/tree/master/_examples/ebnf) | Parser for the form of EBNF used by Go.
|
||||
[Expr](https://github.com/alecthomas/participle/tree/master/_examples/expr) | A basic mathematical expression parser and evaluator.
|
||||
[GraphQL](https://github.com/alecthomas/participle/tree/master/_examples/graphql) | Lexer+parser for GraphQL schemas
|
||||
[HCL](https://github.com/alecthomas/participle/tree/master/_examples/hcl) | A parser for the [HashiCorp Configuration Language](https://github.com/hashicorp/hcl).
|
||||
[INI](https://github.com/alecthomas/participle/tree/master/_examples/ini) | An INI file parser.
|
||||
[Protobuf](https://github.com/alecthomas/participle/tree/master/_examples/protobuf) | A full [Protobuf](https://developers.google.com/protocol-buffers/) version 2 and 3 parser.
|
||||
[SQL](https://github.com/alecthomas/participle/tree/master/_examples/sql) | A *very* rudimentary SQL SELECT parser.
|
||||
[Thrift](https://github.com/alecthomas/participle/tree/master/_examples/thrift) | A full [Thrift](https://thrift.apache.org/docs/idl) parser.
|
||||
[TOML](https://github.com/alecthomas/participle/blob/master/_examples/toml/main.go) | A [TOML](https://github.com/toml-lang/toml) parser.
|
||||
|
||||
Included below is a full GraphQL lexer and parser:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"os"
|
||||
|
||||
"github.com/alecthomas/kong"
|
||||
"github.com/alecthomas/repr"
|
||||
|
||||
"github.com/alecthomas/participle"
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
"github.com/alecthomas/participle/lexer/ebnf"
|
||||
)
|
||||
|
||||
type File struct {
|
||||
Entries []*Entry `{ @@ }`
|
||||
}
|
||||
|
||||
type Entry struct {
|
||||
Type *Type ` @@`
|
||||
Schema *Schema `| @@`
|
||||
Enum *Enum `| @@`
|
||||
Scalar string `| "scalar" @Ident`
|
||||
}
|
||||
|
||||
type Enum struct {
|
||||
Name string `"enum" @Ident`
|
||||
Cases []string `"{" { @Ident } "}"`
|
||||
}
|
||||
|
||||
type Schema struct {
|
||||
Fields []*Field `"schema" "{" { @@ } "}"`
|
||||
}
|
||||
|
||||
type Type struct {
|
||||
Name string `"type" @Ident`
|
||||
Implements string `[ "implements" @Ident ]`
|
||||
Fields []*Field `"{" { @@ } "}"`
|
||||
}
|
||||
|
||||
type Field struct {
|
||||
Name string `@Ident`
|
||||
Arguments []*Argument `[ "(" [ @@ { "," @@ } ] ")" ]`
|
||||
Type *TypeRef `":" @@`
|
||||
Annotation string `[ "@" @Ident ]`
|
||||
}
|
||||
|
||||
type Argument struct {
|
||||
Name string `@Ident`
|
||||
Type *TypeRef `":" @@`
|
||||
Default *Value `[ "=" @@ ]`
|
||||
}
|
||||
|
||||
type TypeRef struct {
|
||||
Array *TypeRef `( "[" @@ "]"`
|
||||
Type string ` | @Ident )`
|
||||
NonNullable bool `[ @"!" ]`
|
||||
}
|
||||
|
||||
type Value struct {
|
||||
Symbol string `@Ident`
|
||||
}
|
||||
|
||||
var (
|
||||
graphQLLexer = lexer.Must(ebnf.New(`
|
||||
Comment = ("#" | "//") { "\u0000"…"\uffff"-"\n" } .
|
||||
Ident = (alpha | "_") { "_" | alpha | digit } .
|
||||
Number = ("." | digit) {"." | digit} .
|
||||
Whitespace = " " | "\t" | "\n" | "\r" .
|
||||
Punct = "!"…"/" | ":"…"@" | "["…`+"\"`\""+` | "{"…"~" .
|
||||
|
||||
alpha = "a"…"z" | "A"…"Z" .
|
||||
digit = "0"…"9" .
|
||||
`))
|
||||
|
||||
parser = participle.MustBuild(&File{},
|
||||
participle.Lexer(graphQLLexer),
|
||||
participle.Elide("Comment", "Whitespace"),
|
||||
)
|
||||
|
||||
cli struct {
|
||||
Files []string `arg:"" type:"existingfile" required:"" help:"GraphQL schema files to parse."`
|
||||
}
|
||||
)
|
||||
|
||||
func main() {
|
||||
ctx := kong.Parse(&cli)
|
||||
for _, file := range cli.Files {
|
||||
ast := &File{}
|
||||
r, err := os.Open(file)
|
||||
ctx.FatalIfErrorf(err)
|
||||
err = parser.Parse(r, ast)
|
||||
r.Close()
|
||||
repr.Println(ast)
|
||||
ctx.FatalIfErrorf(err)
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
<a id="markdown-performance" name="performance"></a>
|
||||
## Performance
|
||||
|
||||
One of the included examples is a complete Thrift parser
|
||||
(shell-style comments are not supported). This gives
|
||||
a convenient baseline for comparing to the PEG based
|
||||
[pigeon](https://github.com/PuerkitoBio/pigeon), which is the parser used by
|
||||
[go-thrift](https://github.com/samuel/go-thrift). Additionally, the pigeon
|
||||
parser is utilising a generated parser, while the participle parser is built at
|
||||
run time.
|
||||
|
||||
You can run the benchmarks yourself, but here's the output on my machine:
|
||||
|
||||
BenchmarkParticipleThrift-4 10000 221818 ns/op 48880 B/op 1240 allocs/op
|
||||
BenchmarkGoThriftParser-4 2000 804709 ns/op 170301 B/op 3086 allocs/op
|
||||
|
||||
On a real-life codebase of 47K lines of Thrift, Participle takes 200ms and
go-thrift takes 630ms, which aligns quite closely with the benchmarks.
|
||||
New vendored file: vendor/github.com/alecthomas/participle/TUTORIAL.md (255 lines, generated)
@@ -0,0 +1,255 @@
|
||||
# Participle parser tutorial
|
||||
|
||||
<!-- MarkdownTOC -->
|
||||
|
||||
1. [Introduction](#introduction)
|
||||
1. [The complete grammar](#the-complete-grammar)
|
||||
1. [Root of the .ini AST \(structure, fields\)](#root-of-the-ini-ast-structure-fields)
|
||||
1. [.ini properties \(named tokens, capturing, literals\)](#ini-properties-named-tokens-capturing-literals)
|
||||
1. [.ini property values \(alternates, recursive structs, sequences\)](#ini-property-values-alternates-recursive-structs-sequences)
|
||||
1. [Complete, but limited, .ini grammar \(top-level properties only\)](#complete-but-limited-ini-grammar-top-level-properties-only)
|
||||
1. [Extending our grammar to support sections](#extending-our-grammar-to-support-sections)
|
||||
1. [\(Optional\) Source positional information](#optional-source-positional-information)
|
||||
1. [Parsing using our grammar](#parsing-using-our-grammar)
|
||||
|
||||
<!-- /MarkdownTOC -->
|
||||
|
||||
## Introduction
|
||||
|
||||
Writing a parser in Participle typically involves starting from the "root" of
|
||||
the AST, annotating fields with the grammar, then recursively expanding until
|
||||
it is complete. The AST is expressed via Go data types and the grammar is
|
||||
expressed through struct field tags, as a form of EBNF.
|
||||
|
||||
The parser we're going to create for this tutorial parses .ini files
|
||||
like this:
|
||||
|
||||
```ini
|
||||
age = 21
|
||||
name = "Bob Smith"
|
||||
|
||||
[address]
|
||||
city = "Beverly Hills"
|
||||
postal_code = 90210
|
||||
```
|
||||
|
||||
## The complete grammar
|
||||
|
||||
I think it's useful to see the complete grammar first, to see what we're
|
||||
working towards. Read on below for details.
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property `@@*`
|
||||
Sections []*Section `@@*`
|
||||
}
|
||||
|
||||
type Section struct {
|
||||
Identifier string `"[" @Ident "]"`
|
||||
Properties []*Property `@@*`
|
||||
}
|
||||
|
||||
type Property struct {
|
||||
Key string `@Ident "="`
|
||||
Value *Value `@@`
|
||||
}
|
||||
|
||||
type Value struct {
|
||||
String *string ` @String`
|
||||
Number *float64 `| @Float`
|
||||
}
|
||||
```
|
||||
|
||||
## Root of the .ini AST (structure, fields)
|
||||
|
||||
The first step is to create a root struct for our grammar. In the case of our
|
||||
.ini parser, this struct will contain a sequence of properties:
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property
|
||||
}
|
||||
|
||||
type Property struct {
|
||||
}
|
||||
```
|
||||
|
||||
## .ini properties (named tokens, capturing, literals)
|
||||
|
||||
Each property in an .ini file has an identifier key:
|
||||
|
||||
```go
|
||||
type Property struct {
|
||||
Key string
|
||||
}
|
||||
```
|
||||
|
||||
The default lexer tokenises Go source code, and includes an `Ident` token type
|
||||
that matches identifiers. To match this token we simply use the token type
|
||||
name:
|
||||
|
||||
```go
|
||||
type Property struct {
|
||||
Key string `Ident`
|
||||
}
|
||||
```
|
||||
|
||||
This will *match* identifiers, but not *capture* them into the `Key` field. To
|
||||
capture input tokens into AST fields, prefix any grammar node with `@`:
|
||||
|
||||
```go
|
||||
type Property struct {
|
||||
Key string `@Ident`
|
||||
}
|
||||
```
|
||||
|
||||
In .ini files, each key is separated from its value with a literal `=`. To
|
||||
match a literal, enclose the literal in double quotes:
|
||||
|
||||
```go
|
||||
type Property struct {
|
||||
Key string `@Ident "="`
|
||||
}
|
||||
```
|
||||
|
||||
> Note: literals in the grammar must match tokens from the lexer *exactly*. In
|
||||
> this example if the lexer does not output `=` as a distinct token the
|
||||
> grammar will not match.
|
||||
|
||||
## .ini property values (alternates, recursive structs, sequences)
|
||||
|
||||
For the purposes of our example we are only going to support quoted string
|
||||
and numeric property values. As each value can be *either* a string or a float
|
||||
we'll need something akin to a sum type. Go's type system cannot express this
|
||||
directly, so we'll use the common approach of making each element a pointer.
|
||||
The selected "case" will *not* be nil.
|
||||
|
||||
```go
|
||||
type Value struct {
|
||||
String *string
|
||||
Number *float64
|
||||
}
|
||||
```
|
||||
|
||||
> Note: Participle will hydrate pointers as necessary.
|
||||
|
||||
To express matching a set of alternatives we use the `|` operator:
|
||||
|
||||
```go
|
||||
type Value struct {
|
||||
String *string ` @String`
|
||||
Number *float64 `| @Float`
|
||||
}
|
||||
```
|
||||
|
||||
> Note: the grammar can cross fields.
|
||||
|
||||
Next, we'll match values and capture them into the `Property`. To recursively
|
||||
capture structs use `@@` (capture self):
|
||||
|
||||
```go
|
||||
type Property struct {
|
||||
Key string `@Ident "="`
|
||||
Value *Value `@@`
|
||||
}
|
||||
```
|
||||
|
||||
Now that we can parse a `Property` we need to go back to the root of the
|
||||
grammar. We want to parse 0 or more properties. To do this, we use `<expr>*`.
|
||||
Participle will accumulate each match into the slice until matching fails,
|
||||
then move to the next node in the grammar.
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property `@@*`
|
||||
}
|
||||
```
|
||||
|
||||
> Note: tokens can also be accumulated into strings, appending each match.
|
||||
|
||||
## Complete, but limited, .ini grammar (top-level properties only)
|
||||
|
||||
We now have a functional, but limited, .ini parser!
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property `@@*`
|
||||
}
|
||||
|
||||
type Property struct {
|
||||
Key string `@Ident "="`
|
||||
Value *Value `@@`
|
||||
}
|
||||
|
||||
type Value struct {
|
||||
String *string ` @String`
|
||||
Number *float64 `| @Float`
|
||||
}
|
||||
```
|
||||
|
||||
## Extending our grammar to support sections
|
||||
|
||||
Adding support for sections is simply a matter of utilising the constructs
|
||||
we've just learnt. A section consists of a header identifier, and a sequence
|
||||
of properties:
|
||||
|
||||
```go
|
||||
type Section struct {
|
||||
Identifier string `"[" @Ident "]"`
|
||||
Properties []*Property `@@*`
|
||||
}
|
||||
```
|
||||
|
||||
Simple!
|
||||
|
||||
Now we just add a sequence of `Section`s to our root node:
|
||||
|
||||
```go
|
||||
type INI struct {
|
||||
Properties []*Property `@@*`
|
||||
Sections []*Section `@@*`
|
||||
}
|
||||
```
|
||||
|
||||
And we're done!
|
||||
|
||||
## (Optional) Source positional information
|
||||
|
||||
If a grammar node includes a field with the name `Pos` and type `lexer.Position`, it will be automatically populated by positional information. eg.
|
||||
|
||||
```go
|
||||
type Value struct {
|
||||
Pos lexer.Position
|
||||
String *string ` @String`
|
||||
Number *float64 `| @Float`
|
||||
}
|
||||
```
|
||||
|
||||
This is useful for error reporting.
|
||||
|
||||
## Parsing using our grammar
|
||||
|
||||
To parse with this grammar we first construct the parser (we'll use the
|
||||
default lexer for now):
|
||||
|
||||
```go
|
||||
parser, err := participle.Build(&INI{})
|
||||
```
|
||||
|
||||
Then create a root node and parse into it with `parser.Parse{,String,Bytes}()`:
|
||||
|
||||
```go
|
||||
ini := &INI{}
|
||||
err = parser.ParseString(`
|
||||
age = 21
|
||||
name = "Bob Smith"
|
||||
|
||||
[address]
|
||||
city = "Beverly Hills"
|
||||
postal_code = 90210
|
||||
`, ini)
|
||||
```
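
Finally, a quick sketch of checking the error and walking the resulting tree (assuming `fmt` is imported; the field names are the ones defined above):

```go
if err != nil {
	panic(err)
}
for _, prop := range ini.Properties {
	fmt.Println("top-level property:", prop.Key)
}
for _, section := range ini.Sections {
	fmt.Printf("[%s] has %d properties\n", section.Identifier, len(section.Properties))
}
```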
|
||||
|
||||
You can find the full example [here](_examples/ini/main.go), alongside
|
||||
other examples including an SQL `SELECT` parser and a full
|
||||
[Thrift](https://thrift.apache.org/) parser.
|
||||
New vendored file: vendor/github.com/alecthomas/participle/api.go (19 lines, generated)
@@ -0,0 +1,19 @@
package participle

import (
	"github.com/alecthomas/participle/lexer"
)

// Capture can be implemented by fields in order to transform captured tokens into field values.
type Capture interface {
	Capture(values []string) error
}

// The Parseable interface can be implemented by any element in the grammar to provide custom parsing.
type Parseable interface {
	// Parse into the receiver.
	//
	// Should return NextMatch if no tokens matched and parsing should continue.
	// Nil should be returned if parsing was successful.
	Parse(lex lexer.PeekingLexer) error
}
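// A hypothetical sketch (not part of the original source) of implementing
// Parseable for a run of consecutive identifiers; the type name and the use of
// the default text/scanner-based lexer are assumptions for illustration only.
//
//	type IdentList struct{ Idents []string }
//
//	func (l *IdentList) Parse(lex lexer.PeekingLexer) error {
//		for {
//			tok, err := lex.Peek(0)
//			if err != nil {
//				return err
//			}
//			if tok.Type != scanner.Ident { // scanner is "text/scanner"
//				if len(l.Idents) == 0 {
//					return participle.NextMatch // matched nothing; let other branches try
//				}
//				return nil
//			}
//			tok, _ = lex.Next()
//			l.Idents = append(l.Idents, tok.Value)
//		}
//	}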
New vendored file: vendor/github.com/alecthomas/participle/context.go (123 lines, generated)
@@ -0,0 +1,123 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
type contextFieldSet struct {
|
||||
pos lexer.Position
|
||||
strct reflect.Value
|
||||
field structLexerField
|
||||
fieldValue []reflect.Value
|
||||
}
|
||||
|
||||
// Context for a single parse.
|
||||
type parseContext struct {
|
||||
*rewinder
|
||||
lookahead int
|
||||
caseInsensitive map[rune]bool
|
||||
apply []*contextFieldSet
|
||||
}
|
||||
|
||||
func newParseContext(lex lexer.Lexer, lookahead int, caseInsensitive map[rune]bool) (*parseContext, error) {
|
||||
rew, err := newRewinder(lex)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &parseContext{
|
||||
rewinder: rew,
|
||||
caseInsensitive: caseInsensitive,
|
||||
lookahead: lookahead,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Defer adds a function to be applied once a branch has been picked.
|
||||
func (p *parseContext) Defer(pos lexer.Position, strct reflect.Value, field structLexerField, fieldValue []reflect.Value) {
|
||||
p.apply = append(p.apply, &contextFieldSet{pos, strct, field, fieldValue})
|
||||
}
|
||||
|
||||
// Apply deferred functions.
|
||||
func (p *parseContext) Apply() error {
|
||||
for _, apply := range p.apply {
|
||||
if err := setField(apply.pos, apply.strct, apply.field, apply.fieldValue); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
p.apply = nil
|
||||
return nil
|
||||
}
|
||||
|
||||
// Accept the given branch as the correct branch.
|
||||
func (p *parseContext) Accept(branch *parseContext) {
|
||||
p.apply = append(p.apply, branch.apply...)
|
||||
p.rewinder = branch.rewinder
|
||||
}
|
||||
|
||||
// Branch starts a new lookahead branch.
|
||||
func (p *parseContext) Branch() *parseContext {
|
||||
branch := &parseContext{}
|
||||
*branch = *p
|
||||
branch.apply = nil
|
||||
branch.rewinder = p.rewinder.Lookahead()
|
||||
return branch
|
||||
}
|
||||
|
||||
// Stop returns true if parsing should terminate after the given "branch" failed to match.
|
||||
func (p *parseContext) Stop(branch *parseContext) bool {
|
||||
if branch.cursor > p.cursor+p.lookahead {
|
||||
p.Accept(branch)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
type rewinder struct {
|
||||
cursor, limit int
|
||||
tokens []lexer.Token
|
||||
}
|
||||
|
||||
func newRewinder(lex lexer.Lexer) (*rewinder, error) {
|
||||
r := &rewinder{}
|
||||
for {
|
||||
t, err := lex.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if t.EOF() {
|
||||
break
|
||||
}
|
||||
r.tokens = append(r.tokens, t)
|
||||
}
|
||||
return r, nil
|
||||
}
|
||||
|
||||
func (r *rewinder) Next() (lexer.Token, error) {
|
||||
if r.cursor >= len(r.tokens) {
|
||||
return lexer.EOFToken(lexer.Position{}), nil
|
||||
}
|
||||
r.cursor++
|
||||
return r.tokens[r.cursor-1], nil
|
||||
}
|
||||
|
||||
func (r *rewinder) Peek(n int) (lexer.Token, error) {
|
||||
i := r.cursor + n
|
||||
if i >= len(r.tokens) {
|
||||
return lexer.EOFToken(lexer.Position{}), nil
|
||||
}
|
||||
return r.tokens[i], nil
|
||||
}
|
||||
|
||||
// Lookahead returns a new rewinder usable for lookahead.
|
||||
func (r *rewinder) Lookahead() *rewinder {
|
||||
clone := &rewinder{}
|
||||
*clone = *r
|
||||
clone.limit = clone.cursor
|
||||
return clone
|
||||
}
|
||||
|
||||
// Keep this lookahead rewinder.
|
||||
func (r *rewinder) Keep() {
|
||||
r.limit = 0
|
||||
}
|
||||
New vendored file: vendor/github.com/alecthomas/participle/doc.go (73 lines, generated)
@@ -0,0 +1,73 @@
|
||||
// Package participle constructs parsers from definitions in struct tags and parses directly into
|
||||
// those structs. The approach is philosophically similar to how other marshallers work in Go,
|
||||
// "unmarshalling" an instance of a grammar into a struct.
|
||||
//
|
||||
// The supported annotation syntax is:
|
||||
//
|
||||
// - `@<expr>` Capture expression into the field.
|
||||
// - `@@` Recursively capture using the field's own type.
|
||||
// - `<identifier>` Match named lexer token.
|
||||
// - `( ... )` Group.
|
||||
// - `"..."` Match the literal (note that the lexer must emit tokens matching this literal exactly).
|
||||
// - `"...":<identifier>` Match the literal, specifying the exact lexer token type to match.
|
||||
// - `<expr> <expr> ...` Match expressions.
|
||||
// - `<expr> | <expr>` Match one of the alternatives.
|
||||
//
|
||||
// The following modifiers can be used after any expression:
|
||||
//
|
||||
// - `*` Expression can match zero or more times.
|
||||
// - `+` Expression must match one or more times.
|
||||
// - `?` Expression can match zero or once.
|
||||
// - `!` Require a non-empty match (this is useful with a sequence of optional matches eg. `("a"? "b"? "c"?)!`).
|
||||
//
|
||||
// Supported but deprecated:
|
||||
//
|
||||
// - `{ ... }` Match 0 or more times (**DEPRECATED** - prefer `( ... )*`).
|
||||
// - `[ ... ]` Optional (**DEPRECATED** - prefer `( ... )?`).
|
||||
//
|
||||
// Here's an example of an EBNF grammar.
|
||||
//
|
||||
// type Group struct {
|
||||
// Expression *Expression `"(" @@ ")"`
|
||||
// }
|
||||
//
|
||||
// type Option struct {
|
||||
// Expression *Expression `"[" @@ "]"`
|
||||
// }
|
||||
//
|
||||
// type Repetition struct {
|
||||
// Expression *Expression `"{" @@ "}"`
|
||||
// }
|
||||
//
|
||||
// type Literal struct {
|
||||
// Start string `@String` // lexer.Lexer token "String"
|
||||
// End string `("…" @String)?`
|
||||
// }
|
||||
//
|
||||
// type Term struct {
|
||||
// Name string ` @Ident`
|
||||
// Literal *Literal `| @@`
|
||||
// Group *Group `| @@`
|
||||
// Option *Option `| @@`
|
||||
// Repetition *Repetition `| @@`
|
||||
// }
|
||||
//
|
||||
// type Sequence struct {
|
||||
// Terms []*Term `@@+`
|
||||
// }
|
||||
//
|
||||
// type Expression struct {
|
||||
// Alternatives []*Sequence `@@ ("|" @@)*`
|
||||
// }
|
||||
//
|
||||
// type Expressions []*Expression
|
||||
//
|
||||
// type Production struct {
|
||||
// Name string `@Ident "="`
|
||||
// Expressions Expressions `@@+ "."`
|
||||
// }
|
||||
//
|
||||
// type EBNF struct {
|
||||
// Productions []*Production `@@*`
|
||||
// }
|
||||
package participle
|
||||
New vendored file: vendor/github.com/alecthomas/participle/go.mod (7 lines, generated)
@@ -0,0 +1,7 @@
module github.com/alecthomas/participle

require (
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/stretchr/testify v1.2.2
)
New vendored file: vendor/github.com/alecthomas/participle/go.sum (6 lines, generated)
@@ -0,0 +1,6 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
New vendored file: vendor/github.com/alecthomas/participle/grammar.go (324 lines, generated)
@@ -0,0 +1,324 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"text/scanner"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
type generatorContext struct {
|
||||
lexer.Definition
|
||||
typeNodes map[reflect.Type]node
|
||||
symbolsToIDs map[rune]string
|
||||
}
|
||||
|
||||
func newGeneratorContext(lex lexer.Definition) *generatorContext {
|
||||
return &generatorContext{
|
||||
Definition: lex,
|
||||
typeNodes: map[reflect.Type]node{},
|
||||
symbolsToIDs: lexer.SymbolsByRune(lex),
|
||||
}
|
||||
}
|
||||
|
||||
// Takes a type and builds a tree of nodes out of it.
|
||||
func (g *generatorContext) parseType(t reflect.Type) (_ node, returnedError error) {
|
||||
rt := t
|
||||
t = indirectType(t)
|
||||
if n, ok := g.typeNodes[t]; ok {
|
||||
return n, nil
|
||||
}
|
||||
if rt.Implements(parseableType) {
|
||||
return &parseable{rt.Elem()}, nil
|
||||
}
|
||||
if reflect.PtrTo(rt).Implements(parseableType) {
|
||||
return &parseable{rt}, nil
|
||||
}
|
||||
switch t.Kind() {
|
||||
case reflect.Slice, reflect.Ptr:
|
||||
t = indirectType(t.Elem())
|
||||
if t.Kind() != reflect.Struct {
|
||||
return nil, fmt.Errorf("expected a struct but got %T", t)
|
||||
}
|
||||
fallthrough
|
||||
|
||||
case reflect.Struct:
|
||||
slexer, err := lexStruct(t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
out := &strct{typ: t}
|
||||
g.typeNodes[t] = out // Ensure we avoid infinite recursion.
|
||||
if slexer.NumField() == 0 {
|
||||
return nil, fmt.Errorf("can not parse into empty struct %s", t)
|
||||
}
|
||||
defer decorate(&returnedError, func() string { return slexer.Field().Name })
|
||||
e, err := g.parseDisjunction(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if e == nil {
|
||||
return nil, fmt.Errorf("no grammar found in %s", t)
|
||||
}
|
||||
if token, _ := slexer.Peek(); !token.EOF() {
|
||||
return nil, fmt.Errorf("unexpected input %q", token.Value)
|
||||
}
|
||||
out.expr = e
|
||||
return out, nil
|
||||
}
|
||||
return nil, fmt.Errorf("%s should be a struct or should implement the Parseable interface", t)
|
||||
}
|
||||
|
||||
func (g *generatorContext) parseDisjunction(slexer *structLexer) (node, error) {
|
||||
out := &disjunction{}
|
||||
for {
|
||||
n, err := g.parseSequence(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
out.nodes = append(out.nodes, n)
|
||||
if token, _ := slexer.Peek(); token.Type != '|' {
|
||||
break
|
||||
}
|
||||
_, err = slexer.Next() // |
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if len(out.nodes) == 1 {
|
||||
return out.nodes[0], nil
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (g *generatorContext) parseSequence(slexer *structLexer) (node, error) {
|
||||
head := &sequence{}
|
||||
cursor := head
|
||||
loop:
|
||||
for {
|
||||
if token, err := slexer.Peek(); err != nil {
|
||||
return nil, err
|
||||
} else if token.Type == lexer.EOF {
|
||||
break loop
|
||||
}
|
||||
term, err := g.parseTerm(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if term == nil {
|
||||
break loop
|
||||
}
|
||||
if cursor.node == nil {
|
||||
cursor.head = true
|
||||
cursor.node = term
|
||||
} else {
|
||||
cursor.next = &sequence{node: term}
|
||||
cursor = cursor.next
|
||||
}
|
||||
}
|
||||
if head.node == nil {
|
||||
return nil, nil
|
||||
}
|
||||
if head.next == nil {
|
||||
return head.node, nil
|
||||
}
|
||||
return head, nil
|
||||
}
|
||||
|
||||
func (g *generatorContext) parseTermNoModifiers(slexer *structLexer) (node, error) {
|
||||
t, err := slexer.Peek()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var out node
|
||||
switch t.Type {
|
||||
case '@':
|
||||
out, err = g.parseCapture(slexer)
|
||||
case scanner.String, scanner.RawString, scanner.Char:
|
||||
out, err = g.parseLiteral(slexer)
|
||||
case '[':
|
||||
return g.parseOptional(slexer)
|
||||
case '{':
|
||||
return g.parseRepetition(slexer)
|
||||
case '(':
|
||||
out, err = g.parseGroup(slexer)
|
||||
case scanner.Ident:
|
||||
out, err = g.parseReference(slexer)
|
||||
case lexer.EOF:
|
||||
_, _ = slexer.Next()
|
||||
return nil, nil
|
||||
default:
|
||||
return nil, nil
|
||||
}
|
||||
return out, err
|
||||
}
|
||||
|
||||
func (g *generatorContext) parseTerm(slexer *structLexer) (node, error) {
|
||||
out, err := g.parseTermNoModifiers(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return g.parseModifier(slexer, out)
|
||||
}
|
||||
|
||||
// Parse modifiers: ?, *, + and/or !
|
||||
func (g *generatorContext) parseModifier(slexer *structLexer, expr node) (node, error) {
|
||||
out := &group{expr: expr}
|
||||
t, err := slexer.Peek()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
switch t.Type {
|
||||
case '!':
|
||||
out.mode = groupMatchNonEmpty
|
||||
case '+':
|
||||
out.mode = groupMatchOneOrMore
|
||||
case '*':
|
||||
out.mode = groupMatchZeroOrMore
|
||||
case '?':
|
||||
out.mode = groupMatchZeroOrOne
|
||||
default:
|
||||
return expr, nil
|
||||
}
|
||||
_, _ = slexer.Next()
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// @<expression> captures <expression> into the current field.
|
||||
func (g *generatorContext) parseCapture(slexer *structLexer) (node, error) {
|
||||
_, _ = slexer.Next()
|
||||
token, err := slexer.Peek()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
field := slexer.Field()
|
||||
if token.Type == '@' {
|
||||
_, _ = slexer.Next()
|
||||
n, err := g.parseType(field.Type)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &capture{field, n}, nil
|
||||
}
|
||||
if indirectType(field.Type).Kind() == reflect.Struct && !field.Type.Implements(captureType) {
|
||||
return nil, fmt.Errorf("structs can only be parsed with @@ or by implementing the Capture interface")
|
||||
}
|
||||
n, err := g.parseTermNoModifiers(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &capture{field, n}, nil
|
||||
}
|
||||
|
||||
// A reference in the form <identifier> refers to a named token from the lexer.
|
||||
func (g *generatorContext) parseReference(slexer *structLexer) (node, error) { // nolint: interfacer
|
||||
token, err := slexer.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if token.Type != scanner.Ident {
|
||||
return nil, fmt.Errorf("expected identifier but got %q", token)
|
||||
}
|
||||
typ, ok := g.Symbols()[token.Value]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unknown token type %q", token)
|
||||
}
|
||||
return &reference{typ: typ, identifier: token.Value}, nil
|
||||
}
|
||||
|
||||
// [ <expression> ] optionally matches <expression>.
|
||||
func (g *generatorContext) parseOptional(slexer *structLexer) (node, error) {
|
||||
_, _ = slexer.Next() // [
|
||||
disj, err := g.parseDisjunction(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n := &group{expr: disj, mode: groupMatchZeroOrOne}
|
||||
next, err := slexer.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if next.Type != ']' {
|
||||
return nil, fmt.Errorf("expected ] but got %q", next)
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// { <expression> } matches 0 or more repetitions of <expression>.
|
||||
func (g *generatorContext) parseRepetition(slexer *structLexer) (node, error) {
|
||||
_, _ = slexer.Next() // {
|
||||
disj, err := g.parseDisjunction(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n := &group{expr: disj, mode: groupMatchZeroOrMore}
|
||||
next, err := slexer.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if next.Type != '}' {
|
||||
return nil, fmt.Errorf("expected } but got %q", next)
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// ( <expression> ) groups a sub-expression
|
||||
func (g *generatorContext) parseGroup(slexer *structLexer) (node, error) {
|
||||
_, _ = slexer.Next() // (
|
||||
disj, err := g.parseDisjunction(slexer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
next, err := slexer.Next() // )
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if next.Type != ')' {
|
||||
return nil, fmt.Errorf("expected ) but got %q", next)
|
||||
}
|
||||
return &group{expr: disj}, nil
|
||||
}
|
||||
|
||||
// A literal string.
|
||||
//
|
||||
// Note that for this to match, the tokeniser must be able to produce this string. For example,
// it will never match if the tokeniser only produces individual characters but the literal is "hello", or vice versa.
|
||||
func (g *generatorContext) parseLiteral(lex *structLexer) (node, error) { // nolint: interfacer
|
||||
token, err := lex.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if token.Type != scanner.String && token.Type != scanner.RawString && token.Type != scanner.Char {
|
||||
return nil, fmt.Errorf("expected quoted string but got %q", token)
|
||||
}
|
||||
s := token.Value
|
||||
t := rune(-1)
|
||||
token, err = lex.Peek()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if token.Value == ":" && (token.Type == scanner.Char || token.Type == ':') {
|
||||
_, _ = lex.Next()
|
||||
token, err = lex.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if token.Type != scanner.Ident {
|
||||
return nil, fmt.Errorf("expected identifier for literal type constraint but got %q", token)
|
||||
}
|
||||
var ok bool
|
||||
t, ok = g.Symbols()[token.Value]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unknown token type %q in literal type constraint", token)
|
||||
}
|
||||
}
|
||||
return &literal{s: s, t: t, tt: g.symbolsToIDs[t]}, nil
|
||||
}
|
||||
|
||||
func indirectType(t reflect.Type) reflect.Type {
|
||||
if t.Kind() == reflect.Ptr || t.Kind() == reflect.Slice {
|
||||
return indirectType(t.Elem())
|
||||
}
|
||||
return t
|
||||
}
|
||||
New vendored file: vendor/github.com/alecthomas/participle/lexer/doc.go (19 lines, generated)
@@ -0,0 +1,19 @@
// Package lexer defines interfaces and implementations used by Participle to perform lexing.
//
// The primary interfaces are Definition and Lexer. There are three implementations of these
// interfaces:
//
// TextScannerLexer is based on text/scanner. This is the fastest, but least flexible, in that
// tokens are restricted to those supported by that package. It can scan about 5M tokens/second on a
// late 2013 15" MacBook Pro.
//
// The second lexer is constructed via the Regexp() function, mapping regexp capture groups
// to tokens. The complete input source is read into memory, so it is unsuitable for large inputs.
//
// The final lexer provided accepts a lexical grammar in EBNF. Each capitalised production is a
// lexical token supported by the resulting Lexer. This is very flexible, but a bit slower, scanning
// around 730K tokens/second on the same machine, though it is currently completely unoptimised.
// This could/should be converted to a table-based lexer.
//
// Lexer implementations must use Panic/Panicf to report errors.
package lexer
New vendored file: vendor/github.com/alecthomas/participle/lexer/errors.go (26 lines, generated)
@@ -0,0 +1,26 @@
package lexer

import "fmt"

// Error represents an error while parsing.
type Error struct {
	Message string
	Pos     Position
}

// Errorf creates a new Error at the given position.
func Errorf(pos Position, format string, args ...interface{}) *Error {
	return &Error{
		Message: fmt.Sprintf(format, args...),
		Pos:     pos,
	}
}

// Error complies with the error interface and reports the position of an error.
func (e *Error) Error() string {
	filename := e.Pos.Filename
	if filename == "" {
		filename = "<source>"
	}
	return fmt.Sprintf("%s:%d:%d: %s", filename, e.Pos.Line, e.Pos.Column, e.Message)
}
New vendored file: vendor/github.com/alecthomas/participle/lexer/lexer.go (150 lines, generated)
@@ -0,0 +1,150 @@
|
||||
package lexer
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
)
|
||||
|
||||
const (
|
||||
// EOF represents an end of file.
|
||||
EOF rune = -(iota + 1)
|
||||
)
|
||||
|
||||
// EOFToken creates a new EOF token at the given position.
|
||||
func EOFToken(pos Position) Token {
|
||||
return Token{Type: EOF, Pos: pos}
|
||||
}
|
||||
|
||||
// Definition provides the parser with metadata for a lexer.
|
||||
type Definition interface {
|
||||
// Lex an io.Reader.
|
||||
Lex(io.Reader) (Lexer, error)
|
||||
// Symbols returns a map of symbolic names to the corresponding pseudo-runes for those symbols.
|
||||
// This is the same approach as used by text/scanner. For example, "EOF" might have the rune
|
||||
// value of -1, "Ident" might be -2, and so on.
|
||||
Symbols() map[string]rune
|
||||
}
|
||||
|
||||
// A Lexer returns tokens from a source.
|
||||
type Lexer interface {
|
||||
// Next consumes and returns the next token.
|
||||
Next() (Token, error)
|
||||
}
|
||||
|
||||
// A PeekingLexer returns tokens from a source and allows peeking.
|
||||
type PeekingLexer interface {
|
||||
Lexer
|
||||
// Peek at the next token.
|
||||
Peek(n int) (Token, error)
|
||||
}
|
||||
|
||||
// SymbolsByRune returns a map of lexer symbol names keyed by rune.
|
||||
func SymbolsByRune(def Definition) map[rune]string {
|
||||
out := map[rune]string{}
|
||||
for s, r := range def.Symbols() {
|
||||
out[r] = s
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// NameOfReader attempts to retrieve the filename of a reader.
|
||||
func NameOfReader(r interface{}) string {
|
||||
if nr, ok := r.(interface{ Name() string }); ok {
|
||||
return nr.Name()
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Must takes the result of a Definition constructor call and returns the definition, but panics if
// it errors.
|
||||
//
|
||||
// eg.
|
||||
//
|
||||
// lex = lexer.Must(lexer.Build(`Symbol = "symbol" .`))
|
||||
func Must(def Definition, err error) Definition {
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return def
|
||||
}
|
||||
|
||||
// ConsumeAll reads all tokens from a Lexer.
|
||||
func ConsumeAll(lexer Lexer) ([]Token, error) {
|
||||
tokens := []Token{}
|
||||
for {
|
||||
token, err := lexer.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tokens = append(tokens, token)
|
||||
if token.Type == EOF {
|
||||
return tokens, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Position of a token.
|
||||
type Position struct {
|
||||
Filename string
|
||||
Offset int
|
||||
Line int
|
||||
Column int
|
||||
}
|
||||
|
||||
func (p Position) GoString() string {
|
||||
return fmt.Sprintf("Position{Filename: %q, Offset: %d, Line: %d, Column: %d}",
|
||||
p.Filename, p.Offset, p.Line, p.Column)
|
||||
}
|
||||
|
||||
func (p Position) String() string {
|
||||
filename := p.Filename
|
||||
if filename == "" {
|
||||
filename = "<source>"
|
||||
}
|
||||
return fmt.Sprintf("%s:%d:%d", filename, p.Line, p.Column)
|
||||
}
|
||||
|
||||
// A Token returned by a Lexer.
|
||||
type Token struct {
|
||||
// Type of token. This is the value keyed by symbol as returned by Definition.Symbols().
|
||||
Type rune
|
||||
Value string
|
||||
Pos Position
|
||||
}
|
||||
|
||||
// RuneToken represents a rune as a Token.
|
||||
func RuneToken(r rune) Token {
|
||||
return Token{Type: r, Value: string(r)}
|
||||
}
|
||||
|
||||
// EOF returns true if this Token is an EOF token.
|
||||
func (t Token) EOF() bool {
|
||||
return t.Type == EOF
|
||||
}
|
||||
|
||||
func (t Token) String() string {
|
||||
if t.EOF() {
|
||||
return "<EOF>"
|
||||
}
|
||||
return t.Value
|
||||
}
|
||||
|
||||
func (t Token) GoString() string {
|
||||
return fmt.Sprintf("Token{%d, %q}", t.Type, t.Value)
|
||||
}
|
||||
|
||||
// MakeSymbolTable builds a lookup table for checking token ID existence.
|
||||
//
|
||||
// For each symbolic name in "types", the returned map will contain the corresponding token ID as a key.
|
||||
func MakeSymbolTable(def Definition, types ...string) (map[rune]bool, error) {
|
||||
symbols := def.Symbols()
|
||||
table := map[rune]bool{}
|
||||
for _, symbol := range types {
|
||||
rn, ok := symbols[symbol]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("lexer does not support symbol %q", symbol)
|
||||
}
|
||||
table[rn] = true
|
||||
}
|
||||
return table, nil
|
||||
}
|
||||
New vendored file: vendor/github.com/alecthomas/participle/lexer/peek.go (37 lines, generated)
@@ -0,0 +1,37 @@
package lexer

// Upgrade a Lexer to a PeekingLexer with arbitrary lookahead.
func Upgrade(lexer Lexer) PeekingLexer {
	if peeking, ok := lexer.(PeekingLexer); ok {
		return peeking
	}
	return &lookaheadLexer{Lexer: lexer}
}

type lookaheadLexer struct {
	Lexer
	peeked []Token
}

func (l *lookaheadLexer) Peek(n int) (Token, error) {
	for len(l.peeked) <= n {
		t, err := l.Lexer.Next()
		if err != nil {
			return Token{}, err
		}
		if t.EOF() {
			return t, nil
		}
		l.peeked = append(l.peeked, t)
	}
	return l.peeked[n], nil
}

func (l *lookaheadLexer) Next() (Token, error) {
	if len(l.peeked) > 0 {
		t := l.peeked[0]
		l.peeked = l.peeked[1:]
		return t, nil
	}
	return l.Lexer.Next()
}
New vendored file: vendor/github.com/alecthomas/participle/lexer/regexp.go (112 lines, generated)
@@ -0,0 +1,112 @@
|
||||
package lexer
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"regexp"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
var eolBytes = []byte("\n")
|
||||
|
||||
type regexpDefinition struct {
|
||||
re *regexp.Regexp
|
||||
symbols map[string]rune
|
||||
}
|
||||
|
||||
// Regexp creates a lexer definition from a regular expression.
|
||||
//
|
||||
// Each named sub-expression in the regular expression matches a token. Anonymous sub-expressions
|
||||
// will be matched and discarded.
|
||||
//
|
||||
// eg.
|
||||
//
|
||||
// def, err := Regexp(`(?P<Ident>[a-z]+)|(\s+)|(?P<Number>\d+)`)
|
||||
func Regexp(pattern string) (Definition, error) {
|
||||
re, err := regexp.Compile(pattern)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
symbols := map[string]rune{
|
||||
"EOF": EOF,
|
||||
}
|
||||
for i, sym := range re.SubexpNames()[1:] {
|
||||
if sym != "" {
|
||||
symbols[sym] = EOF - 1 - rune(i)
|
||||
}
|
||||
}
|
||||
return ®expDefinition{re: re, symbols: symbols}, nil
|
||||
}
|
||||
|
||||
func (d *regexpDefinition) Lex(r io.Reader) (Lexer, error) {
|
||||
b, err := ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return ®expLexer{
|
||||
pos: Position{
|
||||
Filename: NameOfReader(r),
|
||||
Line: 1,
|
||||
Column: 1,
|
||||
},
|
||||
b: b,
|
||||
re: d.re,
|
||||
names: d.re.SubexpNames(),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *regexpDefinition) Symbols() map[string]rune {
|
||||
return d.symbols
|
||||
}
|
||||
|
||||
type regexpLexer struct {
|
||||
pos Position
|
||||
b []byte
|
||||
re *regexp.Regexp
|
||||
names []string
|
||||
}
|
||||
|
||||
func (r *regexpLexer) Next() (Token, error) {
|
||||
nextToken:
|
||||
for len(r.b) != 0 {
|
||||
matches := r.re.FindSubmatchIndex(r.b)
|
||||
if matches == nil || matches[0] != 0 {
|
||||
rn, _ := utf8.DecodeRune(r.b)
|
||||
return Token{}, Errorf(r.pos, "invalid token %q", rn)
|
||||
}
|
||||
match := r.b[:matches[1]]
|
||||
token := Token{
|
||||
Pos: r.pos,
|
||||
Value: string(match),
|
||||
}
|
||||
|
||||
// Update lexer state.
|
||||
r.pos.Offset += matches[1]
|
||||
lines := bytes.Count(match, eolBytes)
|
||||
r.pos.Line += lines
|
||||
// Update column.
|
||||
if lines == 0 {
|
||||
r.pos.Column += utf8.RuneCount(match)
|
||||
} else {
|
||||
r.pos.Column = utf8.RuneCount(match[bytes.LastIndex(match, eolBytes):])
|
||||
}
|
||||
// Move slice along.
|
||||
r.b = r.b[matches[1]:]
|
||||
|
||||
// Finally, assign token type. If it is not a named group, we continue to the next token.
|
||||
for i := 2; i < len(matches); i += 2 {
|
||||
if matches[i] != -1 {
|
||||
if r.names[i/2] == "" {
|
||||
continue nextToken
|
||||
}
|
||||
token.Type = EOF - rune(i/2)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return token, nil
|
||||
}
|
||||
|
||||
return EOFToken(r.pos), nil
|
||||
}
|
||||
New vendored file: vendor/github.com/alecthomas/participle/lexer/text_scanner.go (125 lines, generated)
@@ -0,0 +1,125 @@
|
||||
package lexer
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
"text/scanner"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// TextScannerLexer is a lexer that uses the text/scanner module.
|
||||
var (
|
||||
TextScannerLexer Definition = &defaultDefinition{}
|
||||
|
||||
// DefaultDefinition defines properties for the default lexer.
|
||||
DefaultDefinition = TextScannerLexer
|
||||
)
|
||||
|
||||
type defaultDefinition struct{}
|
||||
|
||||
func (d *defaultDefinition) Lex(r io.Reader) (Lexer, error) {
|
||||
return Lex(r), nil
|
||||
}
|
||||
|
||||
func (d *defaultDefinition) Symbols() map[string]rune {
|
||||
return map[string]rune{
|
||||
"EOF": scanner.EOF,
|
||||
"Char": scanner.Char,
|
||||
"Ident": scanner.Ident,
|
||||
"Int": scanner.Int,
|
||||
"Float": scanner.Float,
|
||||
"String": scanner.String,
|
||||
"RawString": scanner.RawString,
|
||||
"Comment": scanner.Comment,
|
||||
}
|
||||
}
|
||||
|
||||
// textScannerLexer is a Lexer based on text/scanner.Scanner
|
||||
type textScannerLexer struct {
|
||||
scanner *scanner.Scanner
|
||||
filename string
|
||||
err error
|
||||
}
|
||||
|
||||
// Lex an io.Reader with text/scanner.Scanner.
|
||||
//
|
||||
// This provides very fast lexing of source code compatible with Go tokens.
|
||||
//
|
||||
// Note that this differs from text/scanner.Scanner in that string tokens will be unquoted.
|
||||
func Lex(r io.Reader) Lexer {
|
||||
lexer := lexWithScanner(r, &scanner.Scanner{})
|
||||
lexer.scanner.Error = func(s *scanner.Scanner, msg string) {
|
||||
// This is to support single quoted strings. Hacky.
|
||||
if msg != "illegal char literal" {
|
||||
lexer.err = Errorf(Position(lexer.scanner.Pos()), msg)
|
||||
}
|
||||
}
|
||||
return lexer
|
||||
}
|
||||
|
||||
// LexWithScanner creates a Lexer from a user-provided scanner.Scanner.
|
||||
//
|
||||
// Useful if you need to customise the Scanner.
|
||||
func LexWithScanner(r io.Reader, scan *scanner.Scanner) Lexer {
|
||||
return lexWithScanner(r, scan)
|
||||
}
|
||||
|
||||
func lexWithScanner(r io.Reader, scan *scanner.Scanner) *textScannerLexer {
|
||||
lexer := &textScannerLexer{
|
||||
filename: NameOfReader(r),
|
||||
scanner: scan,
|
||||
}
|
||||
lexer.scanner.Init(r)
|
||||
return lexer
|
||||
}
|
||||
|
||||
// LexBytes returns a new default lexer over bytes.
|
||||
func LexBytes(b []byte) Lexer {
|
||||
return Lex(bytes.NewReader(b))
|
||||
}
|
||||
|
||||
// LexString returns a new default lexer over a string.
|
||||
func LexString(s string) Lexer {
|
||||
return Lex(strings.NewReader(s))
|
||||
}
|
||||
|
||||
func (t *textScannerLexer) Next() (Token, error) {
|
||||
typ := t.scanner.Scan()
|
||||
text := t.scanner.TokenText()
|
||||
pos := Position(t.scanner.Position)
|
||||
pos.Filename = t.filename
|
||||
if t.err != nil {
|
||||
return Token{}, t.err
|
||||
}
|
||||
return textScannerTransform(Token{
|
||||
Type: typ,
|
||||
Value: text,
|
||||
Pos: pos,
|
||||
})
|
||||
}
|
||||
|
||||
func textScannerTransform(token Token) (Token, error) {
|
||||
// Unquote strings.
|
||||
switch token.Type {
|
||||
case scanner.Char:
|
||||
// FIXME(alec): This is pretty hacky...we convert a single quoted char into a double
|
||||
// quoted string in order to support single quoted strings.
|
||||
token.Value = fmt.Sprintf("\"%s\"", token.Value[1:len(token.Value)-1])
|
||||
fallthrough
|
||||
case scanner.String:
|
||||
s, err := strconv.Unquote(token.Value)
|
||||
if err != nil {
|
||||
return Token{}, Errorf(token.Pos, "%s: %q", err.Error(), token.Value)
|
||||
}
|
||||
token.Value = s
|
||||
if token.Type == scanner.Char && utf8.RuneCountInString(s) > 1 {
|
||||
token.Type = scanner.String
|
||||
}
|
||||
case scanner.RawString:
|
||||
token.Value = token.Value[1 : len(token.Value)-1]
|
||||
}
|
||||
return token, nil
|
||||
}
|
||||
118
vendor/github.com/alecthomas/participle/map.go
generated
vendored
Normal file
@@ -0,0 +1,118 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
type mapperByToken struct {
|
||||
symbols []string
|
||||
mapper Mapper
|
||||
}
|
||||
|
||||
// DropToken can be returned by a Mapper to remove a token from the stream.
|
||||
var DropToken = errors.New("drop token") // nolint: golint
|
||||
|
||||
// Mapper function for mutating tokens before being applied to the AST.
|
||||
//
|
||||
// If the Mapper func returns an error of DropToken, the token will be removed from the stream.
|
||||
type Mapper func(token lexer.Token) (lexer.Token, error)
|
||||
|
||||
// Map is an Option that configures the Parser to apply a mapping function to each Token from the lexer.
|
||||
//
|
||||
// This can be useful to eg. upper-case all tokens of a certain type, or dequote strings.
|
||||
//
|
||||
// "symbols" specifies the token symbols that the Mapper will be applied to. If empty, all tokens will be mapped.
|
||||
func Map(mapper Mapper, symbols ...string) Option {
|
||||
return func(p *Parser) error {
|
||||
p.mappers = append(p.mappers, mapperByToken{
|
||||
mapper: mapper,
|
||||
symbols: symbols,
|
||||
})
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Unquote applies strconv.Unquote() to tokens of the given types.
|
||||
//
|
||||
// Tokens of type "String" will be unquoted if no other types are provided.
|
||||
func Unquote(types ...string) Option {
|
||||
if len(types) == 0 {
|
||||
types = []string{"String"}
|
||||
}
|
||||
return Map(func(t lexer.Token) (lexer.Token, error) {
|
||||
value, err := unquote(t.Value)
|
||||
if err != nil {
|
||||
return t, lexer.Errorf(t.Pos, "invalid quoted string %q: %s", t.Value, err.Error())
|
||||
}
|
||||
t.Value = value
|
||||
return t, nil
|
||||
}, types...)
|
||||
}
|
||||
|
||||
func unquote(s string) (string, error) {
|
||||
quote := s[0]
|
||||
s = s[1 : len(s)-1]
|
||||
out := ""
|
||||
for s != "" {
|
||||
value, _, tail, err := strconv.UnquoteChar(s, quote)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
s = tail
|
||||
out += string(value)
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// Upper is an Option that upper-cases all tokens of the given type. Useful for case normalisation.
|
||||
func Upper(types ...string) Option {
|
||||
return Map(func(token lexer.Token) (lexer.Token, error) {
|
||||
token.Value = strings.ToUpper(token.Value)
|
||||
return token, nil
|
||||
}, types...)
|
||||
}
|
||||
|
||||
// Elide drops tokens of the specified types.
|
||||
func Elide(types ...string) Option {
|
||||
return Map(func(token lexer.Token) (lexer.Token, error) {
|
||||
return lexer.Token{}, DropToken
|
||||
}, types...)
|
||||
}
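A hedged sketch of combining the mapping options above when constructing a parser. The Assignment grammar is invented for illustration; note that Unquote is mainly useful with custom lexer definitions, since the default text/scanner lexer already unquotes String tokens.

```go
package example

import "github.com/alecthomas/participle"

// Assignment is a toy grammar used only to illustrate the options above;
// it is not part of this commit.
type Assignment struct {
	Name  string `parser:"@Ident '='"`
	Value string `parser:"@String"`
}

var assignParser = participle.MustBuild(&Assignment{},
	participle.Upper("Ident"),   // upper-case every Ident token before capture
	participle.Elide("Comment"), // drop Comment tokens, if the lexer emits them
)
```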
|
||||
|
||||
// Apply a Mapping to all tokens coming out of a Lexer.
|
||||
type mappingLexerDef struct {
|
||||
lexer.Definition
|
||||
mapper Mapper
|
||||
}
|
||||
|
||||
func (m *mappingLexerDef) Lex(r io.Reader) (lexer.Lexer, error) {
|
||||
lexer, err := m.Definition.Lex(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &mappingLexer{lexer, m.mapper}, nil
|
||||
}
|
||||
|
||||
type mappingLexer struct {
|
||||
lexer.Lexer
|
||||
mapper Mapper
|
||||
}
|
||||
|
||||
func (m *mappingLexer) Next() (lexer.Token, error) {
|
||||
for {
|
||||
t, err := m.Lexer.Next()
|
||||
if err != nil {
|
||||
return t, err
|
||||
}
|
||||
t, err = m.mapper(t)
|
||||
if err == DropToken {
|
||||
continue
|
||||
}
|
||||
return t, err
|
||||
}
|
||||
}
|
||||
575
vendor/github.com/alecthomas/participle/nodes.go
generated
vendored
Normal file
@@ -0,0 +1,575 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
var (
|
||||
// MaxIterations limits the number of elements capturable by {}.
|
||||
MaxIterations = 1000000
|
||||
|
||||
positionType = reflect.TypeOf(lexer.Position{})
|
||||
captureType = reflect.TypeOf((*Capture)(nil)).Elem()
|
||||
parseableType = reflect.TypeOf((*Parseable)(nil)).Elem()
|
||||
|
||||
// NextMatch should be returned by Parseable.Parse() method implementations to indicate
|
||||
// that the node did not match and that other matches should be attempted, if appropriate.
|
||||
NextMatch = errors.New("no match") // nolint: golint
|
||||
)
|
||||
|
||||
// A node in the grammar.
|
||||
type node interface {
|
||||
// Parse from scanner into value.
|
||||
//
|
||||
// Returned slice will be nil if the node does not match.
|
||||
Parse(ctx *parseContext, parent reflect.Value) ([]reflect.Value, error)
|
||||
|
||||
// Return a decent string representation of the Node.
|
||||
String() string
|
||||
}
|
||||
|
||||
func decorate(err *error, name func() string) {
|
||||
if *err == nil {
|
||||
return
|
||||
}
|
||||
switch realError := (*err).(type) {
|
||||
case *lexer.Error:
|
||||
*err = &lexer.Error{Message: name() + ": " + realError.Message, Pos: realError.Pos}
|
||||
default:
|
||||
*err = fmt.Errorf("%s: %s", name(), realError)
|
||||
}
|
||||
}
|
||||
|
||||
// A node that proxies to an implementation that implements the Parseable interface.
|
||||
type parseable struct {
|
||||
t reflect.Type
|
||||
}
|
||||
|
||||
func (p *parseable) String() string { return stringer(p) }
|
||||
|
||||
func (p *parseable) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
rv := reflect.New(p.t)
|
||||
v := rv.Interface().(Parseable)
|
||||
err = v.Parse(ctx)
|
||||
if err != nil {
|
||||
if err == NextMatch {
|
||||
return nil, nil
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
return []reflect.Value{rv.Elem()}, nil
|
||||
}
|
||||
|
||||
type strct struct {
|
||||
typ reflect.Type
|
||||
expr node
|
||||
}
|
||||
|
||||
func (s *strct) String() string { return stringer(s) }
|
||||
|
||||
func (s *strct) maybeInjectPos(pos lexer.Position, v reflect.Value) {
|
||||
if f := v.FieldByName("Pos"); f.IsValid() && f.Type() == positionType {
|
||||
f.Set(reflect.ValueOf(pos))
|
||||
}
|
||||
}
|
||||
|
||||
func (s *strct) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
sv := reflect.New(s.typ).Elem()
|
||||
t, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s.maybeInjectPos(t.Pos, sv)
|
||||
if out, err = s.expr.Parse(ctx, sv); err != nil {
|
||||
_ = ctx.Apply()
|
||||
return []reflect.Value{sv}, err
|
||||
} else if out == nil {
|
||||
return nil, nil
|
||||
}
|
||||
return []reflect.Value{sv}, ctx.Apply()
|
||||
}
|
||||
|
||||
type groupMatchMode int
|
||||
|
||||
const (
|
||||
groupMatchOnce groupMatchMode = iota
|
||||
groupMatchZeroOrOne = iota
|
||||
groupMatchZeroOrMore = iota
|
||||
groupMatchOneOrMore = iota
|
||||
groupMatchNonEmpty = iota
|
||||
)
|
||||
|
||||
// ( <expr> ) - match once
|
||||
// ( <expr> )* - match zero or more times
|
||||
// ( <expr> )+ - match one or more times
|
||||
// ( <expr> )? - match zero or once
|
||||
// ( <expr> )! - must be a non-empty match
|
||||
//
|
||||
// The additional modifier "!" forces the content of the group to be non-empty if it does match.
|
||||
type group struct {
|
||||
expr node
|
||||
mode groupMatchMode
|
||||
}
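As an illustration of the modifiers documented above, a hypothetical grammar fragment (not from this commit) matching a parenthesised, comma-separated identifier list:

```go
package example

// ArgList captures each identifier into Args; the "( ... )*" group repeats the
// comma-separated tail zero or more times. Illustration only.
type ArgList struct {
	Args []string `parser:"'(' @Ident ( ',' @Ident )* ')'"`
}
```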
|
||||
|
||||
func (g *group) String() string { return stringer(g) }
|
||||
func (g *group) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
// Configure min/max matches.
|
||||
min := 1
|
||||
max := 1
|
||||
switch g.mode {
|
||||
case groupMatchNonEmpty:
|
||||
out, err = g.expr.Parse(ctx, parent)
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
if len(out) == 0 {
|
||||
t, _ := ctx.Peek(0)
|
||||
return out, lexer.Errorf(t.Pos, "sub-expression %s cannot be empty", g)
|
||||
}
|
||||
return out, nil
|
||||
case groupMatchOnce:
|
||||
return g.expr.Parse(ctx, parent)
|
||||
case groupMatchZeroOrOne:
|
||||
min = 0
|
||||
case groupMatchZeroOrMore:
|
||||
min = 0
|
||||
max = MaxIterations
|
||||
case groupMatchOneOrMore:
|
||||
min = 1
|
||||
max = MaxIterations
|
||||
}
|
||||
matches := 0
|
||||
for ; matches < max; matches++ {
|
||||
branch := ctx.Branch()
|
||||
v, err := g.expr.Parse(branch, parent)
|
||||
out = append(out, v...)
|
||||
if err != nil {
|
||||
// Optional part failed to match.
|
||||
if ctx.Stop(branch) {
|
||||
return out, err
|
||||
}
|
||||
break
|
||||
} else {
|
||||
ctx.Accept(branch)
|
||||
}
|
||||
if v == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
// fmt.Printf("%d < %d < %d: out == nil? %v\n", min, matches, max, out == nil)
|
||||
t, _ := ctx.Peek(0)
|
||||
if matches >= MaxIterations {
|
||||
panic(lexer.Errorf(t.Pos, "too many iterations of %s (> %d)", g, MaxIterations))
|
||||
}
|
||||
if matches < min {
|
||||
return out, lexer.Errorf(t.Pos, "sub-expression %s must match at least once", g)
|
||||
}
|
||||
// The idea here is that something like "a"? is a successful match and that parsing should proceed.
|
||||
if min == 0 && out == nil {
|
||||
out = []reflect.Value{}
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// <expr> {"|" <expr>}
|
||||
type disjunction struct {
|
||||
nodes []node
|
||||
}
|
||||
|
||||
func (d *disjunction) String() string { return stringer(d) }
|
||||
|
||||
func (d *disjunction) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
var (
|
||||
deepestError = 0
|
||||
firstError error
|
||||
firstValues []reflect.Value
|
||||
)
|
||||
for _, a := range d.nodes {
|
||||
branch := ctx.Branch()
|
||||
if value, err := a.Parse(branch, parent); err != nil {
|
||||
// If this branch progressed too far and still didn't match, error out.
|
||||
if ctx.Stop(branch) {
|
||||
return value, err
|
||||
}
|
||||
// Show the closest error returned. The idea here is that the further the parser progresses
|
||||
// without error, the more difficult it is to trace the error back to its root.
|
||||
if err != nil && branch.cursor >= deepestError {
|
||||
firstError = err
|
||||
firstValues = value
|
||||
deepestError = branch.cursor
|
||||
}
|
||||
} else if value != nil {
|
||||
ctx.Accept(branch)
|
||||
return value, nil
|
||||
}
|
||||
}
|
||||
if firstError != nil {
|
||||
return firstValues, firstError
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// <node> ...
|
||||
type sequence struct {
|
||||
head bool
|
||||
node node
|
||||
next *sequence
|
||||
}
|
||||
|
||||
func (s *sequence) String() string { return stringer(s) }
|
||||
|
||||
func (s *sequence) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
for n := s; n != nil; n = n.next {
|
||||
child, err := n.node.Parse(ctx, parent)
|
||||
out = append(out, child...)
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
if child == nil {
|
||||
// Early exit if first value doesn't match, otherwise all values must match.
|
||||
if n == s {
|
||||
return nil, nil
|
||||
}
|
||||
token, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, lexer.Errorf(token.Pos, "unexpected %q (expected %s)", token, n)
|
||||
}
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// @<expr>
|
||||
type capture struct {
|
||||
field structLexerField
|
||||
node node
|
||||
}
|
||||
|
||||
func (c *capture) String() string { return stringer(c) }
|
||||
|
||||
func (c *capture) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
token, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pos := token.Pos
|
||||
v, err := c.node.Parse(ctx, parent)
|
||||
if err != nil {
|
||||
if v != nil {
|
||||
ctx.Defer(pos, parent, c.field, v)
|
||||
}
|
||||
return []reflect.Value{parent}, err
|
||||
}
|
||||
if v == nil {
|
||||
return nil, nil
|
||||
}
|
||||
ctx.Defer(pos, parent, c.field, v)
|
||||
return []reflect.Value{parent}, nil
|
||||
}
|
||||
|
||||
// <identifier> - named lexer token reference
|
||||
type reference struct {
|
||||
typ rune
|
||||
identifier string // Used for informational purposes.
|
||||
}
|
||||
|
||||
func (r *reference) String() string { return stringer(r) }
|
||||
|
||||
func (r *reference) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
token, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if token.Type != r.typ {
|
||||
return nil, nil
|
||||
}
|
||||
_, _ = ctx.Next()
|
||||
return []reflect.Value{reflect.ValueOf(token.Value)}, nil
|
||||
}
|
||||
|
||||
// [ <expr> ] <sequence>
|
||||
type optional struct {
|
||||
node node
|
||||
}
|
||||
|
||||
func (o *optional) String() string { return stringer(o) }
|
||||
|
||||
func (o *optional) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
branch := ctx.Branch()
|
||||
out, err = o.node.Parse(branch, parent)
|
||||
if err != nil {
|
||||
// Optional part failed to match.
|
||||
if ctx.Stop(branch) {
|
||||
return out, err
|
||||
}
|
||||
} else {
|
||||
ctx.Accept(branch)
|
||||
}
|
||||
if out == nil {
|
||||
out = []reflect.Value{}
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// { <expr> } <sequence>
|
||||
type repetition struct {
|
||||
node node
|
||||
}
|
||||
|
||||
func (r *repetition) String() string { return stringer(r) }
|
||||
|
||||
// Parse a repetition. Once a repetition is encountered it will always match, so grammars
|
||||
// should ensure that branches are differentiated prior to the repetition.
|
||||
func (r *repetition) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
i := 0
|
||||
for ; i < MaxIterations; i++ {
|
||||
branch := ctx.Branch()
|
||||
v, err := r.node.Parse(branch, parent)
|
||||
out = append(out, v...)
|
||||
if err != nil {
|
||||
// Optional part failed to match.
|
||||
if ctx.Stop(branch) {
|
||||
return out, err
|
||||
}
|
||||
break
|
||||
} else {
|
||||
ctx.Accept(branch)
|
||||
}
|
||||
if v == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
if i >= MaxIterations {
|
||||
t, _ := ctx.Peek(0)
|
||||
panic(lexer.Errorf(t.Pos, "too many iterations of %s (> %d)", r, MaxIterations))
|
||||
}
|
||||
if out == nil {
|
||||
out = []reflect.Value{}
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// Match a token literal exactly "..."[:<type>].
|
||||
type literal struct {
|
||||
s string
|
||||
t rune
|
||||
tt string // Used for display purposes - symbolic name of t.
|
||||
}
|
||||
|
||||
func (l *literal) String() string { return stringer(l) }
|
||||
|
||||
func (l *literal) Parse(ctx *parseContext, parent reflect.Value) (out []reflect.Value, err error) {
|
||||
token, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
equal := false // nolint: ineffassign
|
||||
if ctx.caseInsensitive[token.Type] {
|
||||
equal = strings.EqualFold(token.Value, l.s)
|
||||
} else {
|
||||
equal = token.Value == l.s
|
||||
}
|
||||
if equal && (l.t == -1 || l.t == token.Type) {
|
||||
next, err := ctx.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return []reflect.Value{reflect.ValueOf(next.Value)}, nil
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Attempt to transform values to given type.
|
||||
//
|
||||
// This will dereference pointers, and attempt to parse strings into integer values, floats, etc.
|
||||
func conform(t reflect.Type, values []reflect.Value) (out []reflect.Value, err error) {
|
||||
for _, v := range values {
|
||||
for t != v.Type() && t.Kind() == reflect.Ptr && v.Kind() != reflect.Ptr {
|
||||
// This can occur during partial failure.
|
||||
if !v.CanAddr() {
|
||||
return
|
||||
}
|
||||
v = v.Addr()
|
||||
}
|
||||
|
||||
// Already of the right kind, don't bother converting.
|
||||
if v.Kind() == t.Kind() {
|
||||
out = append(out, v)
|
||||
continue
|
||||
}
|
||||
|
||||
kind := t.Kind()
|
||||
switch kind {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
n, err := strconv.ParseInt(v.String(), 0, sizeOfKind(kind))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("invalid integer %q: %s", v.String(), err)
|
||||
}
|
||||
v = reflect.New(t).Elem()
|
||||
v.SetInt(n)
|
||||
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
n, err := strconv.ParseUint(v.String(), 0, sizeOfKind(kind))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("invalid integer %q: %s", v.String(), err)
|
||||
}
|
||||
v = reflect.New(t).Elem()
|
||||
v.SetUint(n)
|
||||
|
||||
case reflect.Bool:
|
||||
v = reflect.ValueOf(true)
|
||||
|
||||
case reflect.Float32, reflect.Float64:
|
||||
n, err := strconv.ParseFloat(v.String(), sizeOfKind(kind))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("invalid integer %q: %s", v.String(), err)
|
||||
}
|
||||
v = reflect.New(t).Elem()
|
||||
v.SetFloat(n)
|
||||
}
|
||||
|
||||
out = append(out, v)
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func sizeOfKind(kind reflect.Kind) int {
|
||||
switch kind {
|
||||
case reflect.Int8, reflect.Uint8:
|
||||
return 8
|
||||
case reflect.Int16, reflect.Uint16:
|
||||
return 16
|
||||
case reflect.Int32, reflect.Uint32, reflect.Float32:
|
||||
return 32
|
||||
case reflect.Int64, reflect.Uint64, reflect.Float64:
|
||||
return 64
|
||||
case reflect.Int, reflect.Uint:
|
||||
return strconv.IntSize
|
||||
}
|
||||
panic("unsupported kind " + kind.String())
|
||||
}
|
||||
|
||||
// Set field.
|
||||
//
|
||||
// If field is a pointer the pointer will be set to the value. If field is a string, value will be
|
||||
// appended. If field is a slice, value will be appended to slice.
|
||||
//
|
||||
// For all other types, an attempt will be made to convert the string to the corresponding
|
||||
// type (int, float32, etc.).
|
||||
func setField(pos lexer.Position, strct reflect.Value, field structLexerField, fieldValue []reflect.Value) (err error) { // nolint: gocyclo
|
||||
defer decorate(&err, func() string { return pos.String() + ": " + strct.Type().String() + "." + field.Name })
|
||||
|
||||
f := strct.FieldByIndex(field.Index)
|
||||
switch f.Kind() {
|
||||
case reflect.Slice:
|
||||
fieldValue, err = conform(f.Type().Elem(), fieldValue)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
f.Set(reflect.Append(f, fieldValue...))
|
||||
return nil
|
||||
|
||||
case reflect.Ptr:
|
||||
if f.IsNil() {
|
||||
fv := reflect.New(f.Type().Elem()).Elem()
|
||||
f.Set(fv.Addr())
|
||||
f = fv
|
||||
} else {
|
||||
f = f.Elem()
|
||||
}
|
||||
}
|
||||
|
||||
if f.Kind() == reflect.Struct {
|
||||
if pf := f.FieldByName("Pos"); pf.IsValid() && pf.Type() == positionType {
|
||||
pf.Set(reflect.ValueOf(pos))
|
||||
}
|
||||
}
|
||||
|
||||
if f.CanAddr() {
|
||||
if d, ok := f.Addr().Interface().(Capture); ok {
|
||||
ifv := []string{}
|
||||
for _, v := range fieldValue {
|
||||
ifv = append(ifv, v.Interface().(string))
|
||||
}
|
||||
err := d.Capture(ifv)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Strings concatenate all captured tokens.
|
||||
if f.Kind() == reflect.String {
|
||||
fieldValue, err = conform(f.Type(), fieldValue)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, v := range fieldValue {
|
||||
f.Set(reflect.ValueOf(f.String() + v.String()).Convert(f.Type()))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Coalesce multiple tokens into one. This allows eg. ["-", "10"] to be captured as separate tokens but
|
||||
// parsed as a single string "-10".
|
||||
if len(fieldValue) > 1 {
|
||||
out := []string{}
|
||||
for _, v := range fieldValue {
|
||||
out = append(out, v.String())
|
||||
}
|
||||
fieldValue = []reflect.Value{reflect.ValueOf(strings.Join(out, ""))}
|
||||
}
|
||||
|
||||
fieldValue, err = conform(f.Type(), fieldValue)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fv := fieldValue[0]
|
||||
|
||||
switch f.Kind() {
|
||||
// Numeric fields are incremented (acting as counters) when the captured token cannot be coerced to the field's type.
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
if fv.Type() != f.Type() {
|
||||
f.SetInt(f.Int() + 1)
|
||||
} else {
|
||||
f.Set(fv)
|
||||
}
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
if fv.Type() != f.Type() {
|
||||
f.SetUint(f.Uint() + 1)
|
||||
} else {
|
||||
f.Set(fv)
|
||||
}
|
||||
|
||||
case reflect.Float32, reflect.Float64:
|
||||
if fv.Type() != f.Type() {
|
||||
f.SetFloat(f.Float() + 1)
|
||||
} else {
|
||||
f.Set(fv)
|
||||
}
|
||||
|
||||
case reflect.Bool, reflect.Struct:
|
||||
if fv.Type() != f.Type() {
|
||||
return fmt.Errorf("value %q is not correct type %s", fv, f.Type())
|
||||
}
|
||||
f.Set(fv)
|
||||
|
||||
default:
|
||||
return fmt.Errorf("unsupported field type %s for field %s", f.Type(), field.Name)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Error is an error returned by the parser internally to differentiate from non-Participle errors.
|
||||
type Error string
|
||||
|
||||
func (e Error) Error() string { return string(e) }
|
||||
39
vendor/github.com/alecthomas/participle/options.go
generated
vendored
Normal file
@@ -0,0 +1,39 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
// An Option to modify the behaviour of the Parser.
|
||||
type Option func(p *Parser) error
|
||||
|
||||
// Lexer is an Option that sets the lexer to use with the given grammar.
|
||||
func Lexer(def lexer.Definition) Option {
|
||||
return func(p *Parser) error {
|
||||
p.lex = def
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// UseLookahead allows branch lookahead up to "n" tokens.
|
||||
//
|
||||
// If parsing cannot be disambiguated before "n" tokens of lookahead, parsing will fail.
|
||||
//
|
||||
// Note that increasing lookahead has a minor performance impact, but also
|
||||
// reduces the accuracy of error reporting.
|
||||
func UseLookahead(n int) Option {
|
||||
return func(p *Parser) error {
|
||||
p.useLookahead = n
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// CaseInsensitive allows the specified token types to be matched case-insensitively.
|
||||
func CaseInsensitive(tokens ...string) Option {
|
||||
return func(p *Parser) error {
|
||||
for _, token := range tokens {
|
||||
p.caseInsensitive[token] = true
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
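A brief sketch of passing these options; the stub grammar below is an assumption for illustration and not the S3 Select grammar introduced elsewhere in this commit:

```go
package example

import "github.com/alecthomas/participle"

// selectStmt is a stub grammar for illustration only.
type selectStmt struct {
	Column string `parser:"'SELECT' @Ident"`
}

var sqlParser = participle.MustBuild(&selectStmt{},
	participle.CaseInsensitive("Ident"), // lets the literal 'SELECT' match "select", "Select", ...
	participle.UseLookahead(2),          // allow two tokens of lookahead when disambiguating branches
)
```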
|
||||
229
vendor/github.com/alecthomas/participle/parser.go
generated
vendored
Normal file
@@ -0,0 +1,229 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
// A Parser for a particular grammar and lexer.
|
||||
type Parser struct {
|
||||
root node
|
||||
lex lexer.Definition
|
||||
typ reflect.Type
|
||||
useLookahead int
|
||||
caseInsensitive map[string]bool
|
||||
mappers []mapperByToken
|
||||
}
|
||||
|
||||
// MustBuild calls Build(grammar, options...) and panics if an error occurs.
|
||||
func MustBuild(grammar interface{}, options ...Option) *Parser {
|
||||
parser, err := Build(grammar, options...)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return parser
|
||||
}
|
||||
|
||||
// Build constructs a parser for the given grammar.
|
||||
//
|
||||
// If "Lexer()" is not provided as an option, a default lexer based on text/scanner will be used. This scans typical Go-
|
||||
// like tokens.
|
||||
//
|
||||
// See documentation for details
|
||||
func Build(grammar interface{}, options ...Option) (parser *Parser, err error) {
|
||||
// Configure Parser struct with defaults + options.
|
||||
p := &Parser{
|
||||
lex: lexer.TextScannerLexer,
|
||||
caseInsensitive: map[string]bool{},
|
||||
useLookahead: 1,
|
||||
}
|
||||
for _, option := range options {
|
||||
if option == nil {
|
||||
return nil, fmt.Errorf("nil Option passed, signature has changed; " +
|
||||
"if you intended to provide a custom Lexer, try participle.Build(grammar, participle.Lexer(lexer))")
|
||||
}
|
||||
if err = option(p); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
if len(p.mappers) > 0 {
|
||||
mappers := map[rune][]Mapper{}
|
||||
symbols := p.lex.Symbols()
|
||||
for _, mapper := range p.mappers {
|
||||
if len(mapper.symbols) == 0 {
|
||||
mappers[lexer.EOF] = append(mappers[lexer.EOF], mapper.mapper)
|
||||
} else {
|
||||
for _, symbol := range mapper.symbols {
|
||||
if rn, ok := symbols[symbol]; !ok {
|
||||
return nil, fmt.Errorf("mapper %#v uses unknown token %q", mapper, symbol)
|
||||
} else { // nolint: golint
|
||||
mappers[rn] = append(mappers[rn], mapper.mapper)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
p.lex = &mappingLexerDef{p.lex, func(t lexer.Token) (lexer.Token, error) {
|
||||
combined := make([]Mapper, 0, len(mappers[t.Type])+len(mappers[lexer.EOF]))
|
||||
combined = append(combined, mappers[lexer.EOF]...)
|
||||
combined = append(combined, mappers[t.Type]...)
|
||||
|
||||
var err error
|
||||
for _, m := range combined {
|
||||
t, err = m(t)
|
||||
if err != nil {
|
||||
return t, err
|
||||
}
|
||||
}
|
||||
return t, nil
|
||||
}}
|
||||
}
|
||||
|
||||
context := newGeneratorContext(p.lex)
|
||||
v := reflect.ValueOf(grammar)
|
||||
if v.Kind() == reflect.Interface {
|
||||
v = v.Elem()
|
||||
}
|
||||
p.typ = v.Type()
|
||||
p.root, err = context.parseType(p.typ)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return p, nil
|
||||
}
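A minimal end-to-end sketch of Build followed by ParseString; the Pair grammar is an assumption chosen for brevity:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/participle"
)

// Pair is a toy grammar: an identifier, '=', and an integer.
type Pair struct {
	Key   string `parser:"@Ident '='"`
	Value int    `parser:"@Int"`
}

func main() {
	p, err := participle.Build(&Pair{})
	if err != nil {
		panic(err)
	}
	pair := &Pair{}
	if err := p.ParseString("timeout = 30", pair); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", pair) // &{Key:timeout Value:30}
}
```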
|
||||
|
||||
// Lex uses the parser's lexer to tokenise input.
|
||||
func (p *Parser) Lex(r io.Reader) ([]lexer.Token, error) {
|
||||
lex, err := p.lex.Lex(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return lexer.ConsumeAll(lex)
|
||||
}
|
||||
|
||||
// Parse from r into grammar v which must be of the same type as the grammar passed to
|
||||
// participle.Build().
|
||||
func (p *Parser) Parse(r io.Reader, v interface{}) (err error) {
|
||||
rv := reflect.ValueOf(v)
|
||||
if rv.Kind() == reflect.Interface {
|
||||
rv = rv.Elem()
|
||||
}
|
||||
var stream reflect.Value
|
||||
if rv.Kind() == reflect.Chan {
|
||||
stream = rv
|
||||
rt := rv.Type().Elem()
|
||||
rv = reflect.New(rt).Elem()
|
||||
}
|
||||
rt := rv.Type()
|
||||
if rt != p.typ {
|
||||
return fmt.Errorf("must parse into value of type %s not %T", p.typ, v)
|
||||
}
|
||||
baseLexer, err := p.lex.Lex(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
lex := lexer.Upgrade(baseLexer)
|
||||
caseInsensitive := map[rune]bool{}
|
||||
for sym, rn := range p.lex.Symbols() {
|
||||
if p.caseInsensitive[sym] {
|
||||
caseInsensitive[rn] = true
|
||||
}
|
||||
}
|
||||
ctx, err := newParseContext(lex, p.useLookahead, caseInsensitive)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// If the grammar implements Parseable, use it.
|
||||
if parseable, ok := v.(Parseable); ok {
|
||||
return p.rootParseable(ctx, parseable)
|
||||
}
|
||||
if rt.Kind() != reflect.Ptr || rt.Elem().Kind() != reflect.Struct {
|
||||
return fmt.Errorf("target must be a pointer to a struct, not %s", rt)
|
||||
}
|
||||
if stream.IsValid() {
|
||||
return p.parseStreaming(ctx, stream)
|
||||
}
|
||||
return p.parseOne(ctx, rv)
|
||||
}
|
||||
|
||||
func (p *Parser) parseStreaming(ctx *parseContext, rv reflect.Value) error {
|
||||
t := rv.Type().Elem().Elem()
|
||||
for {
|
||||
if token, _ := ctx.Peek(0); token.EOF() {
|
||||
rv.Close()
|
||||
return nil
|
||||
}
|
||||
v := reflect.New(t)
|
||||
if err := p.parseInto(ctx, v); err != nil {
|
||||
return err
|
||||
}
|
||||
rv.Send(v)
|
||||
}
|
||||
}
|
||||
|
||||
func (p *Parser) parseOne(ctx *parseContext, rv reflect.Value) error {
|
||||
err := p.parseInto(ctx, rv)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
token, err := ctx.Peek(0)
|
||||
if err != nil {
|
||||
return err
|
||||
} else if !token.EOF() {
|
||||
return lexer.Errorf(token.Pos, "unexpected trailing token %q", token)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Parser) parseInto(ctx *parseContext, rv reflect.Value) error {
|
||||
if rv.IsNil() {
|
||||
return fmt.Errorf("target must be a non-nil pointer to a struct, but is a nil %s", rv.Type())
|
||||
}
|
||||
pv, err := p.root.Parse(ctx, rv.Elem())
|
||||
if len(pv) > 0 && pv[0].Type() == rv.Elem().Type() {
|
||||
rv.Elem().Set(reflect.Indirect(pv[0]))
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if pv == nil {
|
||||
token, _ := ctx.Peek(0)
|
||||
return lexer.Errorf(token.Pos, "invalid syntax")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *Parser) rootParseable(lex lexer.PeekingLexer, parseable Parseable) error {
|
||||
peek, err := lex.Peek(0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = parseable.Parse(lex)
|
||||
if err == NextMatch {
|
||||
return lexer.Errorf(peek.Pos, "invalid syntax")
|
||||
}
|
||||
if err == nil && !peek.EOF() {
|
||||
return lexer.Errorf(peek.Pos, "unexpected token %q", peek)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// ParseString is a convenience around Parse().
|
||||
func (p *Parser) ParseString(s string, v interface{}) error {
|
||||
return p.Parse(strings.NewReader(s), v)
|
||||
}
|
||||
|
||||
// ParseBytes is a convenience around Parse().
|
||||
func (p *Parser) ParseBytes(b []byte, v interface{}) error {
|
||||
return p.Parse(bytes.NewReader(b), v)
|
||||
}
|
||||
|
||||
// String representation of the grammar.
|
||||
func (p *Parser) String() string {
|
||||
return stringern(p.root, 128)
|
||||
}
|
||||
118
vendor/github.com/alecthomas/participle/stringer.go
generated
vendored
Normal file
@@ -0,0 +1,118 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
type stringerVisitor struct {
|
||||
bytes.Buffer
|
||||
seen map[node]bool
|
||||
}
|
||||
|
||||
func stringern(n node, depth int) string {
|
||||
v := &stringerVisitor{seen: map[node]bool{}}
|
||||
v.visit(n, depth, false)
|
||||
return v.String()
|
||||
}
|
||||
|
||||
func stringer(n node) string {
|
||||
return stringern(n, 1)
|
||||
}
|
||||
|
||||
func (s *stringerVisitor) visit(n node, depth int, disjunctions bool) {
|
||||
if s.seen[n] || depth <= 0 {
|
||||
fmt.Fprintf(s, "...")
|
||||
return
|
||||
}
|
||||
s.seen[n] = true
|
||||
|
||||
switch n := n.(type) {
|
||||
case *disjunction:
|
||||
for i, c := range n.nodes {
|
||||
if i > 0 {
|
||||
fmt.Fprint(s, " | ")
|
||||
}
|
||||
s.visit(c, depth, disjunctions || len(n.nodes) > 1)
|
||||
}
|
||||
|
||||
case *strct:
|
||||
s.visit(n.expr, depth, disjunctions)
|
||||
|
||||
case *sequence:
|
||||
c := n
|
||||
for i := 0; c != nil && depth-i > 0; c, i = c.next, i+1 {
|
||||
if c != n {
|
||||
fmt.Fprint(s, " ")
|
||||
}
|
||||
s.visit(c.node, depth-i, disjunctions)
|
||||
}
|
||||
if c != nil {
|
||||
fmt.Fprint(s, " ...")
|
||||
}
|
||||
|
||||
case *parseable:
|
||||
fmt.Fprintf(s, "<%s>", strings.ToLower(n.t.Name()))
|
||||
|
||||
case *capture:
|
||||
if _, ok := n.node.(*parseable); ok {
|
||||
fmt.Fprintf(s, "<%s>", strings.ToLower(n.field.Name))
|
||||
} else {
|
||||
if n.node == nil {
|
||||
fmt.Fprintf(s, "<%s>", strings.ToLower(n.field.Name))
|
||||
} else {
|
||||
s.visit(n.node, depth, disjunctions)
|
||||
}
|
||||
}
|
||||
|
||||
case *reference:
|
||||
fmt.Fprintf(s, "<%s>", strings.ToLower(n.identifier))
|
||||
|
||||
case *optional:
|
||||
fmt.Fprint(s, "[ ")
|
||||
s.visit(n.node, depth, disjunctions)
|
||||
fmt.Fprint(s, " ]")
|
||||
|
||||
case *repetition:
|
||||
fmt.Fprint(s, "{ ")
|
||||
s.visit(n.node, depth, disjunctions)
|
||||
fmt.Fprint(s, " }")
|
||||
|
||||
case *literal:
|
||||
fmt.Fprintf(s, "%q", n.s)
|
||||
if n.t != lexer.EOF && n.s == "" {
|
||||
fmt.Fprintf(s, ":%s", n.tt)
|
||||
}
|
||||
|
||||
case *group:
|
||||
fmt.Fprint(s, "(")
|
||||
if child, ok := n.expr.(*group); ok && child.mode == groupMatchOnce {
|
||||
s.visit(child.expr, depth, disjunctions)
|
||||
} else if child, ok := n.expr.(*capture); ok {
|
||||
if grandchild, ok := child.node.(*group); ok && grandchild.mode == groupMatchOnce {
|
||||
s.visit(grandchild.expr, depth, disjunctions)
|
||||
} else {
|
||||
s.visit(n.expr, depth, disjunctions)
|
||||
}
|
||||
} else {
|
||||
s.visit(n.expr, depth, disjunctions)
|
||||
}
|
||||
fmt.Fprint(s, ")")
|
||||
switch n.mode {
|
||||
case groupMatchNonEmpty:
|
||||
fmt.Fprintf(s, "!")
|
||||
case groupMatchZeroOrOne:
|
||||
fmt.Fprintf(s, "?")
|
||||
case groupMatchZeroOrMore:
|
||||
fmt.Fprintf(s, "*")
|
||||
case groupMatchOneOrMore:
|
||||
fmt.Fprintf(s, "+")
|
||||
}
|
||||
|
||||
default:
|
||||
panic("unsupported")
|
||||
}
|
||||
}
|
||||
126
vendor/github.com/alecthomas/participle/struct.go
generated
vendored
Normal file
@@ -0,0 +1,126 @@
|
||||
package participle
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
|
||||
"github.com/alecthomas/participle/lexer"
|
||||
)
|
||||
|
||||
// A structLexer lexes over the tags of struct fields while tracking the current field.
|
||||
type structLexer struct {
|
||||
s reflect.Type
|
||||
field int
|
||||
indexes [][]int
|
||||
lexer lexer.PeekingLexer
|
||||
}
|
||||
|
||||
func lexStruct(s reflect.Type) (*structLexer, error) {
|
||||
indexes, err := collectFieldIndexes(s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
slex := &structLexer{
|
||||
s: s,
|
||||
indexes: indexes,
|
||||
}
|
||||
if len(slex.indexes) > 0 {
|
||||
tag := fieldLexerTag(slex.Field().StructField)
|
||||
slex.lexer = lexer.Upgrade(lexer.LexString(tag))
|
||||
}
|
||||
return slex, nil
|
||||
}
|
||||
|
||||
// NumField returns the number of fields in the struct associated with this structLexer.
|
||||
func (s *structLexer) NumField() int {
|
||||
return len(s.indexes)
|
||||
}
|
||||
|
||||
type structLexerField struct {
|
||||
reflect.StructField
|
||||
Index []int
|
||||
}
|
||||
|
||||
// Field returns the field associated with the current token.
|
||||
func (s *structLexer) Field() structLexerField {
|
||||
return s.GetField(s.field)
|
||||
}
|
||||
|
||||
func (s *structLexer) GetField(field int) structLexerField {
|
||||
if field >= len(s.indexes) {
|
||||
field = len(s.indexes) - 1
|
||||
}
|
||||
return structLexerField{
|
||||
StructField: s.s.FieldByIndex(s.indexes[field]),
|
||||
Index: s.indexes[field],
|
||||
}
|
||||
}
|
||||
|
||||
func (s *structLexer) Peek() (lexer.Token, error) {
|
||||
field := s.field
|
||||
lex := s.lexer
|
||||
for {
|
||||
token, err := lex.Peek(0)
|
||||
if err != nil {
|
||||
return token, err
|
||||
}
|
||||
if !token.EOF() {
|
||||
token.Pos.Line = field + 1
|
||||
return token, nil
|
||||
}
|
||||
field++
|
||||
if field >= s.NumField() {
|
||||
return lexer.EOFToken(token.Pos), nil
|
||||
}
|
||||
tag := fieldLexerTag(s.GetField(field).StructField)
|
||||
lex = lexer.Upgrade(lexer.LexString(tag))
|
||||
}
|
||||
}
|
||||
|
||||
func (s *structLexer) Next() (lexer.Token, error) {
|
||||
token, err := s.lexer.Next()
|
||||
if err != nil {
|
||||
return token, err
|
||||
}
|
||||
if !token.EOF() {
|
||||
token.Pos.Line = s.field + 1
|
||||
return token, nil
|
||||
}
|
||||
if s.field+1 >= s.NumField() {
|
||||
return lexer.EOFToken(token.Pos), nil
|
||||
}
|
||||
s.field++
|
||||
tag := fieldLexerTag(s.Field().StructField)
|
||||
s.lexer = lexer.Upgrade(lexer.LexString(tag))
|
||||
return s.Next()
|
||||
}
|
||||
|
||||
func fieldLexerTag(field reflect.StructField) string {
|
||||
if tag, ok := field.Tag.Lookup("parser"); ok {
|
||||
return tag
|
||||
}
|
||||
return string(field.Tag)
|
||||
}
|
||||
|
||||
// Recursively collect flattened indices for top-level fields and embedded fields.
|
||||
func collectFieldIndexes(s reflect.Type) (out [][]int, err error) {
|
||||
if s.Kind() != reflect.Struct {
|
||||
return nil, fmt.Errorf("expected a struct but got %q", s)
|
||||
}
|
||||
defer decorate(&err, s.String)
|
||||
for i := 0; i < s.NumField(); i++ {
|
||||
f := s.Field(i)
|
||||
if f.Anonymous {
|
||||
children, err := collectFieldIndexes(f.Type)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, idx := range children {
|
||||
out = append(out, append(f.Index, idx...))
|
||||
}
|
||||
} else if fieldLexerTag(f) != "" {
|
||||
out = append(out, f.Index)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
16
vendor/github.com/minio/parquet-go/parquet.go
generated
vendored
@@ -88,6 +88,7 @@ type File struct {
|
||||
rowGroups []*parquet.RowGroup
|
||||
rowGroupIndex int
|
||||
|
||||
nameList []string
|
||||
columnNames set.StringSet
|
||||
columns map[string]*column
|
||||
rowIndex int64
|
||||
@@ -100,16 +101,23 @@ func Open(getReaderFunc GetReaderFunc, columnNames set.StringSet) (*File, error)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nameList := []string{}
|
||||
schemaElements := fileMeta.GetSchema()
|
||||
for _, element := range schemaElements {
|
||||
nameList = append(nameList, element.Name)
|
||||
}
|
||||
|
||||
return &File{
|
||||
getReaderFunc: getReaderFunc,
|
||||
rowGroups: fileMeta.GetRowGroups(),
|
||||
schemaElements: fileMeta.GetSchema(),
|
||||
schemaElements: schemaElements,
|
||||
nameList: nameList,
|
||||
columnNames: columnNames,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Read - reads single record.
|
||||
func (file *File) Read() (record map[string]Value, err error) {
|
||||
func (file *File) Read() (record *Record, err error) {
|
||||
if file.rowGroupIndex >= len(file.rowGroups) {
|
||||
return nil, io.EOF
|
||||
}
|
||||
@@ -134,10 +142,10 @@ func (file *File) Read() (record map[string]Value, err error) {
|
||||
return file.Read()
|
||||
}
|
||||
|
||||
record = make(map[string]Value)
|
||||
record = newRecord(file.nameList)
|
||||
for name := range file.columns {
|
||||
value, valueType := file.columns[name].read()
|
||||
record[name] = Value{value, valueType}
|
||||
record.set(name, Value{value, valueType})
|
||||
}
|
||||
|
||||
file.rowIndex++
|
||||
|
||||
70
vendor/github.com/minio/parquet-go/record.go
generated
vendored
Normal file
@@ -0,0 +1,70 @@
|
||||
/*
|
||||
* Minio Cloud Storage, (C) 2019 Minio, Inc.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package parquet
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Record - ordered parquet record.
|
||||
type Record struct {
|
||||
nameList []string
|
||||
nameValueMap map[string]Value
|
||||
}
|
||||
|
||||
// String - returns string representation of this record.
|
||||
func (r *Record) String() string {
|
||||
values := []string{}
|
||||
r.Range(func(name string, value Value) bool {
|
||||
values = append(values, fmt.Sprintf("%v:%v", name, value))
|
||||
return true
|
||||
})
|
||||
|
||||
return "map[" + strings.Join(values, " ") + "]"
|
||||
}
|
||||
|
||||
func (r *Record) set(name string, value Value) {
|
||||
r.nameValueMap[name] = value
|
||||
}
|
||||
|
||||
// Get - returns Value of name.
|
||||
func (r *Record) Get(name string) (Value, bool) {
|
||||
value, ok := r.nameValueMap[name]
|
||||
return value, ok
|
||||
}
|
||||
|
||||
// Range - calls f sequentially for each name and value present in the record. If f returns false, range stops the iteration.
|
||||
func (r *Record) Range(f func(name string, value Value) bool) {
|
||||
for _, name := range r.nameList {
|
||||
value, ok := r.nameValueMap[name]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
if !f(name, value) {
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func newRecord(nameList []string) *Record {
|
||||
return &Record{
|
||||
nameList: nameList,
|
||||
nameValueMap: make(map[string]Value),
|
||||
}
|
||||
}
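A sketch of consuming the ordered records returned by Read; how the *parquet.File is opened (GetReaderFunc, the column set) is left out here because it is outside this hunk:

```go
package example

import (
	"fmt"
	"io"

	parquet "github.com/minio/parquet-go"
)

// dumpRecords prints every record, visiting columns in schema order via Range.
func dumpRecords(file *parquet.File) error {
	for {
		rec, err := file.Read()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		rec.Range(func(name string, value parquet.Value) bool {
			fmt.Printf("%s=%v ", name, value)
			return true // keep iterating over the remaining columns
		})
		fmt.Println()
	}
}
```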
|
||||
9
vendor/github.com/xwb1989/sqlparser/CONTRIBUTORS.md
generated
vendored
@@ -1,9 +0,0 @@
|
||||
This project is originally a fork of [https://github.com/youtube/vitess](https://github.com/youtube/vitess)
|
||||
Copyright Google Inc
|
||||
|
||||
# Contributors
|
||||
Wenbin Xiao 2015
|
||||
Started this project and maintained it.
|
||||
|
||||
Andrew Brampton 2017
|
||||
Merged in multiple upstream fixes/changes.
|
||||
201
vendor/github.com/xwb1989/sqlparser/LICENSE.md
generated
vendored
@@ -1,201 +0,0 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
22
vendor/github.com/xwb1989/sqlparser/Makefile
generated
vendored
@@ -1,22 +0,0 @@
|
||||
# Copyright 2017 Google Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
MAKEFLAGS = -s
|
||||
|
||||
sql.go: sql.y
|
||||
goyacc -o sql.go sql.y
|
||||
gofmt -w sql.go
|
||||
|
||||
clean:
|
||||
rm -f y.output sql.go
|
||||
150
vendor/github.com/xwb1989/sqlparser/README.md
generated
vendored
@@ -1,150 +0,0 @@
|
||||
# sqlparser [](https://travis-ci.org/xwb1989/sqlparser) [](https://coveralls.io/github/xwb1989/sqlparser) [](https://goreportcard.com/report/github.com/xwb1989/sqlparser) [](https://godoc.org/github.com/xwb1989/sqlparser)
|
||||
|
||||
Go package for parsing MySQL SQL queries.
|
||||
|
||||
## Notice
|
||||
|
||||
The backbone of this repo is extracted from [vitessio/vitess](https://github.com/vitessio/vitess).
|
||||
|
||||
Inside vitessio/vitess there is a very nicely written sql parser. However as it's not a self-contained application, I created this one.
|
||||
It applies the same LICENSE as vitessio/vitess.
|
||||
|
||||
## Usage
|
||||
|
||||
```go
|
||||
import (
|
||||
"github.com/xwb1989/sqlparser"
|
||||
)
|
||||
```
|
||||
|
||||
Then use:
|
||||
|
||||
```go
|
||||
sql := "SELECT * FROM table WHERE a = 'abc'"
|
||||
stmt, err := sqlparser.Parse(sql)
|
||||
if err != nil {
|
||||
// Do something with the err
|
||||
}
|
||||
|
||||
// Otherwise do something with stmt
|
||||
switch stmt := stmt.(type) {
|
||||
case *sqlparser.Select:
|
||||
_ = stmt
|
||||
case *sqlparser.Insert:
|
||||
}
|
||||
```
|
||||
|
||||
Alternative to read many queries from a io.Reader:
|
||||
|
||||
```go
|
||||
r := strings.NewReader("INSERT INTO table1 VALUES (1, 'a'); INSERT INTO table2 VALUES (3, 4);")
|
||||
|
||||
tokens := sqlparser.NewTokenizer(r)
|
||||
for {
|
||||
stmt, err := sqlparser.ParseNext(tokens)
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
// Do something with stmt or err.
|
||||
}
|
||||
```
|
||||
|
||||
See [parse_test.go](https://github.com/xwb1989/sqlparser/blob/master/parse_test.go) for more examples, or read the [godoc](https://godoc.org/github.com/xwb1989/sqlparser).
|
||||
|
||||
|
||||
## Porting Instructions
|
||||
|
||||
You only need the below if you plan to try and keep this library up to date with [vitessio/vitess](https://github.com/vitessio/vitess).
|
||||
|
||||
### Keeping up to date
|
||||
|
||||
```bash
|
||||
shopt -s nullglob
|
||||
VITESS=${GOPATH?}/src/vitess.io/vitess/go/
|
||||
XWB1989=${GOPATH?}/src/github.com/xwb1989/sqlparser/
|
||||
|
||||
# Create patches for everything that changed
|
||||
LASTIMPORT=1b7879cb91f1dfe1a2dfa06fea96e951e3a7aec5
|
||||
for path in ${VITESS?}/{vt/sqlparser,sqltypes,bytes2,hack}; do
|
||||
cd ${path}
|
||||
git format-patch ${LASTIMPORT?} .
|
||||
done;
|
||||
|
||||
# Apply patches to the dependencies
|
||||
cd ${XWB1989?}
|
||||
git am --directory dependency -p2 ${VITESS?}/{sqltypes,bytes2,hack}/*.patch
|
||||
|
||||
# Apply the main patches to the repo
|
||||
cd ${XWB1989?}
|
||||
git am -p4 ${VITESS?}/vt/sqlparser/*.patch
|
||||
|
||||
# If you encounter diff failures, manually fix them with
|
||||
patch -p4 < .git/rebase-apply/patch
|
||||
...
|
||||
git add name_of_files
|
||||
git am --continue
|
||||
|
||||
# Cleanup
|
||||
rm ${VITESS?}/{sqltypes,bytes2,hack}/*.patch ${VITESS?}/*.patch
|
||||
|
||||
# and finally update the LASTIMPORT in this README.
|
||||
```
|
||||
|
||||
### Fresh install
|
||||
|
||||
TODO: Change these instructions to use git to copy the files, that'll make later patching easier.
|
||||
|
||||
```bash
|
||||
VITESS=${GOPATH?}/src/vitess.io/vitess/go/
|
||||
XWB1989=${GOPATH?}/src/github.com/xwb1989/sqlparser/
|
||||
|
||||
cd ${XWB1989?}
|
||||
|
||||
# Copy all the code
|
||||
cp -pr ${VITESS?}/vt/sqlparser/ .
|
||||
cp -pr ${VITESS?}/sqltypes dependency
|
||||
cp -pr ${VITESS?}/bytes2 dependency
|
||||
cp -pr ${VITESS?}/hack dependency
|
||||
|
||||
# Delete some code we haven't ported
|
||||
rm dependency/sqltypes/arithmetic.go dependency/sqltypes/arithmetic_test.go dependency/sqltypes/event_token.go dependency/sqltypes/event_token_test.go dependency/sqltypes/proto3.go dependency/sqltypes/proto3_test.go dependency/sqltypes/query_response.go dependency/sqltypes/result.go dependency/sqltypes/result_test.go
|
||||
|
||||
# Some automated fixes
|
||||
|
||||
# Fix imports
|
||||
sed -i '.bak' 's_vitess.io/vitess/go/vt/proto/query_github.com/xwb1989/sqlparser/dependency/querypb_g' *.go dependency/sqltypes/*.go
|
||||
sed -i '.bak' 's_vitess.io/vitess/go/_github.com/xwb1989/sqlparser/dependency/_g' *.go dependency/sqltypes/*.go
|
||||
|
||||
# Copy the proto, but basically drop everything we don't want
|
||||
cp -pr ${VITESS?}/vt/proto/query dependency/querypb
|
||||
|
||||
sed -i '.bak' 's_.*Descriptor.*__g' dependency/querypb/*.go
|
||||
sed -i '.bak' 's_.*ProtoMessage.*__g' dependency/querypb/*.go
|
||||
|
||||
sed -i '.bak' 's/proto.CompactTextString(m)/"TODO"/g' dependency/querypb/*.go
|
||||
sed -i '.bak' 's/proto.EnumName/EnumName/g' dependency/querypb/*.go
|
||||
|
||||
sed -i '.bak' 's/proto.Equal/reflect.DeepEqual/g' dependency/sqltypes/*.go
|
||||
|
||||
# Remove the error library
|
||||
sed -i '.bak' 's/vterrors.Errorf([^,]*, /fmt.Errorf(/g' *.go dependency/sqltypes/*.go
|
||||
sed -i '.bak' 's/vterrors.New([^,]*, /errors.New(/g' *.go dependency/sqltypes/*.go
|
||||
```
|
||||
|
||||
### Testing
|
||||
|
||||
```bash
|
||||
VITESS=${GOPATH?}/src/vitess.io/vitess/go/
|
||||
XWB1989=${GOPATH?}/src/github.com/xwb1989/sqlparser/
|
||||
|
||||
cd ${XWB1989?}
|
||||
|
||||
# Test, fix and repeat
|
||||
go test ./...
|
||||
|
||||
# Finally make some diffs (for later reference)
|
||||
diff -u ${VITESS?}/sqltypes/ ${XWB1989?}/dependency/sqltypes/ > ${XWB1989?}/patches/sqltypes.patch
|
||||
diff -u ${VITESS?}/bytes2/ ${XWB1989?}/dependency/bytes2/ > ${XWB1989?}/patches/bytes2.patch
|
||||
diff -u ${VITESS?}/vt/proto/query/ ${XWB1989?}/dependency/querypb/ > ${XWB1989?}/patches/querypb.patch
|
||||
diff -u ${VITESS?}/vt/sqlparser/ ${XWB1989?}/ > ${XWB1989?}/patches/sqlparser.patch
|
||||
```
|
||||
343 vendor/github.com/xwb1989/sqlparser/analyzer.go generated vendored
@@ -1,343 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
// analyzer.go contains utility analysis functions.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/sqltypes"
|
||||
)
|
||||
|
||||
// These constants are used to identify the SQL statement type.
|
||||
const (
|
||||
StmtSelect = iota
|
||||
StmtStream
|
||||
StmtInsert
|
||||
StmtReplace
|
||||
StmtUpdate
|
||||
StmtDelete
|
||||
StmtDDL
|
||||
StmtBegin
|
||||
StmtCommit
|
||||
StmtRollback
|
||||
StmtSet
|
||||
StmtShow
|
||||
StmtUse
|
||||
StmtOther
|
||||
StmtUnknown
|
||||
StmtComment
|
||||
)
|
||||
|
||||
// Preview analyzes the beginning of the query using a simpler and faster
|
||||
// textual comparison to identify the statement type.
|
||||
func Preview(sql string) int {
|
||||
trimmed := StripLeadingComments(sql)
|
||||
|
||||
firstWord := trimmed
|
||||
if end := strings.IndexFunc(trimmed, unicode.IsSpace); end != -1 {
|
||||
firstWord = trimmed[:end]
|
||||
}
|
||||
firstWord = strings.TrimLeftFunc(firstWord, func(r rune) bool { return !unicode.IsLetter(r) })
|
||||
// Comparison is done in order of priority.
|
||||
loweredFirstWord := strings.ToLower(firstWord)
|
||||
switch loweredFirstWord {
|
||||
case "select":
|
||||
return StmtSelect
|
||||
case "stream":
|
||||
return StmtStream
|
||||
case "insert":
|
||||
return StmtInsert
|
||||
case "replace":
|
||||
return StmtReplace
|
||||
case "update":
|
||||
return StmtUpdate
|
||||
case "delete":
|
||||
return StmtDelete
|
||||
}
|
||||
// For the following statements it is not sufficient to rely
|
||||
// on loweredFirstWord. This is because they are not statements
|
||||
// in the grammar and we are relying on Preview to parse them.
|
||||
// For instance, we don't want: "BEGIN JUNK" to be parsed
|
||||
// as StmtBegin.
|
||||
trimmedNoComments, _ := SplitMarginComments(trimmed)
|
||||
switch strings.ToLower(trimmedNoComments) {
|
||||
case "begin", "start transaction":
|
||||
return StmtBegin
|
||||
case "commit":
|
||||
return StmtCommit
|
||||
case "rollback":
|
||||
return StmtRollback
|
||||
}
|
||||
switch loweredFirstWord {
|
||||
case "create", "alter", "rename", "drop", "truncate":
|
||||
return StmtDDL
|
||||
case "set":
|
||||
return StmtSet
|
||||
case "show":
|
||||
return StmtShow
|
||||
case "use":
|
||||
return StmtUse
|
||||
case "analyze", "describe", "desc", "explain", "repair", "optimize":
|
||||
return StmtOther
|
||||
}
|
||||
if strings.Index(trimmed, "/*!") == 0 {
|
||||
return StmtComment
|
||||
}
|
||||
return StmtUnknown
|
||||
}
|
||||
|
||||
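// previewRouteSketch is a hypothetical helper (illustration only, not part of
// the vendored API). It shows how Preview and the statement-type constants
// defined above can route a query without a full parse.
func previewRouteSketch(sql string) string {
	switch Preview(sql) {
	case StmtSelect, StmtShow:
		return "read"
	case StmtInsert, StmtReplace, StmtUpdate, StmtDelete:
		return "write"
	default:
		return "other"
	}
}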
// StmtType returns the statement type as a string
|
||||
func StmtType(stmtType int) string {
|
||||
switch stmtType {
|
||||
case StmtSelect:
|
||||
return "SELECT"
|
||||
case StmtStream:
|
||||
return "STREAM"
|
||||
case StmtInsert:
|
||||
return "INSERT"
|
||||
case StmtReplace:
|
||||
return "REPLACE"
|
||||
case StmtUpdate:
|
||||
return "UPDATE"
|
||||
case StmtDelete:
|
||||
return "DELETE"
|
||||
case StmtDDL:
|
||||
return "DDL"
|
||||
case StmtBegin:
|
||||
return "BEGIN"
|
||||
case StmtCommit:
|
||||
return "COMMIT"
|
||||
case StmtRollback:
|
||||
return "ROLLBACK"
|
||||
case StmtSet:
|
||||
return "SET"
|
||||
case StmtShow:
|
||||
return "SHOW"
|
||||
case StmtUse:
|
||||
return "USE"
|
||||
case StmtOther:
|
||||
return "OTHER"
|
||||
default:
|
||||
return "UNKNOWN"
|
||||
}
|
||||
}
|
||||
|
||||
// IsDML returns true if the query is an INSERT, UPDATE or DELETE statement.
|
||||
func IsDML(sql string) bool {
|
||||
switch Preview(sql) {
|
||||
case StmtInsert, StmtReplace, StmtUpdate, StmtDelete:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// GetTableName returns the table name from the SimpleTableExpr
|
||||
// only if it's a simple expression. Otherwise, it returns "".
|
||||
func GetTableName(node SimpleTableExpr) TableIdent {
|
||||
if n, ok := node.(TableName); ok && n.Qualifier.IsEmpty() {
|
||||
return n.Name
|
||||
}
|
||||
// sub-select or '.' expression
|
||||
return NewTableIdent("")
|
||||
}
|
||||
|
||||
// IsColName returns true if the Expr is a *ColName.
|
||||
func IsColName(node Expr) bool {
|
||||
_, ok := node.(*ColName)
|
||||
return ok
|
||||
}
|
||||
|
||||
// IsValue returns true if the Expr is a string, integral or value arg.
|
||||
// NULL is not considered to be a value.
|
||||
func IsValue(node Expr) bool {
|
||||
switch v := node.(type) {
|
||||
case *SQLVal:
|
||||
switch v.Type {
|
||||
case StrVal, HexVal, IntVal, ValArg:
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsNull returns true if the Expr is SQL NULL
|
||||
func IsNull(node Expr) bool {
|
||||
switch node.(type) {
|
||||
case *NullVal:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsSimpleTuple returns true if the Expr is a ValTuple that
|
||||
// contains simple values or if it's a list arg.
|
||||
func IsSimpleTuple(node Expr) bool {
|
||||
switch vals := node.(type) {
|
||||
case ValTuple:
|
||||
for _, n := range vals {
|
||||
if !IsValue(n) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case ListArg:
|
||||
return true
|
||||
}
|
||||
// It's a subquery
|
||||
return false
|
||||
}
|
||||
|
||||
// NewPlanValue builds a sqltypes.PlanValue from an Expr.
|
||||
func NewPlanValue(node Expr) (sqltypes.PlanValue, error) {
|
||||
switch node := node.(type) {
|
||||
case *SQLVal:
|
||||
switch node.Type {
|
||||
case ValArg:
|
||||
return sqltypes.PlanValue{Key: string(node.Val[1:])}, nil
|
||||
case IntVal:
|
||||
n, err := sqltypes.NewIntegral(string(node.Val))
|
||||
if err != nil {
|
||||
return sqltypes.PlanValue{}, fmt.Errorf("%v", err)
|
||||
}
|
||||
return sqltypes.PlanValue{Value: n}, nil
|
||||
case StrVal:
|
||||
return sqltypes.PlanValue{Value: sqltypes.MakeTrusted(sqltypes.VarBinary, node.Val)}, nil
|
||||
case HexVal:
|
||||
v, err := node.HexDecode()
|
||||
if err != nil {
|
||||
return sqltypes.PlanValue{}, fmt.Errorf("%v", err)
|
||||
}
|
||||
return sqltypes.PlanValue{Value: sqltypes.MakeTrusted(sqltypes.VarBinary, v)}, nil
|
||||
}
|
||||
case ListArg:
|
||||
return sqltypes.PlanValue{ListKey: string(node[2:])}, nil
|
||||
case ValTuple:
|
||||
pv := sqltypes.PlanValue{
|
||||
Values: make([]sqltypes.PlanValue, 0, len(node)),
|
||||
}
|
||||
for _, val := range node {
|
||||
innerpv, err := NewPlanValue(val)
|
||||
if err != nil {
|
||||
return sqltypes.PlanValue{}, err
|
||||
}
|
||||
if innerpv.ListKey != "" || innerpv.Values != nil {
|
||||
return sqltypes.PlanValue{}, errors.New("unsupported: nested lists")
|
||||
}
|
||||
pv.Values = append(pv.Values, innerpv)
|
||||
}
|
||||
return pv, nil
|
||||
case *NullVal:
|
||||
return sqltypes.PlanValue{}, nil
|
||||
}
|
||||
return sqltypes.PlanValue{}, fmt.Errorf("expression is too complex '%v'", String(node))
|
||||
}
|
||||
|
||||
// StringIn is a convenience function that returns
|
||||
// true if str matches any of the values.
|
||||
func StringIn(str string, values ...string) bool {
|
||||
for _, val := range values {
|
||||
if str == val {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// SetKey is the extracted key from one SetExpr
|
||||
type SetKey struct {
|
||||
Key string
|
||||
Scope string
|
||||
}
|
||||
|
||||
// ExtractSetValues returns a map of key-value pairs
|
||||
// if the query is a SET statement. Values can be bool, int64 or string.
|
||||
// Since set variable names are case insensitive, all keys are returned
|
||||
// as lower case.
|
||||
func ExtractSetValues(sql string) (keyValues map[SetKey]interface{}, scope string, err error) {
|
||||
stmt, err := Parse(sql)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
setStmt, ok := stmt.(*Set)
|
||||
if !ok {
|
||||
return nil, "", fmt.Errorf("ast did not yield *sqlparser.Set: %T", stmt)
|
||||
}
|
||||
result := make(map[SetKey]interface{})
|
||||
for _, expr := range setStmt.Exprs {
|
||||
scope := SessionStr
|
||||
key := expr.Name.Lowered()
|
||||
switch {
|
||||
case strings.HasPrefix(key, "@@global."):
|
||||
scope = GlobalStr
|
||||
key = strings.TrimPrefix(key, "@@global.")
|
||||
case strings.HasPrefix(key, "@@session."):
|
||||
key = strings.TrimPrefix(key, "@@session.")
|
||||
case strings.HasPrefix(key, "@@"):
|
||||
key = strings.TrimPrefix(key, "@@")
|
||||
}
|
||||
|
||||
if strings.HasPrefix(expr.Name.Lowered(), "@@") {
|
||||
if setStmt.Scope != "" && scope != "" {
|
||||
return nil, "", fmt.Errorf("unsupported in set: mixed using of variable scope")
|
||||
}
|
||||
_, out := NewStringTokenizer(key).Scan()
|
||||
key = string(out)
|
||||
}
|
||||
|
||||
setKey := SetKey{
|
||||
Key: key,
|
||||
Scope: scope,
|
||||
}
|
||||
|
||||
switch expr := expr.Expr.(type) {
|
||||
case *SQLVal:
|
||||
switch expr.Type {
|
||||
case StrVal:
|
||||
result[setKey] = strings.ToLower(string(expr.Val))
|
||||
case IntVal:
|
||||
num, err := strconv.ParseInt(string(expr.Val), 0, 64)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
result[setKey] = num
|
||||
default:
|
||||
return nil, "", fmt.Errorf("invalid value type: %v", String(expr))
|
||||
}
|
||||
case BoolVal:
|
||||
var val int64
|
||||
if expr {
|
||||
val = 1
|
||||
}
|
||||
result[setKey] = val
|
||||
case *ColName:
|
||||
result[setKey] = expr.Name.String()
|
||||
case *NullVal:
|
||||
result[setKey] = nil
|
||||
case *Default:
|
||||
result[setKey] = "default"
|
||||
default:
|
||||
return nil, "", fmt.Errorf("invalid syntax: %s", String(expr))
|
||||
}
|
||||
}
|
||||
return result, strings.ToLower(setStmt.Scope), nil
|
||||
}
|
||||
3450 vendor/github.com/xwb1989/sqlparser/ast.go generated vendored
File diff suppressed because it is too large
293 vendor/github.com/xwb1989/sqlparser/comments.go generated vendored
@@ -1,293 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
const (
|
||||
// DirectiveMultiShardAutocommit is the query comment directive to allow
|
||||
// single round trip autocommit with a multi-shard statement.
|
||||
DirectiveMultiShardAutocommit = "MULTI_SHARD_AUTOCOMMIT"
|
||||
// DirectiveSkipQueryPlanCache skips query plan cache when set.
|
||||
DirectiveSkipQueryPlanCache = "SKIP_QUERY_PLAN_CACHE"
|
||||
// DirectiveQueryTimeout sets a query timeout in vtgate. Only supported for SELECTS.
|
||||
DirectiveQueryTimeout = "QUERY_TIMEOUT_MS"
|
||||
)
|
||||
|
||||
func isNonSpace(r rune) bool {
|
||||
return !unicode.IsSpace(r)
|
||||
}
|
||||
|
||||
// leadingCommentEnd returns the first index after all leading comments, or
|
||||
// 0 if there are no leading comments.
|
||||
func leadingCommentEnd(text string) (end int) {
|
||||
hasComment := false
|
||||
pos := 0
|
||||
for pos < len(text) {
|
||||
// Eat up any whitespace. Trailing whitespace will be considered part of
|
||||
// the leading comments.
|
||||
nextVisibleOffset := strings.IndexFunc(text[pos:], isNonSpace)
|
||||
if nextVisibleOffset < 0 {
|
||||
break
|
||||
}
|
||||
pos += nextVisibleOffset
|
||||
remainingText := text[pos:]
|
||||
|
||||
// Found visible characters. Look for '/*' at the beginning
|
||||
// and '*/' somewhere after that.
|
||||
if len(remainingText) < 4 || remainingText[:2] != "/*" {
|
||||
break
|
||||
}
|
||||
commentLength := 4 + strings.Index(remainingText[2:], "*/")
|
||||
if commentLength < 4 {
|
||||
// Missing end comment :/
|
||||
break
|
||||
}
|
||||
|
||||
hasComment = true
|
||||
pos += commentLength
|
||||
}
|
||||
|
||||
if hasComment {
|
||||
return pos
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// trailingCommentStart returns the first index of trailing comments.
|
||||
// If there are no trailing comments, returns the length of the input string.
|
||||
func trailingCommentStart(text string) (start int) {
|
||||
hasComment := false
|
||||
reducedLen := len(text)
|
||||
for reducedLen > 0 {
|
||||
// Eat up any whitespace. Leading whitespace will be considered part of
|
||||
// the trailing comments.
|
||||
nextReducedLen := strings.LastIndexFunc(text[:reducedLen], isNonSpace) + 1
|
||||
if nextReducedLen == 0 {
|
||||
break
|
||||
}
|
||||
reducedLen = nextReducedLen
|
||||
if reducedLen < 4 || text[reducedLen-2:reducedLen] != "*/" {
|
||||
break
|
||||
}
|
||||
|
||||
// Find the beginning of the comment
|
||||
startCommentPos := strings.LastIndex(text[:reducedLen-2], "/*")
|
||||
if startCommentPos < 0 {
|
||||
// Badly formatted sql :/
|
||||
break
|
||||
}
|
||||
|
||||
hasComment = true
|
||||
reducedLen = startCommentPos
|
||||
}
|
||||
|
||||
if hasComment {
|
||||
return reducedLen
|
||||
}
|
||||
return len(text)
|
||||
}
|
||||
|
||||
// MarginComments holds the leading and trailing comments that surround a query.
|
||||
type MarginComments struct {
|
||||
Leading string
|
||||
Trailing string
|
||||
}
|
||||
|
||||
// SplitMarginComments pulls out any leading or trailing comments from a raw sql query.
|
||||
// This function also trims leading (if there's a comment) and trailing whitespace.
|
||||
func SplitMarginComments(sql string) (query string, comments MarginComments) {
|
||||
trailingStart := trailingCommentStart(sql)
|
||||
leadingEnd := leadingCommentEnd(sql[:trailingStart])
|
||||
comments = MarginComments{
|
||||
Leading: strings.TrimLeftFunc(sql[:leadingEnd], unicode.IsSpace),
|
||||
Trailing: strings.TrimRightFunc(sql[trailingStart:], unicode.IsSpace),
|
||||
}
|
||||
return strings.TrimFunc(sql[leadingEnd:trailingStart], unicode.IsSpace), comments
|
||||
}
|
||||
|
||||
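// splitMarginCommentsSketch is a hypothetical helper (illustration only)
// showing what SplitMarginComments returns for a query wrapped in comments.
func splitMarginCommentsSketch() {
	query, comments := SplitMarginComments("/* lead */ select 1 /* trail */")
	// query    == "select 1"
	// comments == MarginComments{Leading: "/* lead */", Trailing: "/* trail */"}
	_, _ = query, comments
}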
// StripLeadingComments trims the SQL string and removes any leading comments
|
||||
func StripLeadingComments(sql string) string {
|
||||
sql = strings.TrimFunc(sql, unicode.IsSpace)
|
||||
|
||||
for hasCommentPrefix(sql) {
|
||||
switch sql[0] {
|
||||
case '/':
|
||||
// Multi line comment
|
||||
index := strings.Index(sql, "*/")
|
||||
if index <= 1 {
|
||||
return sql
|
||||
}
|
||||
// don't strip /*! ... */ or /*!50700 ... */
|
||||
if len(sql) > 2 && sql[2] == '!' {
|
||||
return sql
|
||||
}
|
||||
sql = sql[index+2:]
|
||||
case '-':
|
||||
// Single line comment
|
||||
index := strings.Index(sql, "\n")
|
||||
if index == -1 {
|
||||
return sql
|
||||
}
|
||||
sql = sql[index+1:]
|
||||
}
|
||||
|
||||
sql = strings.TrimFunc(sql, unicode.IsSpace)
|
||||
}
|
||||
|
||||
return sql
|
||||
}
|
||||
|
||||
func hasCommentPrefix(sql string) bool {
|
||||
return len(sql) > 1 && ((sql[0] == '/' && sql[1] == '*') || (sql[0] == '-' && sql[1] == '-'))
|
||||
}
|
||||
|
||||
// ExtractMysqlComment extracts the version and SQL from a comment-only query
|
||||
// such as /*!50708 sql here */
|
||||
func ExtractMysqlComment(sql string) (version string, innerSQL string) {
|
||||
sql = sql[3 : len(sql)-2]
|
||||
|
||||
digitCount := 0
|
||||
endOfVersionIndex := strings.IndexFunc(sql, func(c rune) bool {
|
||||
digitCount++
|
||||
return !unicode.IsDigit(c) || digitCount == 6
|
||||
})
|
||||
version = sql[0:endOfVersionIndex]
|
||||
innerSQL = strings.TrimFunc(sql[endOfVersionIndex:], unicode.IsSpace)
|
||||
|
||||
return version, innerSQL
|
||||
}
|
||||
|
||||
const commentDirectivePreamble = "/*vt+"
|
||||
|
||||
// CommentDirectives is the parsed representation for execution directives
|
||||
// conveyed in query comments
|
||||
type CommentDirectives map[string]interface{}
|
||||
|
||||
// ExtractCommentDirectives parses the comment list for any execution directives
|
||||
// of the form:
|
||||
//
|
||||
// /*vt+ OPTION_ONE=1 OPTION_TWO OPTION_THREE=abcd */
|
||||
//
|
||||
// It returns the map of the directive values or nil if there aren't any.
|
||||
func ExtractCommentDirectives(comments Comments) CommentDirectives {
|
||||
if comments == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
var vals map[string]interface{}
|
||||
|
||||
for _, comment := range comments {
|
||||
commentStr := string(comment)
|
||||
if commentStr[0:5] != commentDirectivePreamble {
|
||||
continue
|
||||
}
|
||||
|
||||
if vals == nil {
|
||||
vals = make(map[string]interface{})
|
||||
}
|
||||
|
||||
// Split on whitespace and ignore the first and last directive
|
||||
// since they contain the comment start/end
|
||||
directives := strings.Fields(commentStr)
|
||||
for i := 1; i < len(directives)-1; i++ {
|
||||
directive := directives[i]
|
||||
sep := strings.IndexByte(directive, '=')
|
||||
|
||||
// No value is equivalent to a true boolean
|
||||
if sep == -1 {
|
||||
vals[directive] = true
|
||||
continue
|
||||
}
|
||||
|
||||
strVal := directive[sep+1:]
|
||||
directive = directive[:sep]
|
||||
|
||||
intVal, err := strconv.Atoi(strVal)
|
||||
if err == nil {
|
||||
vals[directive] = intVal
|
||||
continue
|
||||
}
|
||||
|
||||
boolVal, err := strconv.ParseBool(strVal)
|
||||
if err == nil {
|
||||
vals[directive] = boolVal
|
||||
continue
|
||||
}
|
||||
|
||||
vals[directive] = strVal
|
||||
}
|
||||
}
|
||||
return vals
|
||||
}
|
||||
|
||||
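// commentDirectivesSketch is a hypothetical helper (illustration only). It
// assumes Comments is the slice of raw comment bytes iterated above, and
// shows how directive values are typed by ExtractCommentDirectives.
func commentDirectivesSketch() CommentDirectives {
	comments := Comments{[]byte("/*vt+ SKIP_QUERY_PLAN_CACHE=1 QUERY_TIMEOUT_MS=25 */")}
	d := ExtractCommentDirectives(comments)
	// d[DirectiveSkipQueryPlanCache] == 1  (parsed as an int)
	// d[DirectiveQueryTimeout]       == 25
	return d
}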
// IsSet checks the directive map for the named directive and returns
|
||||
// true if the directive is set and has a true/false or 0/1 value
|
||||
func (d CommentDirectives) IsSet(key string) bool {
|
||||
if d == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
val, ok := d[key]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
boolVal, ok := val.(bool)
|
||||
if ok {
|
||||
return boolVal
|
||||
}
|
||||
|
||||
intVal, ok := val.(int)
|
||||
if ok {
|
||||
return intVal == 1
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// SkipQueryPlanCacheDirective returns true if skip query plan cache directive is set to true in query.
|
||||
func SkipQueryPlanCacheDirective(stmt Statement) bool {
|
||||
switch stmt := stmt.(type) {
|
||||
case *Select:
|
||||
directives := ExtractCommentDirectives(stmt.Comments)
|
||||
if directives.IsSet(DirectiveSkipQueryPlanCache) {
|
||||
return true
|
||||
}
|
||||
case *Insert:
|
||||
directives := ExtractCommentDirectives(stmt.Comments)
|
||||
if directives.IsSet(DirectiveSkipQueryPlanCache) {
|
||||
return true
|
||||
}
|
||||
case *Update:
|
||||
directives := ExtractCommentDirectives(stmt.Comments)
|
||||
if directives.IsSet(DirectiveSkipQueryPlanCache) {
|
||||
return true
|
||||
}
|
||||
case *Delete:
|
||||
directives := ExtractCommentDirectives(stmt.Comments)
|
||||
if directives.IsSet(DirectiveSkipQueryPlanCache) {
|
||||
return true
|
||||
}
|
||||
default:
|
||||
return false
|
||||
}
|
||||
return false
|
||||
}
|
||||
99 vendor/github.com/xwb1989/sqlparser/encodable.go generated vendored
@@ -1,99 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/sqltypes"
|
||||
)
|
||||
|
||||
// This file contains types that are 'Encodable'.
|
||||
|
||||
// Encodable defines the interface for types that can
|
||||
// be custom-encoded into SQL.
|
||||
type Encodable interface {
|
||||
EncodeSQL(buf *bytes.Buffer)
|
||||
}
|
||||
|
||||
// InsertValues is a custom SQL encoder for the values of
|
||||
// an insert statement.
|
||||
type InsertValues [][]sqltypes.Value
|
||||
|
||||
// EncodeSQL performs the SQL encoding for InsertValues.
|
||||
func (iv InsertValues) EncodeSQL(buf *bytes.Buffer) {
|
||||
for i, rows := range iv {
|
||||
if i != 0 {
|
||||
buf.WriteString(", ")
|
||||
}
|
||||
buf.WriteByte('(')
|
||||
for j, bv := range rows {
|
||||
if j != 0 {
|
||||
buf.WriteString(", ")
|
||||
}
|
||||
bv.EncodeSQL(buf)
|
||||
}
|
||||
buf.WriteByte(')')
|
||||
}
|
||||
}
|
||||
|
||||
// TupleEqualityList is for generating equality constraints
|
||||
// for tables that have composite primary keys.
|
||||
type TupleEqualityList struct {
|
||||
Columns []ColIdent
|
||||
Rows [][]sqltypes.Value
|
||||
}
|
||||
|
||||
// EncodeSQL generates the where clause constraints for the tuple
|
||||
// equality.
|
||||
func (tpl *TupleEqualityList) EncodeSQL(buf *bytes.Buffer) {
|
||||
if len(tpl.Columns) == 1 {
|
||||
tpl.encodeAsIn(buf)
|
||||
return
|
||||
}
|
||||
tpl.encodeAsEquality(buf)
|
||||
}
|
||||
|
||||
func (tpl *TupleEqualityList) encodeAsIn(buf *bytes.Buffer) {
|
||||
Append(buf, tpl.Columns[0])
|
||||
buf.WriteString(" in (")
|
||||
for i, r := range tpl.Rows {
|
||||
if i != 0 {
|
||||
buf.WriteString(", ")
|
||||
}
|
||||
r[0].EncodeSQL(buf)
|
||||
}
|
||||
buf.WriteByte(')')
|
||||
}
|
||||
|
||||
func (tpl *TupleEqualityList) encodeAsEquality(buf *bytes.Buffer) {
|
||||
for i, r := range tpl.Rows {
|
||||
if i != 0 {
|
||||
buf.WriteString(" or ")
|
||||
}
|
||||
buf.WriteString("(")
|
||||
for j, c := range tpl.Columns {
|
||||
if j != 0 {
|
||||
buf.WriteString(" and ")
|
||||
}
|
||||
Append(buf, c)
|
||||
buf.WriteString(" = ")
|
||||
r[j].EncodeSQL(buf)
|
||||
}
|
||||
buf.WriteByte(')')
|
||||
}
|
||||
}
|
||||
39 vendor/github.com/xwb1989/sqlparser/impossible_query.go generated vendored
@@ -1,39 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
// FormatImpossibleQuery creates an impossible query in a TrackedBuffer.
|
||||
// An impossible query is a modified version of a query where all selects have where clauses that are
|
||||
// impossible for mysql to resolve. This is used in the vtgate and vttablet:
|
||||
//
|
||||
// - In the vtgate it's used for joins: if the first query returns no result, then vtgate uses the impossible
|
||||
// query just to fetch field info from vttablet
|
||||
// - In the vttablet, it's just an optimization: the field info is fetched once from MySQL, cached and reused
|
||||
// for subsequent queries
|
||||
func FormatImpossibleQuery(buf *TrackedBuffer, node SQLNode) {
|
||||
switch node := node.(type) {
|
||||
case *Select:
|
||||
buf.Myprintf("select %v from %v where 1 != 1", node.SelectExprs, node.From)
|
||||
if node.GroupBy != nil {
|
||||
node.GroupBy.Format(buf)
|
||||
}
|
||||
case *Union:
|
||||
buf.Myprintf("%v %s %v", node.Left, node.Type, node.Right)
|
||||
default:
|
||||
node.Format(buf)
|
||||
}
|
||||
}
|
||||
224 vendor/github.com/xwb1989/sqlparser/normalizer.go generated vendored
@@ -1,224 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/sqltypes"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/querypb"
|
||||
)
|
||||
|
||||
// Normalize changes the statement to use bind values, and
|
||||
// updates the bind vars to those values. The supplied prefix
|
||||
// is used to generate the bind var names. The function ensures
|
||||
// that there are no collisions with existing bind vars.
|
||||
// Within Select constructs, bind vars are deduped. This allows
|
||||
// us to identify vindex equality. Otherwise, every value is
|
||||
// treated as distinct.
|
||||
func Normalize(stmt Statement, bindVars map[string]*querypb.BindVariable, prefix string) {
|
||||
nz := newNormalizer(stmt, bindVars, prefix)
|
||||
_ = Walk(nz.WalkStatement, stmt)
|
||||
}
|
||||
|
||||
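// normalizeSketch is a hypothetical helper (illustration only) showing the
// effect of Normalize on a simple statement, assuming Parse succeeds on it.
func normalizeSketch() (Statement, map[string]*querypb.BindVariable) {
	stmt, err := Parse("select * from t where id = 1")
	if err != nil {
		return nil, nil
	}
	bindVars := make(map[string]*querypb.BindVariable)
	Normalize(stmt, bindVars, "bv")
	// String(stmt) now renders "select * from t where id = :bv1" and
	// bindVars["bv1"] holds the INT64 value 1.
	return stmt, bindVars
}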
type normalizer struct {
|
||||
stmt Statement
|
||||
bindVars map[string]*querypb.BindVariable
|
||||
prefix string
|
||||
reserved map[string]struct{}
|
||||
counter int
|
||||
vals map[string]string
|
||||
}
|
||||
|
||||
func newNormalizer(stmt Statement, bindVars map[string]*querypb.BindVariable, prefix string) *normalizer {
|
||||
return &normalizer{
|
||||
stmt: stmt,
|
||||
bindVars: bindVars,
|
||||
prefix: prefix,
|
||||
reserved: GetBindvars(stmt),
|
||||
counter: 1,
|
||||
vals: make(map[string]string),
|
||||
}
|
||||
}
|
||||
|
||||
// WalkStatement is the top level walk function.
|
||||
// If it encounters a Select, it switches to a mode
|
||||
// where variables are deduped.
|
||||
func (nz *normalizer) WalkStatement(node SQLNode) (bool, error) {
|
||||
switch node := node.(type) {
|
||||
case *Select:
|
||||
_ = Walk(nz.WalkSelect, node)
|
||||
// Don't continue
|
||||
return false, nil
|
||||
case *SQLVal:
|
||||
nz.convertSQLVal(node)
|
||||
case *ComparisonExpr:
|
||||
nz.convertComparison(node)
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// WalkSelect normalizes the AST in Select mode.
|
||||
func (nz *normalizer) WalkSelect(node SQLNode) (bool, error) {
|
||||
switch node := node.(type) {
|
||||
case *SQLVal:
|
||||
nz.convertSQLValDedup(node)
|
||||
case *ComparisonExpr:
|
||||
nz.convertComparison(node)
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
|
||||
func (nz *normalizer) convertSQLValDedup(node *SQLVal) {
|
||||
// If value is too long, don't dedup.
|
||||
// Such values are most likely not for vindexes.
|
||||
// We save a lot of CPU because we avoid building
|
||||
// the key for them.
|
||||
if len(node.Val) > 256 {
|
||||
nz.convertSQLVal(node)
|
||||
return
|
||||
}
|
||||
|
||||
// Make the bindvar
|
||||
bval := nz.sqlToBindvar(node)
|
||||
if bval == nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Check if there's a bindvar for that value already.
|
||||
var key string
|
||||
if bval.Type == sqltypes.VarBinary {
|
||||
// Prefixing strings with "'" ensures that a string
|
||||
// and number that have the same representation don't
|
||||
// collide.
|
||||
key = "'" + string(node.Val)
|
||||
} else {
|
||||
key = string(node.Val)
|
||||
}
|
||||
bvname, ok := nz.vals[key]
|
||||
if !ok {
|
||||
// If there's no such bindvar, make a new one.
|
||||
bvname = nz.newName()
|
||||
nz.vals[key] = bvname
|
||||
nz.bindVars[bvname] = bval
|
||||
}
|
||||
|
||||
// Modify the AST node to a bindvar.
|
||||
node.Type = ValArg
|
||||
node.Val = append([]byte(":"), bvname...)
|
||||
}
|
||||
|
||||
// convertSQLVal converts an SQLVal without the dedup.
|
||||
func (nz *normalizer) convertSQLVal(node *SQLVal) {
|
||||
bval := nz.sqlToBindvar(node)
|
||||
if bval == nil {
|
||||
return
|
||||
}
|
||||
|
||||
bvname := nz.newName()
|
||||
nz.bindVars[bvname] = bval
|
||||
|
||||
node.Type = ValArg
|
||||
node.Val = append([]byte(":"), bvname...)
|
||||
}
|
||||
|
||||
// convertComparison attempts to convert IN clauses to
|
||||
// use the list bind var construct. If it fails, it returns
|
||||
// with no change made. The walk function will then continue
|
||||
// and iterate on converting each individual value into separate
|
||||
// bind vars.
|
||||
func (nz *normalizer) convertComparison(node *ComparisonExpr) {
|
||||
if node.Operator != InStr && node.Operator != NotInStr {
|
||||
return
|
||||
}
|
||||
tupleVals, ok := node.Right.(ValTuple)
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
// The RHS is a tuple of values.
|
||||
// Make a list bindvar.
|
||||
bvals := &querypb.BindVariable{
|
||||
Type: querypb.Type_TUPLE,
|
||||
}
|
||||
for _, val := range tupleVals {
|
||||
bval := nz.sqlToBindvar(val)
|
||||
if bval == nil {
|
||||
return
|
||||
}
|
||||
bvals.Values = append(bvals.Values, &querypb.Value{
|
||||
Type: bval.Type,
|
||||
Value: bval.Value,
|
||||
})
|
||||
}
|
||||
bvname := nz.newName()
|
||||
nz.bindVars[bvname] = bvals
|
||||
// Modify RHS to be a list bindvar.
|
||||
node.Right = ListArg(append([]byte("::"), bvname...))
|
||||
}
|
||||
|
||||
func (nz *normalizer) sqlToBindvar(node SQLNode) *querypb.BindVariable {
|
||||
if node, ok := node.(*SQLVal); ok {
|
||||
var v sqltypes.Value
|
||||
var err error
|
||||
switch node.Type {
|
||||
case StrVal:
|
||||
v, err = sqltypes.NewValue(sqltypes.VarBinary, node.Val)
|
||||
case IntVal:
|
||||
v, err = sqltypes.NewValue(sqltypes.Int64, node.Val)
|
||||
case FloatVal:
|
||||
v, err = sqltypes.NewValue(sqltypes.Float64, node.Val)
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
return sqltypes.ValueBindVariable(v)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (nz *normalizer) newName() string {
|
||||
for {
|
||||
newName := fmt.Sprintf("%s%d", nz.prefix, nz.counter)
|
||||
if _, ok := nz.reserved[newName]; !ok {
|
||||
nz.reserved[newName] = struct{}{}
|
||||
return newName
|
||||
}
|
||||
nz.counter++
|
||||
}
|
||||
}
|
||||
|
||||
// GetBindvars returns a map of the bind vars referenced in the statement.
|
||||
// TODO(sougou): This function gets called again from vtgate/planbuilder.
|
||||
// Ideally, this should be done only once.
|
||||
func GetBindvars(stmt Statement) map[string]struct{} {
|
||||
bindvars := make(map[string]struct{})
|
||||
_ = Walk(func(node SQLNode) (kontinue bool, err error) {
|
||||
switch node := node.(type) {
|
||||
case *SQLVal:
|
||||
if node.Type == ValArg {
|
||||
bindvars[string(node.Val[1:])] = struct{}{}
|
||||
}
|
||||
case ListArg:
|
||||
bindvars[string(node[2:])] = struct{}{}
|
||||
}
|
||||
return true, nil
|
||||
}, stmt)
|
||||
return bindvars
|
||||
}
|
||||
119 vendor/github.com/xwb1989/sqlparser/parsed_query.go generated vendored
@@ -1,119 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/querypb"
|
||||
"github.com/xwb1989/sqlparser/dependency/sqltypes"
|
||||
)
|
||||
|
||||
// ParsedQuery represents a parsed query where
|
||||
// bind locations are precomputed for fast substitutions.
|
||||
type ParsedQuery struct {
|
||||
Query string
|
||||
bindLocations []bindLocation
|
||||
}
|
||||
|
||||
type bindLocation struct {
|
||||
offset, length int
|
||||
}
|
||||
|
||||
// NewParsedQuery returns a ParsedQuery of the ast.
|
||||
func NewParsedQuery(node SQLNode) *ParsedQuery {
|
||||
buf := NewTrackedBuffer(nil)
|
||||
buf.Myprintf("%v", node)
|
||||
return buf.ParsedQuery()
|
||||
}
|
||||
|
||||
// GenerateQuery generates a query by substituting the specified
|
||||
// bindVariables. The extras parameter specifies special parameters
|
||||
// that can perform custom encoding.
|
||||
func (pq *ParsedQuery) GenerateQuery(bindVariables map[string]*querypb.BindVariable, extras map[string]Encodable) ([]byte, error) {
|
||||
if len(pq.bindLocations) == 0 {
|
||||
return []byte(pq.Query), nil
|
||||
}
|
||||
buf := bytes.NewBuffer(make([]byte, 0, len(pq.Query)))
|
||||
current := 0
|
||||
for _, loc := range pq.bindLocations {
|
||||
buf.WriteString(pq.Query[current:loc.offset])
|
||||
name := pq.Query[loc.offset : loc.offset+loc.length]
|
||||
if encodable, ok := extras[name[1:]]; ok {
|
||||
encodable.EncodeSQL(buf)
|
||||
} else {
|
||||
supplied, _, err := FetchBindVar(name, bindVariables)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
EncodeValue(buf, supplied)
|
||||
}
|
||||
current = loc.offset + loc.length
|
||||
}
|
||||
buf.WriteString(pq.Query[current:])
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
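// generateQuerySketch is a hypothetical helper (illustration only). It assumes
// the ":id" placeholder parses into a bind variable whose location the
// TrackedBuffer records when the ParsedQuery is built.
func generateQuerySketch() ([]byte, error) {
	stmt, err := Parse("select * from t where id = :id")
	if err != nil {
		return nil, err
	}
	pq := NewParsedQuery(stmt)
	v, err := sqltypes.NewValue(sqltypes.Int64, []byte("1"))
	if err != nil {
		return nil, err
	}
	bindVars := map[string]*querypb.BindVariable{"id": sqltypes.ValueBindVariable(v)}
	// Expected result: "select * from t where id = 1".
	return pq.GenerateQuery(bindVars, nil)
}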
// EncodeValue encodes one bind variable value into the query.
|
||||
func EncodeValue(buf *bytes.Buffer, value *querypb.BindVariable) {
|
||||
if value.Type != querypb.Type_TUPLE {
|
||||
// Since we already check for TUPLE, we don't expect an error.
|
||||
v, _ := sqltypes.BindVariableToValue(value)
|
||||
v.EncodeSQL(buf)
|
||||
return
|
||||
}
|
||||
|
||||
// It's a TUPLE.
|
||||
buf.WriteByte('(')
|
||||
for i, bv := range value.Values {
|
||||
if i != 0 {
|
||||
buf.WriteString(", ")
|
||||
}
|
||||
sqltypes.ProtoToValue(bv).EncodeSQL(buf)
|
||||
}
|
||||
buf.WriteByte(')')
|
||||
}
|
||||
|
||||
// FetchBindVar resolves the bind variable by fetching it from bindVariables.
|
||||
func FetchBindVar(name string, bindVariables map[string]*querypb.BindVariable) (val *querypb.BindVariable, isList bool, err error) {
|
||||
name = name[1:]
|
||||
if name[0] == ':' {
|
||||
name = name[1:]
|
||||
isList = true
|
||||
}
|
||||
supplied, ok := bindVariables[name]
|
||||
if !ok {
|
||||
return nil, false, fmt.Errorf("missing bind var %s", name)
|
||||
}
|
||||
|
||||
if isList {
|
||||
if supplied.Type != querypb.Type_TUPLE {
|
||||
return nil, false, fmt.Errorf("unexpected list arg type (%v) for key %s", supplied.Type, name)
|
||||
}
|
||||
if len(supplied.Values) == 0 {
|
||||
return nil, false, fmt.Errorf("empty list supplied for %s", name)
|
||||
}
|
||||
return supplied, true, nil
|
||||
}
|
||||
|
||||
if supplied.Type == querypb.Type_TUPLE {
|
||||
return nil, false, fmt.Errorf("unexpected arg type (TUPLE) for non-list key %s", name)
|
||||
}
|
||||
|
||||
return supplied, false, nil
|
||||
}
|
||||
19 vendor/github.com/xwb1989/sqlparser/redact_query.go generated vendored
@@ -1,19 +0,0 @@
|
||||
package sqlparser
|
||||
|
||||
import querypb "github.com/xwb1989/sqlparser/dependency/querypb"
|
||||
|
||||
// RedactSQLQuery returns a sql string with the params stripped out for display
|
||||
func RedactSQLQuery(sql string) (string, error) {
|
||||
bv := map[string]*querypb.BindVariable{}
|
||||
sqlStripped, comments := SplitMarginComments(sql)
|
||||
|
||||
stmt, err := Parse(sqlStripped)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
prefix := "redacted"
|
||||
Normalize(stmt, bv, prefix)
|
||||
|
||||
return comments.Leading + String(stmt) + comments.Trailing, nil
|
||||
}
|
||||
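// redactSketch is a hypothetical helper (illustration only) showing how
// literal values are replaced with "redacted" placeholders.
func redactSketch() (string, error) {
	// Expected result: "select * from t where id = :redacted1 and name = :redacted2".
	return RedactSQLQuery("select * from t where id = 1 and name = 'abc'")
}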
6136 vendor/github.com/xwb1989/sqlparser/sql.go generated vendored
File diff suppressed because it is too large
3159 vendor/github.com/xwb1989/sqlparser/sql.y generated vendored
File diff suppressed because it is too large
950 vendor/github.com/xwb1989/sqlparser/token.go generated vendored
@@ -1,950 +0,0 @@
|
||||
/*
|
||||
Copyright 2017 Google Inc.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package sqlparser
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
"github.com/xwb1989/sqlparser/dependency/bytes2"
|
||||
"github.com/xwb1989/sqlparser/dependency/sqltypes"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultBufSize = 4096
|
||||
eofChar = 0x100
|
||||
)
|
||||
|
||||
// Tokenizer is the struct used to generate SQL
|
||||
// tokens for the parser.
|
||||
type Tokenizer struct {
|
||||
InStream io.Reader
|
||||
AllowComments bool
|
||||
ForceEOF bool
|
||||
lastChar uint16
|
||||
Position int
|
||||
lastToken []byte
|
||||
LastError error
|
||||
posVarIndex int
|
||||
ParseTree Statement
|
||||
partialDDL *DDL
|
||||
nesting int
|
||||
multi bool
|
||||
specialComment *Tokenizer
|
||||
|
||||
buf []byte
|
||||
bufPos int
|
||||
bufSize int
|
||||
}
|
||||
|
||||
// NewStringTokenizer creates a new Tokenizer for the
|
||||
// sql string.
|
||||
func NewStringTokenizer(sql string) *Tokenizer {
|
||||
buf := []byte(sql)
|
||||
return &Tokenizer{
|
||||
buf: buf,
|
||||
bufSize: len(buf),
|
||||
}
|
||||
}
|
||||
|
||||
// NewTokenizer creates a new Tokenizer reading a sql
|
||||
// string from the io.Reader.
|
||||
func NewTokenizer(r io.Reader) *Tokenizer {
|
||||
return &Tokenizer{
|
||||
InStream: r,
|
||||
buf: make([]byte, defaultBufSize),
|
||||
}
|
||||
}
|
||||
|
||||
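// scanSketch is a hypothetical helper (illustration only) that walks a
// statement token by token until the tokenizer reports end of input.
func scanSketch() {
	tkn := NewStringTokenizer("select a from t where b = 1")
	for {
		typ, val := tkn.Scan()
		if typ == 0 {
			break // 0 signals end of statement or EOF
		}
		fmt.Printf("%d %q\n", typ, val)
	}
}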
// keywords is a map of mysql keywords that fall into two categories:
|
||||
// 1) keywords considered reserved by MySQL
|
||||
// 2) keywords for us to handle specially in sql.y
|
||||
//
|
||||
// Those marked as UNUSED are likely reserved keywords. We add them here so that
|
||||
// when rewriting queries we can properly backtick quote them so they don't cause issues
|
||||
//
|
||||
// NOTE: If you add new keywords, add them also to the reserved_keywords or
|
||||
// non_reserved_keywords grammar in sql.y -- this will allow the keyword to be used
|
||||
// in identifiers. See the docs for each grammar to determine which one to put it into.
|
||||
var keywords = map[string]int{
|
||||
"accessible": UNUSED,
|
||||
"add": ADD,
|
||||
"against": AGAINST,
|
||||
"all": ALL,
|
||||
"alter": ALTER,
|
||||
"analyze": ANALYZE,
|
||||
"and": AND,
|
||||
"as": AS,
|
||||
"asc": ASC,
|
||||
"asensitive": UNUSED,
|
||||
"auto_increment": AUTO_INCREMENT,
|
||||
"before": UNUSED,
|
||||
"begin": BEGIN,
|
||||
"between": BETWEEN,
|
||||
"bigint": BIGINT,
|
||||
"binary": BINARY,
|
||||
"_binary": UNDERSCORE_BINARY,
|
||||
"bit": BIT,
|
||||
"blob": BLOB,
|
||||
"bool": BOOL,
|
||||
"boolean": BOOLEAN,
|
||||
"both": UNUSED,
|
||||
"by": BY,
|
||||
"call": UNUSED,
|
||||
"cascade": UNUSED,
|
||||
"case": CASE,
|
||||
"cast": CAST,
|
||||
"change": UNUSED,
|
||||
"char": CHAR,
|
||||
"character": CHARACTER,
|
||||
"charset": CHARSET,
|
||||
"check": UNUSED,
|
||||
"collate": COLLATE,
|
||||
"column": COLUMN,
|
||||
"comment": COMMENT_KEYWORD,
|
||||
"committed": COMMITTED,
|
||||
"commit": COMMIT,
|
||||
"condition": UNUSED,
|
||||
"constraint": CONSTRAINT,
|
||||
"continue": UNUSED,
|
||||
"convert": CONVERT,
|
||||
"substr": SUBSTR,
|
||||
"substring": SUBSTRING,
|
||||
"create": CREATE,
|
||||
"cross": CROSS,
|
||||
"current_date": CURRENT_DATE,
|
||||
"current_time": CURRENT_TIME,
|
||||
"current_timestamp": CURRENT_TIMESTAMP,
|
||||
"current_user": UNUSED,
|
||||
"cursor": UNUSED,
|
||||
"database": DATABASE,
|
||||
"databases": DATABASES,
|
||||
"day_hour": UNUSED,
|
||||
"day_microsecond": UNUSED,
|
||||
"day_minute": UNUSED,
|
||||
"day_second": UNUSED,
|
||||
"date": DATE,
|
||||
"datetime": DATETIME,
|
||||
"dec": UNUSED,
|
||||
"decimal": DECIMAL,
|
||||
"declare": UNUSED,
|
||||
"default": DEFAULT,
|
||||
"delayed": UNUSED,
|
||||
"delete": DELETE,
|
||||
"desc": DESC,
|
||||
"describe": DESCRIBE,
|
||||
"deterministic": UNUSED,
|
||||
"distinct": DISTINCT,
|
||||
"distinctrow": UNUSED,
|
||||
"div": DIV,
|
||||
"double": DOUBLE,
|
||||
"drop": DROP,
|
||||
"duplicate": DUPLICATE,
|
||||
"each": UNUSED,
|
||||
"else": ELSE,
|
||||
"elseif": UNUSED,
|
||||
"enclosed": UNUSED,
|
||||
"end": END,
|
||||
"enum": ENUM,
|
||||
"escape": ESCAPE,
|
||||
"escaped": UNUSED,
|
||||
"exists": EXISTS,
|
||||
"exit": UNUSED,
|
||||
"explain": EXPLAIN,
|
||||
"expansion": EXPANSION,
|
||||
"extended": EXTENDED,
|
||||
"false": FALSE,
|
||||
"fetch": UNUSED,
|
||||
"float": FLOAT_TYPE,
|
||||
"float4": UNUSED,
|
||||
"float8": UNUSED,
|
||||
"for": FOR,
|
||||
"force": FORCE,
|
||||
"foreign": FOREIGN,
|
||||
"from": FROM,
|
||||
"full": FULL,
|
||||
"fulltext": FULLTEXT,
|
||||
"generated": UNUSED,
|
||||
"geometry": GEOMETRY,
|
||||
"geometrycollection": GEOMETRYCOLLECTION,
|
||||
"get": UNUSED,
|
||||
"global": GLOBAL,
|
||||
"grant": UNUSED,
|
||||
"group": GROUP,
|
||||
"group_concat": GROUP_CONCAT,
|
||||
"having": HAVING,
|
||||
"high_priority": UNUSED,
|
||||
"hour_microsecond": UNUSED,
|
||||
"hour_minute": UNUSED,
|
||||
"hour_second": UNUSED,
|
||||
"if": IF,
|
||||
"ignore": IGNORE,
|
||||
"in": IN,
|
||||
"index": INDEX,
|
||||
"infile": UNUSED,
|
||||
"inout": UNUSED,
|
||||
"inner": INNER,
|
||||
"insensitive": UNUSED,
|
||||
"insert": INSERT,
|
||||
"int": INT,
|
||||
"int1": UNUSED,
|
||||
"int2": UNUSED,
|
||||
"int3": UNUSED,
|
||||
"int4": UNUSED,
|
||||
"int8": UNUSED,
|
||||
"integer": INTEGER,
|
||||
"interval": INTERVAL,
|
||||
"into": INTO,
|
||||
"io_after_gtids": UNUSED,
|
||||
"is": IS,
|
||||
"isolation": ISOLATION,
|
||||
"iterate": UNUSED,
|
||||
"join": JOIN,
|
||||
"json": JSON,
|
||||
"key": KEY,
|
||||
"keys": KEYS,
|
||||
"key_block_size": KEY_BLOCK_SIZE,
|
||||
"kill": UNUSED,
|
||||
"language": LANGUAGE,
|
||||
"last_insert_id": LAST_INSERT_ID,
|
||||
"leading": UNUSED,
|
||||
"leave": UNUSED,
|
||||
"left": LEFT,
|
||||
"less": LESS,
|
||||
"level": LEVEL,
|
||||
"like": LIKE,
|
||||
"limit": LIMIT,
|
||||
"linear": UNUSED,
|
||||
"lines": UNUSED,
|
||||
"linestring": LINESTRING,
|
||||
"load": UNUSED,
|
||||
"localtime": LOCALTIME,
|
||||
"localtimestamp": LOCALTIMESTAMP,
|
||||
"lock": LOCK,
|
||||
"long": UNUSED,
|
||||
"longblob": LONGBLOB,
|
||||
"longtext": LONGTEXT,
|
||||
"loop": UNUSED,
|
||||
"low_priority": UNUSED,
|
||||
"master_bind": UNUSED,
|
||||
"match": MATCH,
|
||||
"maxvalue": MAXVALUE,
|
||||
"mediumblob": MEDIUMBLOB,
|
||||
"mediumint": MEDIUMINT,
|
||||
"mediumtext": MEDIUMTEXT,
|
||||
"middleint": UNUSED,
|
||||
"minute_microsecond": UNUSED,
|
||||
"minute_second": UNUSED,
|
||||
"mod": MOD,
|
||||
"mode": MODE,
|
||||
"modifies": UNUSED,
|
||||
"multilinestring": MULTILINESTRING,
|
||||
"multipoint": MULTIPOINT,
|
||||
"multipolygon": MULTIPOLYGON,
|
||||
"names": NAMES,
|
||||
"natural": NATURAL,
|
||||
"nchar": NCHAR,
|
||||
"next": NEXT,
|
||||
"not": NOT,
|
||||
"no_write_to_binlog": UNUSED,
|
||||
"null": NULL,
|
||||
"numeric": NUMERIC,
|
||||
"offset": OFFSET,
|
||||
"on": ON,
|
||||
"only": ONLY,
|
||||
"optimize": OPTIMIZE,
|
||||
"optimizer_costs": UNUSED,
|
||||
"option": UNUSED,
|
||||
"optionally": UNUSED,
|
||||
"or": OR,
|
||||
"order": ORDER,
|
||||
"out": UNUSED,
|
||||
"outer": OUTER,
|
||||
"outfile": UNUSED,
|
||||
"partition": PARTITION,
|
||||
"point": POINT,
|
||||
"polygon": POLYGON,
|
||||
"precision": UNUSED,
|
||||
"primary": PRIMARY,
|
||||
"processlist": PROCESSLIST,
|
||||
"procedure": PROCEDURE,
|
||||
"query": QUERY,
|
||||
"range": UNUSED,
|
||||
"read": READ,
|
||||
"reads": UNUSED,
|
||||
"read_write": UNUSED,
|
||||
"real": REAL,
|
||||
"references": UNUSED,
|
||||
"regexp": REGEXP,
|
||||
"release": UNUSED,
|
||||
"rename": RENAME,
|
||||
"reorganize": REORGANIZE,
|
||||
"repair": REPAIR,
|
||||
"repeat": UNUSED,
|
||||
"repeatable": REPEATABLE,
|
||||
"replace": REPLACE,
|
||||
"require": UNUSED,
|
||||
"resignal": UNUSED,
|
||||
"restrict": UNUSED,
|
||||
"return": UNUSED,
|
||||
"revoke": UNUSED,
|
||||
"right": RIGHT,
|
||||
"rlike": REGEXP,
|
||||
"rollback": ROLLBACK,
|
||||
"schema": SCHEMA,
|
||||
"schemas": UNUSED,
|
||||
"second_microsecond": UNUSED,
|
||||
"select": SELECT,
|
||||
"sensitive": UNUSED,
|
||||
"separator": SEPARATOR,
|
||||
"serializable": SERIALIZABLE,
|
||||
"session": SESSION,
|
||||
"set": SET,
|
||||
"share": SHARE,
|
||||
"show": SHOW,
|
||||
"signal": UNUSED,
|
||||
"signed": SIGNED,
|
||||
"smallint": SMALLINT,
|
||||
"spatial": SPATIAL,
|
||||
"specific": UNUSED,
|
||||
"sql": UNUSED,
|
||||
"sqlexception": UNUSED,
|
||||
"sqlstate": UNUSED,
|
||||
"sqlwarning": UNUSED,
|
||||
"sql_big_result": UNUSED,
|
||||
"sql_cache": SQL_CACHE,
|
||||
"sql_calc_found_rows": UNUSED,
|
||||
"sql_no_cache": SQL_NO_CACHE,
|
||||
"sql_small_result": UNUSED,
|
||||
"ssl": UNUSED,
|
||||
"start": START,
|
||||
"starting": UNUSED,
|
||||
"status": STATUS,
|
||||
"stored": UNUSED,
|
||||
"straight_join": STRAIGHT_JOIN,
|
||||
"stream": STREAM,
|
||||
"table": TABLE,
|
||||
"tables": TABLES,
|
||||
"terminated": UNUSED,
|
||||
"text": TEXT,
|
||||
"than": THAN,
|
||||
"then": THEN,
|
||||
"time": TIME,
|
||||
"timestamp": TIMESTAMP,
|
||||
"tinyblob": TINYBLOB,
|
||||
"tinyint": TINYINT,
|
||||
"tinytext": TINYTEXT,
|
||||
"to": TO,
|
||||
"trailing": UNUSED,
|
||||
"transaction": TRANSACTION,
|
||||
"trigger": TRIGGER,
|
||||
"true": TRUE,
|
||||
"truncate": TRUNCATE,
|
||||
"uncommitted": UNCOMMITTED,
|
||||
"undo": UNUSED,
|
||||
"union": UNION,
|
||||
"unique": UNIQUE,
|
||||
"unlock": UNUSED,
|
||||
"unsigned": UNSIGNED,
|
||||
"update": UPDATE,
|
||||
"usage": UNUSED,
|
||||
"use": USE,
|
||||
"using": USING,
|
||||
"utc_date": UTC_DATE,
|
||||
"utc_time": UTC_TIME,
|
||||
"utc_timestamp": UTC_TIMESTAMP,
|
||||
"values": VALUES,
|
||||
"variables": VARIABLES,
|
||||
"varbinary": VARBINARY,
|
||||
"varchar": VARCHAR,
|
||||
"varcharacter": UNUSED,
|
||||
"varying": UNUSED,
|
||||
"virtual": UNUSED,
|
||||
"vindex": VINDEX,
|
||||
"vindexes": VINDEXES,
|
||||
"view": VIEW,
|
||||
"vitess_keyspaces": VITESS_KEYSPACES,
|
||||
"vitess_shards": VITESS_SHARDS,
|
||||
"vitess_tablets": VITESS_TABLETS,
|
||||
"vschema_tables": VSCHEMA_TABLES,
|
||||
"when": WHEN,
|
||||
"where": WHERE,
|
||||
"while": UNUSED,
|
||||
"with": WITH,
|
||||
"write": WRITE,
|
||||
"xor": UNUSED,
|
||||
"year": YEAR,
|
||||
"year_month": UNUSED,
|
||||
"zerofill": ZEROFILL,
|
||||
}
|
||||
|
||||
// keywordStrings contains the reverse mapping of token to keyword strings
|
||||
var keywordStrings = map[int]string{}
|
||||
|
||||
func init() {
|
||||
for str, id := range keywords {
|
||||
if id == UNUSED {
|
||||
continue
|
||||
}
|
||||
keywordStrings[id] = str
|
||||
}
|
||||
}
|
||||
|
||||
// KeywordString returns the string corresponding to the given keyword
|
||||
func KeywordString(id int) string {
|
||||
str, ok := keywordStrings[id]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return str
|
||||
}
|
||||
|
||||
// Lex returns the next token from the Tokenizer.
|
||||
// This function is used by go yacc.
|
||||
func (tkn *Tokenizer) Lex(lval *yySymType) int {
|
||||
typ, val := tkn.Scan()
|
||||
for typ == COMMENT {
|
||||
if tkn.AllowComments {
|
||||
break
|
||||
}
|
||||
typ, val = tkn.Scan()
|
||||
}
|
||||
lval.bytes = val
|
||||
tkn.lastToken = val
|
||||
return typ
|
||||
}
|
||||
|
||||
// Error is called by go yacc if there's a parsing error.
|
||||
func (tkn *Tokenizer) Error(err string) {
|
||||
buf := &bytes2.Buffer{}
|
||||
if tkn.lastToken != nil {
|
||||
fmt.Fprintf(buf, "%s at position %v near '%s'", err, tkn.Position, tkn.lastToken)
|
||||
} else {
|
||||
fmt.Fprintf(buf, "%s at position %v", err, tkn.Position)
|
||||
}
|
||||
tkn.LastError = errors.New(buf.String())
|
||||
|
||||
// Try and re-sync to the next statement
|
||||
if tkn.lastChar != ';' {
|
||||
tkn.skipStatement()
|
||||
}
|
||||
}
|
||||
|
||||
// Scan scans the tokenizer for the next token and returns
// the token type and an optional value.
func (tkn *Tokenizer) Scan() (int, []byte) {
	if tkn.specialComment != nil {
		// Enter specialComment scan mode.
		// for scanning such kind of comment: /*! MySQL-specific code */
		specialComment := tkn.specialComment
		tok, val := specialComment.Scan()
		if tok != 0 {
			// return the specialComment scan result as the result
			return tok, val
		}
		// leave specialComment scan mode after all stream consumed.
		tkn.specialComment = nil
	}
	if tkn.lastChar == 0 {
		tkn.next()
	}

	if tkn.ForceEOF {
		tkn.skipStatement()
		return 0, nil
	}

	tkn.skipBlank()
	switch ch := tkn.lastChar; {
	case isLetter(ch):
		tkn.next()
		if ch == 'X' || ch == 'x' {
			if tkn.lastChar == '\'' {
				tkn.next()
				return tkn.scanHex()
			}
		}
		if ch == 'B' || ch == 'b' {
			if tkn.lastChar == '\'' {
				tkn.next()
				return tkn.scanBitLiteral()
			}
		}
		isDbSystemVariable := false
		if ch == '@' && tkn.lastChar == '@' {
			isDbSystemVariable = true
		}
		return tkn.scanIdentifier(byte(ch), isDbSystemVariable)
	case isDigit(ch):
		return tkn.scanNumber(false)
	case ch == ':':
		return tkn.scanBindVar()
	case ch == ';' && tkn.multi:
		return 0, nil
	default:
		tkn.next()
		switch ch {
		case eofChar:
			return 0, nil
		case '=', ',', ';', '(', ')', '+', '*', '%', '^', '~':
			return int(ch), nil
		case '&':
			if tkn.lastChar == '&' {
				tkn.next()
				return AND, nil
			}
			return int(ch), nil
		case '|':
			if tkn.lastChar == '|' {
				tkn.next()
				return OR, nil
			}
			return int(ch), nil
		case '?':
			tkn.posVarIndex++
			buf := new(bytes2.Buffer)
			fmt.Fprintf(buf, ":v%d", tkn.posVarIndex)
			return VALUE_ARG, buf.Bytes()
		case '.':
			if isDigit(tkn.lastChar) {
				return tkn.scanNumber(true)
			}
			return int(ch), nil
		case '/':
			switch tkn.lastChar {
			case '/':
				tkn.next()
				return tkn.scanCommentType1("//")
			case '*':
				tkn.next()
				switch tkn.lastChar {
				case '!':
					return tkn.scanMySQLSpecificComment()
				default:
					return tkn.scanCommentType2()
				}
			default:
				return int(ch), nil
			}
		case '#':
			return tkn.scanCommentType1("#")
		case '-':
			switch tkn.lastChar {
			case '-':
				tkn.next()
				return tkn.scanCommentType1("--")
			case '>':
				tkn.next()
				if tkn.lastChar == '>' {
					tkn.next()
					return JSON_UNQUOTE_EXTRACT_OP, nil
				}
				return JSON_EXTRACT_OP, nil
			}
			return int(ch), nil
		case '<':
			switch tkn.lastChar {
			case '>':
				tkn.next()
				return NE, nil
			case '<':
				tkn.next()
				return SHIFT_LEFT, nil
			case '=':
				tkn.next()
				switch tkn.lastChar {
				case '>':
					tkn.next()
					return NULL_SAFE_EQUAL, nil
				default:
					return LE, nil
				}
			default:
				return int(ch), nil
			}
		case '>':
			switch tkn.lastChar {
			case '=':
				tkn.next()
				return GE, nil
			case '>':
				tkn.next()
				return SHIFT_RIGHT, nil
			default:
				return int(ch), nil
			}
		case '!':
			if tkn.lastChar == '=' {
				tkn.next()
				return NE, nil
			}
			return int(ch), nil
		case '\'', '"':
			return tkn.scanString(ch, STRING)
		case '`':
			return tkn.scanLiteralIdentifier()
		default:
			return LEX_ERROR, []byte{byte(ch)}
		}
	}
}

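For orientation, here is a minimal caller-side sketch (not part of the deleted file) of the loop that drives Scan, the same loop the generated parser runs through Lex; it assumes only NewStringTokenizer and the token constants defined in this package, with a return type of 0 signalling end of input.

	tkn := NewStringTokenizer("select a, count(*) from t where id = :id and v != 10")
	for {
		typ, val := tkn.Scan()
		if typ == 0 || typ == LEX_ERROR {
			break // end of input, or a character the lexer could not classify
		}
		fmt.Printf("%d %q\n", typ, val) // e.g. ":id" comes back as VALUE_ARG, "!=" as NE
	}
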
// skipStatement scans until the EOF, or end of statement is encountered.
func (tkn *Tokenizer) skipStatement() {
	ch := tkn.lastChar
	for ch != ';' && ch != eofChar {
		tkn.next()
		ch = tkn.lastChar
	}
}

func (tkn *Tokenizer) skipBlank() {
	ch := tkn.lastChar
	for ch == ' ' || ch == '\n' || ch == '\r' || ch == '\t' {
		tkn.next()
		ch = tkn.lastChar
	}
}

func (tkn *Tokenizer) scanIdentifier(firstByte byte, isDbSystemVariable bool) (int, []byte) {
	buffer := &bytes2.Buffer{}
	buffer.WriteByte(firstByte)
	for isLetter(tkn.lastChar) || isDigit(tkn.lastChar) || (isDbSystemVariable && isCarat(tkn.lastChar)) {
		buffer.WriteByte(byte(tkn.lastChar))
		tkn.next()
	}
	lowered := bytes.ToLower(buffer.Bytes())
	loweredStr := string(lowered)
	if keywordID, found := keywords[loweredStr]; found {
		return keywordID, lowered
	}
	// dual must always be case-insensitive
	if loweredStr == "dual" {
		return ID, lowered
	}
	return ID, buffer.Bytes()
}

func (tkn *Tokenizer) scanHex() (int, []byte) {
	buffer := &bytes2.Buffer{}
	tkn.scanMantissa(16, buffer)
	if tkn.lastChar != '\'' {
		return LEX_ERROR, buffer.Bytes()
	}
	tkn.next()
	if buffer.Len()%2 != 0 {
		return LEX_ERROR, buffer.Bytes()
	}
	return HEX, buffer.Bytes()
}

func (tkn *Tokenizer) scanBitLiteral() (int, []byte) {
	buffer := &bytes2.Buffer{}
	tkn.scanMantissa(2, buffer)
	if tkn.lastChar != '\'' {
		return LEX_ERROR, buffer.Bytes()
	}
	tkn.next()
	return BIT_LITERAL, buffer.Bytes()
}

func (tkn *Tokenizer) scanLiteralIdentifier() (int, []byte) {
	buffer := &bytes2.Buffer{}
	backTickSeen := false
	for {
		if backTickSeen {
			if tkn.lastChar != '`' {
				break
			}
			backTickSeen = false
			buffer.WriteByte('`')
			tkn.next()
			continue
		}
		// The previous char was not a backtick.
		switch tkn.lastChar {
		case '`':
			backTickSeen = true
		case eofChar:
			// Premature EOF.
			return LEX_ERROR, buffer.Bytes()
		default:
			buffer.WriteByte(byte(tkn.lastChar))
		}
		tkn.next()
	}
	if buffer.Len() == 0 {
		return LEX_ERROR, buffer.Bytes()
	}
	return ID, buffer.Bytes()
}

func (tkn *Tokenizer) scanBindVar() (int, []byte) {
	buffer := &bytes2.Buffer{}
	buffer.WriteByte(byte(tkn.lastChar))
	token := VALUE_ARG
	tkn.next()
	if tkn.lastChar == ':' {
		token = LIST_ARG
		buffer.WriteByte(byte(tkn.lastChar))
		tkn.next()
	}
	if !isLetter(tkn.lastChar) {
		return LEX_ERROR, buffer.Bytes()
	}
	for isLetter(tkn.lastChar) || isDigit(tkn.lastChar) || tkn.lastChar == '.' {
		buffer.WriteByte(byte(tkn.lastChar))
		tkn.next()
	}
	return token, buffer.Bytes()
}

func (tkn *Tokenizer) scanMantissa(base int, buffer *bytes2.Buffer) {
	for digitVal(tkn.lastChar) < base {
		tkn.consumeNext(buffer)
	}
}

func (tkn *Tokenizer) scanNumber(seenDecimalPoint bool) (int, []byte) {
	token := INTEGRAL
	buffer := &bytes2.Buffer{}
	if seenDecimalPoint {
		token = FLOAT
		buffer.WriteByte('.')
		tkn.scanMantissa(10, buffer)
		goto exponent
	}

	// 0x construct.
	if tkn.lastChar == '0' {
		tkn.consumeNext(buffer)
		if tkn.lastChar == 'x' || tkn.lastChar == 'X' {
			token = HEXNUM
			tkn.consumeNext(buffer)
			tkn.scanMantissa(16, buffer)
			goto exit
		}
	}

	tkn.scanMantissa(10, buffer)

	if tkn.lastChar == '.' {
		token = FLOAT
		tkn.consumeNext(buffer)
		tkn.scanMantissa(10, buffer)
	}

exponent:
	if tkn.lastChar == 'e' || tkn.lastChar == 'E' {
		token = FLOAT
		tkn.consumeNext(buffer)
		if tkn.lastChar == '+' || tkn.lastChar == '-' {
			tkn.consumeNext(buffer)
		}
		tkn.scanMantissa(10, buffer)
	}

exit:
	// A letter cannot immediately follow a number.
	if isLetter(tkn.lastChar) {
		return LEX_ERROR, buffer.Bytes()
	}

	return token, buffer.Bytes()
}

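As a small sketch of how the branches above classify literals when driven through Scan (this snippet is not in the original file; it only assumes NewStringTokenizer and the token constants used above):

	for _, in := range []string{"42", "0xCAFE", "3.14", "1e-9", "9abc"} {
		tkn := NewStringTokenizer(in)
		typ, val := tkn.Scan()
		// "42" -> INTEGRAL, "0xCAFE" -> HEXNUM, "3.14" and "1e-9" -> FLOAT,
		// "9abc" -> LEX_ERROR because a letter follows the digits.
		fmt.Println(string(val), typ)
	}
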
func (tkn *Tokenizer) scanString(delim uint16, typ int) (int, []byte) {
	var buffer bytes2.Buffer
	for {
		ch := tkn.lastChar
		if ch == eofChar {
			// Unterminated string.
			return LEX_ERROR, buffer.Bytes()
		}

		if ch != delim && ch != '\\' {
			buffer.WriteByte(byte(ch))

			// Scan ahead to the next interesting character.
			start := tkn.bufPos
			for ; tkn.bufPos < tkn.bufSize; tkn.bufPos++ {
				ch = uint16(tkn.buf[tkn.bufPos])
				if ch == delim || ch == '\\' {
					break
				}
			}

			buffer.Write(tkn.buf[start:tkn.bufPos])
			tkn.Position += (tkn.bufPos - start)

			if tkn.bufPos >= tkn.bufSize {
				// Reached the end of the buffer without finding a delim or
				// escape character.
				tkn.next()
				continue
			}

			tkn.bufPos++
			tkn.Position++
		}
		tkn.next() // Read one past the delim or escape character.

		if ch == '\\' {
			if tkn.lastChar == eofChar {
				// String terminates mid escape character.
				return LEX_ERROR, buffer.Bytes()
			}
			if decodedChar := sqltypes.SQLDecodeMap[byte(tkn.lastChar)]; decodedChar == sqltypes.DontEscape {
				ch = tkn.lastChar
			} else {
				ch = uint16(decodedChar)
			}

		} else if ch == delim && tkn.lastChar != delim {
			// Correctly terminated string, which is not a double delim.
			break
		}

		buffer.WriteByte(byte(ch))
		tkn.next()
	}

	return typ, buffer.Bytes()
}

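A short sketch of the escape and doubled-delimiter handling above (again, not part of the original file; it assumes the package's SQLDecodeMap maps the escape 'n' to a newline, as the sqltypes dependency does for the standard MySQL escapes):

	for _, q := range []string{`'plain'`, `'it''s'`, `'a\nb'`} {
		tkn := NewStringTokenizer(q)
		typ, val := tkn.Scan()
		// typ is STRING for all three; val is "plain", "it's", and "a<newline>b".
		fmt.Println(typ == STRING, string(val))
	}
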
func (tkn *Tokenizer) scanCommentType1(prefix string) (int, []byte) {
	buffer := &bytes2.Buffer{}
	buffer.WriteString(prefix)
	for tkn.lastChar != eofChar {
		if tkn.lastChar == '\n' {
			tkn.consumeNext(buffer)
			break
		}
		tkn.consumeNext(buffer)
	}
	return COMMENT, buffer.Bytes()
}

func (tkn *Tokenizer) scanCommentType2() (int, []byte) {
	buffer := &bytes2.Buffer{}
	buffer.WriteString("/*")
	for {
		if tkn.lastChar == '*' {
			tkn.consumeNext(buffer)
			if tkn.lastChar == '/' {
				tkn.consumeNext(buffer)
				break
			}
			continue
		}
		if tkn.lastChar == eofChar {
			return LEX_ERROR, buffer.Bytes()
		}
		tkn.consumeNext(buffer)
	}
	return COMMENT, buffer.Bytes()
}

func (tkn *Tokenizer) scanMySQLSpecificComment() (int, []byte) {
	buffer := &bytes2.Buffer{}
	buffer.WriteString("/*!")
	tkn.next()
	for {
		if tkn.lastChar == '*' {
			tkn.consumeNext(buffer)
			if tkn.lastChar == '/' {
				tkn.consumeNext(buffer)
				break
			}
			continue
		}
		if tkn.lastChar == eofChar {
			return LEX_ERROR, buffer.Bytes()
		}
		tkn.consumeNext(buffer)
	}
	_, sql := ExtractMysqlComment(buffer.String())
	tkn.specialComment = NewStringTokenizer(sql)
	return tkn.Scan()
}

func (tkn *Tokenizer) consumeNext(buffer *bytes2.Buffer) {
	if tkn.lastChar == eofChar {
		// This should never happen.
		panic("unexpected EOF")
	}
	buffer.WriteByte(byte(tkn.lastChar))
	tkn.next()
}

func (tkn *Tokenizer) next() {
	if tkn.bufPos >= tkn.bufSize && tkn.InStream != nil {
		// Try and refill the buffer
		var err error
		tkn.bufPos = 0
		if tkn.bufSize, err = tkn.InStream.Read(tkn.buf); err != io.EOF && err != nil {
			tkn.LastError = err
		}
	}

	if tkn.bufPos >= tkn.bufSize {
		if tkn.lastChar != eofChar {
			tkn.Position++
			tkn.lastChar = eofChar
		}
	} else {
		tkn.Position++
		tkn.lastChar = uint16(tkn.buf[tkn.bufPos])
		tkn.bufPos++
	}
}

// reset clears any internal state.
func (tkn *Tokenizer) reset() {
	tkn.ParseTree = nil
	tkn.partialDDL = nil
	tkn.specialComment = nil
	tkn.posVarIndex = 0
	tkn.nesting = 0
	tkn.ForceEOF = false
}

func isLetter(ch uint16) bool {
	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch == '@'
}

func isCarat(ch uint16) bool {
	return ch == '.' || ch == '\'' || ch == '"' || ch == '`'
}

func digitVal(ch uint16) int {
	switch {
	case '0' <= ch && ch <= '9':
		return int(ch) - '0'
	case 'a' <= ch && ch <= 'f':
		return int(ch) - 'a' + 10
	case 'A' <= ch && ch <= 'F':
		return int(ch) - 'A' + 10
	}
	return 16 // larger than any legal digit val
}

func isDigit(ch uint16) bool {
	return '0' <= ch && ch <= '9'
}
140
vendor/github.com/xwb1989/sqlparser/tracked_buffer.go
generated
vendored
140
vendor/github.com/xwb1989/sqlparser/tracked_buffer.go
generated
vendored
@@ -1,140 +0,0 @@
/*
Copyright 2017 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package sqlparser

import (
	"bytes"
	"fmt"
)

// NodeFormatter defines the signature of a custom node formatter
// function that can be given to TrackedBuffer for code generation.
type NodeFormatter func(buf *TrackedBuffer, node SQLNode)

// TrackedBuffer is used to rebuild a query from the ast.
// bindLocations keeps track of locations in the buffer that
// use bind variables for efficient future substitutions.
// nodeFormatter is the formatting function the buffer will
// use to format a node. By default(nil), it's FormatNode.
// But you can supply a different formatting function if you
// want to generate a query that's different from the default.
type TrackedBuffer struct {
	*bytes.Buffer
	bindLocations []bindLocation
	nodeFormatter NodeFormatter
}

// NewTrackedBuffer creates a new TrackedBuffer.
func NewTrackedBuffer(nodeFormatter NodeFormatter) *TrackedBuffer {
	return &TrackedBuffer{
		Buffer:        new(bytes.Buffer),
		nodeFormatter: nodeFormatter,
	}
}

// WriteNode function, initiates the writing of a single SQLNode tree by passing
// through to Myprintf with a default format string
func (buf *TrackedBuffer) WriteNode(node SQLNode) *TrackedBuffer {
	buf.Myprintf("%v", node)
	return buf
}

// Myprintf mimics fmt.Fprintf(buf, ...), but limited to Node(%v),
// Node.Value(%s) and string(%s). It also allows a %a for a value argument, in
// which case it adds tracking info for future substitutions.
//
// The name must be something other than the usual Printf() to avoid "go vet"
// warnings due to our custom format specifiers.
func (buf *TrackedBuffer) Myprintf(format string, values ...interface{}) {
	end := len(format)
	fieldnum := 0
	for i := 0; i < end; {
		lasti := i
		for i < end && format[i] != '%' {
			i++
		}
		if i > lasti {
			buf.WriteString(format[lasti:i])
		}
		if i >= end {
			break
		}
		i++ // '%'
		switch format[i] {
		case 'c':
			switch v := values[fieldnum].(type) {
			case byte:
				buf.WriteByte(v)
			case rune:
				buf.WriteRune(v)
			default:
				panic(fmt.Sprintf("unexpected TrackedBuffer type %T", v))
			}
		case 's':
			switch v := values[fieldnum].(type) {
			case []byte:
				buf.Write(v)
			case string:
				buf.WriteString(v)
			default:
				panic(fmt.Sprintf("unexpected TrackedBuffer type %T", v))
			}
		case 'v':
			node := values[fieldnum].(SQLNode)
			if buf.nodeFormatter == nil {
				node.Format(buf)
			} else {
				buf.nodeFormatter(buf, node)
			}
		case 'a':
			buf.WriteArg(values[fieldnum].(string))
		default:
			panic("unexpected")
		}
		fieldnum++
		i++
	}
}

// WriteArg writes a value argument into the buffer along with
// tracking information for future substitutions. arg must contain
// the ":" or "::" prefix.
func (buf *TrackedBuffer) WriteArg(arg string) {
	buf.bindLocations = append(buf.bindLocations, bindLocation{
		offset: buf.Len(),
		length: len(arg),
	})
	buf.WriteString(arg)
}

// ParsedQuery returns a ParsedQuery that contains bind
// locations for easy substitution.
func (buf *TrackedBuffer) ParsedQuery() *ParsedQuery {
	return &ParsedQuery{Query: buf.String(), bindLocations: buf.bindLocations}
}

// HasBindVars returns true if the parsed query uses bind vars.
func (buf *TrackedBuffer) HasBindVars() bool {
	return len(buf.bindLocations) != 0
}

// BuildParsedQuery builds a ParsedQuery from the input.
func BuildParsedQuery(in string, vars ...interface{}) *ParsedQuery {
	buf := NewTrackedBuffer(nil)
	buf.Myprintf(in, vars...)
	return buf.ParsedQuery()
}
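A rough illustration (not part of the deleted file) of how these pieces fit together: Myprintf writes literal text with %s and records a tracked bind location with %a, and ParsedQuery then carries those offsets for later substitution.

	buf := NewTrackedBuffer(nil)
	buf.Myprintf("select %s from t where id = %a", "a, b", ":id")
	pq := buf.ParsedQuery()
	// Prints: select a, b from t where id = :id true
	fmt.Println(pq.Query, buf.HasBindVars())
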
24
vendor/vendor.json
vendored
24
vendor/vendor.json
vendored
@@ -97,6 +97,18 @@
			"revision": "f6be1abbb5abd0517522f850dd785990d373da7e",
			"revisionTime": "2017-09-13T22:19:17Z"
		},
		{
			"checksumSHA1": "Xmp7mYQyG/1fyIahOyTyN9yZamY=",
			"path": "github.com/alecthomas/participle",
			"revision": "bf8340a459bd383e5eb7d44a9a1b3af23b6cf8cd",
			"revisionTime": "2019-01-03T08:53:15Z"
		},
		{
			"checksumSHA1": "0R8Lqt4DtU8+7Eq1mL7Hd+cjDOI=",
			"path": "github.com/alecthomas/participle/lexer",
			"revision": "bf8340a459bd383e5eb7d44a9a1b3af23b6cf8cd",
			"revisionTime": "2019-01-03T08:53:15Z"
		},
		{
			"checksumSHA1": "tX0Bq1gzqskL98nnB1X2rDqxH18=",
			"path": "github.com/aliyun/aliyun-oss-go-sdk/oss",
@@ -644,10 +656,10 @@
			"revisionTime": "2019-01-20T10:05:29Z"
		},
		{
			"checksumSHA1": "pxgHNx36gpRdhSqtaE5fqp7lrAA=",
			"checksumSHA1": "ik77jlf0oMQTlSndP85DlIVOnOY=",
			"path": "github.com/minio/parquet-go",
			"revision": "1014bfb4d0e323e3fbf6683e3519a98b0721f5cc",
			"revisionTime": "2019-01-14T09:43:57Z"
			"revision": "7a17a919eeed02c393f3117a9ed1ac6df0da9aa5",
			"revisionTime": "2019-01-18T04:40:39Z"
		},
		{
			"checksumSHA1": "N4WRPw4p3AN958RH/O53kUsJacQ=",
@@ -888,12 +900,6 @@
			"revision": "ceec8f93295a060cdb565ec25e4ccf17941dbd55",
			"revisionTime": "2016-11-14T21:01:44Z"
		},
		{
			"checksumSHA1": "6ksZHYhLc3yOzTbcWKb3bDENhD4=",
			"path": "github.com/xwb1989/sqlparser",
			"revision": "120387863bf27d04bc07db8015110a6e96d0146c",
			"revisionTime": "2018-06-06T15:21:19Z"
		},
		{
			"checksumSHA1": "L/Q8Ylbo+wnj5whDFfMxxwyxmdo=",
			"path": "github.com/xwb1989/sqlparser/dependency/bytes2",