Mirror of https://github.com/jesseduffield/lazygit.git, synced 2025-05-12 21:05:48 +02:00

switch to our own fork of pty which lets us set our own stdout and stderr

parent f5f726e9c4
commit 319064f040

43 changed files with 25 additions and 2339 deletions
vendor/github.com/flynn/go-shlex/COPYING (generated, vendored): 202 changes
@@ -1,202 +0,0 @@
(Removed in full: the standard Apache License, Version 2.0, January 2004 text, http://www.apache.org/licenses/, sections 1 through 9, the appendix, and the boilerplate "Copyright [yyyy] [name of copyright owner]" notice.)
vendor/github.com/flynn/go-shlex/Makefile (generated, vendored): 21 changes
@@ -1,21 +0,0 @@
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

include $(GOROOT)/src/Make.inc

TARG=shlex
GOFILES=\
	shlex.go\

include $(GOROOT)/src/Make.pkg
vendor/github.com/flynn/go-shlex/README.md (generated, vendored): 2 changes
@@ -1,2 +0,0 @@
go-shlex is a simple lexer for go that supports shell-style quoting,
commenting, and escaping.
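The deleted package's public entry point was `Split` (defined in the shlex.go removed below). A minimal sketch of the kind of call site this vendored copy served; the command string is illustrative and the import path is the pre-removal vendored one:

```go
package main

import (
	"fmt"
	"log"

	shlex "github.com/flynn/go-shlex" // path as vendored before this commit
)

func main() {
	// Shell-style quoting keeps the quoted message as a single argument.
	args, err := shlex.Split(`git commit -m "initial commit"`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(args) // [git commit -m initial commit]
}
```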
vendor/github.com/flynn/go-shlex/shlex.go (generated, vendored): 457 changes
@@ -1,457 +0,0 @@
(Removed in full: the vendored Go source of package shlex, consisting of the Apache-2.0 file header plus the Token, TokenClassifier, Tokenizer, and Lexer types and their NewDefaultClassifier, NewTokenizer, NextToken, NewLexer, NextWord, and Split functions.)
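For callers that wanted words one at a time instead of a whole slice, the removed source also exposed `NewLexer` and `NextWord`, which skip whitespace and comments and return `io.EOF` once the input is exhausted. A small sketch of that loop (hypothetical call site, pre-removal import path assumed):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	shlex "github.com/flynn/go-shlex" // path as vendored before this commit
)

func main() {
	l, err := shlex.NewLexer(strings.NewReader("one 'two three' # trailing comment"))
	if err != nil {
		log.Fatal(err)
	}
	for {
		word, err := l.NextWord() // comments are skipped, quotes are stripped
		if err == io.EOF {
			break // input exhausted
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(word) // prints "one", then "two three"
	}
}
```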
vendor/github.com/flynn/go-shlex/shlex_test.go (generated, vendored): 162 changes
@@ -1,162 +0,0 @@
(Removed in full: the vendored tests for package shlex, namely TestClassifier, TestTokenizer, TestLexer, TestSplitSimple, TestSplitEscapingQuotes, TestGlobbingExpressions, and TestSplitNonEscapingQuotes.)
vendor/github.com/kr/pty/License → vendor/github.com/jesseduffield/pty/License (generated, vendored): 0 changes
vendor/github.com/kr/pty/doc.go → vendor/github.com/jesseduffield/pty/doc.go (generated, vendored): 0 changes
vendor/github.com/kr/pty/ioctl.go → vendor/github.com/jesseduffield/pty/ioctl.go (generated, vendored): 0 changes
vendor/github.com/kr/pty/run.go → vendor/github.com/jesseduffield/pty/run.go (generated, vendored): 16 changes
@@ -12,11 +12,27 @@ import (
// and c.Stderr, calls c.Start, and returns the File of the tty's
// corresponding pty.
func Start(c *exec.Cmd) (pty *os.File, err error) {
	return StartWithSize(c, nil)
}

// StartWithSize assigns a pseudo-terminal tty os.File to c.Stdin, c.Stdout,
// and c.Stderr, calls c.Start, and returns the File of the tty's
// corresponding pty.
//
// This will resize the pty to the specified size before starting the command
func StartWithSize(c *exec.Cmd, sz *Winsize) (pty *os.File, err error) {
	pty, tty, err := Open()
	if err != nil {
		return nil, err
	}
	defer tty.Close()
	if sz != nil {
		err = Setsize(pty, sz)
		if err != nil {
			pty.Close()
			return nil, err
		}
	}
	c.Stdout = tty
	c.Stdin = tty
	c.Stderr = tty
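The hunk above still wires c.Stdin, c.Stdout, and c.Stderr to the tty inside Start/StartWithSize; the point of moving the vendor path from kr/pty to the jesseduffield fork, per the commit message, is to let the calling program decide where that output ends up. A hedged sketch of driving a command through the forked package's Start and copying the pty output to a destination of the caller's choosing; the command and copy target are illustrative, not lazygit's actual code:

```go
package main

import (
	"io"
	"os"
	"os/exec"

	"github.com/jesseduffield/pty" // fork path introduced by this commit
)

func main() {
	c := exec.Command("git", "fetch", "--all")

	// Start attaches the command to a pseudo-terminal and returns the
	// master side; reading from it yields the command's terminal output.
	f, err := pty.Start(c)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Forward the pty output wherever the caller wants it to go,
	// here simply our own stdout.
	_, _ = io.Copy(os.Stdout, f)
	_ = c.Wait()
}
```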
vendor/github.com/kr/pty/types.go → vendor/github.com/jesseduffield/pty/types.go (generated, vendored): 0 changes
vendor/github.com/kr/pty/util.go → vendor/github.com/jesseduffield/pty/util.go (generated, vendored): 0 changes
vendor/github.com/kr/pty/README.md (generated, vendored): 100 changes
@@ -1,100 +0,0 @@
# pty

Pty is a Go package for using unix pseudo-terminals.

## Install

    go get github.com/kr/pty

## Example

### Command

```go
package main

import (
	"github.com/kr/pty"
	"io"
	"os"
	"os/exec"
)

func main() {
	c := exec.Command("grep", "--color=auto", "bar")
	f, err := pty.Start(c)
	if err != nil {
		panic(err)
	}

	go func() {
		f.Write([]byte("foo\n"))
		f.Write([]byte("bar\n"))
		f.Write([]byte("baz\n"))
		f.Write([]byte{4}) // EOT
	}()
	io.Copy(os.Stdout, f)
}
```

### Shell

```go
package main

import (
	"io"
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"

	"github.com/kr/pty"
	"golang.org/x/crypto/ssh/terminal"
)

func test() error {
	// Create arbitrary command.
	c := exec.Command("bash")

	// Start the command with a pty.
	ptmx, err := pty.Start(c)
	if err != nil {
		return err
	}
	// Make sure to close the pty at the end.
	defer func() { _ = ptmx.Close() }() // Best effort.

	// Handle pty size.
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGWINCH)
	go func() {
		for range ch {
			if err := pty.InheritSize(os.Stdin, ptmx); err != nil {
				log.Printf("error resizing pty: %s", err)
			}
		}
	}()
	ch <- syscall.SIGWINCH // Initial resize.

	// Set stdin in raw mode.
	oldState, err := terminal.MakeRaw(int(os.Stdin.Fd()))
	if err != nil {
		panic(err)
	}
	defer func() { _ = terminal.Restore(int(os.Stdin.Fd()), oldState) }() // Best effort.

	// Copy stdin to the pty and the pty to stdout.
	go func() { _, _ = io.Copy(ptmx, os.Stdin) }()
	_, _ = io.Copy(os.Stdout, ptmx)

	return nil
}

func main() {
	if err := test(); err != nil {
		log.Fatal(err)
	}
}
```
vendor/github.com/kr/pty/mktypes.bash (generated, vendored): 19 changes
@@ -1,19 +0,0 @@
#!/usr/bin/env bash

GOOSARCH="${GOOS}_${GOARCH}"
case "$GOOSARCH" in
_* | *_ | _)
	echo 'undefined $GOOS_$GOARCH:' "$GOOSARCH" 1>&2
	exit 1
	;;
esac

GODEFS="go tool cgo -godefs"

$GODEFS types.go |gofmt > ztypes_$GOARCH.go

case $GOOS in
freebsd|dragonfly|openbsd)
	$GODEFS types_$GOOS.go |gofmt > ztypes_$GOOSARCH.go
	;;
esac
vendor/github.com/mgutz/str/CREDITS (generated, vendored): 5 changes
@@ -1,5 +0,0 @@
* [string.js](http://stringjs.com) - I contributed several
functions to this project.

* [bbgen.net](http://bbgen.net/blog/2011/06/string-to-argc-argv/)
vendor/github.com/mgutz/str/README.md (generated, vendored): 649 changes
@@ -1,649 +0,0 @@
# str

    import "github.com/mgutz/str"

Package str is a comprehensive set of string functions to build more Go
awesomeness. Str complements Go's standard packages and does not duplicate
functionality found in `strings` or `strconv`.

Str is based on plain functions instead of object-based methods, consistent with
Go standard string packages.

    str.Between("<a>foo</a>", "<a>", "</a>") == "foo"

Str supports pipelining instead of chaining

    s := str.Pipe("\nabcdef\n", Clean, BetweenF("a", "f"), ChompLeftF("bc"))

User-defined filters can be added to the pipeline by inserting a function or
closure that returns a function with this signature

    func(string) string
### Index

(Removed in full: the README's index and per-function godoc entries for the package's string helpers, from Between, BetweenF, Camelize, Capitalize, and Clean through Pipe, Template, ToArgv, Underscore, WrapHTML, and WrapHTMLF; see [godoc](https://godoc.org/github.com/mgutz/str) for the same listing.)
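Of the functions indexed above, the pipeline style and ToArgv (documented there as converting a string into an argv for exec) are the ones most relevant to a program that shells out to git. A hedged sketch of how those calls compose, using the pre-removal vendored import path and the signatures from the deleted README; the command strings are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"

	"github.com/mgutz/str" // vendored here prior to this commit
)

func main() {
	// The pipelining style from the README intro: Clean collapses whitespace,
	// BetweenF and ChompLeftF are the filter forms of Between and ChompLeft.
	s := str.Pipe("\nabcdef\n", str.Clean, str.BetweenF("a", "f"), str.ChompLeftF("bc"))
	fmt.Println(s) // "de"

	// ToArgv turns a command line into an argv slice suitable for exec.
	argv := str.ToArgv("git status --short")
	out, _ := exec.Command(argv[0], argv[1:]...).CombinedOutput()
	fmt.Print(string(out))
}
```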
vendor/github.com/mgutz/str/VERSION (generated, vendored): 1 change
|
@ -1 +0,0 @@
|
|||
1.1.0
|
696 vendor/github.com/mgutz/str/str_test.go generated vendored
@@ -1,696 +0,0 @@
package str

//import "testing"
import "fmt"

//import "strings"

func ExampleBetween() {
	eg(1, Between("<a>foo</a>", "<a>", "</a>"))
	eg(2, Between("<a>foo</a></a>", "<a>", "</a>"))
	eg(3, Between("<a><a>foo</a></a>", "<a>", "</a>"))
	eg(4, Between("<a><a>foo</a></a>", "<a>", "</a>"))
	eg(5, Between("<a>foo", "<a>", "</a>"))
	eg(6, Between("Some strings } are very {weird}, dont you think?", "{", "}"))
	eg(7, Between("This is ateststring", "", "test"))
	eg(8, Between("This is ateststring", "test", ""))
	// Output:
	// 1: foo
	// 2: foo
	// 3: <a>foo
	// 4: <a>foo
	// 5:
	// 6: weird
	// 7: This is a
	// 8: string
}

func ExampleBetweenF() {
	eg(1, Pipe("abc", BetweenF("a", "c")))
	// Output:
	// 1: b
}

func ExampleCamelize() {
	eg(1, Camelize("data_rate"))
	eg(2, Camelize("background-color"))
	eg(3, Camelize("-moz-something"))
	eg(4, Camelize("_car_speed_"))
	eg(5, Camelize("yes_we_can"))
	// Output:
	// 1: dataRate
	// 2: backgroundColor
	// 3: MozSomething
	// 4: CarSpeed
	// 5: yesWeCan
}

func ExampleCapitalize() {
	eg(1, Capitalize("abc"))
	eg(2, Capitalize("ABC"))
	// Output:
	// 1: Abc
	// 2: Abc
}

func ExampleCharAt() {
	eg(1, CharAt("abc", 1))
	eg(2, CharAt("", -1))
	eg(3, CharAt("", 0))
	eg(4, CharAt("", 10))
	eg(5, CharAt("abc", -1))
	eg(6, CharAt("abc", 10))
	// Output:
	// 1: b
	// 2:
	// 3:
	// 4:
	// 5:
	// 6:
}

func ExampleCharAtF() {
	eg(1, Pipe("abc", CharAtF(1)))
	// Output:
	// 1: b
}

func ExampleChompLeft() {
	eg(1, ChompLeft("foobar", "foo"))
	eg(2, ChompLeft("foobar", "bar"))
	eg(3, ChompLeft("", "foo"))
	eg(4, ChompLeft("", ""))
	eg(5, ChompLeft("foo", ""))
	// Output:
	// 1: bar
	// 2: foobar
	// 3:
	// 4:
	// 5: foo
}

func ExampleChompLeftF() {
	eg(1, Pipe("abc", ChompLeftF("ab")))
	// Output:
	// 1: c
}

func ExampleChompRight() {
	eg(1, ChompRight("foobar", "foo"))
	eg(2, ChompRight("foobar", "bar"))
	eg(3, ChompRight("", "foo"))
	eg(4, ChompRight("", ""))
	// Output:
	// 1: foobar
	// 2: foo
	// 3:
	// 4:
}

func ExampleChompRightF() {
	eg(1, Pipe("abc", ChompRightF("bc")))
	// Output:
	// 1: a
}

func ExampleClassify() {
	eg(1, Classify("data_rate"))
	eg(2, Classify("background-color"))
	eg(3, Classify("-moz-something"))
	eg(4, Classify("_car_speed_"))
	eg(5, Classify("yes_we_can"))
	// Output:
	// 1: DataRate
	// 2: BackgroundColor
	// 3: MozSomething
	// 4: CarSpeed
	// 5: YesWeCan
}

func ExampleClean() {
	eg(1, Clean("clean"))
	eg(2, Clean(""))
	eg(3, Clean(" please\t clean \t \n me "))
	// Output:
	// 1: clean
	// 2:
	// 3: please clean me
}

func ExampleDasherize() {
	eg(1, Dasherize("dataRate"))
	eg(2, Dasherize("CarSpeed"))
	eg(3, Dasherize("yesWeCan"))
	eg(4, Dasherize(""))
	eg(5, Dasherize("ABC"))
	// Output:
	// 1: data-rate
	// 2: -car-speed
	// 3: yes-we-can
	// 4:
	// 5: -a-b-c
}

func ExampleDecodeHTMLEntities() {
	eg(1, DecodeHTMLEntities("Ken Thompson & Dennis Ritchie"))
	eg(2, DecodeHTMLEntities("3 < 4"))
	eg(3, DecodeHTMLEntities("http://"))
	// Output:
	// 1: Ken Thompson & Dennis Ritchie
	// 2: 3 < 4
	// 3: http://
}

func ExampleEnsurePrefix() {
	eg(1, EnsurePrefix("foobar", "foo"))
	eg(2, EnsurePrefix("bar", "foo"))
	eg(3, EnsurePrefix("", ""))
	eg(4, EnsurePrefix("foo", ""))
	eg(5, EnsurePrefix("", "foo"))
	// Output:
	// 1: foobar
	// 2: foobar
	// 3:
	// 4: foo
	// 5: foo
}

func ExampleEnsurePrefixF() {
	eg(1, Pipe("dir", EnsurePrefixF("./")))
	// Output:
	// 1: ./dir
}

func ExampleEnsureSuffix() {
	eg(1, EnsureSuffix("foobar", "bar"))
	eg(2, EnsureSuffix("foo", "bar"))
	eg(3, EnsureSuffix("", ""))
	eg(4, EnsureSuffix("foo", ""))
	eg(5, EnsureSuffix("", "bar"))
	// Output:
	// 1: foobar
	// 2: foobar
	// 3:
	// 4: foo
	// 5: bar
}

func ExampleHumanize() {
	eg(1, Humanize("the_humanize_string_method"))
	eg(2, Humanize("ThehumanizeStringMethod"))
	eg(3, Humanize("the humanize string method"))
	// Output:
	// 1: The humanize string method
	// 2: Thehumanize string method
	// 3: The humanize string method
}

func ExampleIif() {
	eg(1, Iif(true, "T", "F"))
	eg(2, Iif(false, "T", "F"))
	// Output:
	// 1: T
	// 2: F
}

func ExampleIndexOf() {
	eg(1, IndexOf("abcdef", "a", 0))
	eg(2, IndexOf("abcdef", "a", 3))
	eg(3, IndexOf("abcdef", "a", -2))
	eg(4, IndexOf("abcdef", "a", 10))
	eg(5, IndexOf("", "a", 0))
	eg(6, IndexOf("abcdef", "", 2))
	eg(7, IndexOf("abcdef", "", 1000))
	// Output:
	// 1: 0
	// 2: -1
	// 3: -1
	// 4: -1
	// 5: -1
	// 6: 2
	// 7: 6
}

func ExampleIsAlpha() {
	eg(1, IsAlpha("afaf"))
	eg(2, IsAlpha("FJslfjkasfs"))
	eg(3, IsAlpha("áéúóúÁÉÍÓÚãõÃÕàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛäëïöüÄËÏÖÜçÇ"))
	eg(4, IsAlpha("adflj43faljsdf"))
	eg(5, IsAlpha("33"))
	eg(6, IsAlpha("TT....TTTafafetstYY"))
	eg(7, IsAlpha("-áéúóúÁÉÍÓÚãõÃÕàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛäëïöüÄËÏÖÜçÇ"))
	// Output:
	// 1: true
	// 2: true
	// 3: true
	// 4: false
	// 5: false
	// 6: false
	// 7: false
}

func eg(index int, example interface{}) {
	output := fmt.Sprintf("%d: %v", index, example)
	fmt.Printf("%s\n", Clean(output))
}

func ExampleIsAlphaNumeric() {
	eg(1, IsAlphaNumeric("afaf35353afaf"))
	eg(2, IsAlphaNumeric("FFFF99fff"))
	eg(3, IsAlphaNumeric("99"))
	eg(4, IsAlphaNumeric("afff"))
	eg(5, IsAlphaNumeric("Infinity"))
	eg(6, IsAlphaNumeric("áéúóúÁÉÍÓÚãõÃÕàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛäëïöüÄËÏÖÜçÇ1234567890"))
	eg(7, IsAlphaNumeric("-Infinity"))
	eg(8, IsAlphaNumeric("-33"))
	eg(9, IsAlphaNumeric("aaff.."))
	eg(10, IsAlphaNumeric(".áéúóúÁÉÍÓÚãõÃÕàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛäëïöüÄËÏÖÜçÇ1234567890"))
	// Output:
	// 1: true
	// 2: true
	// 3: true
	// 4: true
	// 5: true
	// 6: true
	// 7: false
	// 8: false
	// 9: false
	// 10: false
}

func ExampleIsEmpty() {
	eg(1, IsEmpty(" "))
	eg(2, IsEmpty("\t\t\t "))
	eg(3, IsEmpty("\t\n "))
	eg(4, IsEmpty("hi"))
	// Output:
	// 1: true
	// 2: true
	// 3: true
	// 4: false
}

func ExampleIsLower() {
	eg(1, IsLower("a"))
	eg(2, IsLower("A"))
	eg(3, IsLower("abc"))
	eg(4, IsLower("aBc"))
	eg(5, IsLower("áéúóúãõàèìòùâêîôûäëïöüç"))
	eg(6, IsLower("hi jp"))
	eg(7, IsLower("ÁÉÍÓÚÃÕÀÈÌÒÙÂÊÎÔÛÄËÏÖÜÇ"))
	eg(8, IsLower("áéúóúãõàèìòùâêîôûäëïöüçÁ"))
	eg(9, IsLower("áéúóúãõàèìòùâêîôû äëïöüç"))
	// Output:
	// 1: true
	// 2: false
	// 3: true
	// 4: false
	// 5: true
	// 6: false
	// 7: false
	// 8: false
	// 9: false
}

func ExampleIsNumeric() {
	eg(1, IsNumeric("3"))
	eg(2, IsNumeric("34.22"))
	eg(3, IsNumeric("-22.33"))
	eg(4, IsNumeric("NaN"))
	eg(5, IsNumeric("Infinity"))
	eg(6, IsNumeric("-Infinity"))
	eg(7, IsNumeric("JP"))
	eg(8, IsNumeric("-5"))
	eg(9, IsNumeric("00099242424"))
	// Output:
	// 1: true
	// 2: false
	// 3: false
	// 4: false
	// 5: false
	// 6: false
	// 7: false
	// 8: false
	// 9: true
}

func ExampleIsUpper() {
	eg(1, IsUpper("a"))
	eg(2, IsUpper("A"))
	eg(3, IsUpper("ABC"))
	eg(4, IsUpper("aBc"))
	eg(5, IsUpper("áéúóúãõàèìòùâêîôûäëïöüç"))
	eg(6, IsUpper("HI JP"))
	eg(7, IsUpper("ÁÉÍÓÚÃÕÀÈÌÒÙÂÊÎÔÛÄËÏÖÜÇ"))
	eg(8, IsUpper("áéúóúãõàèìòùâêîôûäëïöüçÁ"))
	eg(9, IsUpper("ÁÉÍÓÚÃÕÀÈÌÒÙÂÊÎ ÔÛÄËÏÖÜÇ"))
	// Output:
	// 1: false
	// 2: true
	// 3: true
	// 4: false
	// 5: false
	// 6: false
	// 7: true
	// 8: false
	// 9: false
}

func ExampleLeft() {
	eg(1, Left("abcdef", 0))
	eg(2, Left("abcdef", 1))
	eg(3, Left("abcdef", 4))
	eg(4, Left("abcdef", -2))
	// Output:
	// 1:
	// 2: a
	// 3: abcd
	// 4: ef
}

func ExampleLeftOf() {
	eg(1, LeftOf("abcdef", "def"))
	eg(2, LeftOf("abcdef", "abc"))
	eg(3, LeftOf("abcdef", ""))
	eg(4, LeftOf("", "abc"))
	eg(5, LeftOf("abcdef", "xyz"))
	// Output:
	// 1: abc
	// 2:
	// 3: abcdef
	// 4:
	// 5:
}

func ExampleLines() {
	eg(1, Lines("a\r\nb\nc\r\n"))
	eg(2, Lines("a\r\nb\nc\r\nd"))
	// Output:
	// 1: [a b c ]
	// 2: [a b c d]
}

func ExampleMatch() {
	eg(1, Match("foobar", `^fo.*r$`))
	eg(2, Match("foobar", `^fo.*x$`))
	eg(3, Match("", `^fo.*x$`))
	// Output:
	// 1: true
	// 2: false
	// 3: false
}

func ExamplePad() {
	eg(1, Pad("hello", "x", 5))
	eg(2, Pad("hello", "x", 10))
	eg(3, Pad("hello", "x", 11))
	eg(4, Pad("hello", "x", 6))
	eg(5, Pad("hello", "x", 1))
	// Output:
	// 1: hello
	// 2: xxxhelloxx
	// 3: xxxhelloxxx
	// 4: xhello
	// 5: hello
}

func ExamplePadLeft() {
	eg(1, PadLeft("hello", "x", 5))
	eg(2, PadLeft("hello", "x", 10))
	eg(3, PadLeft("hello", "x", 11))
	eg(4, PadLeft("hello", "x", 6))
	eg(5, PadLeft("hello", "x", 1))
	// Output:
	// 1: hello
	// 2: xxxxxhello
	// 3: xxxxxxhello
	// 4: xhello
	// 5: hello
}

func ExamplePadRight() {
	eg(1, PadRight("hello", "x", 5))
	eg(2, PadRight("hello", "x", 10))
	eg(3, PadRight("hello", "x", 11))
	eg(4, PadRight("hello", "x", 6))
	eg(5, PadRight("hello", "x", 1))
	// Output:
	// 1: hello
	// 2: helloxxxxx
	// 3: helloxxxxxx
	// 4: hellox
	// 5: hello
}

func ExamplePipe() {
	eg(1, Pipe("\nabcdef \n", Clean, BetweenF("a", "f"), ChompLeftF("bc")))
	// Output:
	// 1: de
}

func ExampleReplaceF() {
	eg(1, Pipe("abcdefab", ReplaceF("ab", "x", -1)))
	eg(2, Pipe("abcdefab", ReplaceF("ab", "x", 1)))
	eg(3, Pipe("abcdefab", ReplaceF("ab", "x", 0)))
	// Output:
	// 1: xcdefx
	// 2: xcdefab
	// 3: abcdefab
}

func ExampleReplacePattern() {
	eg(1, ReplacePattern("aabbcc", `a`, "x"))
	// Output:
	// 1: xxbbcc
}

func ExampleReplacePatternF() {
	eg(1, Pipe("aabbcc", ReplacePatternF(`a`, "x")))
	// Output:
	// 1: xxbbcc
}

func ExampleReverse() {
	eg(1, Reverse("abc"))
	eg(2, Reverse("中文"))
	// Output:
	// 1: cba
	// 2: 文中
}

func ExampleRight() {
	eg(1, Right("abcdef", 0))
	eg(2, Right("abcdef", 1))
	eg(3, Right("abcdef", 4))
	eg(4, Right("abcdef", -2))
	// Output:
	// 1:
	// 2: f
	// 3: cdef
	// 4: ab
}

func ExampleRightOf() {
	eg(1, RightOf("abcdef", "abc"))
	eg(2, RightOf("abcdef", "def"))
	eg(3, RightOf("abcdef", ""))
	eg(4, RightOf("", "abc"))
	eg(5, RightOf("abcdef", "xyz"))
	// Output:
	// 1: def
	// 2:
	// 3: abcdef
	// 4:
	// 5:
}

func ExampleRightF() {
	eg(1, Pipe("abcdef", RightF(3)))
	// Output:
	// 1: def
}

func ExampleSliceContains() {
	eg(1, SliceContains([]string{"foo", "bar"}, "foo"))
	eg(2, SliceContains(nil, "foo"))
	eg(3, SliceContains([]string{"foo", "bar"}, "bah"))
	eg(4, SliceContains([]string{"foo", "bar"}, ""))
	// Output:
	// 1: true
	// 2: false
	// 3: false
	// 4: false
}

func ExampleSliceIndexOf() {
	eg(1, SliceIndexOf([]string{"foo", "bar"}, "foo"))
	eg(2, SliceIndexOf(nil, "foo"))
	eg(3, SliceIndexOf([]string{"foo", "bar"}, "bah"))
	eg(4, SliceIndexOf([]string{"foo", "bar"}, ""))
	eg(5, SliceIndexOf([]string{"foo", "bar"}, "bar"))
	// Output:
	// 1: 0
	// 2: -1
	// 3: -1
	// 4: -1
	// 5: 1
}

func ExampleSlugify() {
	eg(1, Slugify("foo bar"))
	eg(2, Slugify("foo/bar bah"))
	eg(3, Slugify("foo-bar--bah"))
	// Output:
	// 1: foo-bar
	// 2: foobar-bah
	// 3: foo-bar-bah
}

func ExampleStripPunctuation() {
	eg(1, StripPunctuation("My, st[ring] *full* of %punct)"))
	// Output:
	// 1: My string full of punct
}

func ExampleStripTags() {
	eg(1, StripTags("<p>just <b>some</b> text</p>"))
	eg(2, StripTags("<p>just <b>some</b> text</p>", "p"))
	eg(3, StripTags("<a><p>just <b>some</b> text</p></a>", "a", "p"))
	eg(4, StripTags("<a><p>just <b>some</b> text</p></a>", "b"))
	// Output:
	// 1: just some text
	// 2: just <b>some</b> text
	// 3: just <b>some</b> text
	// 4: <a><p>just some text</p></a>
}

func ExampleSubstr() {
	eg(1, Substr("abcdef", 2, -1))
	eg(2, Substr("abcdef", 2, 0))
	eg(3, Substr("abcdef", 2, 1))
	eg(4, Substr("abcdef", 2, 3))
	eg(5, Substr("abcdef", 2, 4))
	eg(6, Substr("abcdef", 2, 100))
	eg(7, Substr("abcdef", 0, 1))
	// Output:
	// 1:
	// 2:
	// 3: c
	// 4: cde
	// 5: cdef
	// 6: cdef
	// 7: a
}

func ExampleTemplateWithDelimiters() {
	eg(1, TemplateWithDelimiters("Hello {{name}} at {{date-year}}", map[string]interface{}{"name": "foo", "date-year": 2014}, "{{", "}}"))
	eg(2, TemplateWithDelimiters("Hello #{name} at #{date-year}", map[string]interface{}{"name": "foo", "date-year": 2014}, "#{", "}"))
	eg(3, TemplateWithDelimiters("Hello (name) at (date-year)", map[string]interface{}{"name": "foo", "date-year": 2014}, "(", ")"))
	eg(4, TemplateWithDelimiters("Hello [name] at [date-year]", map[string]interface{}{"name": "foo", "date-year": 2014}, "[", "]"))
	eg(5, TemplateWithDelimiters("Hello *name* at *date-year*", map[string]interface{}{"name": "foo", "date-year": 2014}, "*", "*"))
	eg(6, TemplateWithDelimiters("Hello $name$ at $date-year$", map[string]interface{}{"name": "foo", "date-year": 2014}, "$", "$"))
	// Output:
	// 1: Hello foo at 2014
	// 2: Hello foo at 2014
	// 3: Hello foo at 2014
	// 4: Hello foo at 2014
	// 5: Hello foo at 2014
	// 6: Hello foo at 2014
}

func ExampleTemplate() {
	eg(1, Template("Hello {{name}} at {{date-year}}", map[string]interface{}{"name": "foo", "date-year": 2014}))
	eg(2, Template("Hello {{name}}", map[string]interface{}{"name": ""}))
	SetTemplateDelimiters("{", "}")
	eg(3, Template("Hello {name} at {date-year}", map[string]interface{}{"name": "foo", "date-year": 2014}))
	// Output:
	// 1: Hello foo at 2014
	// 2: Hello
	// 3: Hello foo at 2014
}

func ExampleToArgv() {
	eg(1, QuoteItems(ToArgv(`GO_ENV=test gosu --watch foo@release "some quoted string 'inside'"`)))
	eg(2, QuoteItems(ToArgv(`gosu foo\ bar`)))
	eg(3, QuoteItems(ToArgv(`gosu --test="some arg" -w -s a=123`)))
	// Output:
	// 1: ["GO_ENV=test" "gosu" "--watch" "foo@release" "some quoted string 'inside'"]
	// 2: ["gosu" "foo bar"]
	// 3: ["gosu" "--test=some arg" "-w" "-s" "a=123"]
}

func ExampleToBool() {
	eg(1, ToBool("true"))
	eg(2, ToBool("yes"))
	eg(3, ToBool("1"))
	eg(4, ToBool("on"))
	eg(5, ToBool("false"))
	eg(6, ToBool("no"))
	eg(7, ToBool("0"))
	eg(8, ToBool("off"))
	eg(9, ToBool(""))
	eg(10, ToBool("?"))
	// Output:
	// 1: true
	// 2: true
	// 3: true
	// 4: true
	// 5: false
	// 6: false
	// 7: false
	// 8: false
	// 9: false
	// 10: false
}

func ExampleToBoolOr() {
	eg(1, ToBoolOr("foo", true))
	eg(2, ToBoolOr("foo", false))
	eg(3, ToBoolOr("true", false))
	eg(4, ToBoolOr("", true))
	// Output:
	// 1: true
	// 2: false
	// 3: true
	// 4: true
}

func ExampleToIntOr() {
	eg(1, ToIntOr("foo", 0))
	eg(2, ToIntOr("", 1))
	eg(3, ToIntOr("100", 0))
	eg(4, ToIntOr("-1", 1))
	// Output:
	// 1: 0
	// 2: 1
	// 3: 100
	// 4: -1
}

func ExampleUnderscore() {
	eg(1, Underscore("fooBar"))
	eg(2, Underscore("FooBar"))
	eg(3, Underscore(""))
	eg(4, Underscore("x"))
	// Output:
	// 1: foo_bar
	// 2: _foo_bar
	// 3:
	// 4: x
}

func ExampleWrapHTML() {
	eg(1, WrapHTML("foo", "span", nil))
	eg(2, WrapHTML("foo", "", nil))
	eg(3, WrapHTML("foo", "", map[string]string{"class": "bar"}))
	// Output:
	// 1: <span>foo</span>
	// 2: <div>foo</div>
	// 3: <div class="bar">foo</div>
}

func ExampleWrapHTMLF() {
	eg(1, Pipe("foo", WrapHTMLF("div", nil)))
	// Output:
	// 1: <div>foo</div>
}