Move macaron to chi (#14293)

Use [chi](https://github.com/go-chi/chi) instead of the forked [macaron](https://gitea.com/macaron/macaron). Since macaron and chi conflict over session sharing, this big PR became unavoidable. My original idea was that we could replace macaron step by step, but that turned out to be wrong. :( Below is a list of the big changes in this PR.

- [x] Define a `context.ResponseWriter` interface with an implementation, `context.Response`.
- [x] Use chi instead of macaron, along with a custom `Route` wrapper around chi so that router usage stays similar to before.
- [x] Create separate routers for `web`, `api`, `internal` and `install` so that the code is clearer and has no magic.
- [x] Use https://github.com/unrolled/render instead of macaron's internal render (a short usage sketch follows this list)
- [x] Use https://github.com/NYTimes/gziphandler instead of https://gitea.com/macaron/gzip
- [x] Use https://gitea.com/go-chi/session, which is a modified version of https://gitea.com/macaron/session with `nodb` support removed since it will no longer be maintained. **BREAK**
- [x] Use https://gitea.com/go-chi/captcha which is a modified version of https://gitea.com/macaron/captcha
- [x] Use https://gitea.com/go-chi/cache which is a modified version of https://gitea.com/macaron/cache
- [x] Use https://gitea.com/go-chi/binding which is a modified version of https://gitea.com/macaron/binding
- [x] Use https://github.com/go-chi/cors instead of https://gitea.com/macaron/cors
- [x] Drop https://gitea.com/macaron/i18n and add a new implementation in `code.gitea.io/gitea/modules/translation`
- [x] Move validation form structs from `code.gitea.io/gitea/modules/auth` to `code.gitea.io/gitea/modules/forms` to avoid a dependency cycle.
- [x] Remove the macaron log service because it is no longer needed. **BREAK**
- [x] All form structs now have to be retrieved with `web.GetForm(ctx)` inside the route handler instead of being passed as a function parameter in the route definition (see the sketch after this list).
- [x] Move Git HTTP protocol implementation to use routers directly.
- [x] Fix the problem that chi routes don't support trailing slashes, while macaron did.
- [x] `/api/v1/swagger` now redirects to `/api/swagger` instead of rendering directly, so that `APIContext` does not need to create an HTML renderer.
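As a rough, self-contained sketch of the routing and form pattern described above, here is what it can look like on plain chi. The `CreateIssueForm` struct and the `bindForm`/`getForm` helpers are hypothetical stand-ins for Gitea's binding module and the `web.GetForm(ctx)` accessor mentioned in the list, not the actual implementation:

```go
package main

import (
	"context"
	"net/http"

	"github.com/go-chi/chi"
)

// CreateIssueForm is a hypothetical form struct used purely for illustration.
type CreateIssueForm struct {
	Title string
}

type formKey struct{}

// bindForm stands in for the binding middleware: it parses the request and
// stores the form on the request context instead of passing it to the handler.
func bindForm(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_ = r.ParseForm()
		form := &CreateIssueForm{Title: r.FormValue("title")}
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), formKey{}, form)))
	})
}

// getForm mirrors the role of web.GetForm(ctx): the handler pulls the bound
// form back out of the context.
func getForm(r *http.Request) *CreateIssueForm {
	form, _ := r.Context().Value(formKey{}).(*CreateIssueForm)
	return form
}

func main() {
	r := chi.NewRouter()
	// The handler retrieves the form inside its body rather than receiving it
	// as a parameter in the route definition.
	r.With(bindForm).Post("/issues/new", func(w http.ResponseWriter, req *http.Request) {
		form := getForm(req)
		_, _ = w.Write([]byte("created: " + form.Title))
	})
	_ = http.ListenAndServe(":3000", r)
}
```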
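And a minimal sketch of the unrolled/render API that replaces macaron's internal render; the template directory and template name below are assumptions, not Gitea's real layout:

```go
package main

import (
	"net/http"

	"github.com/unrolled/render"
)

func main() {
	// Assumes a "templates" directory containing home.tmpl; both names are
	// placeholders for this sketch.
	rnd := render.New(render.Options{
		Directory:  "templates",
		Extensions: []string{".tmpl"},
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// HTML renders the named template with the given status code and data.
		_ = rnd.HTML(w, http.StatusOK, "home", map[string]interface{}{"Title": "Gitea"})
	})
	_ = http.ListenAndServe(":3000", nil)
}
```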

Notices:
- The chi router doesn't support requests with a trailing slash (see the sketch below for one way to handle this)
- The integration test `TestUserHeatmap` may be MySQL-version related. It fails on my macOS (MySQL 5.7.29 installed via brew) but succeeds on CI.
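One way to get back macaron-style tolerance of trailing slashes on a chi router is chi's own `middleware.StripSlashes`; the sketch below is only illustrative and not necessarily how this PR handled it:

```go
package main

import (
	"net/http"

	"github.com/go-chi/chi"
	"github.com/go-chi/chi/middleware"
)

func main() {
	r := chi.NewRouter()
	// StripSlashes removes a trailing slash from the request path before
	// routing, so /repos and /repos/ hit the same route.
	r.Use(middleware.StripSlashes)
	r.Get("/repos", func(w http.ResponseWriter, req *http.Request) {
		_, _ = w.Write([]byte("repo list"))
	})
	_ = http.ListenAndServe(":3000", r)
}
```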

Co-authored-by: 6543 <6543@obermui.de>
Lunny Xiao 2021-01-26 23:36:53 +08:00 committed by GitHub
parent 3adbbb4255
commit 6433ba0ec3
353 changed files with 5463 additions and 20785 deletions

1
vendor/github.com/NYTimes/gziphandler/.gitignore generated vendored Normal file

@@ -0,0 +1 @@
*.swp

10
vendor/github.com/NYTimes/gziphandler/.travis.yml generated vendored Normal file

@@ -0,0 +1,10 @@
language: go
go:
- 1.x
- tip
env:
- GO111MODULE=on
install:
- go mod download
script:
- go test -race -v


@@ -0,0 +1,75 @@
---
layout: code-of-conduct
version: v1.0
---
This code of conduct outlines our expectations for participants within the **NYTimes/gziphandler** community, as well as steps to reporting unacceptable behavior. We are committed to providing a welcoming and inspiring community for all and expect our code of conduct to be honored. Anyone who violates this code of conduct may be banned from the community.
Our open source community strives to:
* **Be friendly and patient.**
* **Be welcoming**: We strive to be a community that welcomes and supports people of all backgrounds and identities. This includes, but is not limited to members of any race, ethnicity, culture, national origin, colour, immigration status, social and economic class, educational level, sex, sexual orientation, gender identity and expression, age, size, family status, political belief, religion, and mental and physical ability.
* **Be considerate**: Your work will be used by other people, and you in turn will depend on the work of others. Any decision you take will affect users and colleagues, and you should take those consequences into account when making decisions. Remember that we're a world-wide community, so you might not be communicating in someone else's primary language.
* **Be respectful**: Not all of us will agree all the time, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It's important to remember that a community where people feel uncomfortable or threatened is not a productive one.
* **Be careful in the words that we choose**: we are a community of professionals, and we conduct ourselves professionally. Be kind to others. Do not insult or put down other participants. Harassment and other exclusionary behavior aren't acceptable.
* **Try to understand why we disagree**: Disagreements, both social and technical, happen all the time. It is important that we resolve disagreements and differing views constructively. Remember that we're different. The strength of our community comes from its diversity, people from a wide range of backgrounds. Different people have different perspectives on issues. Being unable to understand why someone holds a viewpoint doesn't mean that they're wrong. Don't forget that it is human to err and blaming each other doesn't get us anywhere. Instead, focus on helping to resolve issues and learning from mistakes.
## Definitions
Harassment includes, but is not limited to:
- Offensive comments related to gender, gender identity and expression, sexual orientation, disability, mental illness, neuro(a)typicality, physical appearance, body size, race, age, regional discrimination, political or religious affiliation
- Unwelcome comments regarding a person's lifestyle choices and practices, including those related to food, health, parenting, drugs, and employment
- Deliberate misgendering. This includes deadnaming or persistently using a pronoun that does not correctly reflect a person's gender identity. You must address people by the name they give you when not addressing them by their username or handle
- Physical contact and simulated physical contact (eg, textual descriptions like “*hug*” or “*backrub*”) without consent or after a request to stop
- Threats of violence, both physical and psychological
- Incitement of violence towards any individual, including encouraging a person to commit suicide or to engage in self-harm
- Deliberate intimidation
- Stalking or following
- Harassing photography or recording, including logging online activity for harassment purposes
- Sustained disruption of discussion
- Unwelcome sexual attention, including gratuitous or off-topic sexual images or behaviour
- Pattern of inappropriate social contact, such as requesting/assuming inappropriate levels of intimacy with others
- Continued one-on-one communication after requests to cease
- Deliberate “outing” of any aspect of a person's identity without their consent except as necessary to protect others from intentional abuse
- Publication of non-harassing private communication
Our open source community prioritizes marginalized people's safety over privileged people's comfort. We will not act on complaints regarding:
- Reverse -isms, including reverse racism, reverse sexism, and cisphobia
- Reasonable communication of boundaries, such as “leave me alone,” “go away,” or “I'm not discussing this with you”
- Refusal to explain or debate social justice concepts
- Communicating in a tone you don't find congenial
- Criticizing racist, sexist, cissexist, or otherwise oppressive behavior or assumptions
### Diversity Statement
We encourage everyone to participate and are committed to building a community for all. Although we will fail at times, we seek to treat everyone both as fairly and equally as possible. Whenever a participant has made a mistake, we expect them to take responsibility for it. If someone has been harmed or offended, it is our responsibility to listen carefully and respectfully, and do our best to right the wrong.
Although this list cannot be exhaustive, we explicitly honor diversity in age, gender, gender identity or expression, culture, ethnicity, language, national origin, political beliefs, profession, race, religion, sexual orientation, socioeconomic status, and technical ability. We will not tolerate discrimination based on any of the protected
characteristics above, including participants with disabilities.
### Reporting Issues
If you experience or witness unacceptable behavior—or have any other concerns—please report it by contacting us via **code@nytimes.com**. All reports will be handled with discretion. In your report please include:
- Your contact information.
- Names (real, nicknames, or pseudonyms) of any individuals involved. If there are additional witnesses, please
include them as well. Your account of what occurred, and if you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public IRC logger), please include a link.
- Any additional information that may be helpful.
After filing a report, a representative will contact you personally, review the incident, follow up with any additional questions, and make a decision as to how to respond. If the person who is harassing you is part of the response team, they will recuse themselves from handling your incident. If the complaint originates from a member of the response team, it will be handled by a different member of the response team. We will respect confidentiality requests for the purpose of protecting victims of abuse.
### Attribution & Acknowledgements
We all stand on the shoulders of giants across many open source communities. We'd like to thank the communities and projects that established code of conducts and diversity statements as our inspiration:
* [Django](https://www.djangoproject.com/conduct/reporting/)
* [Python](https://www.python.org/community/diversity/)
* [Ubuntu](http://www.ubuntu.com/about/about-ubuntu/conduct)
* [Contributor Covenant](http://contributor-covenant.org/)
* [Geek Feminism](http://geekfeminism.org/about/code-of-conduct/)
* [Citizen Code of Conduct](http://citizencodeofconduct.org/)
This Code of Conduct was based on https://github.com/todogroup/opencodeofconduct

30
vendor/github.com/NYTimes/gziphandler/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,30 @@
# Contributing to NYTimes/gziphandler
This is an open source project started by handful of developers at The New York Times and open to the entire Go community.
We really appreciate your help!
## Filing issues
When filing an issue, make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
## Contributing code
Before submitting changes, please follow these guidelines:
1. Check the open issues and pull requests for existing discussions.
2. Open an issue to discuss a new feature.
3. Write tests.
4. Make sure code follows the ['Go Code Review Comments'](https://github.com/golang/go/wiki/CodeReviewComments).
5. Make sure your changes pass `go test`.
6. Make sure the entire test suite passes locally and on Travis CI.
7. Open a Pull Request.
8. [Squash your commits](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html) after receiving feedback and add a [great commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
Unless otherwise noted, the gziphandler source files are distributed under the Apache 2.0-style license found in the LICENSE.md file.

201
vendor/github.com/NYTimes/gziphandler/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016-2017 The New York Times Company
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

56
vendor/github.com/NYTimes/gziphandler/README.md generated vendored Normal file

@@ -0,0 +1,56 @@
Gzip Handler
============
This is a tiny Go package which wraps HTTP handlers to transparently gzip the
response body, for clients which support it. Although it's usually simpler to
leave that to a reverse proxy (like nginx or Varnish), this package is useful
when that's undesirable.
## Install
```bash
go get -u github.com/NYTimes/gziphandler
```
## Usage
Call `GzipHandler` with any handler (an object which implements the
`http.Handler` interface), and it'll return a new handler which gzips the
response. For example:
```go
package main
import (
"io"
"net/http"
"github.com/NYTimes/gziphandler"
)
func main() {
withoutGz := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
io.WriteString(w, "Hello, World")
})
withGz := gziphandler.GzipHandler(withoutGz)
http.Handle("/", withGz)
http.ListenAndServe("0.0.0.0:8000", nil)
}
```
## Documentation
The docs can be found at [godoc.org][docs], as usual.
## License
[Apache 2.0][license].
[docs]: https://godoc.org/github.com/NYTimes/gziphandler
[license]: https://github.com/NYTimes/gziphandler/blob/master/LICENSE

5
vendor/github.com/NYTimes/gziphandler/go.mod generated vendored Normal file

@@ -0,0 +1,5 @@
module github.com/NYTimes/gziphandler
go 1.11
require github.com/stretchr/testify v1.3.0

7
vendor/github.com/NYTimes/gziphandler/go.sum generated vendored Normal file

@@ -0,0 +1,7 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=

532
vendor/github.com/NYTimes/gziphandler/gzip.go generated vendored Normal file

@@ -0,0 +1,532 @@
package gziphandler // import "github.com/NYTimes/gziphandler"
import (
"bufio"
"compress/gzip"
"fmt"
"io"
"mime"
"net"
"net/http"
"strconv"
"strings"
"sync"
)
const (
vary = "Vary"
acceptEncoding = "Accept-Encoding"
contentEncoding = "Content-Encoding"
contentType = "Content-Type"
contentLength = "Content-Length"
)
type codings map[string]float64
const (
// DefaultQValue is the default qvalue to assign to an encoding if no explicit qvalue is set.
// This is actually kind of ambiguous in RFC 2616, so hopefully it's correct.
// The examples seem to indicate that it is.
DefaultQValue = 1.0
// DefaultMinSize is the default minimum size until we enable gzip compression.
// 1500 bytes is the MTU size for the internet since that is the largest size allowed at the network layer.
// If you take a file that is 1300 bytes and compress it to 800 bytes, it's still transmitted in that same 1500 byte packet regardless, so you've gained nothing.
// That being the case, you should restrict the gzip compression to files with a size greater than a single packet, 1400 bytes (1.4KB) is a safe value.
DefaultMinSize = 1400
)
// gzipWriterPools stores a sync.Pool for each compression level for reuse of
// gzip.Writers. Use poolIndex to covert a compression level to an index into
// gzipWriterPools.
var gzipWriterPools [gzip.BestCompression - gzip.BestSpeed + 2]*sync.Pool
func init() {
for i := gzip.BestSpeed; i <= gzip.BestCompression; i++ {
addLevelPool(i)
}
addLevelPool(gzip.DefaultCompression)
}
// poolIndex maps a compression level to its index into gzipWriterPools. It
// assumes that level is a valid gzip compression level.
func poolIndex(level int) int {
// gzip.DefaultCompression == -1, so we need to treat it special.
if level == gzip.DefaultCompression {
return gzip.BestCompression - gzip.BestSpeed + 1
}
return level - gzip.BestSpeed
}
func addLevelPool(level int) {
gzipWriterPools[poolIndex(level)] = &sync.Pool{
New: func() interface{} {
// NewWriterLevel only returns error on a bad level, we are guaranteeing
// that this will be a valid level so it is okay to ignore the returned
// error.
w, _ := gzip.NewWriterLevel(nil, level)
return w
},
}
}
// GzipResponseWriter provides an http.ResponseWriter interface, which gzips
// bytes before writing them to the underlying response. This doesn't close the
// writers, so don't forget to do that.
// It can be configured to skip response smaller than minSize.
type GzipResponseWriter struct {
http.ResponseWriter
index int // Index for gzipWriterPools.
gw *gzip.Writer
code int // Saves the WriteHeader value.
minSize int // Specifed the minimum response size to gzip. If the response length is bigger than this value, it is compressed.
buf []byte // Holds the first part of the write before reaching the minSize or the end of the write.
ignore bool // If true, then we immediately passthru writes to the underlying ResponseWriter.
contentTypes []parsedContentType // Only compress if the response is one of these content-types. All are accepted if empty.
}
type GzipResponseWriterWithCloseNotify struct {
*GzipResponseWriter
}
func (w GzipResponseWriterWithCloseNotify) CloseNotify() <-chan bool {
return w.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
// Write appends data to the gzip writer.
func (w *GzipResponseWriter) Write(b []byte) (int, error) {
// GZIP responseWriter is initialized. Use the GZIP responseWriter.
if w.gw != nil {
return w.gw.Write(b)
}
// If we have already decided not to use GZIP, immediately passthrough.
if w.ignore {
return w.ResponseWriter.Write(b)
}
// Save the write into a buffer for later use in GZIP responseWriter (if content is long enough) or at close with regular responseWriter.
// On the first write, w.buf changes from nil to a valid slice
w.buf = append(w.buf, b...)
var (
cl, _ = strconv.Atoi(w.Header().Get(contentLength))
ct = w.Header().Get(contentType)
ce = w.Header().Get(contentEncoding)
)
// Only continue if they didn't already choose an encoding or a known unhandled content length or type.
if ce == "" && (cl == 0 || cl >= w.minSize) && (ct == "" || handleContentType(w.contentTypes, ct)) {
// If the current buffer is less than minSize and a Content-Length isn't set, then wait until we have more data.
if len(w.buf) < w.minSize && cl == 0 {
return len(b), nil
}
// If the Content-Length is larger than minSize or the current buffer is larger than minSize, then continue.
if cl >= w.minSize || len(w.buf) >= w.minSize {
// If a Content-Type wasn't specified, infer it from the current buffer.
if ct == "" {
ct = http.DetectContentType(w.buf)
w.Header().Set(contentType, ct)
}
// If the Content-Type is acceptable to GZIP, initialize the GZIP writer.
if handleContentType(w.contentTypes, ct) {
if err := w.startGzip(); err != nil {
return 0, err
}
return len(b), nil
}
}
}
// If we got here, we should not GZIP this response.
if err := w.startPlain(); err != nil {
return 0, err
}
return len(b), nil
}
// startGzip initializes a GZIP writer and writes the buffer.
func (w *GzipResponseWriter) startGzip() error {
// Set the GZIP header.
w.Header().Set(contentEncoding, "gzip")
// if the Content-Length is already set, then calls to Write on gzip
// will fail to set the Content-Length header since its already set
// See: https://github.com/golang/go/issues/14975.
w.Header().Del(contentLength)
// Write the header to gzip response.
if w.code != 0 {
w.ResponseWriter.WriteHeader(w.code)
// Ensure that no other WriteHeader's happen
w.code = 0
}
// Initialize and flush the buffer into the gzip response if there are any bytes.
// If there aren't any, we shouldn't initialize it yet because on Close it will
// write the gzip header even if nothing was ever written.
if len(w.buf) > 0 {
// Initialize the GZIP response.
w.init()
n, err := w.gw.Write(w.buf)
// This should never happen (per io.Writer docs), but if the write didn't
// accept the entire buffer but returned no specific error, we have no clue
// what's going on, so abort just to be safe.
if err == nil && n < len(w.buf) {
err = io.ErrShortWrite
}
return err
}
return nil
}
// startPlain writes to sent bytes and buffer the underlying ResponseWriter without gzip.
func (w *GzipResponseWriter) startPlain() error {
if w.code != 0 {
w.ResponseWriter.WriteHeader(w.code)
// Ensure that no other WriteHeader's happen
w.code = 0
}
w.ignore = true
// If Write was never called then don't call Write on the underlying ResponseWriter.
if w.buf == nil {
return nil
}
n, err := w.ResponseWriter.Write(w.buf)
w.buf = nil
// This should never happen (per io.Writer docs), but if the write didn't
// accept the entire buffer but returned no specific error, we have no clue
// what's going on, so abort just to be safe.
if err == nil && n < len(w.buf) {
err = io.ErrShortWrite
}
return err
}
// WriteHeader just saves the response code until close or GZIP effective writes.
func (w *GzipResponseWriter) WriteHeader(code int) {
if w.code == 0 {
w.code = code
}
}
// init graps a new gzip writer from the gzipWriterPool and writes the correct
// content encoding header.
func (w *GzipResponseWriter) init() {
// Bytes written during ServeHTTP are redirected to this gzip writer
// before being written to the underlying response.
gzw := gzipWriterPools[w.index].Get().(*gzip.Writer)
gzw.Reset(w.ResponseWriter)
w.gw = gzw
}
// Close will close the gzip.Writer and will put it back in the gzipWriterPool.
func (w *GzipResponseWriter) Close() error {
if w.ignore {
return nil
}
if w.gw == nil {
// GZIP not triggered yet, write out regular response.
err := w.startPlain()
// Returns the error if any at write.
if err != nil {
err = fmt.Errorf("gziphandler: write to regular responseWriter at close gets error: %q", err.Error())
}
return err
}
err := w.gw.Close()
gzipWriterPools[w.index].Put(w.gw)
w.gw = nil
return err
}
// Flush flushes the underlying *gzip.Writer and then the underlying
// http.ResponseWriter if it is an http.Flusher. This makes GzipResponseWriter
// an http.Flusher.
func (w *GzipResponseWriter) Flush() {
if w.gw == nil && !w.ignore {
// Only flush once startGzip or startPlain has been called.
//
// Flush is thus a no-op until we're certain whether a plain
// or gzipped response will be served.
return
}
if w.gw != nil {
w.gw.Flush()
}
if fw, ok := w.ResponseWriter.(http.Flusher); ok {
fw.Flush()
}
}
// Hijack implements http.Hijacker. If the underlying ResponseWriter is a
// Hijacker, its Hijack method is returned. Otherwise an error is returned.
func (w *GzipResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
if hj, ok := w.ResponseWriter.(http.Hijacker); ok {
return hj.Hijack()
}
return nil, nil, fmt.Errorf("http.Hijacker interface is not supported")
}
// verify Hijacker interface implementation
var _ http.Hijacker = &GzipResponseWriter{}
// MustNewGzipLevelHandler behaves just like NewGzipLevelHandler except that in
// an error case it panics rather than returning an error.
func MustNewGzipLevelHandler(level int) func(http.Handler) http.Handler {
wrap, err := NewGzipLevelHandler(level)
if err != nil {
panic(err)
}
return wrap
}
// NewGzipLevelHandler returns a wrapper function (often known as middleware)
// which can be used to wrap an HTTP handler to transparently gzip the response
// body if the client supports it (via the Accept-Encoding header). Responses will
// be encoded at the given gzip compression level. An error will be returned only
// if an invalid gzip compression level is given, so if one can ensure the level
// is valid, the returned error can be safely ignored.
func NewGzipLevelHandler(level int) (func(http.Handler) http.Handler, error) {
return NewGzipLevelAndMinSize(level, DefaultMinSize)
}
// NewGzipLevelAndMinSize behave as NewGzipLevelHandler except it let the caller
// specify the minimum size before compression.
func NewGzipLevelAndMinSize(level, minSize int) (func(http.Handler) http.Handler, error) {
return GzipHandlerWithOpts(CompressionLevel(level), MinSize(minSize))
}
func GzipHandlerWithOpts(opts ...option) (func(http.Handler) http.Handler, error) {
c := &config{
level: gzip.DefaultCompression,
minSize: DefaultMinSize,
}
for _, o := range opts {
o(c)
}
if err := c.validate(); err != nil {
return nil, err
}
return func(h http.Handler) http.Handler {
index := poolIndex(c.level)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Add(vary, acceptEncoding)
if acceptsGzip(r) {
gw := &GzipResponseWriter{
ResponseWriter: w,
index: index,
minSize: c.minSize,
contentTypes: c.contentTypes,
}
defer gw.Close()
if _, ok := w.(http.CloseNotifier); ok {
gwcn := GzipResponseWriterWithCloseNotify{gw}
h.ServeHTTP(gwcn, r)
} else {
h.ServeHTTP(gw, r)
}
} else {
h.ServeHTTP(w, r)
}
})
}, nil
}
// Parsed representation of one of the inputs to ContentTypes.
// See https://golang.org/pkg/mime/#ParseMediaType
type parsedContentType struct {
mediaType string
params map[string]string
}
// equals returns whether this content type matches another content type.
func (pct parsedContentType) equals(mediaType string, params map[string]string) bool {
if pct.mediaType != mediaType {
return false
}
// if pct has no params, don't care about other's params
if len(pct.params) == 0 {
return true
}
// if pct has any params, they must be identical to other's.
if len(pct.params) != len(params) {
return false
}
for k, v := range pct.params {
if w, ok := params[k]; !ok || v != w {
return false
}
}
return true
}
// Used for functional configuration.
type config struct {
minSize int
level int
contentTypes []parsedContentType
}
func (c *config) validate() error {
if c.level != gzip.DefaultCompression && (c.level < gzip.BestSpeed || c.level > gzip.BestCompression) {
return fmt.Errorf("invalid compression level requested: %d", c.level)
}
if c.minSize < 0 {
return fmt.Errorf("minimum size must be more than zero")
}
return nil
}
type option func(c *config)
func MinSize(size int) option {
return func(c *config) {
c.minSize = size
}
}
func CompressionLevel(level int) option {
return func(c *config) {
c.level = level
}
}
// ContentTypes specifies a list of content types to compare
// the Content-Type header to before compressing. If none
// match, the response will be returned as-is.
//
// Content types are compared in a case-insensitive, whitespace-ignored
// manner.
//
// A MIME type without any other directive will match a content type
// that has the same MIME type, regardless of that content type's other
// directives. I.e., "text/html" will match both "text/html" and
// "text/html; charset=utf-8".
//
// A MIME type with any other directive will only match a content type
// that has the same MIME type and other directives. I.e.,
// "text/html; charset=utf-8" will only match "text/html; charset=utf-8".
//
// By default, responses are gzipped regardless of
// Content-Type.
func ContentTypes(types []string) option {
return func(c *config) {
c.contentTypes = []parsedContentType{}
for _, v := range types {
mediaType, params, err := mime.ParseMediaType(v)
if err == nil {
c.contentTypes = append(c.contentTypes, parsedContentType{mediaType, params})
}
}
}
}
// GzipHandler wraps an HTTP handler, to transparently gzip the response body if
// the client supports it (via the Accept-Encoding header). This will compress at
// the default compression level.
func GzipHandler(h http.Handler) http.Handler {
wrapper, _ := NewGzipLevelHandler(gzip.DefaultCompression)
return wrapper(h)
}
// acceptsGzip returns true if the given HTTP request indicates that it will
// accept a gzipped response.
func acceptsGzip(r *http.Request) bool {
acceptedEncodings, _ := parseEncodings(r.Header.Get(acceptEncoding))
return acceptedEncodings["gzip"] > 0.0
}
// returns true if we've been configured to compress the specific content type.
func handleContentType(contentTypes []parsedContentType, ct string) bool {
// If contentTypes is empty we handle all content types.
if len(contentTypes) == 0 {
return true
}
mediaType, params, err := mime.ParseMediaType(ct)
if err != nil {
return false
}
for _, c := range contentTypes {
if c.equals(mediaType, params) {
return true
}
}
return false
}
// parseEncodings attempts to parse a list of codings, per RFC 2616, as might
// appear in an Accept-Encoding header. It returns a map of content-codings to
// quality values, and an error containing the errors encountered. It's probably
// safe to ignore those, because silently ignoring errors is how the internet
// works.
//
// See: http://tools.ietf.org/html/rfc2616#section-14.3.
func parseEncodings(s string) (codings, error) {
c := make(codings)
var e []string
for _, ss := range strings.Split(s, ",") {
coding, qvalue, err := parseCoding(ss)
if err != nil {
e = append(e, err.Error())
} else {
c[coding] = qvalue
}
}
// TODO (adammck): Use a proper multi-error struct, so the individual errors
// can be extracted if anyone cares.
if len(e) > 0 {
return c, fmt.Errorf("errors while parsing encodings: %s", strings.Join(e, ", "))
}
return c, nil
}
// parseCoding parses a single conding (content-coding with an optional qvalue),
// as might appear in an Accept-Encoding header. It attempts to forgive minor
// formatting errors.
func parseCoding(s string) (coding string, qvalue float64, err error) {
for n, part := range strings.Split(s, ";") {
part = strings.TrimSpace(part)
qvalue = DefaultQValue
if n == 0 {
coding = strings.ToLower(part)
} else if strings.HasPrefix(part, "q=") {
qvalue, err = strconv.ParseFloat(strings.TrimPrefix(part, "q="), 64)
if qvalue < 0.0 {
qvalue = 0.0
} else if qvalue > 1.0 {
qvalue = 1.0
}
}
}
if coding == "" {
err = fmt.Errorf("empty content-coding")
}
return
}

43
vendor/github.com/NYTimes/gziphandler/gzip_go18.go generated vendored Normal file

@@ -0,0 +1,43 @@
// +build go1.8
package gziphandler
import "net/http"
// Push initiates an HTTP/2 server push.
// Push returns ErrNotSupported if the client has disabled push or if push
// is not supported on the underlying connection.
func (w *GzipResponseWriter) Push(target string, opts *http.PushOptions) error {
pusher, ok := w.ResponseWriter.(http.Pusher)
if ok && pusher != nil {
return pusher.Push(target, setAcceptEncodingForPushOptions(opts))
}
return http.ErrNotSupported
}
// setAcceptEncodingForPushOptions sets "Accept-Encoding" : "gzip" for PushOptions without overriding existing headers.
func setAcceptEncodingForPushOptions(opts *http.PushOptions) *http.PushOptions {
if opts == nil {
opts = &http.PushOptions{
Header: http.Header{
acceptEncoding: []string{"gzip"},
},
}
return opts
}
if opts.Header == nil {
opts.Header = http.Header{
acceptEncoding: []string{"gzip"},
}
return opts
}
if encoding := opts.Header.Get(acceptEncoding); encoding == "" {
opts.Header.Add(acceptEncoding, "gzip")
return opts
}
return opts
}

21
vendor/github.com/go-chi/cors/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
Copyright (c) 2014 Olivier Poitrey <rs@dailymotion.com>
Copyright (c) 2016-Present https://github.com/go-chi authors
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

39
vendor/github.com/go-chi/cors/README.md generated vendored Normal file

@@ -0,0 +1,39 @@
# CORS net/http middleware
[go-chi/cors](https://github.com/go-chi/cors) is a fork of [github.com/rs/cors](https://github.com/rs/cors) that
provides a `net/http` compatible middleware for performing preflight CORS checks on the server side. These headers
are required for using the browser native [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
This middleware is designed to be used as a top-level middleware on the [chi](https://github.com/go-chi/chi) router.
Applying with within a `r.Group()` or using `With()` will not work without routes matching `OPTIONS` added.
## Usage
```go
func main() {
r := chi.NewRouter()
// Basic CORS
// for more ideas, see: https://developer.github.com/v3/#cross-origin-resource-sharing
r.Use(cors.Handler(cors.Options{
// AllowedOrigins: []string{"https://foo.com"}, // Use this to allow specific origin hosts
AllowedOrigins: []string{"*"},
// AllowOriginFunc: func(r *http.Request, origin string) bool { return true },
AllowedMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
AllowedHeaders: []string{"Accept", "Authorization", "Content-Type", "X-CSRF-Token"},
ExposedHeaders: []string{"Link"},
AllowCredentials: false,
MaxAge: 300, // Maximum value not ignored by any of major browsers
}))
r.Get("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("welcome"))
})
http.ListenAndServe(":3000", r)
}
```
## Credits
All credit for the original work of this middleware goes out to [github.com/rs](github.com/rs).

400
vendor/github.com/go-chi/cors/cors.go generated vendored Normal file

@@ -0,0 +1,400 @@
// cors package is net/http handler to handle CORS related requests
// as defined by http://www.w3.org/TR/cors/
//
// You can configure it by passing an option struct to cors.New:
//
// c := cors.New(cors.Options{
// AllowedOrigins: []string{"foo.com"},
// AllowedMethods: []string{"GET", "POST", "DELETE"},
// AllowCredentials: true,
// })
//
// Then insert the handler in the chain:
//
// handler = c.Handler(handler)
//
// See Options documentation for more options.
//
// The resulting handler is a standard net/http handler.
package cors
import (
"log"
"net/http"
"os"
"strconv"
"strings"
)
// Options is a configuration container to setup the CORS middleware.
type Options struct {
// AllowedOrigins is a list of origins a cross-domain request can be executed from.
// If the special "*" value is present in the list, all origins will be allowed.
// An origin may contain a wildcard (*) to replace 0 or more characters
// (i.e.: http://*.domain.com). Usage of wildcards implies a small performance penalty.
// Only one wildcard can be used per origin.
// Default value is ["*"]
AllowedOrigins []string
// AllowOriginFunc is a custom function to validate the origin. It takes the origin
// as argument and returns true if allowed or false otherwise. If this option is
// set, the content of AllowedOrigins is ignored.
AllowOriginFunc func(r *http.Request, origin string) bool
// AllowedMethods is a list of methods the client is allowed to use with
// cross-domain requests. Default value is simple methods (HEAD, GET and POST).
AllowedMethods []string
// AllowedHeaders is list of non simple headers the client is allowed to use with
// cross-domain requests.
// If the special "*" value is present in the list, all headers will be allowed.
// Default value is [] but "Origin" is always appended to the list.
AllowedHeaders []string
// ExposedHeaders indicates which headers are safe to expose to the API of a CORS
// API specification
ExposedHeaders []string
// AllowCredentials indicates whether the request can include user credentials like
// cookies, HTTP authentication or client side SSL certificates.
AllowCredentials bool
// MaxAge indicates how long (in seconds) the results of a preflight request
// can be cached
MaxAge int
// OptionsPassthrough instructs preflight to let other potential next handlers to
// process the OPTIONS method. Turn this on if your application handles OPTIONS.
OptionsPassthrough bool
// Debugging flag adds additional output to debug server side CORS issues
Debug bool
}
// Logger generic interface for logger
type Logger interface {
Printf(string, ...interface{})
}
// Cors http handler
type Cors struct {
// Debug logger
Log Logger
// Normalized list of plain allowed origins
allowedOrigins []string
// List of allowed origins containing wildcards
allowedWOrigins []wildcard
// Optional origin validator function
allowOriginFunc func(r *http.Request, origin string) bool
// Normalized list of allowed headers
allowedHeaders []string
// Normalized list of allowed methods
allowedMethods []string
// Normalized list of exposed headers
exposedHeaders []string
maxAge int
// Set to true when allowed origins contains a "*"
allowedOriginsAll bool
// Set to true when allowed headers contains a "*"
allowedHeadersAll bool
allowCredentials bool
optionPassthrough bool
}
// New creates a new Cors handler with the provided options.
func New(options Options) *Cors {
c := &Cors{
exposedHeaders: convert(options.ExposedHeaders, http.CanonicalHeaderKey),
allowOriginFunc: options.AllowOriginFunc,
allowCredentials: options.AllowCredentials,
maxAge: options.MaxAge,
optionPassthrough: options.OptionsPassthrough,
}
if options.Debug && c.Log == nil {
c.Log = log.New(os.Stdout, "[cors] ", log.LstdFlags)
}
// Normalize options
// Note: for origins and methods matching, the spec requires a case-sensitive matching.
// As it may error prone, we chose to ignore the spec here.
// Allowed Origins
if len(options.AllowedOrigins) == 0 {
if options.AllowOriginFunc == nil {
// Default is all origins
c.allowedOriginsAll = true
}
} else {
c.allowedOrigins = []string{}
c.allowedWOrigins = []wildcard{}
for _, origin := range options.AllowedOrigins {
// Normalize
origin = strings.ToLower(origin)
if origin == "*" {
// If "*" is present in the list, turn the whole list into a match all
c.allowedOriginsAll = true
c.allowedOrigins = nil
c.allowedWOrigins = nil
break
} else if i := strings.IndexByte(origin, '*'); i >= 0 {
// Split the origin in two: start and end string without the *
w := wildcard{origin[0:i], origin[i+1:]}
c.allowedWOrigins = append(c.allowedWOrigins, w)
} else {
c.allowedOrigins = append(c.allowedOrigins, origin)
}
}
}
// Allowed Headers
if len(options.AllowedHeaders) == 0 {
// Use sensible defaults
c.allowedHeaders = []string{"Origin", "Accept", "Content-Type"}
} else {
// Origin is always appended as some browsers will always request for this header at preflight
c.allowedHeaders = convert(append(options.AllowedHeaders, "Origin"), http.CanonicalHeaderKey)
for _, h := range options.AllowedHeaders {
if h == "*" {
c.allowedHeadersAll = true
c.allowedHeaders = nil
break
}
}
}
// Allowed Methods
if len(options.AllowedMethods) == 0 {
// Default is spec's "simple" methods
c.allowedMethods = []string{http.MethodGet, http.MethodPost, http.MethodHead}
} else {
c.allowedMethods = convert(options.AllowedMethods, strings.ToUpper)
}
return c
}
// Handler creates a new Cors handler with passed options.
func Handler(options Options) func(next http.Handler) http.Handler {
c := New(options)
return c.Handler
}
// AllowAll create a new Cors handler with permissive configuration allowing all
// origins with all standard methods with any header and credentials.
func AllowAll() *Cors {
return New(Options{
AllowedOrigins: []string{"*"},
AllowedMethods: []string{
http.MethodHead,
http.MethodGet,
http.MethodPost,
http.MethodPut,
http.MethodPatch,
http.MethodDelete,
},
AllowedHeaders: []string{"*"},
AllowCredentials: false,
})
}
// Handler apply the CORS specification on the request, and add relevant CORS headers
// as necessary.
func (c *Cors) Handler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method == http.MethodOptions && r.Header.Get("Access-Control-Request-Method") != "" {
c.logf("Handler: Preflight request")
c.handlePreflight(w, r)
// Preflight requests are standalone and should stop the chain as some other
// middleware may not handle OPTIONS requests correctly. One typical example
// is authentication middleware ; OPTIONS requests won't carry authentication
// headers (see #1)
if c.optionPassthrough {
next.ServeHTTP(w, r)
} else {
w.WriteHeader(http.StatusOK)
}
} else {
c.logf("Handler: Actual request")
c.handleActualRequest(w, r)
next.ServeHTTP(w, r)
}
})
}
// handlePreflight handles pre-flight CORS requests
func (c *Cors) handlePreflight(w http.ResponseWriter, r *http.Request) {
headers := w.Header()
origin := r.Header.Get("Origin")
if r.Method != http.MethodOptions {
c.logf("Preflight aborted: %s!=OPTIONS", r.Method)
return
}
// Always set Vary headers
// see https://github.com/rs/cors/issues/10,
// https://github.com/rs/cors/commit/dbdca4d95feaa7511a46e6f1efb3b3aa505bc43f#commitcomment-12352001
headers.Add("Vary", "Origin")
headers.Add("Vary", "Access-Control-Request-Method")
headers.Add("Vary", "Access-Control-Request-Headers")
if origin == "" {
c.logf("Preflight aborted: empty origin")
return
}
if !c.isOriginAllowed(r, origin) {
c.logf("Preflight aborted: origin '%s' not allowed", origin)
return
}
reqMethod := r.Header.Get("Access-Control-Request-Method")
if !c.isMethodAllowed(reqMethod) {
c.logf("Preflight aborted: method '%s' not allowed", reqMethod)
return
}
reqHeaders := parseHeaderList(r.Header.Get("Access-Control-Request-Headers"))
if !c.areHeadersAllowed(reqHeaders) {
c.logf("Preflight aborted: headers '%v' not allowed", reqHeaders)
return
}
if c.allowedOriginsAll {
headers.Set("Access-Control-Allow-Origin", "*")
} else {
headers.Set("Access-Control-Allow-Origin", origin)
}
// Spec says: Since the list of methods can be unbounded, simply returning the method indicated
// by Access-Control-Request-Method (if supported) can be enough
headers.Set("Access-Control-Allow-Methods", strings.ToUpper(reqMethod))
if len(reqHeaders) > 0 {
// Spec says: Since the list of headers can be unbounded, simply returning supported headers
// from Access-Control-Request-Headers can be enough
headers.Set("Access-Control-Allow-Headers", strings.Join(reqHeaders, ", "))
}
if c.allowCredentials {
headers.Set("Access-Control-Allow-Credentials", "true")
}
if c.maxAge > 0 {
headers.Set("Access-Control-Max-Age", strconv.Itoa(c.maxAge))
}
c.logf("Preflight response headers: %v", headers)
}
// handleActualRequest handles simple cross-origin requests, actual request or redirects
func (c *Cors) handleActualRequest(w http.ResponseWriter, r *http.Request) {
headers := w.Header()
origin := r.Header.Get("Origin")
// Always set Vary, see https://github.com/rs/cors/issues/10
headers.Add("Vary", "Origin")
if origin == "" {
c.logf("Actual request no headers added: missing origin")
return
}
if !c.isOriginAllowed(r, origin) {
c.logf("Actual request no headers added: origin '%s' not allowed", origin)
return
}
// Note that spec does define a way to specifically disallow a simple method like GET or
// POST. Access-Control-Allow-Methods is only used for pre-flight requests and the
// spec doesn't instruct to check the allowed methods for simple cross-origin requests.
// We think it's a nice feature to be able to have control on those methods though.
if !c.isMethodAllowed(r.Method) {
c.logf("Actual request no headers added: method '%s' not allowed", r.Method)
return
}
if c.allowedOriginsAll {
headers.Set("Access-Control-Allow-Origin", "*")
} else {
headers.Set("Access-Control-Allow-Origin", origin)
}
if len(c.exposedHeaders) > 0 {
headers.Set("Access-Control-Expose-Headers", strings.Join(c.exposedHeaders, ", "))
}
if c.allowCredentials {
headers.Set("Access-Control-Allow-Credentials", "true")
}
c.logf("Actual response added headers: %v", headers)
}
// convenience method. checks if a logger is set.
func (c *Cors) logf(format string, a ...interface{}) {
if c.Log != nil {
c.Log.Printf(format, a...)
}
}
// isOriginAllowed checks if a given origin is allowed to perform cross-domain requests
// on the endpoint
func (c *Cors) isOriginAllowed(r *http.Request, origin string) bool {
if c.allowOriginFunc != nil {
return c.allowOriginFunc(r, origin)
}
if c.allowedOriginsAll {
return true
}
origin = strings.ToLower(origin)
for _, o := range c.allowedOrigins {
if o == origin {
return true
}
}
for _, w := range c.allowedWOrigins {
if w.match(origin) {
return true
}
}
return false
}
// isMethodAllowed checks if a given method can be used as part of a cross-domain request
// on the endpoint
func (c *Cors) isMethodAllowed(method string) bool {
if len(c.allowedMethods) == 0 {
// If no method allowed, always return false, even for preflight request
return false
}
method = strings.ToUpper(method)
if method == http.MethodOptions {
// Always allow preflight requests
return true
}
for _, m := range c.allowedMethods {
if m == method {
return true
}
}
return false
}
// areHeadersAllowed checks if a given list of headers are allowed to used within
// a cross-domain request.
func (c *Cors) areHeadersAllowed(requestedHeaders []string) bool {
if c.allowedHeadersAll || len(requestedHeaders) == 0 {
return true
}
for _, header := range requestedHeaders {
header = http.CanonicalHeaderKey(header)
found := false
for _, h := range c.allowedHeaders {
if h == header {
found = true
break
}
}
if !found {
return false
}
}
return true
}

70
vendor/github.com/go-chi/cors/utils.go generated vendored Normal file

@@ -0,0 +1,70 @@
package cors
import "strings"
const toLower = 'a' - 'A'
type converter func(string) string
type wildcard struct {
prefix string
suffix string
}
func (w wildcard) match(s string) bool {
return len(s) >= len(w.prefix+w.suffix) && strings.HasPrefix(s, w.prefix) && strings.HasSuffix(s, w.suffix)
}
// convert converts a list of string using the passed converter function
func convert(s []string, c converter) []string {
out := []string{}
for _, i := range s {
out = append(out, c(i))
}
return out
}
// parseHeaderList tokenize + normalize a string containing a list of headers
func parseHeaderList(headerList string) []string {
l := len(headerList)
h := make([]byte, 0, l)
upper := true
// Estimate the number headers in order to allocate the right splice size
t := 0
for i := 0; i < l; i++ {
if headerList[i] == ',' {
t++
}
}
headers := make([]string, 0, t)
for i := 0; i < l; i++ {
b := headerList[i]
if b >= 'a' && b <= 'z' {
if upper {
h = append(h, b-toLower)
} else {
h = append(h, b)
}
} else if b >= 'A' && b <= 'Z' {
if !upper {
h = append(h, b+toLower)
} else {
h = append(h, b)
}
} else if b == '-' || (b >= '0' && b <= '9') {
h = append(h, b)
}
if b == ' ' || b == ',' || i == l-1 {
if len(h) > 0 {
// Flush the found header
headers = append(headers, string(h))
h = h[:0]
upper = true
}
} else {
upper = b == '-'
}
}
return headers
}
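
For reference, the normalization `parseHeaderList` performs matches the standard library's canonical MIME header form. A small sketch that reproduces the same result with only `net/http` and `strings`, using a made-up `Access-Control-Request-Headers` value:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// A header list as a browser might send it in Access-Control-Request-Headers.
	raw := "x-requested-with, content-type,AUTHORIZATION"

	// parseHeaderList canonicalizes each token (e.g. "content-type" -> "Content-Type");
	// the stdlib equivalent for comparison:
	for _, h := range strings.Split(raw, ",") {
		fmt.Println(http.CanonicalHeaderKey(strings.TrimSpace(h)))
	}
	// Output:
	// X-Requested-With
	// Content-Type
	// Authorization
}
```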

View file

@ -1,12 +0,0 @@
# This is the official list of Snappy-Go authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Google Inc.
Jan Mercl <0xjnml@gmail.com>

View file

@ -1,34 +0,0 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the Snappy-Go repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# The submission process automatically checks to make sure
# that people submitting code are listed in this file (by email address).
#
# Names should be added to this file only after verifying that
# the individual or the individual's organization has agreed to
# the appropriate Contributor License Agreement, found here:
#
# http://code.google.com/legal/individual-cla-v1.0.html
# http://code.google.com/legal/corporate-cla-v1.0.html
#
# The agreement for individuals can be filled out on the web.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file, depending on whether the
# individual or corporate CLA was used.
# Names should be added to this file like so:
# Name <email address>
# Please keep the list sorted.
Jan Mercl <0xjnml@gmail.com>
Kai Backman <kaib@golang.org>
Marc-Antoine Ruel <maruel@chromium.org>
Nigel Tao <nigeltao@golang.org>
Rob Pike <r@golang.org>
Russ Cox <rsc@golang.org>

View file

@ -1,27 +0,0 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View file

@ -1,124 +0,0 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
"errors"
)
// ErrCorrupt reports that the input is invalid.
var ErrCorrupt = errors.New("snappy: corrupt input")
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
v, _, err := decodedLen(src)
return v, err
}
// decodedLen returns the length of the decoded block and the number of bytes
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n == 0 {
return 0, 0, ErrCorrupt
}
if uint64(int(v)) != v {
return 0, 0, errors.New("snappy: decoded block is too large")
}
return int(v), n, nil
}
// Decode returns the decoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire decoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Decode(dst, src []byte) ([]byte, error) {
dLen, s, err := decodedLen(src)
if err != nil {
return nil, err
}
if len(dst) < dLen {
dst = make([]byte, dLen)
}
var d, offset, length int
for s < len(src) {
switch src[s] & 0x03 {
case tagLiteral:
x := uint(src[s] >> 2)
switch {
case x < 60:
s += 1
case x == 60:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-1])
case x == 61:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-2]) | uint(src[s-1])<<8
case x == 62:
s += 4
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-3]) | uint(src[s-2])<<8 | uint(src[s-1])<<16
case x == 63:
s += 5
if s > len(src) {
return nil, ErrCorrupt
}
x = uint(src[s-4]) | uint(src[s-3])<<8 | uint(src[s-2])<<16 | uint(src[s-1])<<24
}
length = int(x + 1)
if length <= 0 {
return nil, errors.New("snappy: unsupported literal length")
}
if length > len(dst)-d || length > len(src)-s {
return nil, ErrCorrupt
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
if s > len(src) {
return nil, ErrCorrupt
}
length = 4 + int(src[s-2])>>2&0x7
offset = int(src[s-2])&0xe0<<3 | int(src[s-1])
case tagCopy2:
s += 3
if s > len(src) {
return nil, ErrCorrupt
}
length = 1 + int(src[s-3])>>2
offset = int(src[s-2]) | int(src[s-1])<<8
case tagCopy4:
return nil, errors.New("snappy: unsupported COPY_4 tag")
}
end := d + length
if offset > d || end > len(dst) {
return nil, ErrCorrupt
}
for ; d < end; d++ {
dst[d] = dst[d-offset]
}
}
if d != dLen {
return nil, ErrCorrupt
}
return dst[:d], nil
}

View file

@ -1,174 +0,0 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
)
// We limit how far copy back-references can go, the same as the C++ code.
const maxOffset = 1 << 15
// emitLiteral writes a literal chunk and returns the number of bytes written.
func emitLiteral(dst, lit []byte) int {
i, n := 0, uint(len(lit)-1)
switch {
case n < 60:
dst[0] = uint8(n)<<2 | tagLiteral
i = 1
case n < 1<<8:
dst[0] = 60<<2 | tagLiteral
dst[1] = uint8(n)
i = 2
case n < 1<<16:
dst[0] = 61<<2 | tagLiteral
dst[1] = uint8(n)
dst[2] = uint8(n >> 8)
i = 3
case n < 1<<24:
dst[0] = 62<<2 | tagLiteral
dst[1] = uint8(n)
dst[2] = uint8(n >> 8)
dst[3] = uint8(n >> 16)
i = 4
case int64(n) < 1<<32:
dst[0] = 63<<2 | tagLiteral
dst[1] = uint8(n)
dst[2] = uint8(n >> 8)
dst[3] = uint8(n >> 16)
dst[4] = uint8(n >> 24)
i = 5
default:
panic("snappy: source buffer is too long")
}
if copy(dst[i:], lit) != len(lit) {
panic("snappy: destination buffer is too short")
}
return i + len(lit)
}
// emitCopy writes a copy chunk and returns the number of bytes written.
func emitCopy(dst []byte, offset, length int) int {
i := 0
for length > 0 {
x := length - 4
if 0 <= x && x < 1<<3 && offset < 1<<11 {
dst[i+0] = uint8(offset>>8)&0x07<<5 | uint8(x)<<2 | tagCopy1
dst[i+1] = uint8(offset)
i += 2
break
}
x = length
if x > 1<<6 {
x = 1 << 6
}
dst[i+0] = uint8(x-1)<<2 | tagCopy2
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
i += 3
length -= x
}
return i
}
// Encode returns the encoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire encoded block.
// Otherwise, a newly allocated slice will be returned.
// It is valid to pass a nil dst.
func Encode(dst, src []byte) ([]byte, error) {
if n := MaxEncodedLen(len(src)); len(dst) < n {
dst = make([]byte, n)
}
// The block starts with the varint-encoded length of the decompressed bytes.
d := binary.PutUvarint(dst, uint64(len(src)))
// Return early if src is short.
if len(src) <= 4 {
if len(src) != 0 {
d += emitLiteral(dst[d:], src)
}
return dst[:d], nil
}
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
const maxTableSize = 1 << 14
shift, tableSize := uint(32-8), 1<<8
for tableSize < maxTableSize && tableSize < len(src) {
shift--
tableSize *= 2
}
var table [maxTableSize]int
// Iterate over the source bytes.
var (
s int // The iterator position.
t int // The last position with the same hash as s.
lit int // The start position of any pending literal bytes.
)
for s+3 < len(src) {
// Update the hash table.
b0, b1, b2, b3 := src[s], src[s+1], src[s+2], src[s+3]
h := uint32(b0) | uint32(b1)<<8 | uint32(b2)<<16 | uint32(b3)<<24
p := &table[(h*0x1e35a7bd)>>shift]
// We need to store values in [-1, inf) in table. To save
// some initialization time, (re)use the table's zero value
// and shift the values against this zero: add 1 on writes,
// subtract 1 on reads.
t, *p = *p-1, s+1
// If t is invalid or src[s:s+4] differs from src[t:t+4], accumulate a literal byte.
if t < 0 || s-t >= maxOffset || b0 != src[t] || b1 != src[t+1] || b2 != src[t+2] || b3 != src[t+3] {
s++
continue
}
// Otherwise, we have a match. First, emit any pending literal bytes.
if lit != s {
d += emitLiteral(dst[d:], src[lit:s])
}
// Extend the match to be as long as possible.
s0 := s
s, t = s+4, t+4
for s < len(src) && src[s] == src[t] {
s++
t++
}
// Emit the copied bytes.
d += emitCopy(dst[d:], s-t, s-s0)
lit = s
}
// Emit any final pending literal bytes and return.
if lit != len(src) {
d += emitLiteral(dst[d:], src[lit:])
}
return dst[:d], nil
}
// MaxEncodedLen returns the maximum length of a snappy block, given its
// uncompressed length.
func MaxEncodedLen(srcLen int) int {
// Compressed data can be defined as:
// compressed := item* literal*
// item := literal* copy
//
// The trailing literal sequence has a space blowup of at most 62/60
// since a literal of length 60 needs one tag byte + one extra byte
// for length information.
//
// Item blowup is trickier to measure. Suppose the "copy" op copies
// 4 bytes of data. Because of a special check in the encoding code,
// we produce a 4-byte copy only if the offset is < 65536. Therefore
// the copy op takes 3 bytes to encode, and this type of item leads
// to at most the 62/60 blowup for representing literals.
//
// Suppose the "copy" op copies 5 bytes of data. If the offset is big
// enough, it will take 5 bytes to encode the copy op. Therefore the
// worst case here is a one-byte literal followed by a five-byte copy.
// That is, 6 bytes of input turn into 7 bytes of "compressed" data.
//
// This last factor dominates the blowup, so the final estimate is:
return 32 + srcLen + srcLen/6
}
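
To make the bound concrete, here is a tiny sketch that evaluates the same worst-case formula for a few input sizes; it reimplements the arithmetic locally for illustration rather than importing the removed package.

```go
package main

import "fmt"

func main() {
	// Same worst-case bound as MaxEncodedLen above: 32 + srcLen + srcLen/6.
	maxEncodedLen := func(srcLen int) int { return 32 + srcLen + srcLen/6 }

	for _, n := range []int{0, 6, 65536} {
		fmt.Printf("srcLen=%d -> max encoded len=%d\n", n, maxEncodedLen(n))
	}
	// Output:
	// srcLen=0 -> max encoded len=32
	// srcLen=6 -> max encoded len=39
	// srcLen=65536 -> max encoded len=76490
}
```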

View file

@ -1,38 +0,0 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package snappy implements the snappy block-based compression format.
// It aims for very high speeds and reasonable compression.
//
// The C++ snappy implementation is at http://code.google.com/p/snappy/
package snappy
/*
Each encoded block begins with the varint-encoded length of the decoded data,
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
first byte of each chunk is broken into its 2 least and 6 most significant bits
called l and m: l ranges in [0, 4) and m ranges in [0, 64). l is the chunk tag.
Zero means a literal tag. All other values mean a copy tag.
For literal tags:
- If m < 60, the next 1 + m bytes are literal bytes.
- Otherwise, let n be the little-endian unsigned integer denoted by the next
m - 59 bytes. The next 1 + n bytes after that are literal bytes.
For copy tags, length bytes are copied from offset bytes ago, in the style of
Lempel-Ziv compression algorithms. In particular:
- For l == 1, the offset ranges in [0, 1<<11) and the length in [4, 12).
The length is 4 + the low 3 bits of m. The high 3 bits of m form bits 8-10
of the offset. The next byte is bits 0-7 of the offset.
- For l == 2, the offset ranges in [0, 1<<16) and the length in [1, 65).
The length is 1 + m. The offset is the little-endian unsigned integer
denoted by the next 2 bytes.
- For l == 3, this tag is a legacy format that is no longer supported.
*/
const (
tagLiteral = 0x00
tagCopy1 = 0x01
tagCopy2 = 0x02
tagCopy4 = 0x03
)
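
As a worked example of this format (hand-assembled rather than produced by the removed encoder): a 4-byte input such as "abcd" becomes one varint length byte followed by a single short literal chunk.

```go
package main

import "fmt"

const tagLiteral = 0x00 // chunk tag 0 marks a literal chunk, as in the constants above

func main() {
	// Hand-assembled snappy block for the input "abcd", per the format notes above.
	block := []byte{
		0x04,               // uvarint(4): length of the decoded data
		3<<2 | tagLiteral,  // literal tag; m = 4-1 = 3 < 60, so no extra length bytes follow
		'a', 'b', 'c', 'd', // the 1+m = 4 literal bytes
	}
	fmt.Printf("% x\n", block) // 04 0c 61 62 63 64
}
```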