This updates the repo index/file view endpoints so annexed files match the way
LFS files are rendered, making annexed files accessible via the web instead of
being black boxes accessible only through `git clone`.
This mostly just duplicates the existing LFS logic. It doesn't try to combine
with that logic, to make merging with upstream easier. If upstream ever
decides to accept this feature, I would like to try to merge the redundant logic.
The one bit that doesn't directly copy LFS is my choice to hide annex-symlinks.
LFS files are always _pointer files_ and therefore always render with the "file"
icon and no special label, but annex files come in two flavours: symlinks or
pointer files. I've conflated both kinds to try to give a consistent experience.
The tests in here ensure the correct download link (`/media`, from the last PR)
renders both in the toolbar and, for binary files (as most annexed files will be),
in the main pane. They also add quite a bit of code to make sure text files
that happen to be annexed are dug out and rendered inline, just like LFS files are.
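To make the symlink/pointer conflation concrete, here is a minimal, self-contained sketch of the kind of check involved; the helper name and inputs are illustrative stand-ins, not Gitea's actual view code:

```go
package annexview

import "strings"

// isAnnexed reports whether a tree entry should be treated as an annexed
// file for rendering purposes. Annexed files come in two flavours: symlinks
// whose target points into .git/annex/objects, and (unlocked) pointer files
// whose content starts with "/annex/objects/". Both kinds are rendered like
// regular files (plain "file" icon, no symlink label), matching how LFS
// pointer files render.
//
// isSymlink and firstLine are stand-ins for data read from the tree entry
// and blob in the real code.
func isAnnexed(isSymlink bool, firstLine string) bool {
	target := strings.TrimSpace(firstLine)
	if isSymlink {
		// e.g. "../../.git/annex/objects/Wx/…/SHA256E-s123--abc…"
		return strings.Contains(target, ".git/annex/objects/")
	}
	// Unlocked pointer file, e.g. "/annex/objects/SHA256E-s123--abc…"
	return strings.HasPrefix(target, "/annex/objects/")
}
```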
Previously, Gitea's LFS support allowed direct downloads of LFS content
via http://$HOSTNAME:$PORT/$USER/$REPO/media/branch/$BRANCH/$FILE.
This expands that grace to git-annex too. Now /media should provide the
relevant *content* from the .git/annex/objects/ folder.
This adds tests too, and expands them to try symlink-based annexing,
since /media implicitly supports both that and pointer-file-based annexing.
This moves the `annexObjectPath()` helper out of the tests and into a
dedicated sub-package as `annex.ContentLocation()`, and expands it with
`.Pointer()` (which validates using `git annex examinekey`),
`.IsAnnexed()` and `.Content()` to make it a more useful module.
The tests retain their own wrapper version of `ContentLocation()`
because I tried to stay close to the API modules/lfs uses, which works
in terms of abstract `git.Blob` and `git.TreeEntry` objects, not in terms
of the `repoPath string`s that are more convenient for the tests.
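For reference, a rough sketch of the content-lookup side of that module, shelling out to git-annex plumbing (`git annex examinekey`, `git annex contentlocation`) and keyed by `repoPath` strings for brevity; the real code works on `git.Blob`/`git.TreeEntry` objects as described above, and its exact signatures differ:

```go
package annex

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ExamineKey validates a candidate annex key with `git annex examinekey`,
// which is what the PR's Pointer() is described as doing after parsing.
func ExamineKey(repoPath, key string) error {
	cmd := exec.Command("git", "annex", "examinekey", key)
	cmd.Dir = repoPath
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("not a valid annex key %q: %v (%s)", key, err, out)
	}
	return nil
}

// ContentLocation asks git-annex where the content for key lives; the output
// is assumed here to be a repo-relative path such as ".git/annex/objects/…".
// The command exits nonzero when the content is not present locally.
func ContentLocation(repoPath, key string) (string, error) {
	cmd := exec.Command("git", "annex", "contentlocation", key)
	cmd.Dir = repoPath
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("no local content for %s: %w", key, err)
	}
	return filepath.Join(repoPath, strings.TrimSpace(string(out))), nil
}

// Content opens the annexed content for reading, if it is present locally.
func Content(repoPath, key string) (*os.File, error) {
	p, err := ContentLocation(repoPath, key)
	if err != nil {
		return nil, err
	}
	return os.Open(p)
}
```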
This makes HTTP symmetric with SSH clone URLs.
This gives us the fancy feature of _anonymous_ downloads,
so people can access datasets without having to set up an
account or manage ssh keys.
Previously, to access "open access" data shared this way,
users would need to:
1. Create an account on gitea.example.com
2. Create ssh keys
3. Upload ssh keys (and make sure to find and upload the correct file)
4. `git clone git@gitea.example.com:user/dataset.git`
5. `cd dataset`
6. `git annex get`
This cuts that down to just the last three steps:
1. `git clone https://gitea.example.com/user/dataset.git`
2. `cd dataset`
3. `git annex get`
This is significantly simpler for downstream users, especially for those
unfamiliar with the command line.
Unfortunately there's no uploading. While git-annex supports uploading
over HTTP to S3 and some other special remotes, it seems to fail on a
_plain_ HTTP remote. See https://github.com/neuropoly/gitea/issues/7
and https://git-annex.branchable.com/forum/HTTP_uploads/#comment-ce28adc128fdefe4c4c49628174d9b92.
This is not a major loss since no one wants uploading to be anonymous anyway.
To support private repos, I had to hunt down and patch a hidden extra security
check that Gitea only applies to HTTP for some reason (services/auth/basic.go).
This was guided by https://git-annex.branchable.com/tips/setup_a_public_repository_on_a_web_site/
Fixes https://github.com/neuropoly/gitea/issues/3
Co-authored-by: Mathieu Guay-Paquet <mathieu.guaypaquet@polymtl.ca>
Fixes https://github.com/neuropoly/gitea/issues/11
Tests:
* `git annex init`
* `git annex copy --from origin`
* `git annex copy --to origin`
over:
* ssh
for:
* the owner
* a collaborator
* a read-only collaborator
* a stranger
in a:
* public repo
* private repo
And then confirms:
* Deletion of the remote repo (to ensure lockdown isn't messing with us: https://git-annex.branchable.com/internals/lockdown/#comment-0cc5225dc5abe8eddeb843bfd2fdc382)
------
To support all this:
* Add util.FileCmp() (see the sketch below)
* Patch withKeyFile() so it can be nested in other copies of itself
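For reference, a minimal sketch of what `util.FileCmp()` might look like; the name follows the commit message, but the signature (including the `chunkSize` parameter) and behaviour are illustrative assumptions, not the actual implementation:

```go
package util

import (
	"bytes"
	"io"
	"os"
)

// FileCmp reports whether the two named files have identical contents,
// comparing them chunk by chunk. chunkSize <= 0 falls back to a default.
func FileCmp(file1, file2 string, chunkSize int) (bool, error) {
	if chunkSize <= 0 {
		chunkSize = 4096
	}
	f1, err := os.Open(file1)
	if err != nil {
		return false, err
	}
	defer f1.Close()
	f2, err := os.Open(file2)
	if err != nil {
		return false, err
	}
	defer f2.Close()

	b1 := make([]byte, chunkSize)
	b2 := make([]byte, chunkSize)
	for {
		n1, err1 := io.ReadFull(f1, b1)
		n2, err2 := io.ReadFull(f2, b2)
		if err1 != nil && err1 != io.EOF && err1 != io.ErrUnexpectedEOF {
			return false, err1
		}
		if err2 != nil && err2 != io.EOF && err2 != io.ErrUnexpectedEOF {
			return false, err2
		}
		// Different lengths or different bytes: not equal.
		if !bytes.Equal(b1[:n1], b2[:n2]) {
			return false, nil
		}
		// io.ReadFull only returns a short read at end-of-file, so if both
		// files ran out together with equal data, they are identical.
		if n1 < chunkSize {
			return true, nil
		}
	}
}
```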
-------
Many thanks to Mathieu for giving style tips and catching several bugs,
including a subtle one in util.FileCmp() that neutered it.
Co-authored-by: Mathieu Guay-Paquet <mathieu.guay-paquet@polymtl.ca>
Backport of #28213
This PR will fix some missed checks for private repositories' data on
web routes and API routes.
(cherry picked from commit dfd511faf35fef68557e53763f9b06e5a139565d)
Backport of #27915
Fixes #27819
We have support for two-factor logins with the normal web login and with
basic auth. For basic auth, the two-factor check was implemented in three
different places, and every caller needed to know that this check is necessary. This
PR moves the check into the basic auth itself.
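Roughly, the idea is the sketch below: the basic-auth verifier itself performs the two-factor check, so callers no longer have to remember a separate step. All names, types, and the injected password/OTP checks are stand-ins, not the actual Gitea/Forgejo code:

```go
package basicauth

import "errors"

// User is a stand-in for the real user model.
type User struct {
	Name             string
	TwoFactorEnabled bool
}

var (
	ErrInvalidCredentials = errors.New("basic auth: invalid credentials")
	ErrTwoFactorFailed    = errors.New("basic auth: two-factor check failed")
)

// Verify authenticates a basic-auth request. The two-factor check lives here,
// inside the verifier, instead of being repeated by every caller.
// checkPassword and checkOTP stand in for the real password/TOTP checks;
// otp is whatever one-time passcode accompanied the request (may be empty).
func Verify(u *User, password, otp string,
	checkPassword func(*User, string) bool,
	checkOTP func(*User, string) bool,
) (*User, error) {
	if !checkPassword(u, password) {
		return nil, ErrInvalidCredentials
	}
	if u.TwoFactorEnabled && !checkOTP(u, otp) {
		return nil, ErrTwoFactorFailed
	}
	return u, nil
}
```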
(cherry picked from commit 00705da102be929dfa41519b030be3bdd8c68472)
- The current architecture is inherently insecure, because you can
construct the 'secret' cookie value from values that are available in
the database. It thus provides zero protection when a database is
dumped or leaked.
- This patch implements a new architecture that's inspired by the [Paragonie Initiative](https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies) (sketched below).
- Integration testing is added to ensure the new mechanism works.
- Removes a setting, because it's not used anymore.
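For context, a compact sketch of the selector/validator pattern from the linked Paragonie article, which the new architecture is based on. This is illustrative only; the actual Forgejo implementation differs in details such as storage, hashing, and cookie format:

```go
package remember

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
	"strings"
)

// Token is what gets persisted server-side: the selector is stored in clear
// as a lookup key, and only a hash of the validator is stored, so a database
// dump is not enough to forge a valid cookie.
type Token struct {
	Selector      string
	ValidatorHash [32]byte
	UserID        int64
}

func randomHex(n int) (string, error) {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// NewCookie creates a cookie value "selector:validator" and the Token to store.
func NewCookie(userID int64) (cookie string, t Token, err error) {
	selector, err := randomHex(16)
	if err != nil {
		return "", Token{}, err
	}
	validator, err := randomHex(32)
	if err != nil {
		return "", Token{}, err
	}
	t = Token{
		Selector:      selector,
		ValidatorHash: sha256.Sum256([]byte(validator)),
		UserID:        userID,
	}
	return selector + ":" + validator, t, nil
}

// Verify checks a presented cookie against the stored token for its selector,
// using a constant-time comparison on the validator hash.
func Verify(cookie string, lookup func(selector string) (Token, bool)) (int64, error) {
	selector, validator, ok := strings.Cut(cookie, ":")
	if !ok {
		return 0, fmt.Errorf("malformed remember-me cookie")
	}
	t, found := lookup(selector)
	if !found {
		return 0, fmt.Errorf("unknown selector")
	}
	h := sha256.Sum256([]byte(validator))
	if subtle.ConstantTimeCompare(h[:], t.ValidatorHash[:]) != 1 {
		return 0, fmt.Errorf("invalid validator")
	}
	return t.UserID, nil
}
```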
(cherry picked from commit eff097448b1ebd2a280fcdd55d10b1f6081e9ccd)
Conflicts:
	modules/context/context_cookie.go
	  trivial context conflicts
	routers/web/web.go
	  ctx.GetSiteCookie(setting.CookieRememberName) moved from services/auth/middleware.go
System users (Ghost, ActionsUser, etc.) have a negative id and may be
the author of a comment, either because it was created by a now-deleted
user or via an action using a transient token.
The GetPossibleUserByID function has special cases for system
users and will not fail when given a negative id.
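The pattern described above looks roughly like the sketch below; the constants, names, and the injected database lookup are illustrative stand-ins rather than the actual Forgejo models:

```go
package user

import "fmt"

// Stand-in ids for the built-in system users.
const (
	GhostUserID   = -1
	ActionsUserID = -2
)

// User is a stand-in for the real user model.
type User struct {
	ID   int64
	Name string
}

// GetPossibleUserByID maps negative ids to the built-in system users instead
// of failing, and falls back to a normal lookup otherwise. getByID stands in
// for the real database query.
func GetPossibleUserByID(id int64, getByID func(int64) (*User, error)) (*User, error) {
	switch id {
	case GhostUserID:
		return &User{ID: GhostUserID, Name: "Ghost"}, nil
	case ActionsUserID:
		return &User{ID: ActionsUserID, Name: "actions"}, nil
	case 0:
		return nil, fmt.Errorf("user id 0 is not valid")
	default:
		return getByID(id)
	}
}
```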
Refs: https://codeberg.org/forgejo/forgejo/issues/1425
(cherry picked from commit 97667e06b384d834a04eaa05e8f91563481709b1)
Backport of #25613
Fixes #25564
Fixes #23191
- The API v2 search endpoint should return only the latest version matching
the query
- The API v3 search endpoint should return `take` packages, not `take` package
versions (see the sketch below)
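In other words, the paging has to happen over packages, not over individual versions, roughly as in this sketch; the types and the simplistic version comparison are stand-ins, not the actual package models:

```go
package pkgsearch

import "sort"

// PackageVersion is a stand-in for a single (package, version) search result.
type PackageVersion struct {
	Name    string
	Version string // assumed lexically comparable for this sketch
}

// latestPerPackage collapses the results to one entry per package,
// keeping only the latest version of each.
func latestPerPackage(versions []PackageVersion) []PackageVersion {
	latest := map[string]PackageVersion{}
	for _, v := range versions {
		if cur, ok := latest[v.Name]; !ok || v.Version > cur.Version {
			latest[v.Name] = v
		}
	}
	out := make([]PackageVersion, 0, len(latest))
	for _, v := range latest {
		out = append(out, v)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Name < out[j].Name })
	return out
}

// page applies skip/take at the package level, so `take` limits the number
// of packages returned rather than the number of package versions.
func page(pkgs []PackageVersion, skip, take int) []PackageVersion {
	if skip > len(pkgs) {
		skip = len(pkgs)
	}
	end := skip + take
	if end > len(pkgs) {
		end = len(pkgs)
	}
	return pkgs[skip:end]
}
```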
(cherry picked from commit 762d4245fb22a927861d30c6314d81e27eb1a06a)
The tests at tests/integration/migration-test/migration_test.go will
not run any Forgejo migration when using the gitea-*.sql.gz files,
because they do not contain a ForgejoVersion row; that is interpreted
as a new Forgejo installation for which there is no need for migration.
Create a situation in which the ForgejoVersion table exists and has a
version of 0 in tests/integration/migration-test/forgejo-v1.19.0.*.sql.gz,
thus ensuring all Forgejo migrations are run.
The forgejo*.sql.gz files do not have any Gitea-related records, which
will be interpreted by the Gitea migrations as a new installation that
does not need any migration. As a consequence, the migration tests run
when using forgejo-v1.19.0.*.sql.gz are exclusively about Forgejo
migrations.
(cherry picked from commit ec8003859c920ac05a071ad9b1d9d8af5a694ac0)