This updates the repo index/file view endpoints so annex files match the way
LFS files are rendered, making annexed files accessible via the web instead of
being black boxes that can only be fetched via git clone.
This mostly just duplicates the existing LFS logic. It doesn't try to merge with
that logic, in order to make rebasing on upstream easier. If upstream ever
decides to accept this, I would like to try to merge the redundant logic.
The one bit that doesn't directly copy LFS is my choice to hide annex symlinks.
LFS files are always _pointer files_ and therefore always render with the "file"
icon and no special label, but annex files come in two flavours: symlinks or
pointer files. I've conflated both kinds to try to give a consistent experience.
The tests in here ensure the correct download link (/media, from the last PR)
renders in both the toolbar and, for binary files (as most annexed files will be),
in the main pane. They also add quite a bit of code to make sure text files
that happen to be annexed are dug out and rendered inline, like LFS files are.
Previously, Gitea's LFS support allowed direct downloads of LFS content
via http://$HOSTNAME:$PORT/$USER/$REPO/media/branch/$BRANCH/$FILE.
Expand that grace to git-annex too. Now /media should provide the
relevant *content* from the .git/annex/objects/ folder.
This adds tests too, and expands them to try symlink-based annexing,
since /media implicitly supports both that and pointer-file-based annexing.
This moves the `annexObjectPath()` helper out of the tests and into a
dedicated sub-package as `annex.ContentLocation()`, and expands it with
`.Pointer()` (which validates using `git annex examinekey`),
`.IsAnnexed()` and `.Content()` to make it a more useful module.
The tests retain their own wrapper version of `ContentLocation()`
because I tried to stay close to the API modules/lfs uses, which is in
terms of abstract `git.Blob` and `git.TreeEntry` objects, not in terms
of `repoPath string`s, which are more convenient for the tests.
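For illustration, here is a rough sketch of how these helpers might be used
from a file-view handler. The import path and exact signatures are my
assumptions, not the actual API of the new sub-package.

```go
package example

import (
	"code.gitea.io/gitea/modules/annex" // assumed import path of the new sub-package
	"code.gitea.io/gitea/modules/git"
)

// annexedContentPath resolves where on disk an annexed blob's content lives,
// or reports ok=false if the blob isn't annexed at all.
func annexedContentPath(blob *git.Blob) (path string, ok bool, err error) {
	annexed, err := annex.IsAnnexed(blob) // assumed: detects both pointer files and annex symlinks
	if err != nil || !annexed {
		return "", false, err
	}
	// assumed: ContentLocation maps the blob to .git/annex/objects/...
	path, err = annex.ContentLocation(blob)
	if err != nil {
		return "", false, err
	}
	return path, true, nil
}
```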
This makes HTTP symmetric with SSH clone URLs.
This gives us the fancy feature of _anonymous_ downloads,
so people can access datasets without having to set up an
account or manage ssh keys.
Previously, to access "open access" data shared this way,
users would need to:
1. Create an account on gitea.example.com
2. Create ssh keys
3. Upload ssh keys (and make sure to find and upload the correct file)
4. `git clone git@gitea.example.com:user/dataset.git`
5. `cd dataset`
6. `git annex get`
This cuts that down to just three steps:
1. `git clone https://gitea.example.com/user/dataset.git`
2. `cd dataset`
3. `git annex get`
This is significantly simpler for downstream users, especially for those
unfamiliar with the command line.
Unfortunately there's no uploading. While git-annex supports uploading
over HTTP to S3 and some other special remotes, it seems to fail on a
_plain_ HTTP remote. See https://github.com/neuropoly/gitea/issues/7
and https://git-annex.branchable.com/forum/HTTP_uploads/#comment-ce28adc128fdefe4c4c49628174d9b92.
This is not a major loss since no one wants uploading to be anonymous anyway.
To support private repos, I had to hunt down and patch a secret extra security
corner that Gitea only applies to HTTP for some reason (services/auth/basic.go).
This was guided by https://git-annex.branchable.com/tips/setup_a_public_repository_on_a_web_site/
Fixes https://github.com/neuropoly/gitea/issues/3
Co-authored-by: Mathieu Guay-Paquet <mathieu.guaypaquet@polymtl.ca>
Fixes https://github.com/neuropoly/gitea/issues/11
Tests:
* `git annex init`
* `git annex copy --from origin`
* `git annex copy --to origin`
over:
* ssh
for:
* the owner
* a collaborator
* a read-only collaborator
* a stranger
in a
* public repo
* private repo
And then confirms:
* Deletion of the remote repo (to ensure lockdown isn't messing with us: https://git-annex.branchable.com/internals/lockdown/#comment-0cc5225dc5abe8eddeb843bfd2fdc382)
------
To support all this:
* Add util.FileCmp() (a sketch follows below)
* Patch withKeyFile() so it can be nested in other copies of itself
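For reference, a minimal sketch of what a file-comparison helper like
`util.FileCmp()` could look like; the real signature and implementation may
well differ (e.g. by streaming in chunks instead of reading whole files).

```go
package util

import (
	"bytes"
	"os"
)

// FileCmp reports whether two files have identical contents. This sketch
// simply reads both files into memory; the real helper may compare in
// fixed-size chunks instead.
func FileCmp(path1, path2 string) (bool, error) {
	a, err := os.ReadFile(path1)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(path2)
	if err != nil {
		return false, err
	}
	return bytes.Equal(a, b), nil
}
```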
-------
Many thanks to Mathieu for giving style tips and catching several bugs,
including a subtle one in util.FileCmp() which neutered it.
Co-authored-by: Mathieu Guay-Paquet <mathieu.guay-paquet@polymtl.ca>
Backport #28935 by @silverwind
The `ToUTF8*` functions were stripping BOM, while BOM is actually valid
in UTF8, so the stripping must be optional depending on use case. This
does:
- Add an options struct to all `ToUTF8*` functions that, by default, strips
the BOM to preserve existing behaviour
- Remove `ToUTF8` function, it was dead code
- Rename `ToUTF8WithErr` to `ToUTF8`
- Preserve BOM in Monaco Editor
- Remove an unnecessary newline in the textarea value. Browsers seem to
ignore it, but it's better not to rely on that behaviour.
Fixes: https://github.com/go-gitea/gitea/issues/28743
Related: https://github.com/go-gitea/gitea/issues/6716 which seems to
have once introduced a mechanism that strips and re-adds the BOM, but
from what I can tell, this mechanism was removed at some point after
that PR.
Co-authored-by: silverwind <me@silverwind.io>
(cherry picked from commit b8e6cffd317401d980600e339eb21b15b9bc64c1)
Backport https://github.com/go-gitea/gitea/pull/28726 by @fuxiaohei
Fix: uploaded artifacts should be overwritten
https://github.com/go-gitea/gitea/issues/28549
When different content was uploaded to an existing artifact, the check that
the content size matches the previous artifact's size recorded in the db
failed, and the new artifact was refused.
Now, if the uploaded content size does not match the db record while
receiving chunks, the db record is updated to follow the latest size value.
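A minimal sketch of that reconciliation, with hypothetical type and field
names rather than the actual Gitea artifact model:

```go
package example

// Artifact is an illustrative stand-in for the database model.
type Artifact struct {
	FileSize int64
}

// reconcileArtifactSize follows the behaviour described above: if the size of
// an incoming upload differs from the recorded size of an existing artifact,
// update the record rather than refusing the upload.
func reconcileArtifactSize(a *Artifact, uploadSize int64) (changed bool) {
	if a.FileSize != uploadSize {
		a.FileSize = uploadSize // caller would persist the updated record
		return true
	}
	return false
}
```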
(cherry picked from commit 7f0ce2dfc7f4a0c50f6895f6d478f5230089f1c7)
Backport #28877 by @KN4CK3R
Fixes #28875
If `RequireSignInView` is enabled, the ghost user has no access rights.
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
(cherry picked from commit b7c944b9e4e9f847719fbce421b2f4fee7281187)
Backport of #2143
This solves two bugs. One bug is that, due to the JOIN with the
`forgejo_blocked_users` table, duplicated users were generated if a user
had blocked more than one user, which led to more than one entry in the
actions table. The other bug is that a user who had blocked more than one
user would still receive an action entry from a blocked user, because the
SQL query would not exclude the other duplicated users generated by the JOIN.
The new solution is somewhat non-optimal in my eyes, but it's better
than rewriting the query into a potential performance blocker (usage
of WHERE IN, which cannot be rewritten to a JOIN). It simply removes the
blocked watchers after they have been retrieved by the SQL query.
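A rough sketch of the shape of that post-query filtering, using illustrative
types and helpers rather than the real Forgejo models:

```go
package example

// Watcher and isBlocked stand in for the real models/helpers; only the shape
// of the approach matters here.
type Watcher struct {
	UserID int64
}

var isBlocked func(blockerID, blockedID int64) bool

// removeBlockedWatchers drops watchers who have blocked the acting user after
// the plain SQL query has run, instead of expressing the exclusion in SQL.
func removeBlockedWatchers(watchers []*Watcher, doerID int64) []*Watcher {
	kept := watchers[:0]
	for _, w := range watchers {
		if isBlocked != nil && isBlocked(w.UserID, doerID) {
			continue // this watcher blocked the doer; skip them
		}
		kept = append(kept, w)
	}
	return kept
}
```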
(cherry picked from commit c63c00b39b8bd2ed3a69ed044933a9626bfca2c1)
- Backport of #1981
- When the user is not found in `reloadparam`, return early to avoid calling
`IsUserVisibleToViewer`, which in turn avoids causing an NPE.
- This fixes the case where a 500 error and a 404 error are shown on the
same page.
- Add integration test for non-existent user RSS.
- Regression by c6366089df
(cherry picked from commit f0e06962786ef8c417b0c6f07940c1909d3b91ba)
(cherry picked from commit 75d806690875a4fc38eb1e3c904096be34657011)
(cherry picked from commit 4d0a1e0637450865c7bbac69e42d92d63b95149c)
(cherry picked from commit 5f40a485da1b2c5f129f32e2ddc2065e3ba9ccd0)
(cherry picked from commit c4cb7812e39add6f7ff3d6f3f2d4e02c66435f0e)
Backport #28140 by @earl-warren
- Make use of the `form-fetch-action` for the merge button, which will
automatically prevent the action from happening multiple times and show
a nice loading indicator as user feedback while the merge request is
being processed by the server.
- Adjust the merge PR code to return a JSON response, as this is required for
the `form-fetch-action` functionality.
- Resolves https://codeberg.org/forgejo/forgejo/issues/774
- Likely resolves the cause of
https://codeberg.org/forgejo/forgejo/issues/1688#issuecomment-1313044
(cherry picked from commit 4ec64c19507caefff7ddaad722b1b5792b97cc5a)
Co-authored-by: Earl Warren <109468362+earl-warren@users.noreply.github.com>
Co-authored-by: Gusted <postmaster@gusted.xyz>
(cherry picked from commit fbf29f29b5225be8e5e682e45b6977e7dda9b318)
Backport #28716 by wxiaoguang
Gitea prefers to use relative URLs in code (to make multiple domains work
for some users), so it needs to use `toAbsoluteUrl` to generate a full URL
when clicking "Reference in New Issues".
Also add some comments in the test code.
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
(cherry picked from commit def178ce323e6c300e02b9aa225227178e5ca2e1)
Conflicts:
tests/integration/issue_test.go
https://codeberg.org/forgejo/forgejo/issues/2158
Backport #28590 by @lunny
Fix https://github.com/go-gitea/gitea/pull/28547#issuecomment-1867740842
Since https://gitea.com/xorm/xorm/pulls/2383 merged, xorm now supports
UPDATE JOIN.
To keep behaviour consistent across different databases, xorm uses
`engine.Join().Update`, but the actual generated SQL differs between
databases.
For MySQL, it's `UPDATE table1 JOIN table2 ON join_conditions SET xxx
WHERE xxx`.
For MSSQL, it's `UPDATE table1 SET xxx FROM table1, table2 WHERE
join_conditions`.
For SQLite, per https://www.sqlite.org/lang_update.html, `UPDATE table1 SET
xxx FROM table2 WHERE join_conditions` is supported since 3.33.0 (2020-08-14).
PostgreSQL behaves the same as SQLite.
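As an illustration of the pattern (the table and column names below are made
up, not Gitea's actual schema):

```go
package example

import "xorm.io/xorm"

// markIssuesOfPrivateRepos shows the engine.Join().Update() shape discussed
// above; xorm translates it into the dialect-specific UPDATE ... JOIN or
// UPDATE ... FROM statement for each database.
func markIssuesOfPrivateRepos(e *xorm.Engine) (int64, error) {
	return e.Table("issue").
		Join("INNER", "repository", "repository.id = issue.repo_id").
		Where("repository.is_private = ?", true).
		Update(map[string]interface{}{"is_private": true})
}
```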
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
(cherry picked from commit 18da3f8483d5359f44bdac5ea46c6d2a54d94358)
- The current architecture is inherently insecure, because you can
construct the 'secret' cookie value with values that are available in
the database. It thus provides zero protection when a database is
dumped/leaked.
- This patch implements a new architecture inspired by: [Paragonie Initiative](https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies). A rough sketch follows below.
- Integration testing is added to ensure the new mechanism works.
- Removes a setting, because it's not used anymore.
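A condensed sketch of the lookup-key/validator idea from that article (not
the actual Forgejo implementation; token generation and storage are
simplified here):

```go
package example

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// authToken is what gets stored server-side: a public lookup key plus only
// the hash of the secret half, so a database dump alone can't forge cookies.
type authToken struct {
	LookupKey       string
	HashedValidator string
}

func newRememberMeCookie(store map[string]*authToken) (string, error) {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	lookup := hex.EncodeToString(raw[:16])
	validator := hex.EncodeToString(raw[16:])
	sum := sha256.Sum256([]byte(validator))
	store[lookup] = &authToken{LookupKey: lookup, HashedValidator: hex.EncodeToString(sum[:])}
	// The cookie carries both halves; the raw validator never touches the database.
	return fmt.Sprintf("%s:%s", lookup, validator), nil
}

func checkRememberMeCookie(store map[string]*authToken, lookup, validator string) bool {
	t, ok := store[lookup]
	if !ok {
		return false
	}
	sum := sha256.Sum256([]byte(validator))
	return subtle.ConstantTimeCompare([]byte(hex.EncodeToString(sum[:])), []byte(t.HashedValidator)) == 1
}
```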
(cherry picked from commit eff097448b1ebd2a280fcdd55d10b1f6081e9ccd)
[GITEA] rework long-term authentication (squash) add migration
Reminder: the migration is run via integration tests as explained
in the commit "[DB] run all Forgejo migrations in integration tests"
(cherry picked from commit 4accf7443c1c59b4d2e7787d6a6c602d725da403)
(cherry picked from commit 99d06e344ebc3b50bafb2ac4473dd95f057d1ddc)
(cherry picked from commit d8bc98a8f021d381bf72790ad246f923ac983ad4)
(cherry picked from commit 6404845df9a63802fff4c5bd6cfe1e390076e7f0)
(cherry picked from commit 72bdd4f3b9f6509d1ff3f10ecb12c621a932ed30)
(cherry picked from commit 4b01bb0ce812b6c59414ff53fed728563d8bc9cc)
(cherry picked from commit c26ac318162b2cad6ff1ae54e2d8f47a4e4fe7c2)
(cherry picked from commit 8d2dab94a6)
Conflicts:
routers/web/auth/auth.go
https://codeberg.org/forgejo/forgejo/issues/2158
- https://github.com/NYTimes/gziphandler doesn't seem to be maintained
anymore, and Forgejo already includes
https://github.com/klauspost/compress, which provides a maintained and
faster gzip handler fork.
- Enables jitter to mitigate BREACH attacks, as this *seems* to be
possible in the context of Forgejo.
(cherry picked from commit cc2847241d82001babd8d40c87d03169f21c14cd)
(cherry picked from commit 99ba56a8761dd08e08d9499cab2ded1a6b7b970f)
Conflicts:
go.sum
https://codeberg.org/forgejo/forgejo/pulls/1581
(cherry picked from commit 711638193daa2311e2ead6249a47dcec47b4e335)
(cherry picked from commit 9c12a37fde6fa84414bf332ff4a066facdb92d38)
(cherry picked from commit 91191aaaedaf999209695e2c6ca4fb256b396686)
(cherry picked from commit 72be417f844713265a94ced6951f8f4b81d0ab1a)
(cherry picked from commit 98497c84da205ec59079e42274aa61199444f7cd)
(cherry picked from commit fba042adb5c1abcbd8eee6b5a4f735ccb2a5e394)
(cherry picked from commit dd2414f226)
Conflicts:
routers/web/web.go
https://codeberg.org/forgejo/forgejo/issues/2016
Backport #28587, the only conflict is the test file.
The CORS code has been unmaintained for a long time, and its behaviour is
not correct.
This PR tries to improve it. The key points are written as comments in the
code. It also adds more tests.
Fix #28515, Fix #27642, Fix #17098
(cherry picked from commit 7a2786ca6cd84633784a2c9986da65a9c4d79c78)
In the `TestDatabaseMissingABranch` testcase, make sure that the
branches are in sync between the db and git before deleting a branch via
git, then compare the branch count from the web UI, making sure that it
returns an out-of-sync value first, and the correct one after another
sync.
This is currently tested by scraping the UI, and relies on the fact that
the branch counter is out of date before syncing. If that issue gets
resolved, we'll have to adjust the test to verify the sync another way.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
This tests the scenario reported in Codeberg/Community#1408: a branch
that is recorded in the database but missing on disk was causing
internal server errors. With recent changes, that is no longer the case:
the error is logged and then ignored.
This test case verifies that behaviour: the repo's branches page on
the web UI works even if the git branch is missing.
Signed-off-by: Gergely Nagy <forgejo@gergo.csillger.hu>
- Backport of #2134
- It's possible that `canSoftDeleteContentHistory` is called without
`ctx.Doer` being set, such as an anonymous user requesting the
`/content-history/detail` endpoint.
- Add a simple condition to always set `canSoftDelete` to false if an
anonymous user is making the request; this avoids a panic in the code that
assumes `ctx.Doer` is set.
- Added integration testing.
(cherry picked from commit 0b5db0dcc608e9a9e79ead094a20a7775c4f9559)
- Backport of #2100
- Make the reference URL in the "Reference in New issue" feature
absolute again as it wouldn't render as a link otherwise.
- Adds integration test.
- Regression by 769be877f2
- Resolves #2012
(cherry picked from commit c74bae28973092eeeaf2fb9a17cbe41d286648db)
- Backport of #2094
- It's possible that `PageIsDiff` is set but not `Commit`, resulting in an
NPE in the template. This can happen when the requested commit doesn't exist.
- Regression of c802c46a9b &
5743d7cb5b
- Added 'hacky' integration test.
(cherry picked from commit 8db2d5e4a76f05b34e4f889e7a00ecd6578d3639)
- When a user requests an archive of a non-existent commit,
`git.ErrNotExist` is returned, but it was not gracefully handled, resulting
in a 500 error.
- Doesn't exist in v1.22 due to it being refactored away in
cbf923e87b
- Adds integration test.
- The transaction in combination with Git push was causing deadlocks if
you had the `push_update` queue set to `immediate`. This was the root
cause of slow integration tests in CI.
- Remove the sync branch code as this is already being done in the Git
post-receive hook.
- Add tests to prove the branch models stay in sync even with this code
removed.
Backport of https://codeberg.org/forgejo/forgejo/pulls/1962
(cherry picked from commit a064065cb9a6e39597e38c37a405d066cfabf7f7)
Fix#28056
Backport #28361
This PR checks whether the repo has zero branches when a branch is pushed.
If so, it means this repository hasn't been synced yet.
This can happen because, after a user upgrades from v1.20 -> v1.21, they may
push branches without ever visiting the repository web interface. All
repository routes check whether a branch sync is necessary, but push had no
such check.
Every repository is in one of two states: synced or not synced. If a
repository has zero branches, it is assumed to be in the non-synced state;
otherwise it is synced. If it is synced, we just need to update the branch
or insert the new branch; otherwise we do a full sync. That way, almost no
extra load is added per push, which keeps it more performant than the
alternative approach.
For the implementation, we first try to update the branch. If the update
succeeds with affected records > 0, we are done, because that means the
branch was already in the database. If no record is affected, the branch
does not exist in the database, so there are two possibilities: either this
is a new branch and we just need to insert the record, or the branches
haven't been synced yet and we need to sync all of them into the database.
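A sketch of that control flow, with placeholder helpers standing in for the
real model functions (names and signatures are assumptions):

```go
package example

import "context"

// Placeholder helpers; the real Gitea functions and signatures differ.
var (
	updateBranch        func(ctx context.Context, repoID int64, name, commitID string) (int64, error)
	repoHasZeroBranches func(ctx context.Context, repoID int64) (bool, error)
	insertBranch        func(ctx context.Context, repoID int64, name, commitID string) error
	syncAllBranches     func(ctx context.Context, repoID int64) error
)

// syncBranchOnPush tries the cheap path first: update the branch row, and only
// falls back to inserting or to a full sync when the update touches no rows.
func syncBranchOnPush(ctx context.Context, repoID int64, name, commitID string) error {
	affected, err := updateBranch(ctx, repoID, name, commitID)
	if err != nil {
		return err
	}
	if affected > 0 {
		return nil // branch row already existed and is now up to date
	}
	zero, err := repoHasZeroBranches(ctx, repoID)
	if err != nil {
		return err
	}
	if zero {
		return syncAllBranches(ctx, repoID) // repository was never synced
	}
	return insertBranch(ctx, repoID, name, commitID) // genuinely new branch
}
```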
(cherry picked from commit 87db4a47c8e22b7c2e4f2b9f9efc8df1e3622884)
- Backport #1882
- Be more specific about which element we want, and don't include the
href in the selector, so that if the value changes, it will show the value
that was rendered.
- Ensure stable order of passed repository IDs.
- Resolves codeberg.org/forgejo/forgejo/issues/1880
(cherry picked from commit 79bc4cffe5437179543ce5f0e8ebe0f1e2301216)
- Backport of https://codeberg.org/forgejo/forgejo/pulls/1848
- `ReposParam` is passed to the pagination as the value for the `repos`
query. Paginating to other pages should keep only the selected
repositories, which was not the case, although it already was for the links
in the selectable items.
- Fix the wrong value being passed for issues/pulls lists.
- Fix the formatting of repository query value for milestones lists.
- Added integration testing.
- Resolves https://codeberg.org/forgejo/forgejo/issues/1836
(cherry picked from commit c648e5ab3a341b97807b9a1c4cf312d4acdc08d4)
Backport #28213
This PR fixes some missing checks for private repositories' data on
web routes and API routes.
(cherry picked from commit bc3d8bff73a5bd307dc825254b51bfedd722f078)
Backport #27610 by @evantobin
Fixes #27598
In #27080, the logic for the tokens endpoints was updated to allow
admins to create and view tokens in other accounts. However, the same
functionality was not added to the DELETE endpoint. This PR makes the
DELETE endpoint behave the same as the other token endpoints and adds
unit tests.
Co-authored-by: Evan Tobin <me@evantob.in>
(cherry picked from commit 93ede4bc83ccb231b9ca67041318a0811d1d34dd)
- If you attempted to get a branch feed on an empty repository, it would
result in a panic, as the code expects the branch to exist.
- `context.RepoRefByType` would normally already 404 if the branch
doesn't exist; however, if a repository is empty, it does not do this
check.
- Fix a bug where `/atom/branch/*` would return an RSS feed.
(cherry picked from commit d27bcd98a41b69e313535e5e91e4272136a4bab1)
(cherry picked from commit 07916c87235f246c809d61b74c55e796eca23fc8)
(cherry picked from commit 2eedbe0c55cb7109eb722ab9172933a26e878307)
(cherry picked from commit 3810d905c6f90e3c44e61c6ba8b8f4a219976c0b)
- Use the 'existing' jsonschema library for the nodeinfo integration test.
(cherry picked from commit 73864840f27274d4cdaef23d47a6a71fc60529c3)
(cherry picked from commit da36df306b7a75434c75ed5f63608e06266ca480)
Conflicts:
go.mod
https://codeberg.org/forgejo/forgejo/pulls/1581
(cherry picked from commit 2b4ab46d8eacd2e6b2318f26e327ec59b804ea23)
Conflicts:
go.mod
https://codeberg.org/forgejo/forgejo/pulls/1617
(cherry picked from commit 8064130344eb0d797838f8444a6d5c0e3d425716)
(cherry picked from commit ca32f14bc215cdeabbf1643ef46a0c8c9e7f3ae8)
(cherry picked from commit 6a4abb928f556796041e2e59ec3b772d9b577009)
(cherry picked from commit 0059a44ae8066211c56754c56f3570076476af51)
(cherry picked from commit 8dc8451fd080bacea9947ab8da3ea33d0a4249ac)