There's no way to tell cron to run a job on the first Saturday of a month, so we
tell it to run every Saturday, and the job itself checks whether it's the first
week of the month, bailing out otherwise. This is not ideal because the early
exits register as failed runs, meaning we'll get notifications about failed
releases three times a month, but it's better than nothing for now.
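Something like this guard at the top of the job is what does the check (a sketch; the actual step in the workflow may differ slightly):

```sh
# schedule: '0 0 * * 6' fires every Saturday; this step then bails out
# unless the day of the month is 7 or less, which is only true on the
# first Saturday
if [ "$(date +%-d)" -gt 7 ]; then
  echo "Not the first Saturday of the month; skipping release"
  exit 1 # the non-zero exit is what produces the noisy failure notifications
fi
```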
Two problems with the release workflow:
1) The cron schedule was wrong: it was running every Saturday, rather than
the first Saturday of each month.
2) It wasn't triggering a deploy despite pushing a tag, because GitHub
evidently doesn't want that to happen (more on this below).
Now it triggers a deploy, and it also allows triggering from the UI,
letting you specify minor/patch bump and whether to ignore blocking
PRs/issues. As such I'm removing support for the old method of pushing
the tag. The new way is the only way.
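The manual trigger ends up looking roughly like this (input names are illustrative, not necessarily the exact ones in the workflow):

```yaml
on:
  workflow_dispatch:
    inputs:
      bump_type:
        description: 'Version bump to apply'
        type: choice
        options: [minor, patch]
      ignore_blocking:
        description: 'Ignore blocking PRs/issues'
        type: boolean
        default: false
```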
GitHub Actions refuses to trigger a workflow from another workflow, but
if you use your own personal access token (in this case,
GITHUB_API_TOKEN), it should work.
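For example, checking out with that token is enough for the subsequent tag push to trigger the deploy workflow (the action version here is illustrative):

```yaml
- uses: actions/checkout@v4
  with:
    # pushes made with this token trigger workflows; pushes made with
    # the default GITHUB_TOKEN deliberately don't
    token: ${{ secrets.GITHUB_API_TOKEN }}
```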
This script is currently failing on
https://github.com/jesseduffield/lazygit/pull/3631 because that fork's
master branch is 300 commits behind our own, but the feature branch is
up to date.
The thing is, we don't actually need to involve the master branch. All
we care about is the feature branch's own commits, so this commit simply
fetches those commits and checks them.
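A hypothetical sketch of the approach (the variable names and the check helper are illustrative; the commit count comes from the pull_request event payload):

```sh
# fetch only the PR's own commits, never touching the fork's master
git fetch origin "refs/pull/$PR_NUMBER/head" --depth="$NUM_COMMITS"
git log --format='%s' -n "$NUM_COMMITS" FETCH_HEAD |
  while read -r subject; do
    check_subject "$subject" # hypothetical per-commit check
  done
```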
It is annoying when CI builds suddenly start to fail because the linter was
updated and finds new things to complain about.
Updating the linter and fixing the code accordingly should be a dedicated
activity.
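So the linter version gets pinned in CI; a sketch, with an illustrative version number:

```yaml
- uses: golangci/golangci-lint-action@v4
  with:
    # bumping this (and fixing any new complaints) is now a deliberate,
    # dedicated change rather than a surprise in unrelated PRs
    version: v1.57
```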
It used to be a common thing to have to update Config.md in a PR (and we often
forgot despite the template). As of #3565 this is no longer necessary, so remove
this from the template.
Updating docs in general is still a good thing to think about, so we leave this
in.
Codacy's coverage report feature requires the use of a secret key, which
is only available on the main repo, not on forks. So the step has always
failed on forks. This commit ensures that we only run it on non-forks.
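A sketch of the guard (the step body is hypothetical; the secret name follows Codacy's usual convention, but treat it as illustrative):

```yaml
- name: Upload coverage to Codacy
  # forks don't have the secret, so skip the step entirely there
  if: github.repository == 'jesseduffield/lazygit'
  env:
    CODACY_PROJECT_TOKEN: ${{ secrets.CODACY_PROJECT_TOKEN }}
  run: ./scripts/upload_coverage.sh # hypothetical script name
```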
This greatly diminishes the value of the coverage reports. I've talked
to one of the Codacy people and suggested that they have a non-secret
API key for coverage reports, like what Bugsnag does.
This PR captures the code coverage from our unit and integration tests. At the
moment it simply pushes the result to Codacy, a platform that assists with
improving code health. Right now the focus is just getting visibility, but I
want to experiment with alerting on PRs that cause a drop in code coverage.
To be clear, I'm not a dogmatist about this: I have no aspirations to get to
100% code coverage, and I don't consider lines-of-code-covered a perfect
metric, but it is a pretty good heuristic for how extensive your tests are.
The good news is that our coverage is actually pretty good, which was a
surprise to me!
As a conflict of interest statement: I'm in Codacy's 'Pioneers' program which
provides funding and mentorship, and part of the arrangement is to use Codacy's
tooling on lazygit. This is something I'd have been happy to explore even
without being part of the program, and just like with any other static analysis
tool, we can tweak it to fit our use case and values.
## How we're capturing code coverage
This deserves its own section. Basically when you build the lazygit binary you
can specify that you want the binary to capture coverage information when it
runs. Then, if you run the binary with a GOCOVERDIR env var, it will write
coverage information to that directory before exiting.
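Concretely, the mechanism looks like this (Go 1.20+):

```sh
go build -cover -o lazygit .              # instrument the binary for coverage
mkdir -p /tmp/code_coverage
GOCOVERDIR=/tmp/code_coverage ./lazygit   # coverage data files are written here on exit
```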
It's a similar story with unit tests except with those you just specify the
directory inline via `-test.gocoverdir`.
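Roughly:

```sh
# each test binary writes its coverage data files into the given directory
mkdir -p /tmp/code_coverage/unit
go test -cover ./... -args -test.gocoverdir=/tmp/code_coverage/unit
```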
We run both unit tests and integration tests separately in CI, _and_ we run
them in parallel across different OSes and git versions. So I've got each step
uploading the coverage files as an artefact, and then in a separate step we
combine all the artefacts and generate a combined coverage file, which we then
upload to Codacy (though in future we can do other things with it, like warning
in a PR if code coverage decreases too much).
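The combining step boils down to something like this (directory names illustrative):

```sh
# merge the per-job coverage directories downloaded from the artefacts,
# then convert to the classic cover profile format for uploading
mkdir -p merged_coverage
go tool covdata merge -i=unit_coverage,integration_coverage -o=merged_coverage
go tool covdata textfmt -i=merged_coverage -o=coverage.out
```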
Another caveat is that when running integration tests, we want to obtain code
coverage not only from code executed by the test binary, but also from code
executed by the test runner. Otherwise, for each integration test you add, the
setup code (which is run by the test runner, not the test binary) will be
considered uncovered, and for a large setup step it may appear that your PR
_decreases_ coverage on net. Go doesn't easily let you exclude directories from
coverage reports, so it's better to just track the coverage of both the runner
and the binary.
The binary expects a GOCOVERDIR env var, but the test runner expects a
test.gocoverdir arg. If you pass that arg, the runner internally overwrites
GOCOVERDIR to some random temp directory, and if you then pass that directory
to the test binary, it doesn't seem to actually be written to by the time the
test finishes. To get around this we're using LAZYGIT_GOCOVERDIR, and within
the test runner we map it to GOCOVERDIR before running the test binary, so
they both end up writing to the same directory. Coverage data files are named
to avoid conflicts (each name includes something unique to the process), so we
don't need to worry about collisions between the test runner's and the test
binary's coverage files. We then merge the files together purely for the sake
of having fewer artefacts to upload.
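A hypothetical sketch of that mapping inside the test runner:

```go
package main

import (
	"os"
	"os/exec"
)

// runTestBinary spawns the instrumented lazygit binary, translating our
// LAZYGIT_GOCOVERDIR env var into the GOCOVERDIR var that Go's coverage
// runtime actually reads
func runTestBinary(binary string, args ...string) error {
	cmd := exec.Command(binary, args...)
	cmd.Env = os.Environ()
	if covDir := os.Getenv("LAZYGIT_GOCOVERDIR"); covDir != "" {
		cmd.Env = append(cmd.Env, "GOCOVERDIR="+covDir)
	}
	return cmd.Run()
}
```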
## Misc
Initially I was able to confine all the instances of '/tmp/code_coverage' to
ci.yml, which was good because it was all in one place. Now it's spread across
ci.yml and scripts/run_integration_tests.sh; I don't feel great about that, but
I can't think of a way to make it cleaner.
I believe there's a use case for running scripts/run_integration_tests.sh
outside of CI (so that you can run tests against older git versions locally),
so I've made it so that unless you pass the LAZYGIT_GOCOVERDIR env var to that
script, it skips all the code coverage steps.
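The guard itself is simple; a hypothetical sketch:

```sh
if [ -z "${LAZYGIT_GOCOVERDIR:-}" ]; then
  echo "LAZYGIT_GOCOVERDIR not set: skipping code coverage"
  COVER_ARGS=""
else
  mkdir -p "$LAZYGIT_GOCOVERDIR"
  COVER_ARGS="-cover -args -test.gocoverdir=$LAZYGIT_GOCOVERDIR"
fi
```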
On a separate note: it seems that Go's coverage report is based on the
percentage of statements executed, whereas Codacy cares more about lines of
code executed, so Codacy reports a higher percentage (e.g. 82%) than Go's own
coverage report (74%).
This has several benefits:
- it's less code
- we're using the same mechanism to generate all our auto-generated files, so if
someone wants to add a new one, it's clear which pattern to follow
- we can re-generate all generated files with a single command
("go generate ./..." or "make generate")
- we only need a single check on CI to check that all files are up to date (see
previous commit)
At the moment, test_list.go is the only file that we generate using go:generate.
We will add another one in the next commit though, and we might add even more in
the future; it's useful to have a single check on CI that checks them all.
This spares me effort when it comes to making release notes.
Yes, sometimes it may be easier to start a message without an imperative,
e.g. 'When X happens, do Y', but I don't want to overwhelm the contributor
with details.
From the Go 1.19 release notes:
Command and LookPath no longer allow results from a PATH search to be found relative to the current directory. This removes a common source of security problems but may also break existing programs that depend on using, say, exec.Command("prog") to run a binary named prog (or, on Windows, prog.exe) in the current directory. See the os/exec package documentation for information about how best to update such programs.
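For reference, the os/exec docs show how a program can explicitly opt back in to the old behaviour:

```go
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("prog")
	// exec.Command reports exec.ErrDot when "prog" would only be found
	// relative to the current directory; clearing it opts back in
	if errors.Is(cmd.Err, exec.ErrDot) {
		cmd.Err = nil
	}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```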
After going and adding labels for all of these, I found out that 'improvement'
should be 'enhancement' and 'bugfix' should be 'bug', but I don't know how to
bulk update them (and I can't rename, because the desired labels already
exist). I'll work that out later; this is good enough for now.