This PR:
The idea was clearly described by some developers:
> This whole concept of explicitly listing each and every file manually (or with a fragile wildcard) is an obvious Sisyphean task. I’d say all we need to do is run `git archive` and be done with it forever, see #16734, #6753, #11530 …
> I agree, I’ve never been a fan of it. I don’t think we have any files in the git repository that we don’t want to ship in the source tarball.
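The suggested approach can be sketched as a single `git archive` invocation (the archive name and path prefix below are illustrative, not taken from this PR):

```shell
# Pack every file tracked at HEAD into a tarball, with all paths
# placed under a bitcoin-src/ prefix. No manual file list, no
# fragile wildcard: git's own index decides what ships.
git archive --format=tar.gz --prefix=bitcoin-src/ \
    -o bitcoin-src.tar.gz HEAD
```

Because the archive contents come straight from the tracked tree, anything committed to the repository is shipped, which matches the second comment above.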
The suggested changes have a downside, which was pointed out by luke-jr:
> … but the distfile needs to include autogen-generated files.
This means that a user cannot run `./configure && make` right away; one must run `./autogen.sh` first.
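Concretely, a build from such a tarball would need the bootstrap step before the usual sequence. A sketch of the standard autotools flow (a recipe fragment, only meaningful inside the unpacked source tree):

```shell
# Bootstrap first: generates ./configure and the Makefile templates
# (requires autoconf, automake, and related GNU tools such as m4).
./autogen.sh
# After that, the familiar two steps work as before.
./configure
make
```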
Here are some opinions about the mandatory use of `./autogen.sh`:
> It’s probably OK to require autogen. I think historically configure scripts were supposed to work on obscure Unix systems that would just have a generic shell + make tool + C compiler, and not necessarily have GNU packages like m4, which are needed for autogen.
> I also think it’s fine to require autogen. What is one more dependency, if you’re building from source?
~~Also, this PR provides Windows users with ZIP archives of the sources. Additionally, the commit ID is stored in these ZIP files as a file comment:~~
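For context on the struck-out part: when `git archive` is given a commit (rather than a bare tree), its zip writer stores the commit ID as the archive's file comment, which `unzip -z` can print. A minimal sketch (the output name is illustrative):

```shell
# Produce a zip of the sources at HEAD; git records the commit ID
# as the zip archive comment.
git archive --format=zip -o bitcoin-src.zip HEAD
# Print the archive comment, i.e. the commit hash of HEAD.
unzip -z bitcoin-src.zip
```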
Note for reviewers: please verify whether the `git archive` output is deterministic.
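One quick local check of the question above: archive the same commit twice and compare the results byte for byte. Plain `tar` format sidesteps any compression-layer variability; file mtimes inside the archive come from the commit, not the working tree (this only checks determinism for one git version on one machine, not across versions):

```shell
# Archive the same commit twice and compare the outputs.
git archive --format=tar HEAD > a.tar
git archive --format=tar HEAD > b.tar
cmp a.tar b.tar && echo "identical for this commit"
```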