Is 6.3 ready for release?
ken at linuxfromscratch.org
Mon Jul 30 14:32:39 PDT 2007
On Mon, Jul 30, 2007 at 01:02:29PM -0500, Bruce Dubbs wrote:
> I put out a 6.3-rc1 a week ago and there really has been very little
> feedback. Is it ready for final release?
> -- Bruce
Wow, a whole week. Maybe hardly anybody has tried it because this
is the holiday season in the Northern hemisphere.
To expand on what I posted earlier about test failures:
Tar is repeatedly failing '26: incremental' for me; it looks like a
regression. But nobody else has commented.
The bash failure I reported was a fubar in my script (trying to
chown the test log so it was writable by the appropriate user, but
before it was created ;). But, my second run did seem to have one
failure - 'run-test' shows
- as to what that means, your guess is as good as mine.
The perl failure didn't happen on the second run; I guess it's just
another unreliable test.
And the vim test failure is totally impenetrable.
In general, the more I run test suites, the less confidence I have
in them. Sure, sometimes they point to problems (e.g. when our
build order in clfs was wrong and the findutils tests crapped out),
but a lot of the time I wonder why we bother.
And moving on to farce test results ("how repeatable is it?") -
apart from cc1 and cc1plus I also had a failure in libc-2.5.so (not
yet investigated), and I know from private mail that archaic saw a
failure in nscd (I've got his files for this, but haven't investigated
them yet). He also noted in that mail that coreutils seems to use
a pre-created info file in his second and third builds (I've only
run two, and misread the diff: coreutils.info was created by
makeinfo version 4.9 in the first build, and 4.8 in the second).
But we don't expect users to build it twice, so I'm not too worried
about that; I just add it to my "coreutils Makefile is a POS"
thoughts :-) Nobody else has reported any difference in
libc-2.5.so, so that is probably a local fubar or a shortcoming in
my script.
OTOH, nobody has yet reported on their successes or failures in
using it to build BLFS, whether for a server or for a desktop. I
was hoping to spend time looking at farce before I ran a third build
without tests, and later a by-the-book build with only toolchain
tests, but if you want to get it released I'll happily drop those.
I won't be able to finish a desktop for some time.
Which only leaves space/SBU measurements. Because my build hasn't
been by-the-book, and has run tests whenever possible, I'll only
comment on those I know look wrong (chapter 6 only):
1. Kernel headers. The time is still minimal, but I think these take
304MB, not 286MB (that should apply to chapter 5 too) - that's
because the instructions were altered to install to a subdirectory in
the source and then copy them, which is much nicer/safer but
guaranteed to take more space.
2. The coreutils time (1.0 SBU) is presumably without tests, but
mine took only 1.0 SBU even with the tests.
3. If the SBU for autoconf without tests is correct, the tests take
about 4 SBU, not 3 (don't you just love newer toolchains?).
4. Similarly, automake is in excess of 13 SBU with tests, not 10.
5. My bzip2 install took 6.4 MiB not 5.3; I attribute this to the
docs, which seem to be non-optional according to how the book is
worded.
6. Findutils supposedly takes 13.6 MiB - I assume that is without
the tests: with the tests mine took a little more time but only
12.6 MiB.
7. Gzip for me takes 3.3 MiB not 2.2 MiB.
8. Inetutils for me takes 0.3 SBU (not 0.2) and 12.1 MiB (not 8.9).
9. Iproute2 for me takes 0.1 SBU (not 0.2) and 5.0 MiB (not 4.8).
Actually, that might be getting a bit close to splitting hairs.
10. The lfs-bootscripts use 0.6 MiB for me, not 0.4 MiB.
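For anyone double-checking numbers like the ones above, the two
measurements boil down to timing a build against the reference build
(1 SBU by definition) and running du on the installed tree. This is
only a sketch of the method: the 180-second reference, the sleep
stand-in, and the dummy destdir are assumptions for illustration, not
values from the book:

```shell
#!/bin/sh
# Hypothetical sketch of how SBU and disk-usage figures are measured.

# Reference: seconds taken by the baseline build that defines 1 SBU
# (an assumed value here, purely for illustration).
ref_seconds=180

# Time a build step; a trivial sleep stands in for configure+make:
start=$(date +%s)
sleep 1
end=$(date +%s)
pkg_seconds=$((end - start))

# SBU = package build time / reference build time, to two decimals:
awk -v p="$pkg_seconds" -v r="$ref_seconds" \
    'BEGIN { printf "%.2f SBU\n", p / r }'

# Installed size as du reports it; a dummy 64 KiB tree stands in for
# the package's final install directory:
mkdir -p destdir
dd if=/dev/zero of=destdir/file bs=1024 count=64 2>/dev/null
du -sk destdir
```

Since du reports allocated blocks and timings depend on the machine,
small disagreements like items 9 and 10 above are pretty much
inevitable.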
Now, time for me to ask a question: is it worthwhile that we
continue to record SBUs and space? Everybody knows that many
packages take longer to build and use more space whenever the
toolchain is upgraded. Is it really worthwhile to be so exact about
the time and space? Certainly, space should be constant across
an architecture (well, i686 anyway) for a given toolchain, but the
timings depend greatly on the amount of memory (do you hit swap?),
memory speed (try using a VIA processor), and disk speed.
the first time as tragedy, the second time as farce