[lfs-support] coreutils (lfs-7.0, section 6.23)
jasonpsage at jegas.com
Thu Jan 12 22:40:35 PST 2012
>Ran the test suite for coreutils:
>root:/scripts# grep -i fail src/coreutils-8.14/gnulib-tests/test-suite.log
>1 of 270 tests failed. (24 tests were not run).
>FAIL: test-parse-datetime (exit: 134)
>test-parse-datetime.c:142: assertion failed
>That's just the grep output (for '-i fail') in case anyone was wondering if there's more output.
>Is this failure a show-stopper? Or, can it be safely "ignored" (e.g., a bad-but-unavoidable host system)?
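If it helps, here's a quick way to pull the surrounding lines for the failing test out of the log. The little log below is mocked up from the snippet in your mail just so the command has something to chew on; the real file is the one under gnulib-tests/:

```shell
# Mock up a tiny test-suite.log matching the snippet above, just to
# demonstrate the grep (the real log has far more in it).
cat > test-suite.log <<'EOF'
PASS: test-foo
FAIL: test-parse-datetime (exit: 134)
test-parse-datetime.c:142: assertion failed
PASS: test-bar
EOF

# Show each FAIL line plus the line after it (often the assertion message).
context=$(grep -A 1 '^FAIL:' test-suite.log)
echo "$context"

rm -f test-suite.log
```

That usually tells you whether it's one assertion (like your datetime one) or a pile of them.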
I don't know all the specific errors I get, but there are a few
throughout the process.
I'm new at this myself, so I'll probably be corrected, but consider
maybe writing scripts for the whole thing and seeing how it turns out.
Generally, if the bits compile, I'm happy until something breaks;
then I use more discernment.
>On the issue of failed tests (particularly in gcc & glibc), I've written some wrappers around those
>compilations, and do something like:
> cat test-summary | grep "sources" | egrep -v "known-fail-test-1|known-fail-test-2" | wc -l
Fancy Piping :)
>Which basically scans the test summary files for the failed source files, and excludes the files/tests
>that are known (e.g., libmudflap, etc). Then, it counts the lines with the intention of aborting if
>there are any lines found OTHER than the "sort-of-expected" failures. I feel this is a reasonable
>approach for most, since we're mostly not gcc/glibc devs, and the test failures would be hard to
>evaluate. Which is to say, I'm assuming that if the book lists these errors, we may as well exclude
>them from consideration.
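For what it's worth, that filter-and-count idea could be wrapped up roughly like this. The summary file and failure names here are invented for the demo (`grep -Ev` is the same as `egrep -v`):

```shell
# Invented test summary for demonstration; a real one comes from
# the gcc/glibc test wrappers described above.
cat > test-summary <<'EOF'
sources known-fail-test-1
sources known-fail-test-2
sources surprise-failure
EOF

# Count lines mentioning failed sources, minus the known failures.
UNEXPECTED=$(grep "sources" test-summary \
    | grep -Ev "known-fail-test-1|known-fail-test-2" \
    | wc -l)

if [ "$UNEXPECTED" -ne 0 ] ; then
    echo "ABORT: $UNEXPECTED unexpected failure(s)"
else
    echo "OK"
fi

rm -f test-summary
```

So anything not on the known list trips the abort, which is exactly the behavior you described.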
I always have problems with my mudflap. In fact, I think my mudflap
fails me all the time. (B^)>
I hope it's like a SEMI-TRAILER truck, when a Mudflap falls off, the
driver keeps driving without giving it a second thought.
>For the gcc tests, I've scanned through a few other summaries from the GCC status pages, and removed
>some badly offending ones that seem to fail on lots of platforms.
>For glibc, I just use the suggested "ignore" list in the book.
>Do you feel this is a reasonable approach; if so, does this merit inclusion in the book itself?
>Something like this:
>==== glibc ====
># Well, GLIBC will have errors, so we turn bash error-
># handling off...
>make -k check 2>&1 | tee glibc-check-log
># Now, turn bash error-handling back on, and check for
># the specific errors that are common. If we find any
># other errors, then FAIL; otherwise, PASS.
>__GLIBC_TEST_ERROR_COUNT=$(grep Error glibc-check-log | grep sources | egrep -v "posix/annexc|nptl/tst-clock2|nptl/tst-attr3|rt/tst-cpuclock2|misc/tst-writev|elf/check-textrel|nptl/tst-getpid2|stdio-common/bug22|posix/bug-regex32" | wc -l)
Not my call, but as a seasoned developer (31 years, not green) who is
new to this level of Linux manipulation...
I think that code reads terribly.
>if [ 0 -ne $__GLIBC_TEST_ERROR_COUNT ] ; then
> grep Error glibc-check-log | grep sources
This is readable code.
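For what it's worth, here's roughly how the quoted fragment might read once complete. The variable name and the ignore list are from your mail; the mocked-up log and the FAIL/PASS messages are my own guesswork:

```shell
# Mocked-up check log for demonstration (a real one comes from
# 'make -k check 2>&1 | tee glibc-check-log').
cat > glibc-check-log <<'EOF'
Error in sources posix/annexc
Error in sources some/new-test
EOF

IGNORE="posix/annexc|nptl/tst-clock2|nptl/tst-attr3|rt/tst-cpuclock2"
__GLIBC_TEST_ERROR_COUNT=$(grep Error glibc-check-log | grep sources \
    | grep -Ev "$IGNORE" | wc -l)

if [ 0 -ne "$__GLIBC_TEST_ERROR_COUNT" ] ; then
    # Show only the unexpected errors, then flag the failure.
    grep Error glibc-check-log | grep sources | grep -Ev "$IGNORE"
    echo "FAIL"
else
    echo "PASS"
fi

rm -f glibc-check-log
```

Pulling the ignore list into a variable like that is about the only thing I'd change; the one-giant-pipeline version is what made it hard to read.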
Just my opinion, but looking at the book, it appears efforts have been
made to keep the code as simple as possible, allowing us implementers
the freedom to experiment, etc. The book even says deviation doesn't
exclude folks from helping... so I think it's cool you're going through it.
Again, I think with your understanding of Linux, it might be more
productive to get the whole process down and then use your analytical
skills to zero in on likely problem areas, versus trying to address every
failed test or error.
Remember... my mudflap fails me all the time - but my OS boots... and I
even saw a network card light up for the first time in a virtual
machine... shoot, I was happy it booted in the virtual machine to begin
with... but onward I march...
So march on, brother!
I'm sure others will have more insight than me on your questions.