LFS-6.6, Stage2, glibc, nscd.c:442
paulgrogers at fastmail.fm
Sun May 30 09:45:54 PDT 2010
Well, that was fun!
But first, I want to thank you all for your help while I pursue this.
Since it does happen from time to time, and nobody knows exactly why or
how to fix it, we probably should find out what's happening.
Yesterday I left my first few steps of Stage2 in place--their results
are in the FHS, which Stage1 will ignore. I CAREFULLY backed my way to
the Stage1 environment, then back through the Stage1 steps removing the
packages through the Pass2 gcc. Then I restored the Pass1 gcc. Now I
carefully examined the Pass2 build script, ensuring it's effectively
identical to the book commands, and put an exit right after the
configure. My script, of course, tees the console log. After examining
that log and blowing away the build directory, I thought I had found
something: in the source directory, the gcc subdirectory had yesterday's
date--configure must have written something there. So I blew that off
too and restarted with a clean one. Pass2 built, and I saved both
directories off to
a different partition for further examination. Then I stepped my way
back through the Stage1 packages, restoring what I had saved immediately
after building them in order. Then I tried rebuilding glibc, and it
failed at the same spot. So whatever is happening in the Pass2 build is
reproducible. That's where I am now. But now I have those build
directories available to check.
> In my reference build I have:
> -rw-r--r-- 1 root root 26098 Apr 15 18:09 /mnt/lfs/tools/lib/libssp.a
> -rwxr-xr-x 1 root root 925 Apr 15 16:37 /mnt/lfs/tools/lib/libssp.la
> lrwxrwxrwx 1 root root    15 Apr 15 16:37 /mnt/lfs/tools/lib/libssp.so -> libssp.so.0.0.0
> lrwxrwxrwx 1 root root    15 Apr 15 16:37 /mnt/lfs/tools/lib/libssp.so.0 -> libssp.so.0.0.0
> -rwxr-xr-x 1 root root 12721 Apr 15 18:09 /mnt/lfs/tools/lib/libssp.so.0.0.0
OK, I've got those too. So that's OK.
> The contents of libssp.la are:
I'll check that. I'm on my twin "every-day driver" box at the moment.
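For anyone following along: a .la file is just a plain-text libtool
descriptor, so comparing a working and a failing build comes down to
diffing a few fields. A rough sketch (the sample contents below are
illustrative, modeled on a typical gcc-built libssp.la, NOT taken from
my system):

```shell
# Sample libssp.la contents -- illustrative only, not from a real build.
cat > /tmp/libssp.la.sample <<'EOF'
dlname='libssp.so.0'
library_names='libssp.so.0.0.0 libssp.so.0 libssp.so'
old_library='libssp.a'
libdir='/tools/lib'
EOF

# The fields worth comparing between a reference build and a failing
# one: dlname (runtime soname), library_names, and libdir (the install
# prefix libtool will hard-code into later links).
grep -E "^(dlname|library_names|libdir)=" /tmp/libssp.la.sample
```

On a real build you'd point grep at /mnt/lfs/tools/lib/libssp.la
instead of the sample file.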
> I'm sure you know that jhalfs provides an alternative to your script.
> I understand the concern for a clean migration path and NIC/network.
Yes, it does, and I could go from my current 6.1 to 6.3 by installing
the LiveCD. Except that also brings along X & xfce, et al. The way I
go, cloning with the setup, restore, finish steps, I get a system that's
the equivalent of going through the book, replacing the compiling with
the VERY much faster restoring the compiled code from a tarball made
immediately after compiling. My clone script lets me stop there and
have a book-version, Spartan Linux before adding my choices from BLFS.
And even when I add those, my finish steps ask for the necessary
information at the points where something must be customized, e.g.
fstab. Unlike just tarballing my finished build, as jhalfs would
produce, the installer doesn't have to find every such file and change
it. IMO, it's simpler, and cloning new systems my way is more like
doing a book install.
> Just thinking out loud ... if jhalfs can build it using the host
> system, then the scripts must have a fault ... but if the scripts can
> build it using the livecd, then the host system must have a fault. Is
> it the host kernel config, the host glibc, the host gcc, the scripts,
> or what?
Yes, and I may well be doing that a little ways down the road. (And
that's a different path than trying the LiveCD--which, at best, bypasses
the problem entirely with no clues as to what's wrong with a LFS-6.1
environment.) But even then, it doesn't identify exactly what/where the
flaw is. Right now I've got the directories for examination. The
traces should be in there!
> > $ nm -a /mnt/lfs/tools/lib/libssp.so.0.0.0|grep stack
> nm -a /usr/lib/libssp.so.0.0.0 | grep stack shows similar output on
> backups with linux-126.96.36.199 as well as 2.6.33, e.g.:
> 00000c10 T __stack_chk_fail
> 00000c50 t __stack_chk_fail_local
> 000025a8 B __stack_chk_guard
I can check that too. But pardon my question: that's not the issue,
is it? That's just why Pete Jordan's (x2164) workaround works. We
EXPECT libssp to have __stack_chk_guard. The problem seems to be that
the glibc compile is trying to pull it in before its time, isn't it?
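To make that distinction concrete: what matters in the nm output is not
whether "stack" symbols appear, but whether they are defined (T/B) or
undefined (U). A sketch using sample output (modeled on the listing
quoted above, not taken from my build):

```shell
# Sample nm output for libssp -- illustrative, modeled on the thread.
cat > /tmp/nm-libssp.txt <<'EOF'
00000c10 T __stack_chk_fail
00000c50 t __stack_chk_fail_local
000025a8 B __stack_chk_guard
EOF

# T/t = defined in the text section, B = defined in BSS.  All three
# symbols being defined means libssp provides them; a 'U' entry here
# would mean libssp itself expected the symbol from somewhere else
# (e.g. from glibc).
grep stack /tmp/nm-libssp.txt
```

On a real build the equivalent is
nm -a /mnt/lfs/tools/lib/libssp.so.0.0.0 | grep stack.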
> The configuration item CONFIG_CC_STACKPROTECTOR:
> * prompt: Enable -fstack-protector buffer overflow detection
> * type: tristate
> * depends on: CONFIG_X86_64 && CONFIG_EXPERIMENTAL &&
> * defined in arch/x86/Kconfig
> * found in Linux kernels: from the 2.6.19 release, still available in
> the 2.6.34 release
AHA! So maybe the Host System Requirements saying 2.6.18 is wrong?
Patching up a level should be fairly straightforward, but I don't want
to take a scatter-gun approach. Has a reference 6.6 ever been built?
> # CONFIG_CC_STACKPROTECTOR is not set
> is not seen in any /boot/config-x.y.z (kernel config) files I have
> saved until gcc became >= 4.2
Nor in mine! So it seems the HSR needs GCC >= 4.2 and kernel >= 2.6.19?
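Checking saved configs for this is mechanical; a sketch (the sample
config fragment below is illustrative only):

```shell
# Sample kernel config fragment -- illustrative, not from a real kernel.
cat > /tmp/config-sample <<'EOF'
CONFIG_X86_64=y
# CONFIG_CC_STACKPROTECTOR is not set
EOF

# Even the "is not set" comment proves the option existed when the
# kernel was configured (i.e. kernel >= 2.6.19 built with a gcc that
# offers -fstack-protector); its complete absence suggests an older
# kernel or an older gcc.
grep CC_STACKPROTECTOR /tmp/config-sample || echo "option absent"
```

Against real saved configs, that would be
grep CC_STACKPROTECTOR /boot/config-* instead.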
> I guess this is something different from libssp. Can you not go across
> the gcc 4.2 boundary to or from a 2.6.18 kernel?
I suppose that's POSSIBLE, but I'm not sure I could manage it. Running
two versions of the compiler, and keeping them straight, is a bit more
than I feel comfortable with, honestly.
> What if you built a Linux-188.8.131.52 kernel on the host and try that?
I'd fear that too large a kernel upgrade would require other
configuration changes in my host system. A patch from what was 17, now
18, to 19 would probably reduce those chances.
So it's beginning to look like my LFS-6.1 system, with gcc-3.4.3 and
linux-184.108.40.206 originally, patched to 220.127.116.11, is just too
far a chasm away? The 6.6 book's HSR needs patching? I need an interim
step, say my own build of 6.3? Is that where we are? If that's
confirmed, I'll abandon this 6.6 build until I've done that--but I'd
like all your best advice first, since it's twice as much work!
paulgrogers at fastmail.fm
Rogers' Second Law: "Everything you do communicates."
(I do not personally endorse any additions after this line. TANSTAAFL :-)