the move to custom xml

Seth W. Klein sk at
Tue Apr 8 14:56:01 PDT 2003

Alex Groenewoud <krisp at> wrote:
> Some months ago there was talk of converting the book to custom XML 
> to escape the horrors of DocBook.  As I suppose version 4.1 won't 
> be released until April 28th, now seems to be the ideal time to do 
> that conversion.  I guess Seth is all set to begin the attack.

Not really, i'm afraid. The issue for me is automation. To put it
bluntly, i wish (need, perhaps) to do things with my life besides
building LFS. Gerard's goals for the LFS project have explicitly
excluded automation and so i would be foolish to expend more than
a little effort on the LFS Book itself.

What about nALFS, you ask? I have used nALFS and was quite pleased
with it when i "discovered" it. After using it for a while i came to
these conclusions concerning the ALFS method:

1: It invents a new (XML) format for commands when we already have
   (ba)sh. This is not ideal: using an existing tool where possible
   saves effort, and reinventing sh is unlikely to produce anything
   better, since if anything better were possible, the far brighter
   minds than ours who have been using Unix sh for the last 30 years
   or so would have developed it by now.

2: It encourages ALFS processors to be large monolithic applications.
   The 35 years and counting of Unix use suggest that the typical
   Unix design, small specialist tools written in whatever language
   is most suited and communicating via pipes and files, is the
   wiser design unless it proves absolutely unsuited to a particular
   application.

I don't mean to say that XML and XSLT are useless. I have working
code right now. It stores all its data (urls, commands, and
documentation) in XML. I have these two features implemented:

1: New release checking. I've been doing this for months with line
   oriented data storage. I simply ported the data (currently about
   150 packages worth) to the XML storage, replaced a piece of awk
   with a piece of xsl, and adapted the bash master script (which
   uses the filesystem for its database) to the new environment.

2: Source downloading. Again, i ported the data over, wrote a little
   xsl and awk, and now i can download all packages with:

   $ make download

   Some other variations, including the one i actually use, are:

   $ make SOURCES=/var/cache/src sudown
   $ make DOWN_USER=lfs SOURCES=/tmp/src sudown
   $ make down glibc-2.2.2.tar.bz2

   And, of course, when there's a new release, i add the download
   instructions, rerun make, and it pulls down only the new file.
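The download side can be sketched roughly as follows. The file
names, element names, and urls are my guesses at the shape of the
data, not the actual files, and plain awk stands in for the xsl so
the sketch runs without xsltproc:

```shell
# Sketch of the download pipeline: package data lives in XML, a small
# extraction step pulls out the urls, and make hands them to wget.
# Everything here is illustrative, not the real lfs data files.
cat > packages.xml <<'EOF'
<packages>
  <package name="glibc" version="2.2.2">
    <url>ftp://ftp.gnu.org/gnu/glibc/glibc-2.2.2.tar.bz2</url>
  </package>
  <package name="bash" version="2.05a">
    <url>ftp://ftp.gnu.org/gnu/bash/bash-2.05a.tar.gz</url>
  </package>
</packages>
EOF
# The real version uses a piece of xsl; awk stands in for it here.
awk -F'[<>]' '/<url>/ { print $3 }' packages.xml > urls.list
cat urls.list
```

The make download target would then just feed urls.list to wget with
-nc, so files already on disk are skipped and only new releases are
fetched.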

My next step is storing and extracting build commands. I will be able
to do something like:

$ echo SOURCES=`pwd`/src >
$ echo PARTITION=/dev/hd6 >>
$ echo MOUNTPOINT=/mnt/tmp >>
$ make
xsltproc --xinclude --stringparam want stage1 build.xsl index.xml
$ su - -c "sh `pwd`/"
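The extraction step can be sketched like this. The layout of
index.xml and the stage attribute are guesses at the shape of the
data, and awk stands in for build.xsl so the sketch is
self-contained:

```shell
# Sketch of build-command extraction: commands are stored per stage
# in the XML, and one stage's worth is pulled out into a script.
# Element and attribute names here are guesses, not the real schema.
cat > index.xml <<'EOF'
<book>
  <package name="binutils" stage="stage1">
    <commands>./configure --prefix=/tools
make
make install</commands>
  </package>
  <package name="glibc" stage="stage2">
    <commands>./configure --prefix=/usr</commands>
  </package>
</book>
EOF
want=stage1   # corresponds to: xsltproc --stringparam want stage1 ...
awk -v want="$want" '
  /<package/   { split($0, a, "stage=\""); split(a[2], b, "\"")
                 stage = b[1] }
  /<commands>/ { incmd = (stage == want); sub(/.*<commands>/, "") }
  /<\/commands>/ { if (incmd) { sub(/<\/commands>.*/, "")
                                if (length($0) > 0) print }
                   incmd = 0; next }
  incmd { print }
' index.xml > stage1.sh
cat stage1.sh
```

The resulting stage1.sh holds only the stage1 commands, which is
what the su line above would then run.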

Something which would be neat for new users would look like this:

$ make is_sane

which would extract and run a script that attempts to compile and
run a static test program linked against ncurses, although PLFS may
make this less useful.
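To be concrete, is_sane might extract something like the following.
The test program and compiler flags are my guesses at the shape, not
an actual extracted script:

```shell
# Sketch of the script "make is_sane" might extract. It only writes
# the files here; the extracted script would run the compile on the
# target machine. All names and flags are illustrative.
cat > is_sane.c <<'EOF'
#include <curses.h>
int main(void)
{
    /* initscr()/endwin() touch enough of ncurses to prove it links. */
    if (initscr() == NULL)
        return 1;
    endwin();
    return 0;
}
EOF
cat > is_sane.sh <<'EOF'
#!/bin/sh
# Fails loudly if the host toolchain cannot build and run a
# statically linked ncurses program.
cc -static -o is_sane is_sane.c -lncurses || exit 1
./is_sane && echo "host toolchain looks sane"
EOF
cat is_sane.sh
```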

If there is sufficient interest, i will see about releasing this.

My one concern is the check script, which hits the server once for
each package. I could see them getting quite angry with me
if 500 lfs-dev subscribers got home between 17:00 EDT and 18:00 PDT
and ran the thing. It would be even worse if a few thousand freshmeat
or slashdot readers got wind of it.

If someone can tell me that they'll never notice, i'll set my mind
at rest; otherwise i'll have to see about making the data available
another way and having the script print out a giant "don't use me"
instead of running.

Eventually, i would hope to expand the script to run unattended and
output to a mailing list.

Seth W. Klein
sk at               
Maintainer, LFS FAQ   
