[lfs-support] What Is "The" LFS Partition?

Bruce Dubbs bruce.dubbs at gmail.com
Tue Nov 6 11:28:54 PST 2012

Feuerbacher, Alan wrote:

>> For historical reasons, the MBR scheme only allowed four
>> partitions. And (using e.g. fdisk) you can create exactly four
>> PRIMARY partitions. So, if you need four partitions or less, that's
>> fine. But if you need more, one of the four partitions (usually the
>> last) can be an EXTENDED partition. This extended partition
>> (usually) covers the rest of the disk, and in the beginning you
>> will have an extended partition table.
>> From other recent reading I now understand that this whole scheme
>> is a kludge, and that's why it's not straightforward to understand.
>> I always wondered why there was a limit of four primary partitions,
>> and why there was even the notion of primary partitions, as opposed
>> to extended partitions, at all. The extended type is a kludge.

When hard disks were first introduced, they were quite small.  I used 
one that was 14 inches in diameter, with 5 platters, and held 3MB.

The original DR-DOS, which preceded MS-DOS, set up the first 512 byte 
sector as the boot sector.  It allowed four partitions.  The scheme has 
been modified over the years with backward compatibility in mind.  Now 
it just can't be modified any further for >2TB drives while still 
keeping backward compatibility.  It's time to throw away the buggy whip.

>> When you format a block device as a filesystem (e.g. with mke2fs),
>> the first few bytes of the (block device seen as an) array of bytes
>> gets initialized with some "magic" values.
>> When the "mount" command is used on a block device (e.g. "mount
>> /dev/sda7 /mnt"), it looks at the first few bytes for those
>> "magic" values, to figure out which type of filesystem is there. It
>> then instructs the kernel to interpret the block device as a
>> filesystem of a certain kind.

Actually that's the first physical 512B sector.  The MBR is that whole 
sector; the partition table starts 446 bytes into it, and the last two 
bytes hold the boot signature 0xAA55.
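That signature is easy to inspect.  Here's a minimal sketch using a scratch image file rather than a real disk (on real hardware you'd read /dev/sda instead, which needs root):

```shell
# Build a fake 1 MiB "disk" so no real device or root access is needed.
dd if=/dev/zero of=mbr-demo.img bs=1M count=1 2>/dev/null
# An MBR ends with bytes 0x55 0xAA at offsets 510 and 511
# (the 16-bit value 0xAA55 stored little-endian).
printf '\x55\xaa' | dd of=mbr-demo.img bs=1 seek=510 conv=notrunc 2>/dev/null
# Dump the last two bytes of the first 512-byte sector.
dd if=mbr-demo.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1   # 55 aa
```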

> So "mount" essentially associates a physical location in a particular
> partition (say, the first few magic bytes of /dev/sda7) with a
> directory name ("/mnt"), no?

A bit more than that.  It has to set up data structures and do some 
other initialization.

> Why does one have to create a directory with that name before
> executing the "mount" command?

The system has to know where to attach the data structures in the file 
tree.  You could create a script to do a 'mkdir -p <mountpoint>; 
mount...', but that's overkill.
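The two steps can be sketched as follows; the device name /dev/sda7 is just an example, and the mount/umount lines are commented out because they need root and a real partition:

```shell
# The mountpoint is an ordinary directory; it must exist before mounting.
mkdir -p /tmp/mnt-demo
# With root and a real partition you would then attach and detach it:
# mount /dev/sda7 /tmp/mnt-demo
# umount /tmp/mnt-demo
ls -ld /tmp/mnt-demo
```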

>> Suggestion: Do not play with LVM until you understand the basics!
> But LVM is touted as being easier to deal with than the older system.
> Is that not the case?

No it's not.  At least in the beginning.  It's rarely needed outside a 
large installation where disks are being added and removed all the time.

Large distros want to do things 'one way', so they use it by default 
even though it only provides a benefit to relatively few users.

> For example, last night I managed to make GPT partitions and an LVM
> filesystem on my new hard disk (earlier today I emailed this list
> with a blow-by-blow account of doing this), based only on material
> from Net searches. Now, if I can do that, I would think that the
> process is substantially easier than the old methods.

I'd just use GPT and then use mkfs on the partitions created.  You only 
need a BIOS boot (bios_grub) partition on the drive you will boot from.

>> Although using several partitions for one system (e.g. /, /boot,
>> /usr, /var, /opt) has its merits - in particular if you have a
>> power outage and the filesystems are broken -
> Can you elaborate?

Yes, please.  I don't understand that statement either.

>> I now find it much easier to just have the whole system on one
>> partition.
> Easier mainly because you don't have to create a lot more
> partitions?

Easier, but less flexible.  Being able to mount things like /home or 
/boot separately allows much better sharing between different 
builds/distros.
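One way that sharing works in practice: give /home its own partition and point each installed system's /etc/fstab at it.  The device names and filesystem type below are hypothetical:

```
# /etc/fstab fragment (same lines in each installed system)
/dev/sda5   /home   ext4   defaults   0 2
/dev/sda2   /boot   ext4   defaults   0 2
```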

   -- Bruce
