[lfs-support] LFS-7.0 with LVM

Baho Utot baho-utot at columbus.rr.com
Sun Jan 29 17:41:02 PST 2012



On Sunday 29 January 2012 08:08:58 pm Bruce Dubbs wrote:
> Baho Utot wrote:
> > For me it is: ever try to manage 16 regular partitions?
>
> How about two regular partitions: / and /boot, and lvm for everything else.
>
> And yes, I do manage 16 regular partitions:
>
> $ sudo fdisk -l /dev/sda
>
> Disk /dev/sda: 320.1 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000080
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *          63      208844      104391   83  Linux
> /dev/sda2          208845    19743884     9767520   83  Linux
> /dev/sda3        19743885    25607609     2931862+  82  Linux swap / Solaris
> /dev/sda4        25607671   439398399   206895364+   5  Extended
> /dev/sda5        25607673    46588499    10490413+  83  Linux
> /dev/sda6        46588563    67569389    10490413+  83  Linux
> /dev/sda7        67569453    87104429     9767488+  83  Linux
> /dev/sda8        87104493   108085319    10490413+  83  Linux
> /dev/sda9       108085383   191992814    41953716   83  Linux
> /dev/sda10      191992878   233954594    20980858+  83  Linux
> /dev/sda11      233954658   254935484    10490413+  83  Linux
> /dev/sda12      254935548   317862089    31463271   83  Linux
> /dev/sda13      317862153   338842979    10490413+  83  Linux
> /dev/sda14      338843043   359823869    10490413+  83  Linux
> /dev/sda15      359823933   380804759    10490413+  83  Linux
> /dev/sda16      380805120   419866623    19530752   83  Linux
> /dev/sda17      419868672   439398399     9764864   83  Linux
>
> Well, make that 15 regular partitions, 1 extended, and one swap.
>
> My sdb is 750G.  I can see 5-10 10G partitions (for different lfs
> builds) and one LVM partition for everything else.  Right now I have:
>
> /dev/sdb1            2048    20973567    10485760   fd  Linux raid autodetect
> /dev/sdb2        20973568    41945087    10485760   83  Linux
> /dev/sdb3        41945088    62916607    10485760   83  Linux
> /dev/sdb4        62916608  1465149167   701116280    5  Extended
> /dev/sdb5        62918656  1465149167   701115256   8e  Linux LVM
>
> So I can experiment with standard jfs, xfs, and reiser filesystems and
> boot from them without an initramfs.
>
>    -- Bruce

The above is just what I am talking about ;)

What I do is create a new LVM volume for the system under test...

Then bend, break, and mutilate as necessary.  After I am done and it is no 
longer needed, I just remove it from the grub menu and from LVM, and I am done;
everything is clean.  What if, from your list above, you needed to kill 
say /dev/sda9 and add that space to say sda12 and sda5?  What happens to all 
the partitions after it, and what happens when you need to fix the grub 
menu?  Aren't all the logical partitions renumbered after fdisk deletes 9?
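
By contrast, tearing down an LVM test system touches nothing else.  A sketch 
of the whole cycle, assuming a volume group named vg0 and a throwaway volume 
named lfs-test (both names are hypothetical, not from this thread):

lvcreate -L 10G -n lfs-test vg0   # carve a 10G logical volume out of the pool
mkfs.ext4 /dev/vg0/lfs-test       # put a filesystem on it
# ... build, test, bend, break, mutilate ...
lvremove /dev/vg0/lfs-test        # hand the space back to the pool

Nothing gets renumbered and no other partition moves.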

With LVM all that is needed is to extend the logical volume and run resize2fs, 
and I am good.  That lets me have a volume start out small and expand or shrink 
it as needed.  When you remove an LVM volume the space goes back to the "LVM 
pool" (the volume group) and can then be reassigned and reused.

Also, I don't have to remember which partition goes with what: was it sda12 or 
sda6 that had Arch Linux, or was it CentOS, and if CentOS, which one, since I 
have 4 of them?  With LVM I have lvm-centos-router, lvm-centos-email, 
lvm-centos-http, and lvm-centos-junk.  Now I know which is which and don't 
need any notes on what is what ;)
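
A sketch of that naming scheme, assuming a volume group named vg0:

lvcreate -L 10G -n centos-router vg0
lvcreate -L 10G -n centos-email vg0
lvs vg0   # lists every logical volume in the group by name

The layout documents itself.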

Once you get used to using LVM you will find that it simplifies disk 
management.

You don't even need any partitions for LVM if you don't want them (for data 
volumes).  For example, you could have both sda and sdb in one volume group, 
and then you don't need to keep track of which disk is which.
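
A minimal sketch with whole disks as physical volumes (the device and group 
names are assumptions):

pvcreate /dev/sda /dev/sdb       # label both whole disks for LVM
vgcreate vg0 /dev/sda /dev/sdb   # pool them into a single volume group

From then on you allocate volumes from vg0 and never care which disk holds 
what.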

If, for example, you need more storage, you can add a disk to the volume group 
and you're good.  You can also add a new disk to the volume group, move all 
the stuff that's on sdb over to the new one, and then remove the sdb disk from 
the volume group without much effort.  Install the hardware, then add the disk 
to the volume group and drain the old one as sketched below.
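
Assuming the new disk shows up as /dev/sdc and the volume group is named vg0 
(both names are hypothetical):

pvcreate /dev/sdc       # label the new disk as an LVM physical volume
vgextend vg0 /dev/sdc   # add it to the volume group

Then move the data and drop the old disk: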

pvmove /dev/sdb1         # migrate all allocated extents off the old disk
vgreduce vg0 /dev/sdb1   # remove it from the volume group (vgreduce --all vg0 drops every empty PV)
pvremove /dev/sdb1       # wipe the LVM label from the old disk

Done.  Remove the drive from the system.

LVM also allows snapshots of your running system.
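
For example, a snapshot of a running root volume looks like this (a sketch; 
vg0/lv-root is an assumed volume name, and the volume group needs free space 
for the copy-on-write data):

lvcreate -s -L 2G -n root-snap /dev/vg0/lv-root   # 2G of COW space for the snapshot

Mount /dev/vg0/root-snap read-only to take a consistent backup, then lvremove 
it when you are done.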



