/tmp/*

Bill Maltby - LFS Related lfsbill at wlmcs.com
Sat Oct 26 09:25:42 PDT 2002


On Sat, 26 Oct 2002, Ian Molton wrote:
>On Sat, 26 Oct 2002 15:36:11 +0000 (UTC)
>lfsbill at wlmcs.com (Bill Maltby - LFS Related) wrote:
>
>> 
>> Ditto. I was addressing scenarios wherein (multiple?) users of the
>> server will be running a mix of things and the use of /tmp is less
>> predictable but generally more active. 
>
>Hm. I still don't see any real benefit.
>
>tmpfs will flush to swap if low on space.
>
>not using tmpfs, your file will be read from the disk cache, and backed
>on disc.
>
>I'm not seeing huge benefits here, unless your users are creating
>BILLIONS of tiny temp files very fast...

Benefits may be marginal, depending on config/activity. But if /tmp is a
real HD-based file system and we assume that the files will persist long
enough to require a sync to the disk, we incur the overhead of updating
the FS metadata (i-nodes, directory entries, etc.) on the HD, which we
would not incur with tmpfs. Instead, we only have the tmpfs overhead,
which should stay entirely in memory. So there is benefit there. But on a
typical single-user workstation, who would notice? On multi-user servers,
it may be more noticeable.
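
To put a number on that "who would notice": below is a rough, throwaway
sketch (the file count and names are my own choices, nothing canonical)
that times a burst of small create/write/unlink cycles in whatever
directory you point it at. Run it once against a directory on a real
HD-based FS and once against a tmpfs mount and compare.

/*
 * tmpbench.c - rough sketch, not tuned for any particular kernel.
 * Times NFILES create/write/close/unlink cycles of small files in the
 * directory named on the command line.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define NFILES 10000

int main(int argc, char *argv[])
{
    char path[4096];
    char buf[512];
    struct timeval t0, t1;
    int i, fd;
    double elapsed;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    memset(buf, 'x', sizeof(buf));
    gettimeofday(&t0, NULL);

    for (i = 0; i < NFILES; i++) {
        snprintf(path, sizeof(path), "%s/bench.%d", argv[1], i);
        fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, buf, sizeof(buf)) < 0)
            perror("write");
        close(fd);
        unlink(path);           /* typical short-lived /tmp usage */
    }

    gettimeofday(&t1, NULL);
    elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d create/write/unlink cycles in %.3f seconds\n",
           NFILES, elapsed);
    return 0;
}

Something like "./tmpbench /tmp" versus "./tmpbench /some/disk/dir"
would show whether the difference matters on a given box.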

Secondly, getting full and having to flush to swap was one of the
considerations that caused me to mention "thrashing". If the usage is so
high (volume-wise) that substantial swapping would occur, it makes no
sense to use tmpfs (not because of the speed of swap, though). But even
if swap is hit, its overhead to the system is less than that of a file
system. I'm not talking about *latency* here, just overhead. Since swap
is either a 1-to-1 mapping of memory (real/virtual?) or a hashed mapping
(depending on *IX flavor), there is potentially less CPU, memory and HD
activity when writes/accesses are needed: no i-nodes to process and
maintain, and no trees to walk (well, simpler ones, depending on the
hash algorithm).

Additionally, recall that the VM is quite generous about aging out
buffers and cache. So, if /tmp is tmpfs, there is a greater likelihood
that some of those inactive buffers and caches will be reused quickly
for tmpfs files. For "non-dirty" buffers, no flush is needed; LRU
(least-recently-used) cache pages can simply be discarded and the memory
reused. Also, IIRC, sync occurs at XX-second intervals, requiring updates
of FS meta-data and data. So if /tmp is on a real HD, that small overhead
is incurred where it would not be on a tmpfs-based setup.
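
To see that flush cost in isolation, another rough sketch (again my own
throwaway code, nothing official): write 64KB and time a single fsync().
On tmpfs the "flush" never leaves memory; on a disk-backed /tmp the same
call has to push the data and metadata out to the drive.

/*
 * fsynctime.c - rough sketch.  Writes 64KB to the named file and times
 * one fsync() call, then removes the file.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    static char buf[64 * 1024];
    struct timeval t0, t1;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-to-create>\n", argv[0]);
        return 1;
    }

    memset(buf, 'x', sizeof(buf));
    fd = open(argv[1], O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, buf, sizeof(buf)) < 0)
        perror("write");

    gettimeofday(&t0, NULL);
    fsync(fd);                  /* force data and metadata out */
    gettimeofday(&t1, NULL);

    printf("fsync took %.6f seconds\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);

    close(fd);
    unlink(argv[1]);
    return 0;
}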

Every way I look at it, if /tmp is not a real HD-based FS component
*and* the system is not *excessively* loaded, I see the *potential* for
lower overhead. This *may* translate to reduced latency and better
response. As I said before, activity and config *heavily* influence
these issues and the results may not be noticeable. But I do have
confidence that the VM manager can make fairly effective decisions, and
I will notice if I overload things.

In summary: given the right conditions (and many of them will occur in
real life), you are right that no improvement may be seen, though it
*might* still be measurable in terms of system load. Under other
conditions (and there are lots of those too), the difference will be
observable as well as measurable in terms of system load.

Speaking from a background of multi-user *IX systems, I can only say
that tuning the config properly for the use and load can yield
substantial benefits. Tmpfs is one of those "tunables". On more
traditional *IX systems, buffer and cache high/low-water marks were used
to accomplish a similar effect.
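
And since tmpfs is just a mount-time choice, a quick way to see how /tmp
is actually set up on a given box is to scan /proc/mounts. A small
sketch (my own helper, nothing official) using getmntent():

/*
 * whatistmp.c - rough sketch.  Reports the device, FS type and mount
 * options currently in effect for /tmp.  Prints nothing if /tmp is
 * just a directory on the root FS rather than a separate mount.
 */
#include <stdio.h>
#include <string.h>
#include <mntent.h>

int main(void)
{
    FILE *mounts;
    struct mntent *ent;

    mounts = setmntent("/proc/mounts", "r");
    if (mounts == NULL) {
        perror("setmntent");
        return 1;
    }

    while ((ent = getmntent(mounts)) != NULL) {
        if (strcmp(ent->mnt_dir, "/tmp") == 0)
            printf("/tmp is %s (%s), options: %s\n",
                   ent->mnt_fsname, ent->mnt_type, ent->mnt_opts);
    }

    endmntent(mounts);
    return 0;
}

If it is not tmpfs and you want it to be, the usual route is an fstab
entry mounting tmpfs on /tmp, with a size= option if you want to cap how
much memory/swap it can claim.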

-- 
Bill Maltby
billm at wlmcs.com
