Aug 12, 2011
 

Sometimes people that I’ve brought over to the “Linux side” of the operating systems ask me, “Which antivirus should I use?” or “Should I defrag my disk?”
This is one of the most complete answers I’ve found to the second question; source: the Ubuntu Forums.

I’d actually strongly suggest not defragging … the reason behind this? Even on Windows, most defraggers have two options: 1. Defragment, 2. Compact … sometimes called something different, but the end result’s the same.

The “defragment” is supposed to make all files into contiguous blocks. The “compact” is supposed to defragment and make the free space into a contiguous block. Now while this sounds nice, the reality is quite different:



Fragmentation does cause a performance loss, as each file access can turn into hundreds of HDD reads/writes. The file system makes all the difference (a quick way to check how many fragments a file actually has is sketched right after this list):

  • FAT-based file systems save each file directly after the previous one. So if you later edit or append to a file, the added portion has to be saved somewhere else, creating a fragment (or more than one). A new file is saved starting from the first blank spot, even if that blank spot is too small to contain the entire file.
  • NTFS is a bit better in theory: it leaves some free space around each file. Then, if it notices that a file is about to become fragmented, it “attempts” to save the entire file in a new location. The caveat is “efficiency”: if it would not take too much time to save the entire file over again, it does so; otherwise it just creates a new fragment. Just how it determines “too much time” is up for grabs … and, like most M$ ideas, also a secret!
  • Ext3/4 also leaves blank space “around” each file. Where it differs significantly from NTFS is that it will only fragment a file when it’s impossible to keep it as one single fragment … which (usually) only happens when the disk becomes too full.
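To see how much of this actually affects a given file, you can count its extents. Below is a minimal Python sketch that shells out to the filefrag utility from e2fsprogs (it has to be installed, and some filesystems may need root to answer); the default path is only an example.

```python
#!/usr/bin/env python3
"""Report how many extents (fragments) a file occupies, using the
filefrag tool from e2fsprogs. One extent means the file is contiguous."""

import subprocess
import sys


def count_extents(path: str) -> int:
    """Run `filefrag PATH` and parse the reported extent count."""
    out = subprocess.run(
        ["filefrag", path], capture_output=True, text=True, check=True
    ).stdout
    # filefrag prints a line like: "/path/to/file: 3 extents found"
    last_line = out.strip().splitlines()[-1]
    return int(last_line.rsplit(":", 1)[1].split()[0])


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/var/log/syslog"
    fragments = count_extents(target)
    print(f"{target} is stored in {fragments} extent(s)")
```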

Why defrag utilities don’t work:

Defragment:
All this really does is what ext3/4 already does … but after the fact. It checks all the files on the HDD, then for each one with more than one fragment it “tries” to move it to an empty space that is large enough. After a while you end up with lots and lots of small bits and pieces of empty space scattered throughout the HDD. With Linux’s ext3/4 those gaps are actually designed to prevent fragmentation: they’re sized by a ratio, and each time a file grows beyond its space it’s moved somewhere else and given a larger “buffer zone”. With a Windows defrag the gaps happen by accident, and their size can be anything from one allocation unit (usually 4096 bytes) to many gigs. There’s no real “plan” behind them, and it doesn’t matter how many times a file has grown or what its size is. It’s even possible to end up with absolutely no space between files, so you’re back in a FAT-like scenario.
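Applications can ask the filesystem for that kind of buffer zone themselves. The snippet below is a small illustration of the idea (not something the article prescribes) using Python’s os.posix_fallocate() on Linux; the file name and the 64 MiB figure are made-up examples.

```python
#!/usr/bin/env python3
"""Reserve space up front for a file that is expected to grow, so the
filesystem can hand it one contiguous run of blocks instead of adding
fragments later. On ext4 this usually becomes one preallocated extent."""

import os

GROWING_FILE = "applog.dat"          # hypothetical file that keeps growing
EXPECTED_SIZE = 64 * 1024 * 1024     # reserve 64 MiB of headroom

fd = os.open(GROWING_FILE, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    # Allocate EXPECTED_SIZE bytes now; appends up to that size reuse
    # the reserved blocks instead of creating new fragments elsewhere.
    os.posix_fallocate(fd, 0, EXPECTED_SIZE)
finally:
    os.close(fd)
```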

Compact:
This is the worst thing you could do to your HDD. Apart from the fact that you’re shortening its mechanical life by overworking it, compacting will cause fragmentation in the future, because it literally rearranges files to match the layout found on FAT systems. So the very first edit after a compact will create extra fragments (at least one). Some defraggers (like Norton Speed Disk) allow for a “smart” rearrangement of files: all they basically do is move files which change a lot (like documents) to the end of the disk, and non-changing files (like programs) to the start. They still remove the buffer zones between files. The only time I’d even consider something like this is when the HDD will only be used for read access, and even then there are sometimes better arrangements performance-wise.

All that said … absolutely nothing is ever perfect. Even ext4 systems do become fragmented (but very seldom enough for performance to suffer). The main culprits for fragmentation on ext3/4 are database systems: they perform many reads and writes concurrently, so rearranging them to remove fragmentation could actually hurt performance. In such cases it’s usually a good idea to “now and again” copy the fragmented file off to an external HDD, delete the original and copy it back. This method lets the ext file system “decide for itself” where best to place the previously fragmented file. And then, of course, don’t overfill your HDD; a good limit is around 50%. But absolutely never fill it above 80–90%, that’s just asking for trouble (no matter what OS / file system you have)!
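The copy-off-and-back trick is easy to script. The following is only a rough sketch of it: the paths are made-up examples, /mnt/external is assumed to live on a different disk, and nothing should be writing to the file while it runs.

```python
#!/usr/bin/env python3
"""Rough automation of the "copy it off, delete it, copy it back" trick:
rewriting the file lets ext3/4 pick a fresh (usually contiguous) spot."""

import os
import shutil


def rewrite_via_external(path: str, staging_dir: str) -> None:
    """Copy `path` to `staging_dir` (on another disk), remove the
    original, then copy it back so the filesystem reallocates it."""
    staged = os.path.join(staging_dir, os.path.basename(path))
    shutil.copy2(path, staged)                    # 1. copy off
    if os.path.getsize(staged) != os.path.getsize(path):
        raise RuntimeError("staged copy is incomplete, aborting")
    os.remove(path)                               # 2. delete the original
    shutil.copy2(staged, path)                    # 3. copy it back
    os.remove(staged)                             # 4. drop the staging copy


if __name__ == "__main__":
    # Example paths only; the staging directory must be on another drive.
    rewrite_via_external("/srv/db/big_table.db", "/mnt/external")
```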

Some DBMSs have ways of getting around this. E.g. InterBase (or Firebird) creates one file for its database with exactly a specified size; if the data grows beyond this file size, a second new file is created instead of just growing the first. This ensures that each file is only one fragment – but obviously the files may contain quite a lot of wasted space. This is not the default, but for serious DBs it’s recommended.



  7 Responses to “Does Linux Need Defrag?”

  1. While delayed allocation, extents and multiblock allocation help to reduce fragmentation, filesystems can still fragment with use.

    http://kernelnewbies.org/Ext4#head-38e6ac2b5f58f10989d72386e6f9cc2ef7217fb0

  2. http://vleu.net/shake/ - Ext2 and Ext3’s best friend at times.

    “Bad advice”, the link you have is not 100 percent the full story. It is true, but ext2 and ext3 will look for a chance to undo that fragmentation: moving or copying the file will normally trigger a fix. This is how shake achieves it. There is no need to talk directly to the filesystem, just perform the operations that allow ext2 and ext3 to fix it.

    Ext4, with online defrag, will in fact wait for low load and try to fix up fragmentation issues without a tool like shake being needed. Btrfs will include this feature as well.

    Of course, on a system that is always under heavy load you might still have to run shake every now and again to fix things up.

    Note that the way shake works, you don’t need to stop anything to run it, since it only performs normal disk operations. I have a clone of shake for Windows, for the cases where the Windows defragmenter says the drive is too badly fragmented to defrag. NTFS will in fact defrag using the same method as ext2/ext3/ext4 with shake.

    FAT? You’re screwed. The filesystem engine contains no natural defence against fragmentation.

    With general usage of ext2 and ext3, normal operations usually trigger the cleanup, so even shake is not really required.

    Yes, “Bad advice” is right, the article has something badly wrong:

    ” In such cases it’s usually a good idea to “now and again” copy the fragmented file off to an external HDD, delete the original & copy it back.”

    The correct answer is to run shake over the file. This will fix it up, with no copying to another drive and back again.

    Also, percentages are a bad rule of thumb. Make sure there is more free space than the size of the files you will normally be writing. A lot of the 1 TB drives I run are near 90 percent full without any fragmentation issues. Yes, 10 percent of a 1 TB drive is about 100 GB; since most files I write to them are less than 1 GB, the drive would have to be quite badly fragmented not to find a free chunk of the right size to take the file.

    Yes, shock horror, Linux does have a defrag tool; it’s just far less often required to be run. A system may take 3 to 8 years of home usage to reach a stage of fragmentation on Linux where shake needs to be run.
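What shake does can be approximated with ordinary file operations, as the comment above describes. Here is a rough Python sketch of that idea, under the assumption that rewriting the file on the same filesystem and renaming it over the original is enough to trigger reallocation; this is not shake’s actual code, and the example path is made up.

```python
#!/usr/bin/env python3
"""Shake-style rewrite: create a fresh copy of the file in the same
directory and atomically swap it into place, giving ext2/3/4 the chance
to allocate the data in a new, hopefully contiguous, location."""

import os
import shutil
import tempfile


def rewrite_in_place(path: str) -> None:
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)  # temp file, same filesystem
    os.close(fd)
    try:
        shutil.copy2(path, tmp_path)   # fresh copy -> fresh allocation
        os.replace(tmp_path, path)     # atomic swap over the original
    except BaseException:
        os.remove(tmp_path)            # clean up if the copy or swap failed
        raise


if __name__ == "__main__":
    rewrite_in_place("/home/user/Videos/big-file.mkv")  # example path only
```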

  3. Recently, e4defrag became available in Debian as part of e2fsprogs 1.42. After running it on my root partition, I didn’t notice any performance improvement. That is on a system with daily package (un)installations and often massive dist-upgrades. I do make sure to always have enough free space on my file systems (but I have disabled root-reserved space on all of them).

    So, at least in my case, I can say that defragmentation is not a necessity. Once the tool becomes available in more distros, people will test it to determine if it has any effect on their systems.
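For anyone who wants to repeat that experiment, e4defrag (part of e2fsprogs 1.42 and later) has a report-only mode that says whether defragmentation is worth it at all. The sketch below merely wraps that call from Python; the default target is just an example, the command usually needs root, and the exact wording of the report can differ between versions.

```python
#!/usr/bin/env python3
"""Ask e4defrag for a fragmentation report without changing anything.
With -c it only prints per-file extent counts and an overall score."""

import subprocess
import sys

target = sys.argv[1] if len(sys.argv) > 1 else "/home"
report = subprocess.run(
    ["e4defrag", "-c", target], capture_output=True, text=True
)
print(report.stdout or report.stderr)
# The summary ends with a fragmentation score and a verdict on whether the
# directory or device needs defragmentation; if it says it does not, there
# is nothing to gain from running e4defrag for real.
```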

  4. I’d read this somewhere else, but didn’t remember why Linux does not need defrag, or rather why ext3/4 does not need defragmentation.

    Thanks for refreshing my mind.

  5. Many drives are used as archival or mainly read-only storage, such as media drives where a person may store their photos, movies, music, and other archived files. Having files contiguous can be important for performance and also, in the event of a filesystem failure, to improve data carving to recover any data that has not been backed up. Contiguous data has many advantages in these cases. And on occasion these drives have their data reorganized, which would then call for a possible defrag and a consolidation of free space. To say that there is no reason to ever defragment a drive is shortsighted.

  6. Thanks for sharing, I was unaware of this fact.
