Question:
Why does Windows store a file as lots of small fragments?
sam
2012-11-20 14:18:12 UTC
Why doesn't it just write files as whole blocks?
I was reading this on auslogics.com, and it says "The answer is pretty simple - because the system used by Windows is very space-efficient and doesn't allow a single bit of hard drive space to be wasted."
Can someone help me understand how this is space-efficient?
Four answers:
ragtops
2012-11-20 14:36:57 UTC

When your disk is new, Windows does write new files into contiguous (adjacent) disk blocks. When you delete a file, those blocks are marked as being available for use.



Let's assume that you have been using your computer for a while, and you have saved and deleted files time after time. Eventually, you will want to save a file for which there will not be enough contiguous blocks on the disk for the whole thing.



Windows now saves that file in multiple pieces, using whole disk blocks which it selects from its list of available blocks. Note that it cannot use partial blocks, so the unneeded space in the last block used by a file is wasted. All of the blocks used for this file are linked so that Windows can find them quickly and in the proper order when you want that file again.
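
To make the "unneeded space in the last block" point concrete, here is a minimal Python sketch of the arithmetic. The 4,096-byte block size is just an assumption for illustration; the real allocation unit depends on the file system and the volume.

# Minimal sketch: rounding a file up to whole blocks wastes the tail of the
# last block ("slack space"). BLOCK_SIZE is an assumed value for illustration.
BLOCK_SIZE = 4096

def blocks_and_slack(file_size_bytes):
    """Return (blocks allocated, bytes wasted in the last block)."""
    blocks = -(-file_size_bytes // BLOCK_SIZE)  # ceiling division
    slack = blocks * BLOCK_SIZE - file_size_bytes
    return blocks, slack

# A 10,000-byte file occupies 3 blocks (12,288 bytes), wasting 2,288 bytes.
print(blocks_and_slack(10_000))  # (3, 2288)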



So Windows is indeed "very space-efficient", but saying that it "doesn't allow a single bit of hard drive space to be wasted" isn't quite true.



In the old days, conventional wisdom said that periodically defragging your disk was a requirement to prevent file fragmentation from slowing down your machine. Defragging reorganizes the data on your disk to group together pieces of fragmented files, increasing disk efficiency. However, today's disks are so fast and so big that many experts say defragging doesn't buy you much in performance and is therefore no longer recommended.

schrey
2016-12-07 10:23:28 UTC
C: is a risky place to store any records; if it crashes, it's all gone. I actually keep all my files on D:. 4 gigs is all the RAM you will use; Windows 7 64-bit only runs 3.8 gigs.
walmeis
2012-11-20 14:39:29 UTC
File systems try to optimize many different aspects at once. However, techniques to store a 50 terabyte database differ from those to manage grandma's recipe notes.



The basic allocation unit on a Windows filesystem is called a cluster. A cluster is a sequence of several disk blocks, which are almost always 512 bytes each. On modern media a cluster is typically 8 blocks, or 4,096 bytes, especially on NTFS. On FAT volumes (thumb drives), it is more likely to be 32,768 bytes.



For ordinary files such as music, the clusters are often allocated consecutively, which is roughly what you mean by "whole blocks" (a phrase that otherwise doesn't mean much). Such a file is contiguous, or unfragmented.



However, if the file has a dodgy history, or if it is added to a volume that is nearly full, it is likely to be fragmented.



In the first case, the file had more data added later. The file system logic allocated reasonable space for creating the file. Only when more was added was it discovered that the following clusters were not available (used by other files) so the file grew non-contiguously. This is not really a big deal for most files. In the second case the free space of a nearly full volume is likely to be scattered throughout the volume, maybe the result of files being deleted to make space. Still, if there is 1 GB free, you can write a 1 GB file and use the space up—mostly you don't care that it is in 3,477 different scattered places on disk. That is "usage efficiency".



However, in NTFS, keeping track of all those scattered locations consumes space in the Master File Table (MFT). If the clusters were contiguous, a single entry saying "262,144 clusters begin at volume cluster number XXXX" takes up just a few bytes, fits easily in the primary MFT record, and it is very easy to compute the location of any cluster. However, if the file is fragmented into 3,477 "data runs", that takes many MFT records to describe (150–200 fit into each MFT record). Determining the location of a cluster is more complicated and requires many extra disk accesses compared to a fully contiguous file.
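
To illustrate the difference, here is a small Python sketch of the run-list idea: a contiguous file needs a single run, a fragmented file needs many, and looking up a cluster means walking the list. The data structures are invented for illustration only and are not NTFS's actual on-disk format.

# Each run says "length clusters starting at volume cluster start_lcn".
# This shows only the concept, not NTFS's real encoding.
def locate_cluster(runs, file_cluster):
    """Map the Nth cluster of a file to its volume cluster number."""
    remaining = file_cluster
    for start_lcn, length in runs:
        if remaining < length:
            return start_lcn + remaining
        remaining -= length
    raise ValueError("cluster index is past the end of the file")

# A contiguous file: a single run describes all 262,144 clusters.
contiguous = [(500_000, 262_144)]
# A fragmented file: many short runs scattered across the volume.
fragmented = [(10_000, 50), (90_123, 200), (400_777, 75)]

print(locate_cluster(contiguous, 100_000))  # 600000, one addition
print(locate_cluster(fragmented, 60))       # 90133, after walking the list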



If fragmentation bothers you, you can defragment a volume using the disk utility, something which can take hours. It works fastest with at least 10% free disk space; more is better.
anonymous
2012-11-20 14:32:35 UTC
Windows will always save files to the first available space on the disk. This means gaps left by deleting files get filled rather than wasted. But of course if a file is too large for the available space, it will not fit; the system then finds the next free area and puts the next section there. For VERY large files this can mean they get spread over a large area of the disk. This is not really a problem, as the file system database knows exactly where to find the individual fragments.

If a file is then edited and grows, and its last fragment is near the end of the disk, the additional data may get stored near the beginning of the disk, and this is where fragmentation can hurt disk read/write speeds. The continuous head movements between the front and the back of the disk can seriously affect speed. That is the point where defragmenting becomes important: it takes all the fragments of each file, finds spaces large enough to hold them together, then stacks one file directly after the other, clearing fragments away from that area as it works.
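
Here is a toy Python sketch of the first-fit behaviour described in this answer: a new file takes the first free gaps on the disk, front to back, and splits into fragments when no single gap is big enough. The free-space map is made up for illustration.

# Toy first-fit allocator over a list of free extents (start block, length).
def first_fit(free_extents, blocks_needed):
    """Return the extents a new file would occupy, taken front to back."""
    fragments = []
    for start, length in free_extents:
        if blocks_needed == 0:
            break
        take = min(length, blocks_needed)
        fragments.append((start, take))
        blocks_needed -= take
    if blocks_needed:
        raise ValueError("not enough free space")
    return fragments

# Gaps left behind by deleted files; a 300-block file ends up in three
# fragments because no single gap is large enough to hold it.
free_gaps = [(100, 120), (500, 80), (2_000, 400)]
print(first_fit(free_gaps, 300))  # [(100, 120), (500, 80), (2000, 100)]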


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.