RE: [SpectrumLabUsers] Question about sparse option
Feb 15, 2013

Wolf,
Sparse files are a form of compression: runs of zero bytes are replaced by a short description stating how many zeros there are, which cuts down the amount of I/O. The problem is that this becomes inefficient if the zeros occur in small, scattered clusters.
You can read more here:
and the implementation at code level here:
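The zero-run idea can be sketched in a few lines. This is a toy illustration of the principle, not the NTFS implementation; the block size and sample data are invented for the demo:

```python
def zero_block_runs(data: bytes, block: int = 4):
    """Return (start, length) runs of all-zero blocks, in bytes.

    A sparse-aware writer would store only these (offset, length)
    descriptions instead of the zero bytes themselves.
    """
    runs, run_start = [], None
    for off in range(0, len(data), block):
        chunk = data[off:off + block]
        if not any(chunk):                 # block is entirely zero
            if run_start is None:
                run_start = off
        elif run_start is not None:        # run of zero blocks just ended
            runs.append((run_start, off - run_start))
            run_start = None
    if run_start is not None:              # file ends inside a zero run
        runs.append((run_start, len(data) - run_start))
    return runs

data = b"\x00" * 8 + b"AB" + b"\x00" * 6
print(zero_block_runs(data, 4))            # -> [(0, 8), (12, 4)]
```

Note that the two zeros trailing "AB" inside the mixed block are not captured at all, which is exactly the "small clusters" inefficiency mentioned above.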
I would not recommend it for a near-real-time system, though it can be useful for long-term storage. Reading a sparse file is transparent, but saving a file as sparse requires specific API calls, and sparse support needs to be enabled on the volume. Unless you are prepared for the overhead of managing the analysis of a file, I would just forget it. The current implementation is dumb, as it would need to be completely transparent; it can break other apps.
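To illustrate the "specific calls" point: on POSIX filesystems a hole can be created simply by seeking past the end of the file before writing, whereas on NTFS the file must first be flagged sparse (via DeviceIoControl with FSCTL_SET_SPARSE). A minimal POSIX sketch, with an invented path and payload:

```python
# Minimal sketch of writing a file with a "hole" on a POSIX filesystem:
# seeking past the end creates the hole without writing any zero bytes.
# (On Windows/NTFS this alone is not enough -- the file must first be
# marked sparse with DeviceIoControl(handle, FSCTL_SET_SPARSE, ...).)
import os
import tempfile

def write_with_hole(path, hole_bytes, payload):
    with open(path, "wb") as f:
        f.seek(hole_bytes)   # skip over the hole; nothing is stored there
        f.write(payload)     # real data begins after the hole

path = os.path.join(tempfile.mkdtemp(), "buffer.dat")
write_with_hole(path, 1024 * 1024, b"FFT")

print(os.path.getsize(path))   # logical size includes the hole: 1048579
with open(path, "rb") as f:
    print(f.read(4))           # reading the hole transparently yields zeros
```

This also shows why reading is transparent: the application sees zeros in the hole, exactly as if they had been written.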
Date: Fri, 15 Feb 2013 21:07:37 +0100
Subject: Re: [SpectrumLabUsers] Question about sparse option
I have no idea how to instruct the file system to treat a file as 'sparse' when creating it.
From the few bits and pieces I read on Wikipedia about sparse files, I also don't think it's really appropriate here: the big advantage of sparse files comes when a large portion of them contains zeroes, which are then not actually written to disk. This is definitely not the case when storing FFTs (with floating-point numbers) in Spectrum Lab's buffer file. Sooner or later there will be no such gaps at all, and sparse files (as I understand them to work) will actually cause *MORE* fragmentation than if the file was allocated as a single large physical block.
But again, I am not an expert in the NTFS file system so maybe someone around here has more details.
All the best,
On 15.02.2013 11:03, muzieklampje wrote:
Dear Wolf. Would it be possible to add a 'save as sparse' option (only for the NTFS file system) for the buffer file, so the hard disk gets less fragmented?
With kind regards, Marc, The Netherlands.