Thank you all for the input on this. I didn't realize that the fragmentation level can grow even further when the buffer file is full in 'sparse' mode. In eMule this is a feature, but there, once a file is completed, it stays completed. I noticed that the HDD of the measurement PC here has reached a fragmentation level of 40% over time; I guess the only solution at this point is to do more regular maintenance (defragmentation).
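For anyone curious what a sparse file actually looks like on disk, here is a minimal sketch (my own illustration, not Spectrum Lab code). It uses the POSIX behaviour where seeking past the end of a file and then writing creates a "hole" that consumes no disk blocks; on Windows/NTFS a file additionally needs the sparse attribute set (via DeviceIoControl with FSCTL_SET_SPARSE) before holes are deallocated, which is exactly the step that is hard to do from a normal application:

```python
import os
import tempfile

# Create a file with a 10 MB "hole": seek past the end, then write one byte.
# On filesystems that support sparse files, the skipped region is all zeroes
# logically but occupies (almost) no disk blocks.
path = os.path.join(tempfile.mkdtemp(), "sparse_demo.bin")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024)   # skip 10 MB without writing zeroes
    f.write(b"\x01")           # one real byte at the very end

st = os.stat(path)
print("apparent size:", st.st_size)          # 10 MB + 1 byte
print("disk usage   :", st.st_blocks * 512)  # much smaller, if holes are supported
```

This also illustrates Wolf's point: as soon as the holes are later filled with real FFT data, the filesystem has to allocate those blocks wherever it finds room, which tends to scatter the file across the disk rather than keep it in one contiguous run.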
Kind regards, Marc
--- In SpectrumLabUsers@yahoogroups.com, wolf_dl4yhf <dl4yhf@...> wrote:
> Hello Marc,
> I have no idea how to instruct the file system to treat a file as
> 'sparse' when creating it.
> From the few bits and pieces I read in wikipedia about sparse files, I
> also don't think it's really appropriate here: the big advantage of
> sparse files shows when a large portion of them contains zeroes, which
> are then not actually written to disk. That is definitely not the case
> when storing FFTs (as floating-point numbers) in Spectrum Lab's buffer file.
> Sooner or later there will be no such gaps at all, and 'sparse' files
> (as I understand them to work) will actually cause *MORE* fragmentation
> than if the file had been physically allocated as a single large block.
> But again, I am not an expert in the NTFS file system so maybe someone
> around here has more details.
> All the best,
> Wolf .
> Am 15.02.2013 11:03, schrieb muzieklampje:
> > Dear Wolf. Would it be possible to add a 'save as sparse' option (only
> > for the NTFS file system) for the buffer file, so the hard disk gets
> > less fragmented?
> > With kind regards, Marc, The Netherlands.