Registered Member
|
A few days ago I left KTorrent downloading overnight, but first checked that there was enough space on the disk for the whole file. There was.
In the morning I saw there was no free space left (weird, because there should have been about 300MB free), but the torrent had finished downloading and was now being seeded. A few moments ago I imported this torrent into Azureus, and it shows that it's only 92.6% downloaded. |
Moderator
|
What version are you using? We preallocate all disk space, so the size it takes up should not change. |
Registered Member
|
|
Registered Member
|
How exactly are you preallocating space? I've just found that KTorrent creates a file of the full torrent size, but the free space on the drive reported by, for example, Krusader doesn't get any smaller. I did a small test: right now I have exactly 381MB free on the drive, and I tried to start a torrent over 12GB in size. And guess what - it started successfully
Last edited by cyrylas on Mon Jun 05, 2006 11:25 pm, edited 2 times in total.
|
Moderator
|
|
Registered Member
|
ls -l shows the size of the full file. But just look at the screenshot below to see that something is not right. It was taken yesterday with that allocated file of over 12GB.
The more I investigate this, the weirder it gets. Once again, this is everything on my computer before I start any torrent: Then I started downloading the largest torrent file I've ever found, 25.82GB in size: It started, even though I don't have that much free space anywhere. And now, after a few megs of download: The torrent is saving into /home/majkel/dl, so there definitely must be some error, because this torrent is even bigger than the whole partition...
More interesting are the results of that command:
|
Registered Member
|
|
Registered Member
|
I know it's impossible - I checked, and the filesystem is clean.
I tried baobab and it shows some nice info (is there an easy way to temporarily change the language in KDE to English? Then I could post the info in a language readable to everybody): It says that the file size is 4.4GB, but only 4.9MB is actually allocated so far. I think ftruncate isn't allocating the file on disk, but only tells the filesystem that the file which will be there some day will be 4.4GB - just not now. |
Moderator
|
|
Registered Member
|
Are you (the devs) absolutely sure that ftruncate will actually allocate all the needed blocks? Most filesystems these days have sparse file support by default, so if all you do is set the size, or seek somewhere and write, it's only going to allocate the needed blocks. AFAIK, you have to physically go out and write _something_ to the entire file for it to actually allocate blocks.
|
Registered Member
|
I'm pretty sure you have to specifically create a sparse file if that's what you want on all filesystems that support it. I don't know anything about ftruncate specifically, but I very much doubt the problem has anything to do with sparse files. |
Registered Member
|
AFAIK, Linux/ext[23] does sparse files by default. It's kind of how it works. I read up on the ext algorithm a while back, and basically it only allocates blocks when needed, though it will preallocate some adjacent blocks on creation/write and then free the unused preallocated blocks. And if XFS, JFS, and ReiserFS didn't work this way, I'd be very surprised. KTorrent currently allocates multi-GB files in mere seconds (even if there are a lot of them), whereas any other BT client can take up to minutes per torrent (depending, of course, on the size). |
Moderator
|
It should still **** out when there is not enough disk space left. Manually preallocating is a pain and very slow. |
Registered Member
|
Actual free space is calculated using the number of free blocks; since none are actually allocated to the blank/sparse regions in the files, the space doesn't show up as used. That's how everyone else has to handle it.

Clients like Azureus have several allocation modes: "manual, write all bytes", which takes forever but is guaranteed to get the file allocated and as unfragmented as possible; plain write-when-needed; and the ftruncate option, which "should" work on less sophisticated filesystems that don't bother with any preallocation or anti-fragmentation algorithms.

Ext is "normally" good at keeping itself free from major fragmentation - that is, until you start downloading many files using P2P clients and sparse files. I totally fragged up a few filesystems that way, until the transfer speed was down to less than 1 MB/s and fsck said it was about 60-80% non-contiguous.

As far as I've been able to check (I'll look harder later, but I see no libc method of creating non-sparse files on Linux), there is no other way to totally preallocate a file. Now, if you want to save a little time in allocation, grab the block size of the filesystem and the number of blocks the file is likely to take up, and write 1 byte or word to each block. That should take a little less time, hopefully (I'm not so sure - as fast as I/O is these days, 4k shouldn't take any longer than 4 bytes once it's in the queue). It would be easy to test anyhow. |
Moderator
|