Registered Member
Regular HDDs and NAS devices (which, again, contain HDDs) are still the predominant file storage devices. These devices are known to slow down considerably when bombarded with multiple requests at a time. As such, it's rather surprising that there is no file manager out there which queues copy/move operations by default. There are a few external copiers (TeraCopy and FastCopy, both for Windows) that implement this behavior, though. I'd like to see the same implemented in KDE/Dolphin. It's by no means an easy task, of course.
As a first step, the file manager should assume all transfers have to be queued and worked on one by one, in a single serial queue.
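Not actual Dolphin code, but a minimal sketch in C++ of what that first step could look like (TransferQueue and all other names here are made up): a single worker thread drains a FIFO of pending operations, so the disk only ever services one transfer at a time.

Code:
// Hypothetical serial transfer queue -- a sketch, not Dolphin/KIO API.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class TransferQueue {
public:
    // Called from the UI: every copy/move request is enqueued,
    // never started directly.
    void enqueue(std::function<void()> op) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push(std::move(op));
        }
        cv_.notify_one();
    }

    // Worker loop: executes queued transfers strictly one after another.
    void run() {
        for (;;) {
            std::function<void()> op;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !pending_.empty(); });
                op = std::move(pending_.front());
                pending_.pop();
            }
            op();  // the actual copy/move, e.g. a KIO job in Dolphin's case
        }
    }

private:
    std::queue<std::function<void()>> pending_;
    std::mutex mutex_;
    std::condition_variable cv_;
};

run() would live on one worker thread; a real implementation would of course also need cancellation and progress reporting, which I've left out.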
This will cover 90% of use cases and won't stress the HDD and/or the network needlessly, resulting in less total time needed to copy the files. Then there are of course cases where concurrent file operations would be faster, for example fileop1 running on sda1 -> sdb1 and fileop2 running on sdc1 -> sdd1. I am not quite familiar with how deep into the inner workings of a system KDE can look, but I think it should be possible to find out which device is the source and which is the destination. If getting that information automatically isn't possible, let the user specify how their system is wired in the options. This would change the scheme above to one queue per (source device, destination device) pair, as sketched below.
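A rough idea of how the device pair could be detected automatically on POSIX systems (illustrative only; deviceOf and queueKey are invented names): stat() reports which device a path lives on, and that pair can serve as the queue key.

Code:
// Sketch: key queues on the (source device, destination device) pair.
#include <sys/types.h>
#include <sys/stat.h>
#include <string>
#include <utility>

// Device ID the given path resides on, or 0 on failure.
static dev_t deviceOf(const std::string& path) {
    struct stat st {};
    return (::stat(path.c_str(), &st) == 0) ? st.st_dev : 0;
}

// Transfers with equal keys share one serial queue; different keys
// (e.g. sda1 -> sdb1 vs. sdc1 -> sdd1) may run concurrently.
static std::pair<dev_t, dev_t> queueKey(const std::string& src,
                                        const std::string& dst) {
    return {deviceOf(src), deviceOf(dst)};
}

One caveat: st_dev distinguishes filesystems, not physical disks, so sda1 and sda2 would get different keys even though they share a spindle; resolving a partition to its parent disk takes extra work (e.g. sysfs lookups), which is exactly why a manual override in the options would be useful.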
This still doesn't include SSDs in the scheme. Getting the information on whether a device is an SSD or not might be tricky, but I know Microsoft somehow manages to do that with their defragmentation program (which then only runs TRIM on the disk instead of defragmenting it), so it's at least theoretically possible. The final scheme would then keep the per-device-pair queues for HDDs while letting transfers involving SSDs run concurrently, since SSDs don't suffer the same slowdown under parallel requests.
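On Linux at least, that information is already exposed: the kernel reports whether a block device uses rotating media through sysfs. A small illustrative helper (the function name is made up, the sysfs attribute is real):

Code:
// "0" in .../queue/rotational means non-rotating media, i.e. an SSD.
#include <fstream>
#include <string>

static bool isRotational(const std::string& disk) {  // e.g. "sda"
    std::ifstream f("/sys/block/" + disk + "/queue/rotational");
    char flag = '1';               // if unknown, assume HDD (conservative)
    f >> flag;
    return flag == '1';
}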
It would be really nice to have this in a future version of Dolphin, along with some general rework of file operation handling, given how Dolphin currently compares on my system.
Registered Member
This is a very good proposal.
I would also like to propose the ability to not buffer those files in the page cache: https://github.com/Feh/nocache
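For context, nocache works by intercepting file I/O via LD_PRELOAD and telling the kernel not to keep the transferred pages around. The underlying POSIX call is posix_fadvise; a standalone illustration (dropCache is an invented name):

Code:
// After writing a file, advise the kernel that its cached pages
// won't be reused, so a bulk copy doesn't evict the whole page cache.
#include <fcntl.h>
#include <unistd.h>

static void dropCache(int fd) {
    ::fsync(fd);                                     // flush dirty pages first
    ::posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);  // then drop them
}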
Registered Member
Should such a feature be implemented on a per-user basis or a system-wide basis? Keep in mind that unlike MS-Windows, other systems offer a real multi-user environment.
I don't really see any personal security threats in a system-wide approach, and it certainly would make more sense from a performance point of view. But I could imagine users getting irritated if their actions get "paused" for no apparent reason, since they themselves have not initiated any other activity.
Also, for a system-wide approach an implementation would probably make more sense at a lower level, since copy activities can be started from other environments too. Furthermore, there are background processes on every system that write files; these would have to be taken into consideration as well. But that actually points out a problem: typically such activities are log files or state descriptions being written, and delaying those by means of a queuing system is certainly not a good idea at all.
On the other hand, the majority of systems today are used as personal devices, so implementing the feature inside the desktop environment would cover many relevant situations, and system tasks would be left untouched. The feature would only deal with manually triggered user-space activities. That would be a compromise, but it would work around the more severe issues. So thumbs up from my side!
Registered Member
A "per-user" implementation makes more sense IMHO. It does not make the queue system useless, because at least the transfers are still limited to a maximum of number of users active as opposed to no limit at all.
There are rarely HDD intensive copy/move operations triggered automatically (automatic backup script is the only one that comes to mind atm). So the queue system only triggering on user-initiated copy/move operations makes sense as well. Even if, say a backup script is ran, limiting the transfers to backup+user transfers still has a speed gain as with multiple users. |