Is it possible to call CopyFile without locking the host?

I feel like I may have posted this here before; if I have, I apologize, but I couldn't find it. Furthermore, I have made several drastic changes to this process since then.

So here's the scenario: I'm a SysAdmin, and I want to be able to copy an arbitrary set of files to my end-user systems as fast as I possibly can (the speed at which this is done serves no purpose other than to feed my ego, BTW). So far my process takes whatever directory I feed it, creates a thread for each system in a predefined list (this will use our AD OUs soon), and copies the files from the target directory to the end-user system, recreating the directory structure. This works OK: the systems that are not reachable time out at roughly the same time, report their errors, and exit their threads. The remaining threads queue up on their own and copy the files to their host system as those files become available. I've noticed through Perfmon that each thread is reading the source files from the disk every time; this sucks. I'm about to research the idea of mapping the file(s) into memory and copying them to the end-user systems from there. I'm posting here to see if anyone has any insight or experience doing this.

tl;dr: How could I copy a file from one source to multiple destinations?


Library used: WinAPI
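
For reference, the rough shape of what I'm doing now is sketched below; the host names and paths are placeholders, and the real program walks the source directory recursively and recreates the structure on each target.

// Rough shape of the current approach: one thread per target machine, each
// copying the same source file over an admin share. Paths/hosts are placeholders.
#include <windows.h>
#include <string>
#include <thread>
#include <vector>

void CopyToHost(const std::wstring& host)
{
    std::wstring src  = L"C:\\Staging\\payload.dat";
    std::wstring dest = L"\\\\" + host + L"\\C$\\Staging\\payload.dat";

    // FALSE = overwrite the destination if it already exists.
    if (!CopyFileW(src.c_str(), dest.c_str(), FALSE))
    {
        DWORD err = GetLastError();   // unreachable hosts time out and land here
        (void)err;                    // real code logs the error and exits the thread
    }
}

int wmain()
{
    std::vector<std::wstring> hosts = { L"PC001", L"PC002" /* ... */ };
    std::vector<std::thread> threads;
    for (const auto& h : hosts)
        threads.emplace_back(CopyToHost, h);
    for (auto& t : threads)
        t.join();
    return 0;
}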

closed account (G309216C)
Hi,

I would believe that, as a System Administrator, you have the credentials for the users on your network. I also assume you can access the network users' files. I will take that assumption as reality and offer my solution to your problem.

You are making your application/software more complex than it needs to be. There are native Windows API functions available for exactly this kind of problem.
First, create a connection to each target using the WNetAddConnection2() function.
MSDN Documentation on WNetAddConnection2():
http://msdn.microsoft.com/en-us/library/windows/desktop/aa385413(v=vs.85).aspx

Then call the function with the correct parameters (a NETRESOURCE struct) as described by MSDN and check the return value. If the return value is NO_ERROR (#define NO_ERROR 0L), the connection succeeded; otherwise call the GetLastError() function and check the result against the
MSDN System Error Codes:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms681381(v=vs.85).aspx
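
A minimal sketch of that connection step; the share name and credentials here are only placeholders:

// Connect to a target's admin share with WNetAddConnection2(). Requires mpr.lib.
#include <windows.h>
#include <winnetwk.h>
#pragma comment(lib, "mpr.lib")

bool ConnectToTarget(const wchar_t* share,      // e.g. L"\\\\PC001\\C$" (placeholder)
                     const wchar_t* user,       // nullptr = current credentials
                     const wchar_t* password)
{
    NETRESOURCEW nr = {};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpRemoteName = const_cast<wchar_t*>(share);
    nr.lpLocalName  = nullptr;                  // no drive letter, just a UNC connection

    DWORD rc = WNetAddConnection2W(&nr, password, user, 0);
    if (rc != NO_ERROR)
    {
        // rc is one of the System Error Codes linked above
        return false;
    }
    return true;
}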

After a successful connection the remote machine's share is accessible much like a local path, so you can copy your files across. You can also copy your application itself, so that it can spread the file(s) in turn; to execute it remotely, just call the ShellExecute() function, and everything is automated.

MSDN Documentation on ShellExecute()
http://msdn.microsoft.com/en-us/library/windows/desktop/bb762153(v=vs.85).aspx

Be careful and add a lot of error checking, especially around self-identification: compare each target against the machine the program is running on, so that a computer does not execute the program on itself again, which would cause an infinite loop. (The program's entry will always be the same unless you do PE patching and the like.)


Notice: Before you copy and execute the files on the remote machines, make sure the targets are online by doing an ICMP echo request/reply check to verify whether each target is up. If a target is not online, do not copy and execute; otherwise go ahead, because there is no point trying to make a connection to something that is offline.
There are a few other ways to determine a remote host's status quickly, such as a TCP connection attempt, but do not use UDP because there is a chance the packet will never get there.
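
For example, a bare-bones ICMP check could look something like this (the target address is a placeholder):

// Ping a host once with a 1 second timeout. Requires iphlpapi.lib and ws2_32.lib.
#include <winsock2.h>
#include <iphlpapi.h>
#include <icmpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

bool HostIsOnline(const char* ipv4)             // e.g. "192.168.1.10" (placeholder)
{
    HANDLE hIcmp = IcmpCreateFile();
    if (hIcmp == INVALID_HANDLE_VALUE)
        return false;

    char sendData[] = "ping";
    char reply[sizeof(ICMP_ECHO_REPLY) + sizeof(sendData) + 8] = {};

    DWORD replies = IcmpSendEcho(hIcmp, inet_addr(ipv4),
                                 sendData, sizeof(sendData), nullptr,
                                 reply, sizeof(reply), 1000 /* ms timeout */);
    IcmpCloseHandle(hIcmp);
    return replies > 0;                         // at least one echo reply came back
}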

Good Luck
I would believe that, as a System Administrator, you have the credentials for the users on your network.


We don't, but this isn't relevant because these users don't have permission to install software on their machines. This point also makes having access to their shared drives irrelevant, even if it were somehow a good idea to install software from the network on 300+ machines at two geographically distinct sites.

Creating shared drives to individual machines for the purpose of copying data is a terrible idea; there is an MS KB article out there that details why. Also, ShellExecute() will not perform a silent installation from an MSI, which is something I might want to do; it is a poor replacement for CreateProcess().
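
For the curious, a silent MSI install launched with CreateProcess() would look roughly like this on the remote side; the package path is just a placeholder:

// Run msiexec quietly and wait for it to finish.
#include <windows.h>

bool InstallMsiSilently()
{
    // CreateProcess requires a writable command-line buffer.
    wchar_t cmd[] = L"msiexec.exe /i \"C:\\Staging\\package.msi\" /qn /norestart";

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_NO_WINDOW, nullptr, nullptr, &si, &pi))
        return false;

    WaitForSingleObject(pi.hProcess, INFINITE);   // wait for msiexec to finish

    DWORD exitCode = 0;
    GetExitCodeProcess(pi.hProcess, &exitCode);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return exitCode == ERROR_SUCCESS;             // 0 means the install succeeded
}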

I actually have the system service already written for the remote machines, as well as the "Launcher", as I call it. Those components are functioning well as far as I am concerned; my bottleneck is getting the files to the end-user machines as fast as I can. EDIT: even this is functional, it's just not as good as I think it could be. This is about me showing off, not simply getting the job done any way I can; I've already done that.
I don't mean to come off as abrasive; I know that you're a knowledgeable contributor and I've seen you help people here before. You go over their heads sometimes, but that's more their problem than it is your fault. Your suggestion for the ICMP request is something I'd like you to elaborate on if you can; I'm not happy with my current method of determining whether a remote machine is on or not, so that might help me.
That was much easier than I thought it would be. I'll test multi-threading it tomorrow and report back with what I find, if anyone is interested in a recap.
I would think that the speed of the copy would be limited by the network speed. So you don't need to get too clever with the actual copy program.

Isn't xcopy with suitable options sufficient?
Due to poor planning from a fourth party that wrote some software our client requires us to use, I am stuck in XP land. You are correct that eventually the network speed will be the bottleneck; that is my goal, but right now according to performance monitor the thing holding me back is the disk queue length on my non-striped SATA 1 drive.
Please don't consider it rude that a beginner like me is expressing his opinion to experts like you guys.

As I understand it, the current bottleneck is your hard disk, am I right?

If copying one file to multiple destinations is crowding the disk bus, then each thread is probably loading its own copy of the file.

I can think of one solution:

If you load one copy of that file into memory (maybe in a string structure) and find a way for the copy function to use the existing instance of the file stored in memory rather than loading its own copy, this approach might eliminate the current bottleneck: the hard disk.
@Rechard3: Replace "string structure" with "File Map" and "copy function" with "WriteFile()" and that's exactly what I am doing. Input is always welcome.
oh....
so these things already have names.... :)
I've noticed through Perfmon that each thread is reading the source files from the disk every time; this sucks.
You'd expect that. However, you should let the OS do the caching. Windows NT boxes don't have "spare" memory; otherwise-unused memory is used for disk caching. That's why it's important not to run all those crap services and icon-bar applets, ...

but right now according to performance monitor the thing holding me back is the disk queue length on my non-striped SATA 1 drive.
This means that you're getting cache misses. It might be worth noting what's happening with memory.

That memory mapping thing is trying to implement what the disk cache already does. And if you run out of real memory, it'll page the mapped memory; in effect, caching a file on disk (double whammy?)

My guess is that your server doesn't have enough memory to cache the files that are being transferred. There's not a lot you can do as, in the end, the file data will be sourced from disk.

A possible solution is a torrent. That way each client reads data off multiple spindles or from peers' caches. But I appreciate that it may not be a practical solution for a sysadmin on a corporate network.
kbw said:
That memory mapping thing is trying to implement what the disk cache already does. And if you run out of real memory, it'll page the mapped memory; in effect, caching a file on disk (double whammy?)

I actually didn't think of that. This is called "disk thrashing", for anyone else who is reading this; it will be something that I need to watch out for.


I don't think I'll run out of memory, because my idea was to map each file only one time into the program and have each thread write to its destination from the same File Map. I'll admit I don't even know if this is possible/practical yet, since this isn't something that I've done before, but that in itself is another reason I'm trying. The premise behind my idea is to limit the number of times Windows reads the data from the disk; the WinAPI doesn't seem to have a multicast feature (not one that I've found anyway), so this was my next best idea. Paging would be an issue if it were to occur, but since priority is determined first by the process's base priority and then by the number of threads running, it should be as safe as I can make it. There are also no GUI or COM libraries being loaded yet, so the memory footprint is pretty small so far.
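
Roughly what I have in mind is sketched below; the paths and host list are placeholders, error checking is stripped out, and it assumes the whole file fits in a single mapped view:

// Map the source file once, then let each per-host thread write its destination
// from the same read-only view.
#include <windows.h>
#include <string>
#include <thread>     // or CreateThread() on older toolchains
#include <vector>

struct SourceView {
    const BYTE* data;
    DWORD       size;
};

// Each thread writes one destination file from the shared view.
void WriteDestination(const std::wstring& destPath, SourceView src)
{
    HANDLE hDest = CreateFileW(destPath.c_str(), GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (hDest == INVALID_HANDLE_VALUE)
        return;                                   // real code logs GetLastError()

    DWORD written = 0;
    WriteFile(hDest, src.data, src.size, &written, nullptr);
    CloseHandle(hDest);
}

int wmain()
{
    // Map the source file into memory exactly once.
    HANDLE hFile = CreateFileW(L"C:\\Staging\\payload.dat", GENERIC_READ,
                               FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, nullptr);
    HANDLE hMap  = CreateFileMappingW(hFile, nullptr, PAGE_READONLY, 0, 0, nullptr);
    const BYTE* view = static_cast<const BYTE*>(
        MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));

    SourceView src{ view, GetFileSize(hFile, nullptr) };

    // One thread per target machine, all reading the same mapped view.
    std::vector<std::wstring> targets = {
        L"\\\\PC001\\C$\\Staging\\payload.dat",
        L"\\\\PC002\\C$\\Staging\\payload.dat"
    };
    std::vector<std::thread> threads;
    for (const auto& t : targets)
        threads.emplace_back(WriteDestination, t, src);
    for (auto& t : threads)
        t.join();

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}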

EDIT: I just found the MADCAP functions, this might be the route I take instead...
Topic archived. No new replies allowed.