Push Application and other files once committed.
I have this software suite that has lots and lots of files. At the moment, all the client computers (12 in total) access this app through a network share and run the app and its files from that location. We want to minimize the impact that network outages can cause, and I am looking for options other than just making sure the network deployment itself is done properly.
These files change constantly, with new .dlls and .exes replacing the older ones, and lots of .log and .tst files are generated as well.
So I am thinking that each machine needs to have a local copy, but these local copies need to be in sync with the changes the operator makes to the files in the central location.
Another thing is that when the app is used, it generates lots of files, never replacing or appending, always writing new data.
I was thinking about using SVN or Git: once the operator replaces/adds new files and commits the changes, automatically push the repo to all 12 clients using a post-commit hook. So far I think this is very viable.
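That post-commit idea can be sketched roughly like this (a minimal sketch only, assuming Git, ssh access to the clients, and made-up hostnames and paths; a real hook would use the actual client list):

```shell
#!/bin/sh
# .git/hooks/post-commit on the central repo (hostnames/paths are hypothetical).
# Mirror every new commit out to each client's local copy of the repo.
for client in client01 client02 client03   # ... through client12
do
    # --mirror keeps each client an exact replica of the central repo
    git push --mirror "ssh://$client/c/app-mirror.git" ||
        echo "push to $client failed" >&2  # log it, but don't block the commit
done
```

Each client would then run from its local mirror, so a test can still launch even when the central share is unreachable.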
However, I am not sure how to handle all the generated data. Each client would have to push these new files back to the central location.
Any other ideas or thoughts?
From the sound of it, this app is actively being developed. I would see if an update mechanism could be programmed in as a prestart, and have a hook on exit to upload the data.
Just my two cents.
|reply to PToN |
Maybe I didn't follow all of that; would this work?
|reply to PToN |
12 clients access this same set of files, and in the course of that access the dll's and exe's change constantly? How does it handle versioning, and the same files being changed at the same time by different clients? That honestly sounds like it would be an even bigger problem. If the app has that sorted out, it should be extendable to handle what you want.
Do the files need to be updated once or twice (such as app startup and shutdown) or do they need to be continuously updated as it runs?
- The customer sends us a test program to run on a certain device.
- The test program's files are placed in a "tests" directory within the "app" folder on the network share.
- When a client PC runs a test, the tester double-clicks the shortcut to the app's exe, which is on the network share. All the app files (excluding the tests) are loaded into memory.
- The tester goes to File -> Open and selects the test to run.
- When the test starts, every single output/result is logged into several files on the same network share.
What I am trying to accomplish is to eliminate the network as a point of failure. I was thinking that something like Git or SVN would help me do this by having the clients update from the central "repository", then, when a test is done, having the clients push the changes/additions back to the origin.
This way, they all have access and are able to run the tests even when the network goes down for whatever reason.
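Since twelve clients would be pushing at the same time, one way to sidestep push conflicts (my own suggestion, not something from the thread; the branch naming is an assumption) is to give each client its own results branch:

```shell
# Run from the client's local working copy after a test finishes.
BRANCH="results/$(hostname)"     # one branch per client, so pushes never race
git checkout -B "$BRANCH"        # (re)create this client's branch locally
git add logs/                    # stage whatever the test run generated
git commit -m "test output $(date +%F_%T)"
git push origin "$BRANCH"        # no other client ever pushes to this branch
```

The generated files never conflict anyway (always new files, never edits), but separate branches also keep one client's failed push from blocking the others.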
I may be complicating things, but I need to present several solutions for eliminating downtime on these machines.
I would focus on why the network can't be relied on ... the traffic it uses is trivial.
|reply to PToN |
No, I don't think that's complicating things at all. I think you're on the right track with this. It's really just like application development and the Git/SVN/etc. approach is how that is usually handled and it works just fine.
I am curious though, have you had network issues in the past? Any to the point that this has become a serious issue? Or are you just trying to be proactive and solve it before it becomes one?
Just trying to cover all the what-ifs.
Plus, we recently had a scenario where an admin accidentally moved several files and they were thought to be deleted. The backups were no good: when they were restored, all the restored data was garbage. (Don't ask why, because I don't know why.)
So I'm thinking of ways to cover that kind of horror: the data is lost, the backups are crap, and, on top of that, the server crashes. (Let me change "network" to "server" here, since the network is pretty redundant; I should have said it better at first.)
Having the clients keep a local copy of what's needed would have allowed work to continue even if everything else was lost.
Plus, it has already been noted that we need to correct whatever made the backups fail, etc. I'm just looking for other ways to keep things going.