<![CDATA[JoeBrancoIT.com - Blog]]>Wed, 14 Jun 2017 04:49:07 -0700Weebly<![CDATA[Informational Desktop Overlay]]>Fri, 09 Jun 2017 07:00:00 GMThttp://joebrancoit.com/blog/informational-desktop-overlayI was presented with a problem: the organization wanted to display simple text information on the desktop, similar to what Microsoft Sysinternals' BGInfo provides.  BGInfo is old and has not been updated to support DPI scaling on high-resolution displays, and the organization had not approved the available freeware, so as a side project I began developing my own Informational Desktop Overlay.  This is a post about my experiences and the development process.

First Approach: Windowed Application

My first instinct was to make a borderless windowed application with a transparent background and a text box.  I soon realized I could not find a way to keep that window above the desktop when a user pressed Win+D to show the desktop.  I attempted a few different methods, but it seemed I could not restore the window until the user manually restored the application.  There is likely a way, but I did not find it in my early discovery.

Second (Long) Approach: C# and GDI+

Prior Experience Expectation

I knew I could draw above everything if I used GDI, albeit that if a window underneath the drawing redrew, I would lose the GDI drawing.  I had written a "Tetris"-like game called "Stacked" for a Game Development/Data Structures class in C++ (you can see a version of the game rewritten for Android here).  The project guidelines intended for us to make a game that ran in the Windows Command Prompt console, so I did, in a way.  The menus were all based in the command prompt, but when the game started it used GDI calls to draw the board, score, and objects on top of the desktop.  Input was received through the command prompt, so the command prompt had to keep focus to play the game.  The game would flicker and slow down the more it had to draw on the screen, but hey, it was pretty slick for a first-year game development course.

Program requirements

Some of these were added via scope creep, but in the end these were the application's goals:
  • When the desktop is visible, display the host name and IP address (at a minimum) in the upper-right corner.
  • If the application is given additional input at launch (a command or script to run), capture the output and display that additional information.
  • Display all information without modifying the background image: no skewing, scaling, or changing the wallpaper mode.

Evolution of the Solution: WinGDI+ and the Active Window

I chose to work with C#, as it is the language I have used on a near day-to-day basis for the last four years.  I did my research and was soon able to start drawing on the screen with GDI+ (the easiest part).  The tricky part was finding a way to make sure I didn't draw over other windows.  I was too close to the program and thinking not like a game developer but like a business-application data-manipulation developer (which is what I had been doing exclusively for the last several months, not game dev).
I approached it with the idea that I wanted the overlay to show when a user pressed Win+D or Alt+Tab, or based on the "active window."  I was thinking too much about the windows.  I added code using "User32.dll" to get the active window handle and then that window's title.  With some if-statements I got basic functionality: I could force the overlay to draw only when the active window was null, the task switcher, the Start menu/Cortana, or an empty string.
This was a good starting point, but it was ugly: the text flickered during each loop, drew over some windows, and would randomly disappear when the user pressed Win+D and then clicked on the desktop.

Hooks on Active Windows

I started futzing about, looking for ways to tell when the active window changed so I could redraw less frequently, rather than only when my main program loop triggered.  This phase of development produced a lot of code that I ended up ripping out, but it also let me really strengthen the methods that would last.
I started working with Windows hooks.  I used more "user32.dll" methods to create hooks, read window event messages, and harden my main loop.  I added a .NET Stopwatch to the main loop, replacing a wait call, allowing a redraw after a specific delta while still letting the loop process window messages.  I implemented a Windows event hook with a delegate to an overloaded Redraw method of the Overlay object.  Getting the right events hooked was tricky; I ended up hooking SwitchStart, MinimizeStart, and Foreground (window changed).  This was almost good enough.

​At this point my program would…

  • Draw when the user Alt+Tab'd, pressed Win+D, opened the Start menu, clicked on the desktop, or activated any window with an empty string for its name.
The flicker was minimized, although certain conditions made it flicker more than I wanted.  At this point I thought I was 95% done and that it was about as good as it was going to get in C#, so I switched back to another program I had prototyped, giving this one a rest.

​A Ray of game dev light hit me:

I finished the other application after about 8 hours of coding and testing: a command-prompt application with a variety of command-line switches.  It was a simple program, not very interesting, and it did not require much research, but it had a lot of little nuances and branching conditions.
When I got back to my desk my Overlay code was still up.  Thinking about what I wanted from the program, the solution I had been missing all along finally occurred to me.

First the 'want'

APPEAR to draw over the Desktop wallpaper ALWAYS when not obstructed. 

I had been checking for whatever the active window was, not caring about where it was on the screen or whether it was obstructing the overlay draw area.  In game dev I can cast a ray and find what it intersects with; why would this be any different?  Surely there is a way to know what window is under the four vertices of my draw area?
User32.dll has a method for exactly that: IntPtr WindowFromPoint(Point point).  Why did I not think of this sooner?  All that time wasted getting the delegate into the correct Windows event hooks.  Arrrgh.
So initially I added to all my existing code (keeping the event hooks) a check in the main program loop to see if there was a window over my overlay area; if not, redraw.  This worked!  I then did some more tuning of how I managed the list of window titles the program was allowed to draw over, preparing to move them into a config file that could be easily modified in the future.  I added code to walk up the handles of objects over the overlay to get the root parent window, because at times applications in windowed (not full-screen) mode that overlapped the overlay area returned a handle whose window title was a null string unless placed very specifically.  I could now drag windows over the overlay area; the overlay would get cleared out by the dragging window's redraw, and when the window left the overlay area my overlay text would redraw.  It worked great.
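The corner check can be sketched as pure logic.  This is a hypothetical rendering, not the program's actual code: in the real program the lookup is user32's WindowFromPoint() followed by walking up to the root parent window, but here it is injected as a function so the decision itself is easy to follow.

```cpp
#include <array>
#include <functional>
#include <set>
#include <string>

// Hypothetical sketch of the obstruction check. The lookup that maps a
// screen point to the title of the window under it is injected; the real
// program gets it from user32's WindowFromPoint() plus a walk to the
// root parent window.
struct Point { int x, y; };
struct Rect  { int left, top, right, bottom; };

bool overlayUnobstructed(const Rect& area,
                         const std::function<std::string(Point)>& windowTitleAt,
                         const std::set<std::string>& allowedTitles)
{
    // Probe the four corners of the draw area; the overlay may redraw
    // only if every corner lands on a window whose title is on the
    // allow list (desktop, Start menu, task switcher, ...).
    const std::array<Point, 4> corners = {{
        {area.left,  area.top},    {area.right, area.top},
        {area.left,  area.bottom}, {area.right, area.bottom}}};
    for (const Point& p : corners)
        if (allowedTitles.count(windowTitleAt(p)) == 0)
            return false;  // some other window covers this corner
    return true;
}
```

The nice property of this formulation is that the overlay no longer cares what the *active* window is, only what is physically under its own draw area.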

Final evolution in C#

I implemented a .NET settings file holding the list of window names the Overlay is allowed to draw over.  I added handling of a "debug" launch argument so I could force the program to write out the name of any window covering the overlay, making it easy to add new names to the config file.  With debugging running I noticed a few oddities that I traced back to the Redraws called from the Windows event hooks.  I commented out the creation of the hooks, the overloaded Redraw, and some extra variables, and had a fairly good overlay... but one bloated by .NET and C#.

Re-write in C++

The performance of .NET and C# was not great: the program constantly used 8 MB of RAM.  For a program expected to run 100% of the time and only display information occasionally, I was unhappy with the resources required.
Since most of the workload was already going through native Windows GDI+ and User32 DLLs, it was an easy task to convert the functionality from managed C# .NET code to lighter-weight unmanaged C++.  I also wanted to avoid any dependency on .NET or on a particular C++ redistributable being installed.  Some compile-time settings took care of the C++ redistributable dependency, and alternative implementations of some methods and classes avoided the .NET dependency.  I wrote a quick and simple stopwatch class to get timers back into my main loop.  By limiting how often the loop can evaluate whether there is a window over the overlay area and whether it should redraw, I can throttle CPU usage and limit redraw flicker.

The first new addition to the program during the conversion was logic to recalculate the draw area every few seconds, so that if the display dimensions changed the overlay would move to the new proper location.

Display issues

After I had the program completed and working on my Windows 10 desktop, I started making sure it would work on Windows 7 and Windows 10 computers without dev tools installed.  I had some minimal-spec Windows 10 VMs on an ESXi host and tested my application there.
Nothing.
Tested on a Windows 7 laptop - nothing.
The program ran without crashing or errors, but nothing was displaying on screen.
I installed Visual Studio on my Windows 7 laptop and started compiling there - nothing.  While debugging I found that the overlay position was being calculated incorrectly due to the display topology.  That was an easy fix, but it didn't explain the VMs.  I installed the remote debugger on a VM and debugged from my main Windows 10 workstation: all the variables were correct, and the GDI+ DrawString() call returned success.  After some searching I found that the way the display driver was configured on the VM, the lack of resources available for video, and the absence of VMware Tools were to blame.

Debugging

To continue my application testing, I installed a copy of my customized Windows 10 image on a newish test laptop I had available and tested my application... nothing.  No drawing.  What?!  I thought this was figured out!  I installed the remote debugger and started looking into the cause.
Same behavior: no errors, the variables showed the correct draw locations, and DrawString() returned success.  While getting frustrated going back and forth to test different resolutions, I eventually realized this was the first Windows 10 machine I had tested with DPI scaling turned on.  I was aware of DPI scaling, but I was unaware that Microsoft had implemented a form of DPI-scaling emulation for applications that don't tell Windows they are high-DPI aware.

I had converted the program to use a command-prompt window to display the state of variables while the program ran (instead of breakpoints, which are difficult when you are trying to measure intersecting windows and such).  I could see where the overlay was supposed to be, when it was supposed to be drawing, that the draws were successful, and when a window was covering the overlay space.  Everything looked right in the application, but nothing was displaying on the screen.  I thought that maybe, due to the scaling, the overlay was off the screen somewhere, so I added test code to move the overlay 10 pixels down and left every few seconds.  After running the program and waiting, the overlay appeared.  It took me two more restarts to notice that it was only drawing inside a smaller rectangle originating from the bottom-left corner of the monitor (thinking back, I believe it was the bottom right, because my debug information in the top left was not displaying either).  After some measuring I concluded that it was drawing inside a RECT that matched the native resolution scaled down by the DPI factor.  Somehow DPI scaling was limiting the drawing to this emulated resolution.

I tried to find a way to fake the emulated desktop draw area.  I think the issue was that I was pulling the handle for the "Desktop" window to make the GDI+ DrawString() call, and Windows treated the area I wanted to draw in as out of bounds of the DPI-scaled "Desktop" window, so it drew nothing.
 
I started reading more into DPI scaling in Windows 8 and 10 (more than the skimming I had done before) and found there was a method that could be called in a thread to tell Windows to treat the process as high-DPI aware.  This worked on Windows 10 but not on Windows 7.  I added code to determine the OS at runtime and skip the call, but it was no good, and I did not want to maintain two versions of my application, one for Windows 7 and older and one for Windows 8 and newer.  While looking through compile options, hoping I could bundle the required Windows 10 resource into my application or somehow ignore the call at runtime, I happened to notice the answer:
Manifest Tool > Input and Output > DPI Awareness
 
After cleaning up my earlier attempts at getting around DPI scaling, just adding this setting made the application work properly on Windows 7, Windows 10 without scaling, and Windows 10 with DPI scaling.
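That project setting embeds a DPI-awareness entry in the application's manifest.  For reference, the equivalent manifest fragment looks like this (older systems such as Windows 7 that don't understand the setting simply ignore it):

```xml
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
    </windowsSettings>
  </application>
</assembly>
```

Because it is a declaration read at process start rather than an API call, no runtime OS check is needed, which is exactly what solved the one-binary-for-all-versions problem.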

Code Synopsis

Main()
  • Creates the Overlay object.
  • Reads the launch arguments to see if Debug is one of them.
  • Otherwise treats the launch arguments as a console command plus arguments for that command.
  • Runs the command with its arguments, captures the console output, and feeds it into the Overlay object as additional text.
  • Calls the Run method of the Overlay object.
Overlay.Run()
  • Contains the main loop, with switch statements that evaluate what action should be performed.
  • Each iteration calls a method to determine whether there is a window above the overlay area, but the draw method can be invoked at most once every 250 ms.
  • There is a 30 ms wait at the end of each iteration to limit CPU usage, which stays under 2% while drawing and under 1% while not drawing.  Because of how the stopwatch class is implemented, when the overlay area is exposed there should be at most a 30 ms delay before ReDraw() can be called.
  • The overlay-location recalculation runs on a separate stopwatch, limited to once every 4500 ms.
]]>
<![CDATA[Coming around to IBM's BigFix]]>Fri, 10 Feb 2017 08:00:00 GMThttp://joebrancoit.com/blog/coming-around-to-ibms-bigfixI was first introduced to BigFix in late 2012.  At first I complained about the choice to implement their own query and action languages rather than using known OS-specific languages.

It took me too long to realize the benefits of BigFix.  My goal here is to be BigFix-positive instead of comparing the faults of other tools.  I also don't want to get into all the features of BigFix, just the ones that have me recognizing its power.

​BigFix Relevance language combined with Powershell/VBS/Command Prompt
Recent Example:​
  • Cisco Remote Code Execution Vulnerability in WebEx browser add-ons and extensions
    • I was tasked with identifying the machines with the extension installed in Chrome and then forcing them to upgrade to the latest version.
    • In BigFix I created an analysis that used a Relevance query to look at all of a machine's user profiles, find the Chrome AppData directory, and look through any Chrome profiles for the folder matching the GUID of the WebEx extension to retrieve its version.  In about 1-2 hours I had perfected the query and had results from the majority of the enterprise's online clients (~8000 machines).
    • Next, using the same query, I created a Fixlet that used the query as relevance and intermixed portions of it with BigFix ActionScript to insert registry values on the affected machines, making the Cisco WebEx extension an administrator-enforced extension.  On the next launch of Chrome, the latest version of the extension would be downloaded and installed despite any user settings changes that might have prevented an automatic update.
    • By the end of the afternoon the affected client counts had started to drop, and I had a great analysis showing the dispersion of Chrome WebEx plugin versions, which I could easily pass on to security and management over the coming days as clients relaunched Chrome.  (We did not force-terminate Chrome on users.)
​BigFix works with third-parties to provide easy to deploy updates
  • One of the really nice built-in, no-fuss features of BigFix is that when new software versions are released to fix vulnerabilities, BigFix has a deployable update package available within a day.
    • Examples: Adobe Flashplayer, Reader, Google Chrome, Microsoft Windows, Office, Notepad++, etc.
    • With Chrome we do have customizations to apply post-install.  The original author of our Chrome update job had about 200 lines of commands and scripts that handled 32-bit and 64-bit separately, because he wasn't using Relevance to its full extent.
    • With the Relevance language, and by making use of its parameters (variables), I got the install script down to about 80 lines of code.  When Google and BigFix released an updated package, we could copy everything but the first 4 lines (which specify the download locations and version) to put the new version in place.
Smart Clients - Dumb Servers
  • ​This concept was another difficult one for me to wrap my head around at first.
  • When you query a client, you query the client itself, not information stored on the server about the client.  Even if the server has information about the client in its database, that isn't used for new queries; you have to wait for the client to report its information.
  • The reason is that this provides a less complex infrastructure and more accurate results.
  • Clients check in with the server to get the logic they need to process and then report the results back.  The infrastructure is very simple to set up: install the top-level server, attach a database, install relays (which can also be clients), and then push the clients out.  There is no need for lots of installed services or AD schema changes.
Managing clients with/without AD
  • Using relevance, it was easy to logically organize computers based on attributes, names, users, installed programs, registry values, and flag files.  No changes were made to AD: no new groups created, no reliance on OU location.  It was incredibly flexible.
  • The ability to read and parse XML, INI, and JSON files, using each format's native document navigation, made it possible to build action relevance and to make changes via ActionScript.
​Enforcing Policies with/without AD
  • Where we used BigFix we used very little Active Directory Group Policy.  We created policies in BigFix that evaluated on our schedule; if registry values, files, permissions, services, etc. were not in the configured state, the job ran and corrected the configuration.  No delays at logon while group policies refreshed, and no worrying about machines having issues processing Group Policy.


http://support.bigfix.com - Inspector Relevance - has great documentation on the Relevance language and links to the documentation for BigFix ActionScript.  The Fixlet Debugger may be downloadable from BigFix without a login, and you can experiment with the language and its power (some items won't work without a BigFix client installed, which requires a license).  ]]>
<![CDATA[Simple Threaded Copier - CLI]]>Fri, 04 Dec 2015 08:00:00 GMThttp://joebrancoit.com/blog/simple-threaded-copier-cli
As a follow-up on my Simple Threaded Copier, I ended up creating a new Visual Studio solution with three projects.
  1. ​The threaded copy classes to create a library like isolation
  2. Graphical User Interface (GUI) implementation of the Threaded Copy Library
  3. Command Line Interface (CLI) Implementation of the Threaded Copy Library
​The Command Line Implementation became the primary focus for my development and enhancements. 

It is much easier to set up scheduled copies with the CLI than with the GUI.  The ultimate goal is to make the GUI accept command-line arguments and to let the CLI optionally launch the GUI with a switch.  I am not there yet.

The CLI project adds only one file of code, which initiates the console application, parses the command line, initiates the copy, and monitors its progress.

​Features of the CLI:
  • /L \\server \\server1 \\server2 ... \\serverN - Specify a manual list of Servers to copy to
    • OR 
  • /F filename.txt - Specify a file with a list of server names to copy to
  • /O - Overwrite existing files
  • /RP - Remove Partials - if a .partial file already exists for the destination file, delete it and start over.  If not specified, the tool gets the size of the partial and continues as if the file has the same origin.
  • /T - Count of Bytes to be copied per second
  • /S - single file copy path originating from the share or local drive - Share$\folder\folder\file.ext
  • /D - the destination path originating from share to file name - Share$\folder\folder\destFile.ext
  • /SF - Source Folder for Multiple copies in a directory
  • /DFL - Destination File List - the list of files to be copied to the destination path from the source folder
  • /DF - Destination Folder - Share$\folder\DestFolder
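The /RP semantics above can be sketched as a small decision function.  The real tool is C#; this hypothetical C++ rendering just encodes the rule for where a copy should resume:

```cpp
#include <cstdint>

// Hypothetical sketch of the /RP (Remove Partials) rule described above.
// Given whether a ".partial" file exists, its size, and whether /RP was
// passed, return the byte offset the copy should resume from: with /RP
// the partial is discarded and the copy starts over; without it, the
// tool trusts the partial and continues from its current size.
std::uint64_t resumeOffset(bool partialExists,
                           std::uint64_t partialSize,
                           bool removePartials)
{
    if (!partialExists) return 0;  // nothing to resume
    if (removePartials) return 0;  // /RP: delete the partial, start over
    return partialSize;            // continue where the partial left off
}
```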
​Sample Usage
(single file)
​STCCLI.exe /L \\server \\server1 \\server2 /s C:\temp\myfile.file /d c$\temp\mycopiedfile.bat /t 6400 /o

​(multiple files)
STCCLI.exe /F servers.txt /SF C:\temp\ /df c$\temp /DFL Files.txt /T 64000 /O /RP



]]>
<![CDATA[Free Reign - Software solutions]]>Sat, 06 Jun 2015 01:05:09 GMThttp://joebrancoit.com/blog/free-reign-software-solutions
At my current company I started as a contractor who was given pretty free rein to evaluate the state of things and to design and prototype solutions to improve the environment.

Thinking back these are some of the things I found and worked on:







Windows Imaging (Adding support for Windows 7 and beyond):

When I got there they were primarily using Windows XP, deploying the same re-Sysprepped thick images year after year using Altiris imaging tools (Symantec Ghost+).

Solution:

I brought in Microsoft Deployment Toolkit (MDT) 2012, with all of my knowledge from my previous employer plus some additional research.  Our MDT solution is really well built now and easy to maintain.  All reference (gold) images are 100% automated, super clean, and can be updated by nearly anyone on the team with little instruction.  The imaging spans our 73 different divisions with their distributed software deployment servers, custom naming, the works.  I've even built my own custom wizard UIs (without SCCM) to make really lite-touch deployments.  Last year I added a one-touch in-place Windows XP to Windows 7 upgrade/refresh.

Various HTA's

There were a number of HTAs used to create dashboards for the sales people.  Their HTML elements were replaced and updated every quarter, and someone had to relink all the data, rebuild the formatting, and do various other tedious tasks; it was quite a poor use of 10+ hours of a technician's time.

Solution

I modernized the HTAs and have them programmatically loading the data they display from XML documents.  Using XML allowed us to keep the previous month's data available when we rolled out new sets of advertising material to the sales team, and to create different views for sorting the data.  The biggest plus was that I was able to provide a spreadsheet for the marketing department to fill in; we could quickly evaluate and touch it up, then run a script against it to generate the XML data for the various HTAs.




Screensaver

The screensaver is the one that has me a little annoyed today.  The original screensaver, still in use, was "developed" by a third party and is a simple Flash animation of our newly re-branded trucks rolling across various areas of the screen.  The Flash screensaver doesn't work if Flash Player is not installed or is broken, or if there happens to be a broken Shockwave Player.  When I got there they were deploying Flash Player and Shockwave Player (well past Shockwave Player's end of life) to every machine.  Because the old images were reused year after year, with more software installed over top to add support for new models, the Shockwave Player seemed to break a lot.  The Flash animation is not smooth; it is quite jagged with v-sync issues, blacks out any 2nd, 3rd, nth monitors, and runs in a 4:3 box on the primary monitor.

Solution:
Having a love for programming, I set out one night to rebuild the screensaver in Windows Presentation Foundation, which uses hardware acceleration for smooth animations.  It also allowed us to deploy a computer without worrying about whether Flash was installed.  The advantage I liked most was being able to use the monitor's full aspect ratio and have the truck drive across multiple monitors.  The screensaver's only requirement was the .NET Framework 4.0 Client Profile, which was already in the process of being rolled out to the enterprise.  I couldn't get the guy I reported to at the time to take action on the screensaver; he was afraid of me taking his job when I converted from contractor to direct employee.

Screensaver 2:

A project was brought to me by my director for a possible major rebranding.  Marketing went to a third party again and came back with some "screensavers" and a desktop background.  What they sent was four 1600x1200 JPGs, all labeled "Screensaver_" + something.  I wasn't sure what to make of them: they were all grayscale images with a few accent colors on logos, but the grayscale was really washed out.  So I had these four pictures that were mostly white.  Right.  Screensavers...

Solution:

I stared at these terrible files and emailed back and forth trying to figure out what they were meant to be.  After getting nothing new, I took the elements of the images given, came up with a way to animate them, and created a loop using my existing screensaver template.  I tried to keep to their grays but made them darker.




Screensaver 3:

This week I was asked to look into a screensaver for a new potential rebranding project.  I was excited because I thought I was going to be making the screensaver.  It turns out corporate marketing went to a third party again, and this time the third party was going to build a Flash animation and then use some $40 (most likely) software to compile it into a screensaver file.  They confirmed that it wouldn't change aspect ratio, wouldn't span monitors, and would still depend on a system-installed Flash Player.




Solution:

I told my director; he wants me to build the screensaver and is trying to convince the marketing director to let the third party work on design but let me do the actual building of the screensaver.



Kiosks:

The kiosks in use when I was brought in were Ghost images applied to machines, but they were poorly maintained, so drivers had to be applied to newer machines manually after the build, or really old machines had to be selected for kiosks.  Kiosks required a fair amount of technician configuration each time.

Solution:

I created scripts that can be run on my new Windows XP and Windows 7 images to take a normal build and apply all the lockdowns to any existing accounts (including Default User) except Administrator.  Leaving the Administrator account untouched lets technicians log on and work on the machine, while all other accounts get configured in a locked-down state.  This allowed any machine to become a kiosk, with a better technician experience and less configuration needed.  The scripts also install other scripts onto the system, install any other software needed, create user accounts, disable services, install hotfixes, and set up auto-logon.

]]>
<![CDATA[One to Many Copy tool - Redubbed - SimpleThreadedCopy]]>Fri, 22 May 2015 01:58:25 GMThttp://joebrancoit.com/blog/one-to-many-copy-tool-redubbed-simplethreadedcopy
I realized now that my pictures do not enlarge - no big deal, they are ugly, but I will fix that next time.



So I had MVVM Light NuGet packages in my code and was partially using them, but while working on mod tools this weekend for a game I'm developing, I realized I was making more trouble for myself by using MVVM Light, so I removed it.  I also broke something in my project configuration while making changes and moved all of my code to a new project.  I decided it was no longer a "SimpleBatchCopy" tool and deserved a new name... so "SimpleThreadedCopy" is the new name.

It is so much more than it started out as, and it now does most of the things I wanted but was hesitant to implement due to complexity.  It reads the source file once, writing each block simultaneously to many remote computers at an adjustable, throttled speed.

My file copies using the previous version of this tool kept crashing, and it got frustrating, so I took a few hours tonight to make some major modifications.

Starting Needs:

1.  Ability to resume failed Copies

2.  Better tracking of where it failed - like on which server (although I usually get graceful handling while testing a single copy, I'm getting a full program crash in real life).

3.  Continuing on a single server write failure.

4.  Better memory management.




Today I made some attempts at resolving the following needs.
1a.  Resume Failed copies
Resuming failed copies worked.  In my code I was creating a new thread on each read, for each server, calling a method that accepted the FileStream, buffer, and buffer size.  I created a new class containing information about the destination, including the FileStream object.  That way I pass the DestinationData object to the new thread, and if there is a failure I can set a property on it to indicate failure and skip that destination on all further writes.  I am also using this object to populate the new ListView that shows information about each individual server.  I just realized I never implemented the calculation for Percent Complete - no biggie - the progress bar indicates how much of the source file has been read, which should match every server after they all catch up.

1b.  Resume Failed copies - identifying files to overwrite v. files to resume.

If the destination file + ".partial" exists, it resumes and appends all data to the .partial.  This could technically produce broken files if the partial isn't identical to the first part of the source.  I don't care tonight; I want to resume my copy at work.

1c.  Resume Multiple Failed copies - that may be at different stages

This is problematic: I have to make sure the gap between the files that failed lines up with the bytes/sec buffers, or some of the files will get broken.  For a server's file to start receiving bytes of data, the size of the destination file must match the already-read size of the input source stream; if a file doesn't line up, it gets skipped on every pass and you end up with a file that never finishes.  Thinking about it, I'm not sure the tool will properly indicate this at the end.  I should add some logic for that.
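That alignment rule can be sketched as pure logic.  The real tool is C#, and these names are hypothetical; the point is the gating condition, not the implementation:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the resume-alignment rule described above.
// Before each write pass, only destinations whose partial file size
// exactly matches the number of source bytes already read may receive
// the next buffer; a destination that is out of step is skipped on
// every pass (and, as noted, currently never finishes).
std::vector<std::size_t> destinationsInStep(
    const std::vector<std::uint64_t>& destinationSizes,
    std::uint64_t sourceBytesAlreadyRead)
{
    std::vector<std::size_t> inStep;
    for (std::size_t i = 0; i < destinationSizes.size(); ++i)
        if (destinationSizes[i] == sourceBytesAlreadyRead)
            inStep.push_back(i);
    return inStep;
}
```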

1d.  Rename the .partial file to the actual destination file at end of copy.

File.Move() covers this... it does not yet overwrite if the destination already exists, but it leaves the partial behind, and that works for me at this juncture.  Will fix.

2.  Better tracking of where it failed.

I made an attempt at this with the new DestinationData object, which contains information about each thread.  It should let me know when a particular server fails, and it shouldn't crash the program.  We'll see; I didn't get enough testing in.

3.  Continuing on a single Failure

See #2

4.  Better memory management

I hardly cleaned up any objects in my code and relied heavily on C# garbage collection, but I made a few passes at nullifying objects where they were no longer in use.  There are still lots of opportunities to go through the code and clean it up.


I did break the UI a little when I added the new ListView - I need to modify the calls inside the resize methods, but I didn't want to deal with that tonight, so I made a few adjustments to prevent important controls from getting hidden - and mostly succeeded; the "Add Job" button is still mostly hidden.


I do want to save logs automatically and do better logging - it's pretty pathetic right now.

]]>
<![CDATA[One to Many Copy Tool]]>Wed, 13 May 2015 00:37:54 GMThttp://joebrancoit.com/blog/one-to-many-copy-toolPicture
Simple Batch Copy Tool

Version 0.1 - Rapid Prototype

Same results as a batch script copying one file to many locations.  Not much UI.

Uses System.IO.File.Copy() method for copying files to destination.


Basic tool operation:

  • Specify the input file or directory.
  • Click "Edit Servers or %servername%"; a dialog opens where you can enter a list of servers.  Click OK.
  • Type the common UNC path that follows \\%servername%\ for where the file or folder will be copied.
  • Click Add, and a "Copy Task" is created for each server.
  • Click Go to start processing each task.



Copying runs on a separate thread, so the form still updates and responds while the copies are in progress.



Picture
Version 0.2 - UI Improvements

Improved UI significantly and added exception handling.

UI Additions:

  • Added the ability to remove Copy Tasks (Selected, All, Completed).
  • Improved the log area to show more than just one line.
  • Log can be saved to text file.
  • UI Scales better.




Picture
Version 0.3 - Copy Speed Throttle Added

Bandwidth Throttling Added

A need arose at work to copy 33 GB of data to around 70 site servers.  Each site has limited bandwidth, so we can't interrupt employees by letting the copy run at full speed.

The throttle speed is hard-coded at 10 KB/s.  This was a prototype version to prove I could do it; at that rate I would not be able to complete the copy in a timely manner.
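The prototype throttle can be sketched like this, assuming a simple read-then-sleep loop (the method shape and names are mine; the real version hard-codes the 10 KB/s rate):

```csharp
using System.Diagnostics;
using System.IO;
using System.Threading;

// Crude bandwidth throttle: the buffer is sized to the per-second byte
// budget, and after each write the loop sleeps out whatever is left of
// that second. Accuracy is rough, which is fine for a prototype.
static void ThrottledCopy(Stream source, Stream dest, int rateBytesPerSec)
{
    byte[] buffer = new byte[rateBytesPerSec];
    var sw = new Stopwatch();
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        sw.Restart();
        dest.Write(buffer, 0, read);
        int remaining = 1000 - (int)sw.ElapsedMilliseconds;
        if (remaining > 0)
            Thread.Sleep(remaining);    // pad each pass out to one full second
    }
}
```

At the hard-coded 10 KB/s that is one 10,240-byte pass per second, so 33 GB would take over a month per file - hence the customizable rate that follows in 0.4.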


The file copy progress was a mystery... the files on the remote server show 0 KB until the copy is complete, so I had no idea how far along the copies were.

Picture
Version 0.4 - Added Progress bar and customizable throttle speed

Progress Bar Added

Bandwidth Throttling UI Added

Modified the way copy tasks are stored and processed: instead of one task per destination, multiple destinations are now stored in a single task.


Added code to read the input stream into a buffer and then write that buffer to each destination in the Copy Task.

This could yield real savings on large-scale file copies because the disk is only read once per source file.  The UI was only partially updated to support this.
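The read-once / write-many loop described here, sketched as a stream-level helper (the signature is mine):

```csharp
using System.Collections.Generic;
using System.IO;

// Read each buffer from the source once, then fan it out to every
// destination in the Copy Task. The source disk is hit once per buffer
// no matter how many destinations there are.
static void CopyOneToMany(Stream source, IList<Stream> destinations, int bufferSize)
{
    byte[] buffer = new byte[bufferSize];
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        foreach (Stream dest in destinations)
            dest.Write(buffer, 0, bytesRead);   // sequential per destination in v0.4
    }
}
```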

Also kept the old System.IO.File.Copy() path for un-throttled copies.



Version 0.5 - Added multi-threaded copy.

No UI enhancements - some poorly implemented elements (the destination column shows "Collection", and the Progress column isn't updated).  The log is poorly implemented after the threading changes (can be fixed simply).

In Code added threading for the copy task.

I read bytes into my buffer, then loop through all the destinations, creating a thread for each write into a destination stream.

Then I call a wait and join all the threads.
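Sketched at the stream level, the v0.5 approach looks like this: one short-lived thread per destination write, the throttle wait, then a Join on every writer before the next read (the names and signature are mine; error handling omitted):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading;

// One thread per destination write so the remote writes overlap instead of
// running back to back. The buffer is only reused after every writer joins.
static void CopyOneToManyThreaded(Stream source, IList<Stream> destinations,
                                  int bufferSize, int waitMs)
{
    byte[] buffer = new byte[bufferSize];
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        var writers = new List<Thread>();
        foreach (Stream dest in destinations)
        {
            Stream d = dest;              // local copies for the closure
            int count = bytesRead;
            var t = new Thread(() => d.Write(buffer, 0, count));
            writers.Add(t);
            t.Start();
        }
        Thread.Sleep(waitMs);             // the throttle wait (900 ms in v0.5)
        foreach (Thread t in writers)
            t.Join();                     // all writes done before the buffer is reused
    }
}
```

Joining before the next read is what makes reusing a single buffer safe here; the slowest destination in each pass sets the pace for all of them.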


Results of multi-threaded copy

I tried to compare the copy times between v 0.4 and v 0.5, but due to other activity on the network and not having a proper testing lab, my results are a little skewed.

Copying a 307,142,656-byte file to 3 remote sites.

v 0.4 (single-threaded - read once, write multiple) = 38 Minutes 10 Seconds

v 0.5 (multi-threaded - read once, write multiple simultaneously) = 9 Minutes 51 Seconds.  (The effective throttle speed is different because I changed the wait from 1000 milliseconds to 900 milliseconds, thinking the threads might not join within one second.  With the number of copies completed I don't think it matters, but the throttle speed will never be exact unless I add a lot of logic to make it so - and that logic would have to handle the different copy speed of each thread.)



Copying a 307,142,656-byte file to only 1 remote site.

v 0.4 = 20 Minutes 48 Seconds

Reading and then sequentially writing into each destination (single threaded):



Where am I going from here... fixing the UI, making sure exceptions are handled, making sure directory copy still functions (it may have broken - I have been testing with single files), and fixing the logging.

Posting code snippets - and the program.


]]>
<![CDATA[Leveraging the most from Microsoft Deployment Toolkit]]>Tue, 11 Mar 2014 04:17:45 GMThttp://joebrancoit.com/blog/leveraging-the-most-from-microsoft-deployment-toolkitMicrosoft has provided a very powerful tool for deploying and customizing its operating systems, and I cringed when I walked into my current company in August of 2012 and saw they were using Altiris' Image Deploy, which is a glorified Ghost tool.

Image deployment was something I saw I could improve right away by implementing MDT.  I have used MDT since 2008, have learned many best practices along the way, and got the opportunity here to start clean.

I've written over 100 pages of documentation on how I set up MDT and the best practices implemented (lots of pictures).  After we had customized and deployed to our roughly 70 remote locations, our company finally decided about two months ago to eliminate out-of-support operating systems from the environment.  The company looked at multiple outside contractors to do the work, but I was able to sell my Director on using MDT with the User State Migration Tool (USMT) to upgrade our remaining clients (previously we were not using USMT).

MDT was the simple part; the tricky part was writing the logic and tools to get our software deployment systems to reinstall the users' software automatically after the upgrade process completed.  We have been using Altiris DS for most software installations, but this Windows XP to Windows 7 migration has really driven us to move our software deployments to IBM Endpoint Manager (aka Tivoli Endpoint Manager, known inside our company as BigFix).

High-Overview of process

Windows XP is live and running:
     • Zero-Touch process is started
     • Information about the machine is gathered
     • Programs are cataloged, and relic files are created to mark programs for reinstall.
     • Office, Credant, Our In House Sales tool, and Lotus Notes detection takes place
     • (If detected) Credant Encryption Data is gathered
     • Windows Pre-installation Environment (WinPE) is applied to the machine
     • Computer reboots to WinPE
Windows PE is live and running (total elapsed time so far: 10 minutes)
     • Reconnects with Division Deployment Server
     • Captures User State with Hard Link Migration
     • Cleans excess data from the Hard drive
     • Applies Windows 7 32-bit
     • Customizes image - Applies patches, configures Unattend.xml
     • Reboots
Windows 7 is booting (total elapsed time so far: 25 minutes)
     • First boot drivers are installed and configured
     • Windows auto-logs into the Administrator account with a disabled shell.
     • Joined to domain
     • Applications installed (SEP, Altiris DAgent, HP/Lenovo utilities, etc.)
     • If needed Lotus Notes reinstalled
     • If needed Office 2007 reinstalled
     • User State Restored – Profiles recreated, data put back, etc.
     • If Credant Encryption Needed
         • StateStore Backup of Hard-Links is removed
         • Encryption Indexes are scanned and repaired
         • Credant Encryption is reinstalled to recognize files already encrypted.
     • BigFix Agent reinstalled
     • Corporate Customizations reapplied
     • Reboot
Windows 7 Reboots and stays at Ctrl+Alt+Del (total elapsed time so far: 45 minutes)
     • Users can log on; most base-build applications are already there
     • BigFix starts installing patches, Chrome, and the remote controller
     • BigFix installs programs required for upgrade based on the existence of relics.
     • BigFix completes upgrade installs and prompts the user to Reboot.



So I had the opportunity to write a script that parses the system, finds the programs we want to reinstall, and creates files on the system that are migrated by USMT.  One of the last steps in our upgrade MDT task sequences creates a final relic file that tells BigFix the upgrade has completed - this triggers BigFix to scan for relics and install software based on which relics exist.

The script determines how to create relics based on an application-definition XML file.  It parses the XML and compares it against Sysinternals PsInfo (with /s) output, as well as any custom definitions, like the existence of files or folders, to create the application relics.
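A hedged sketch of that relic pass (the real tool is a script, not C#; the XML element names, the psinfo string match, and the relic file format here are all illustrative assumptions):

```csharp
using System.IO;
using System.Xml.Linq;

// For each application definition, detect the app either by a substring
// match against the saved "psinfo /s" output or by a custom file-exists
// check, and drop a relic file that the post-upgrade scan picks up.
static void CreateRelics(string definitionsXmlPath, string psinfoOutput, string relicDir)
{
    XDocument apps = XDocument.Load(definitionsXmlPath);
    foreach (XElement app in apps.Descendants("Application"))
    {
        string name = (string)app.Element("Name");
        string match = (string)app.Element("PsInfoMatch");
        string fileCheck = (string)app.Element("FileExists");   // custom definition
        bool detected = (match != null && psinfoOutput.Contains(match))
                     || (fileCheck != null && File.Exists(fileCheck));
        if (detected)
            File.WriteAllText(Path.Combine(relicDir, name + ".relic"), "reinstall");
    }
}
```

Because the relic files live on the disk that USMT migrates, they survive the wipe-and-reload and are waiting for BigFix when the new OS comes up.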

]]>