Revision 624 – Shorter XLSX Sheet Names in HUP

In the army, in boot camp, “short-sheeting” is both a method of making a bed with only one top sheet, and the prank of “blading a bud” with the surprise of a very short pit. “Blading” is a prank, not a shank, and a “pit” is a sagging cot (both the noun and the verb of sleeping on one). But I digress.

This revision is a reaction to the surprise of having only 31 characters for an XLSX sheet name. Didn’t see that coming, but hey, it’s almost 4x as long as an 8.3, so I guess we win.

I had to write a quick sheet-name-shortening function in the HUP client, so this revision commits that change to the tool.
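As a sketch of what such a shortener can look like (the class, method name, and tail-preserving strategy are my illustration, not the shipped code; the 31-character cap and the rejected punctuation are Excel’s own rules):

```java
// Illustrative sheet-name shortener; not the actual HUP client code.
// Excel rejects sheet names longer than 31 chars or containing \ / ? * [ ] :
public class SheetNames {
    private static final int MAX_LEN = 31;

    public static String shorten(String name) {
        // Replace characters Excel refuses in sheet names.
        String cleaned = name.replaceAll("[\\\\/?*\\[\\]:]", "_");
        if (cleaned.length() <= MAX_LEN) {
            return cleaned;
        }
        // Keep the head and tail, which usually carry the distinguishing bits.
        return cleaned.substring(0, MAX_LEN - 9) + "~"
                + cleaned.substring(cleaned.length() - 8);
    }

    public static void main(String[] args) {
        System.out.println(shorten("Link Utilization by Switch Port, Q3"));
    }
}
```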

While doing this, a “NoCapHeader” filter was added to swap the header “% Capacity” for “Utilization %” — it seems there’s a reader who sees a [% Capacity] column in a link-utilization report (nothing to do with Storage) and assumes it’s a measure of his used storage. Huh? There’s no way to predict users! That’s cool. It’s a fairly unglamorous filter, but I’m a fan of faster comprehension, so it’s there now. This is applied automatically when using viwc-hup-xlsx.jar; there’s no need to activate it.
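As a sketch (the class name and hook point are hypothetical; only the two header strings come from the change itself), the swap is little more than:

```java
// Hypothetical sketch of the NoCapHeader filter: rename a header cell so a
// link-utilization column can't be mistaken for a storage-capacity figure.
public class NoCapHeader {
    public static String[] filterHeader(String[] header) {
        String[] out = header.clone();
        for (int i = 0; i < out.length; i++) {
            if ("% Capacity".equals(out[i])) {
                out[i] = "Utilization %";
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String[] h = filterHeader(new String[] {"Link", "% Capacity"});
        System.out.println(String.join(",", h));
    }
}
```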

Revision 622 – HUP Report CSVs filed away in Pure-Java

I’m no huge proponent of Java, but it does run anywhere there’s a JRE. Unfortunately, .NET doesn’t. Bash/Korn/Dash shell scripts don’t. Some AWK scripts don’t (BSD and GNU fragmenting USL much?).

I do recognize the benefit of working within one toolset. The VirtualWisdom product is Java, so there tends to be a Java Runtime everywhere we are.

For this reason, there’s now a pure-Java variant of the “fileawayscript”, which was used to pseudorandomly (but consistently) determine pathnames for the CSV reports (report.csv.zip) sent out by the VW Platform when emailing generated reports (see also the MailDropQueue project for a way to accept and store those reports).

This update is merely to simplify the process and reduce the dependency list back to “java” and nothing else.
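I haven’t reproduced the fileawayscript’s exact logic here, but a deterministic hash-bucket scheme like the following sketch gives the same property: the same report name always maps to the same path, with no state kept between runs. The two-level bucket layout is my assumption, not the shipped logic.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of a deterministic "file-away" path: hash the report name so the
// same report always lands in the same bucket, across runs and JREs.
public class FileAway {
    public static String pathFor(String reportName) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] d = md.digest(reportName.getBytes(StandardCharsets.UTF_8));
            // Two hex bytes give 256x256 buckets to spread the files out.
            return String.format("%02x/%02x/%s", d[0], d[1], reportName);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is mandatory in every JRE
        }
    }

    public static void main(String[] args) {
        System.out.println(pathFor("report.csv.zip"));
    }
}
```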

Revision 618 – HUP Tools Added

This revision saw the introduction of the viwc-hup-xlsx.jar file, which can import CSV reports of specific templates into a single XLSX file for eventual HUP report generation.

Currently, we run this using:

java -jar viwc-hup-xlsx.jar -h file://some/pathname/base

… or we can just use a non-protocol pathname (i.e. lacking any “xxx://”) to activate the local-file convenience assumption:

java -jar viwc-hup-xlsx.jar -h work\customer-ID-4414\

The result is a local file HUP-yyyy-MM-dd-HH-mm.xlsx, where yyyy-MM-dd is the current date and HH-mm is the time at which report generation started.
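For illustration, that naming convention can be produced with a plain SimpleDateFormat; the pattern comes from the text above, the surrounding code is my sketch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Sketch of the output-name convention: HUP-yyyy-MM-dd-HH-mm.xlsx, stamped
// with the moment report generation started.
public class HupName {
    public static String fileName(Date started) {
        return "HUP-" + new SimpleDateFormat("yyyy-MM-dd-HH-mm").format(started)
                + ".xlsx";
    }

    public static void main(String[] args) {
        System.out.println(fileName(new Date()));
    }
}
```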

The next major revision will see FC8 and FCX data incorporated, and direct ingress from the portal service rather than from reports, but that’s in the future.

Revision 617 – Dump Existing Nicknames moved from VIFT; initial DCFM extraction

After some discussion, the query used for “--dump-config=nickname.csv” in VIFT was improved and added back into VICT. This allows a simple export of the existing Nicknames for use with the “-R” option from revision 615, so that a customer who wants to generate a Nickname Diff can do so using:

  1. java -jar vict.jar -D nickname.csv
  2. java -jar vict.jar -N bna://server/ -D nickname.csv -n diff-nicknames.csv

This ensures that diff-nicknames.csv includes only the NEW nicknames for import.

Additionally, notice how --nickname=bna://server/ is used; this is an abbreviation for bnapsql://, and cmcne:// also applies:

java -jar vict.jar -N bna://server/ -N cmcne://otherserver/ -D nickname.csv -n diff-nicknames.csv

If you’re leveraging the PHC-based Suggested nicknames, you can still use the VICT as a copying entity:

  1. java -jar vict.jar -D nickname.csv
  2. java -jar phc.jar -A\some\path\bob
  3. java -jar vict.jar -N bna://server/ -N \some\path\bob-Nicknames.csv -D nickname.csv -n diff-nicknames.csv

There’s some initial work toward vict.jar -N dcfm://, but the schema is still up in the air.

Now go keep those nicknames updated!

Revision 615 – RemoveNicknames Allows Nickname-Diff Creation

There has been a strange issue with diminishing efficiency of nickname import as the number of nicknames increases and the number of other threads competing for a database lock grows. Apparently it’s too difficult to check nicknames before setting them, and I’ve seen roughly 3 nicknames every 2 seconds on some portals.

Considering this, and that we have ways of getting nicknames from files (including a basic 2-column format), I’ve added a method of taking a parsed list of nicknames and removing them from the nickname base.

The “--removenicknames=” option (“-R”) parses a nickname input (BNA, file://, OCI, HTTP, FTP; the same logic as adding) and removes those nicknames from the internal nickname base rather than adding them.

This would let us:

  1. collect new nicknames from all the current sources
  2. remove a list of assumed-current nicknames
  3. write out the remainder for import

Additionally, collecting the current assumed-current plus the new nicknames gives us the new assumed-current.
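The steps above amount to a set difference keyed on WWN. A minimal sketch (the class and method names are my illustration, and removal here is key-based only; the real tool may compare the nickname values too):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the nickname diff: start from everything collected, drop whatever
// the portal is assumed to already hold, and the remainder is the import set.
public class NicknameDiff {
    // WWN -> nickname, insertion-ordered so the written CSV is stable.
    public static Map<String, String> diff(Map<String, String> collected,
                                           Map<String, String> assumedCurrent) {
        Map<String, String> remainder = new LinkedHashMap<>(collected);
        remainder.keySet().removeAll(assumedCurrent.keySet());
        return remainder;
    }

    public static void main(String[] args) {
        Map<String, String> collected = new LinkedHashMap<>();
        collected.put("10:00:00:00:00:00:00:01", "hostA");
        collected.put("10:00:00:00:00:00:00:02", "hostB");
        Map<String, String> current = new LinkedHashMap<>();
        current.put("10:00:00:00:00:00:00:01", "hostA");
        System.out.println(diff(collected, current)); // only hostB remains
    }
}
```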

Revision 613 – fctransfers Gives us SFTP Uploads

In rev 605, the upload code was completely dependent on the convenience library brought in with fctransfer.jar.

This revision evolves that change: all uploads are delegated to the fctransfer project, which means we get SFTP uploads for free:

Upload methods are now:

  • FTP: java -jar vict.jar -U ftp://scott:tiger@ftp.example.com/path/ -u file1 -u file2
  • SFTP: java -jar vict.jar -U sftp://scott:tiger@ftp.example.com/path/ -u file1 -u file2

Notice the similarity? That’s intentional.

Of course, both upload methods send a checksum including a notify element.
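Since both upload forms share one URL shape, splitting a -U target apart is straightforward with java.net.URI; this sketch is my illustration of that shape, not fctransfer’s actual parsing:

```java
import java.net.URI;

// Sketch: decompose an upload target like sftp://user:pass@host/path/ into
// the pieces a transfer library needs. Credential handling is illustrative.
public class UploadTarget {
    public final String scheme, user, pass, host, path;

    public UploadTarget(String url) {
        URI u = URI.create(url);
        this.scheme = u.getScheme(); // "ftp" or "sftp" selects the transport
        String[] cred = u.getUserInfo() == null ? new String[] {"", ""}
                                                : u.getUserInfo().split(":", 2);
        this.user = cred[0];
        this.pass = cred.length > 1 ? cred[1] : "";
        this.host = u.getHost();
        this.path = u.getPath();
    }

    public static void main(String[] args) {
        UploadTarget t = new UploadTarget("sftp://scott:tiger@ftp.example.com/path/");
        System.out.println(t.scheme + " " + t.user + "@" + t.host + t.path);
    }
}
```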

There remains some oddity in the commons-net-ssh library used; I may need to swap the underlying SSH implementation for sshj or similar.