Revision 624 – Shorter XLSX Sheet Names in HUP

In the army, in boot camp, "short-sheeting" is both a method of making a bed with only one top sheet, and a way of "blading a bud" with the surprise of a very short pit. "Blading" is a prank, not a shank, and a "pit" is a sagging cot (both the noun and the verb of sleeping on one). But I digress.

This revision is a reaction to the surprise of having only 31 characters for an XLSX sheet name. Didn't see that coming, but hey, it's nearly 4x as long as an 8.3 filename, so I guess we win.

I had to write a quick implicit sheet-name-shortening function in the HUP client, so this revision commits that change to the tool.
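As a rough sketch of what such a shortening function involves (class and method names here are hypothetical, not the actual HUP client code), the trick is to truncate to the limit while keeping names unique within the workbook:

```java
import java.util.HashSet;
import java.util.Set;

public class SheetNames {
    // XLSX worksheet names are limited to 31 characters and must be unique.
    private static final int MAX_LEN = 31;
    private final Set<String> used = new HashSet<>();

    /** Truncate a candidate sheet name to 31 chars, suffixing a counter on collision. */
    public String shorten(String candidate) {
        String base = candidate.length() <= MAX_LEN
                ? candidate
                : candidate.substring(0, MAX_LEN);
        String name = base;
        int n = 1;
        while (!used.add(name)) {
            // Collision: make room for a "~N" suffix and try again.
            String suffix = "~" + n++;
            name = base.substring(0, Math.min(base.length(), MAX_LEN - suffix.length())) + suffix;
        }
        return name;
    }
}
```

The uniqueness check matters because two long report names that differ only past character 31 would otherwise truncate to the same sheet name.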

While doing this, "NoCapHeader filtering" was added to swap the header "% Capacity" for "Utilization %". It seems there's a reader who sees a [% Capacity] column on a link-utilization report (nothing to do with storage) and assumes it's a measure of his used storage. Huh? There's no way to predict users! That's cool. It's a fairly unglamorous filter, but I'm a fan of faster comprehension, so it's there now. It's applied automatically when using viwc-hup-xlsx.jar; there's no need to activate it.
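The filter itself amounts to a header-row substitution; a minimal sketch (the class name is hypothetical, and the real filter inside the jar may be more involved):

```java
public class NoCapHeader {
    /** Swap the misleading "% Capacity" header for "Utilization %" in a CSV header row. */
    public static String filterHeader(String headerRow) {
        return headerRow.replace("% Capacity", "Utilization %");
    }
}
```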

Revision 622 – HUP Report CSVs filed away in Pure-Java

I’m no huge proponent of Java, but it does run anywhere there’s a JRE. Unfortunately, .NET isn’t everywhere. Bash/Korn/Dash shell scripts aren’t. Some AWK scripts aren’t portable either (BSD and GNU fragmenting USL much?).

I do recognize the benefit of working within one toolset. The VirtualWisdom product is Java, so there tends to be a Java Runtime everywhere we are.

For this reason, there’s now a pure-Java variant of the “fileawayscript”, which was used to pseudorandomly (but consistently) determine pathnames for the CSV reports (report.csv.zip) sent out by the VW Platform when emailing generated reports (see also the MailDropQueue project for a way to accept and store those reports).

This update is merely to simplify the process and reduce the dependency list back to “java” and nothing else.
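The pseudorandom-but-consistent part can be had by hashing the report name into a fixed set of buckets; this is only an illustrative scheme (names and the bucketing are my assumptions, not the actual script's logic):

```java
public class FileAway {
    /** Derive a consistent pseudorandom subpath for a report name.
        Hypothetical scheme: hash the name into 16 stable bucket folders,
        so the same report always files away to the same place. */
    public static String destination(String baseDir, String reportName) {
        int bucket = Math.floorMod(reportName.hashCode(), 16); // stable for a given name
        return baseDir + "/" + String.format("%02d", bucket) + "/" + reportName;
    }
}
```

Because the bucket is derived from the name alone, re-running the tool on the same report lands it in the same directory every time, which is the property the fileaway script needs.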

Revision 618 – HUP Tools Added

This revision saw the introduction of the viwc-hup-xlsx.jar file which can import CSV reports of specific templates to a single XLSX file for eventual HUP report generation.

Currently, we run this using:

java -jar viwc-hup-xlsx.jar -h file://some/pathname/base

… or we can just use a protocol-less pathname (i.e., one lacking any “xxx://” prefix) to activate the local-file convenience assumption:

java -jar viwc-hup-xlsx.jar -h work\customer-ID-4414\

The result is a local file HUP-yyyy-MM-dd-HH-mm.xlsx, where yyyy-MM-dd is the current date and HH-mm is the time at which the report generation started.
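That filename pattern maps directly onto a Java date format; a small sketch of how the name could be built (the class and method are hypothetical, not the tool's own code):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class HupFilename {
    /** Build the HUP-yyyy-MM-dd-HH-mm.xlsx output name from a start timestamp. */
    public static String at(Date start) {
        return "HUP-" + new SimpleDateFormat("yyyy-MM-dd-HH-mm").format(start) + ".xlsx";
    }
}
```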

The next major revision should see FC8 and FCX data incorporated, plus direct ingress from the portal service rather than from reports, but that’s in the future.

Revision 592 – CSVPipe Separated

Much as I don’t like CSV, I’ve used arrays of strings just as some would use CSVs. If only CSVs were deterministically parsed… if only spaces didn’t confuse some parsers…

… so in support of butchering text streams with CSV-like data, and to facilitate post-processing of content (i.e., feel like a min/max/avg/deviation/mean ± 3 sigma?), I’ve separated the RowPrinter content off into a separate jar. This allows a basic file to be read, parsed, munged, and ejected back out for others to consume.
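The kind of post-processing I mean can be sketched over already-parsed rows; this is a minimal illustration in the spirit of that munging step (the class and method names are mine, not CSVPipe's actual API):

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

public class CsvSummary {
    /** Summarize one numeric column of parsed CSV rows: count, min, max, average.
        Deviation / 3-sigma bands would build on the same single pass. */
    public static DoubleSummaryStatistics summarize(List<String[]> rows, int column) {
        DoubleSummaryStatistics stats = new DoubleSummaryStatistics();
        for (String[] row : rows) {
            stats.accept(Double.parseDouble(row[column].trim()));
        }
        return stats;
    }
}
```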

A few testcases help to keep me honest.

Tersely, I wrote in the change log: CSVPipe: different chunking and summarizing; ability to set different keys for chunking; test cases; testsuite.at.