In Revision 509, I added a filter to skip hard-zoning entries in the awk-based zoneshow-to-nickname filtering.
The “fix” is merely to discard them: no nicknames are present when hard zoning is used, and only about 1% of customers use it (I’ve seen it twice, ever).
While working with a customer in Newark, CA, I found the dupe-detection was doing things in a UNIX-like manner: case-sensitive. Humans (meatware) don’t work this way, so the software now looks for dupes ignoring case.
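Conceptually, the change is just a case-insensitive comparison when hunting for duplicates. A minimal sketch in Java (the class name and sample nicknames below are made up for illustration, not PHC’s actual code):

```java
import java.util.Set;
import java.util.TreeSet;

// Minimal sketch, not PHC's actual code: collect nicknames into a
// case-insensitive set so "Host_A" and "HOST_A" count as duplicates.
public class DupeCheck {
    public static void main(String[] args) {
        Set<String> seen = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        for (String nickname : new String[]{"Host_A", "HOST_A", "Array01"}) {
            if (!seen.add(nickname)) {
                System.out.println("Duplicate (ignoring case): " + nickname);
            }
        }
    }
}
```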
I’m not proud of this one. Please bear with me:
In revision 493, I wrote about how the parser for “--nickname=” actually pushes content to three separate parsers, and simply chooses the one with the best results. That was all about not trying to guess the content, but leaving the guessing to the parsers. Whichever one gets the most results wins. Too easy, and extremely scalable.
Problem is, the underlying Apache lib used to fork off the incoming stream (to avoid downloading a file multiple times just to parse it) doesn’t always seem to work.
I put a lot of time and concern into trying to figure out why, but in the end, I just added a retry-counter.
When all parsers return a “shoot, I dunno” response, we simply run it again. And again. And again. …not so obsessive because we give up after 3 times, but you’re free to make it as psychotic/obsessive as you want.
To describe this, I verbosely wrote “add retries to the parsing so that we can thrash on a file if we need to just-get-it-done”
I promise to do better design in the future, but for now, this will re-download the file once per full retry cycle. This doesn’t matter at all for file:// URLs, but for ftp://, bnapsql://, and http://, it will show up as multiple tries.
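Roughly, the behaviour looks like the sketch below. All of the names (Parser, fetch(), MAX_RETRIES) are assumptions for illustration, not PHC’s real API; the point is only the shape: every parser sees the same downloaded content, the one with the most entries wins, and an all-empty round triggers a fresh fetch, up to three times.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only: Parser, fetch() and MAX_RETRIES are assumed names,
// not PHC's real API. Every parser sees the same content, the parser with the
// most results wins, and an all-empty round triggers a full re-fetch.
interface Parser {
    Map<String, String> parse(byte[] content); // WWN -> nickname
}

class NicknameLoader {
    private static final int MAX_RETRIES = 3;

    static Map<String, String> load(String uri, List<Parser> parsers) throws Exception {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            byte[] content = fetch(uri);          // re-downloaded on every retry cycle
            Map<String, String> best = Map.of();
            for (Parser p : parsers) {
                Map<String, String> result = p.parse(content);
                if (result.size() > best.size()) {
                    best = result;                // parser with the most entries wins
                }
            }
            if (!best.isEmpty()) {
                return best;
            }
            // all parsers said "shoot, I dunno": thrash on it again
        }
        return Map.of();
    }

    static byte[] fetch(String uri) throws Exception {
        try (java.io.InputStream in = java.net.URI.create(uri).toURL().openStream()) {
            return in.readAllBytes();
        }
    }
}
```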
For this revision, I wrote: “enable the --nickname= function to fork inbound content to a number of parsers; the one with the most results wins. Net result: the FAE or user need not worry what they send to the tool, it will try to figure out what the file is. Supports user-selected columns in CSV, Brocade ZoneShow, and BNA”
What does that mean, in detail?
In the past, --nickname= fed directly into a single consumer that understood the user giving a “;WWN=x” or “;Nickname=y”, and used those columns as input to the nickname data.
Now, there are three parsers all feeding from the same resource, so even remote content (i.e. ftp:// and http:// URLs) is only downloaded once, but forked to many parsers. Without the user worrying about format, the three parsers try to interpret the stream to see what they can dig up. The “winning” parser is the one with the most results, effectively adapting to whatever the user sends it. The three formats currently understood directly are:
- user-selected columns in CSV
- Brocade zoneshow output (accuracy is challenged if the user sends a logdump of a screen-scraped output; for best results, treat the output as binary and convert it directly using plink.exe or ssh, not a screen-capture of a log dump)
- BNA
To re-iterate, the following URI types are understood: file://, ftp://, http://, and bnapsql://.
It seems that someone is creating Demo Databases rather than using mine (which are cleaned up by Nick).
They’re broken. Specifically, they have extra data appended at some point. HINT: if the data in your five-minute summary ends at 2011-11-16, but your last interval is 2012-04-17, you have the broken one.
PHC now detects that, using both the packages and the final summary.
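As a hedged illustration of the idea (the method and field names are invented; PHC’s real check uses both the packages and the final summary), the detection amounts to noticing that the last interval lands after the end of the five-minute summary:

```java
import java.time.LocalDate;

// Illustration only, names invented: the broken demo databases have extra data
// appended, so the last interval lands well past the end of the five-minute
// summary, e.g. summary ends 2011-11-16 but the last interval is 2012-04-17.
class DemoDbCheck {
    static boolean looksBroken(LocalDate summaryEnd, LocalDate lastInterval) {
        return lastInterval.isAfter(summaryEnd);
    }

    public static void main(String[] args) {
        System.out.println(looksBroken(LocalDate.of(2011, 11, 16),
                                       LocalDate.of(2012, 4, 17))); // true
    }
}
```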
In order to increase the versatility of TransformUDC to define groups by pattern, in revision 485 I added the ability to use \\N expressions in transforms. This, however, requires a very recent, GNU-specific awk; some versions (including the one in UnxUtils.zip) do not understand this function.
In revision 480, I reduced the jarfile by moving java builds to their own isolated directories, avoiding propagating cross-packaged cruft (the “Free Wifi at airports” problem). This reduces unpredictable build results and ambiguous dependencies.
In this revision, I added fallback logic so that if a file:// URL is not given and the argument looks more like a local file, the tool looks for the local file instead.
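A rough sketch of that fallback, with a hypothetical helper name rather than PHC’s actual code: if the argument has no scheme and names a file that exists on disk, treat it as a local file; otherwise treat it as a URI:

```java
import java.io.File;
import java.net.URI;

// Rough sketch (hypothetical helper, not PHC's real code): if the argument
// isn't an explicit URL but names a file that exists locally, use the local
// file instead of trying to download anything.
class SourceResolver {
    static URI resolve(String arg) {
        File local = new File(arg);
        if (!arg.contains("://") && local.exists()) {
            return local.toURI();   // plain path that exists on disk
        }
        return URI.create(arg);     // otherwise treat it as a URI/URL
    }
}
```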
For revision 476, I wrote “made PHC “do the right thing” a bit better by dropping out some config in the bat file and activating an “auto” mode; may have issues in a “-h uri” mode”
In short, there it is: the PHC becomes “Automatic-er” and allows me to be lazier. Avoiding possible user-error is merely a stretch-goal 🙂
Consider the 7 characters:
PHC.BAT
What this does is:
Auto Mode
Auto Mode simply does the smarter things for you, so that fewer choices are necessary:
- PHC-XLSX.jar: has enabled XLSX creation, build phc.xlsx
- phc-OUISignatureList.xml: which can be used as the basis of edits to OUISignatureList.xml
- phc-vm.csv: which can define HBA speeds based on having all HBAs under ProbeSW

convert the simpler inittab entry to the wannabe-complex-for-koolness /etc/rc.d/init.d/so.d/damn.d/cool.d/random-long-name init config forced by CentOS-6.0