How to Fix pg_hba.conf to Allow bnapsql:// to Read Nicknames

The bnapsql:// protocol was added over two years ago; this protocol connects to BNA’s backing database (PostgreSQL) and asks it directly for some information:

[diagram: vict.jar asking BNA’s backing PostgreSQL database directly for nicknames]

The benefits of this method versus an SMI-S method are simple:

  1. it doesn’t require a license fee to check or try
  2. it grabs both “zone aliases” and “aliases” (the “SMI-S” interface — CIM-XML — only shares “zone aliases”)

This worked fine until BNA-12.0.2 (including HPNA and CMCNE); from that version onward, the vict.jar reports an error something like this:

FATAL: no pg_hba.conf entry for host “192.168.1.1”, user “dcmuser”, database “dcmdb”, SSL off
Please add the client’s IP address to the file
ie: host all all 0.0.0.0/0 md5

So what’s the problem?

pg_hba.conf is like a hosts.allow used in old UNIX: it lists those allowed to talk to the server. It’s like an Access-Control List.

In BNA-12.0.2, the standard entry was changed from:
host all dcmuser 0.0.0.0/0 md5
to:
#MIGRATION#host all dcmuser 0.0.0.0/0 md5

…so you can see that the entry has merely been commented out, along with its IPv6 equivalent. In short, we’ve lost access to the backing database because BNA’s ACL was changed to better protect itself.

So what’s the solution?

Strange as it may seem, the error message holds the key to the solution:

Please add the client’s IP address to the file
ie: host all all 0.0.0.0/0 md5

Now, I’d never accuse anyone of not bothering to read the error message, no! 🙂 Seriously, this sort of error message reads like so much TL;DR spew, and the problem is: which one? Which pg_hba.conf? Did I pick the correct one out of two, three, or four?

Just like everyone else, I like to get stuff done and go home; in support of getting things done, and without throwing my peers under the bus too much, here’s more detail about fixing this problem:

The vict.jar tries to give a hint with a filename, but that only works on Windows installs of a specific version. In short:

  1. find all the pg_hba.conf files
    • everything but windows: locate pg_hba.conf
    • everything but windows: find /usr/apps -name pg_hba.conf
    • windows: use whatever windows has this week as a search tool to find these files
  2. change each one (a sketch of the edit follows this list), verifying each file after you change it
  3. you may need to SIGHUP the database server
    • on linux/UNIX/MacOSX/BSD/everything-but-windows: killall -HUP postgres, or
    • on linux/MacOSX/BSD: ps axwl|grep postgres; kill -HUP (the PIDs shown by that command)
    • on UNIX (USL) and UNIX variants (including AIX): ps -ef|grep postgres; kill -HUP (the PIDs shown by that command)
    • windows: forget it: there’s no signal subsystem. Just restart the postgresql service every time. Yeah, that’s heavy-handed
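
The edit in step 2 is just removing the #MIGRATION# prefix so the dcmuser entries (IPv4 and its IPv6 twin) are live again. Here’s a minimal sketch for a non-windows install, assuming GNU sed and the bundled pg_ctl run as the database’s owning user; repeat it in each directory that holds a pg_hba.conf, and back up first:

cp pg_hba.conf pg_hba.conf.bak              # keep a copy in case the edit goes sideways
sed -i 's/^#MIGRATION#//' pg_hba.conf       # un-comment the migrated entries (BSD/MacOSX sed wants -i '')
grep dcmuser pg_hba.conf                    # confirm the dcmuser lines are no longer commented out
pg_ctl reload -D /path/to/data/directory    # PostgreSQL's own equivalent of the SIGHUP in step 3; point -D at that server's data directory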

There might be an easier way to find out which directory holds the pg_hba.conf file that matters, but it isn’t consistent across installs. I doubt there’s much benefit in knowing the exact pathname on every system PostgreSQL runs on; a repeatable method of finding it is more useful.
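
One hedged shortcut: if you can still log in locally (a localhost or unix-socket entry often survives the migration, and the bundled psql usually sits beside the rest of the PostgreSQL binaries), you can ask the server itself which file it loaded, privileges permitting:

psql -h 127.0.0.1 -U dcmuser -d dcmdb -c 'SHOW hba_file;'    # prints the path of the active pg_hba.conf; drop -h to try the unix socket, add -p for a non-default port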


How to Collect DCNM and BNA Data via SMI-S Interface

OK, I need to come clean on one thing: this article isn’t about SMI-S per se, but about connecting via CIM-XML. The thing is, “what is CIM-XML?” When a client connects to, let’s say, BNA, it can talk CIM-over-HTTP, CIM-over-HTTPS, or CIM-over-RMI. In hindsight, maybe I should have focused on RMI, but I had reasons. Had I titled this “…Data via CIM-XML over HTTP”, I would anticipate glazed eyes and no real uptake on why this matters.

The trick is: it doesn’t matter a whole lot. …but it’s there if you need it, simply because I had it around.

We typically draw information from BNA (and, in alpha quality, from DCNM) by speaking directly to the underlying database, like this:

[diagram: the collector speaking directly to the BNA/DCNM backing database]

So normally, that’s a command such as:

java -jar vict.jar -N bnapsql://bna.example.com/
java -jar vict.jar -N dcnmsql://dcnm.example.com/ (again, needs QA)

These use the BNADatabase passwords, not the user password that people are more familiar with. These connections are typically hindered by the ACL (the evil “pg_hba.conf”, all 4 of them).

The thing is, this method (in BNA) gets the data that isn’t available by SMI-S… err… CIM-XML: it gets the aliases that are not zone aliases. If you don’t recognize the difference, or don’t remember “the McData way”, just understand that some data isn’t available over CIM-XML.

So there I was, working on a DCNM Writer for a customer. It had been taking way too long, and in order to test, I added a DCNM CIM-XML client to the parsers: I needed something to bang on the DCNM and see what it held before I tried to push changes into it.

I needed this:

[diagram: a CIM-XML client querying DCNM directly]

I decided to complete a functional BNA client (alpha) together with a DCNM client, and make both available to vict.jar (VW3) and vw4tools.jar (VW4) via the underlying FibreChannel-Parsers library. They’re used like this:

java -jar vict.jar -N bnacql://bna.example.com/ (this one needs QA)
java -jar vict.jar -N dcnmcql://dcnm.example.com/
java -jar vw4tools.jar -N bnacql://bna.example.com/ (this one needs QA)
java -jar vw4tools.jar -N dcnmcql://dcnm.example.com/

The abbreviation for the protocol is BNA/DCNM followed by CQL, the CIM Query Language, which is actually similar to SQL92 (the language, not the Microsoft product). Microsoft has a variant for WMI called WQL. If you like, you can be more explicit about the defaults:

java -jar vw4tools.jar -N dcnmcql://scott:T1ger@dcnm.example.com:5988/cimv2

Of course, you’d want a -o or -n to make use of the collected data, and you’ll see collected nicknames show up as NicknameParser counts (these data sources feed a text stream that is parsed by NicknameParser). vw4tools has the full capability to --pattern itself into some upper-level entities, or just spit out fcports.
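
For instance, a collect-and-write in one go might look like this (the hostname and output filename are placeholders):

java -jar vw4tools.jar -N dcnmcql://dcnm.example.com/ -o nicknames.json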

…and that’s the power of what I’ve done: the BNA and DCNM portions are merely small layers over the underlying capability. I could replace the vCenter collector with a CIM-XML client, or use that to interrogate various storage devices, but I assume VirtualWisdom4 Discovery will eventually do that for us in a much more quality-checked, code-reviewed, and reliable manner.

As a reminder, the things I build are intended for the installation timeframe, where a few hiccups are accepted so long as the task is completed. I don’t necessarily expect these tools to be used beyond installation day.

Version 1.0.73 – Collect Nicknames via CIM-XML CQL Client

This release allows two additional “protocol” values: in addition to http, ftp, bnapsql, dcnmsql, and the formats listed in the FibreChannel-Parsers docs, it adds:

  • bnacql://user:pass@server:port/path to query a BNA server using CQL
    • ie bnacql://bna.example.com/
    • ie bnacql://scott:tiger@bna2.example.com:5988/cimv2
  • dcnmcql://user:pass@server:port/path to query a DCNM server using CQL
    • ie dcnmcql://dcnm.example.com/
    • ie dcnmcql://scott:tiger@dcnm.example.com:5988/brocade1
    • ie dcnmcql://customer:pass@dcnm.example.com/

For example, I’ve been hammering away at it using a command like this:

java -jar vict.jar -N dcnmcql://admin:adminpass@192.168.1.130/ -n nick.csv
… and I would see that the collection extracted 3 DeviceAliases.

The DCNM CQL client draws out Device Aliases, but I haven’t found fcaliases yet.

The BNA CQL client will draw out Zone Aliases, but not Aliases of the non-zone-alias sort.

Why? I needed a CIM-XML client for some work I was doing, and I had the code loosely working so that I could use it to test the other real deliverable. Since I had a DCNM client already, I split the Cisco-specific stuff out, and slotted in a BNA client. The DCNM client (via dcnmcql) is working just fine, but I don’t have a test server to beat up with the BNA client. It works in theory?

How is this useful? Not a whole lot, since VW4 will use protocols like these to collect information, but I’d like to point out something:

this doesn’t need a license

This would actually let a customer check “will VW4 see all of my aliases?” which — as Application Engineers and Deployment Techs know — is actually a fairly long pole in the circus tent of VW4 deployments.

Version 1.0.72 – Fix a Null Pointer Exception

This release is simply a bug fix: Chris Carlton gave vict.jar a command that caused a parser to be not-sane; the resulting null propagated and trashed the entire parser. Unfortunately, the exception cascaded to the array of parsers, breaking isolation and tearing down all the parsers.

This affects both vict.jar and vw4tools.jar, as both share the underlying FibreChannel-Parsers library.

I need to create wrapping exception blocks to stop repeats of the cascade, but in the short term this one symptom is resolved in this release. …with my apologies.

Version 1.0.71 – SwappedNicknameParser

I created a specific instance of the NicknameParser as a convenience: I worked with a colleague in a limited environment wherein he could not run an “awk -f swap-1-2.awk” to swap columns, and wasn’t getting results from the parser. To be honest, it took us both too long to realize that the simple WWPN/Nickname order was swapped to Nickname/WWPN.

I hate being surprised by software when there’s a deadline; as well, I like to cater to jet-lagged Application Engineers and anyone who “just wants to get the gig done”.

This adds a NicknameParser equivalent to --nickname=file.csv;WWN=1;Nickname=0 but avoids having to explain that. The situation is common enough that this addition just helps get it done with very little drawback.
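
For comparison, the older workarounds looked something like the following; the filenames are placeholders, and the one-liner is only my guess at what swap-1-2.awk did, not its actual contents:

awk 'BEGIN { FS = OFS = "," } { print $2, $1 }' swapped.csv > nick.csv    # put the columns back into WWPN,Nickname order
java -jar vw4tools.jar -N nick.csv -o nick.json

java -jar vw4tools.jar '--nickname=swapped.csv;WWN=1;Nickname=0' -o nick.json    # or spell out the columns, quoted so the shell leaves the semicolons alone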

FibreChannel-Parsers added the SwappedNicknameParser; vitools includes a test case to ensure that the fcparsers.jar picked up during the build includes the convenience feature.

How to Convert Nicknames to JSON for VirtualWisdom4

VirtualWisdom4 enables deeper insight into the metrics behind the performance and availability of large-scale data networks, but has a few challenges in the initial setup. One we see fairly often on the deployment side is the fact that few customers know what JSON is. Sure, a developer will say “JSON, yeah, I got that”, but VirtualWisdom4 users are not all developers. Heck, our field staff don’t interpret JSON, and don’t recognize when a “{” is where a “[” should be.

VirtualWisdom4 collects data as soon as it gets access, but that data is not aggregated into upper-level entities such as Hosts and Storage Arrays until those entities exist. In essence, VW4 collects immediately, but the data is of limited benefit until those entities are created, so creating those entities is critical to return-on-investment. The sooner we get to visualizing data, the sooner we can begin trouble-killing.

Often, it’s easier just to convert from a common format that our customers understand, or can produce from other tools, into JSON.

 

One example we use is the basic 2-column Comma-Separated Values (CSV) format that our previous VirtualWisdom3 product line understood. Of course, CSV has no schema either, and suffers other fragility, such as disagreeing assumptions about whether spaces are used, quotation-enclosure, or even end-of-line markers. Even a basic CSV parser needs to be very liberal about what it accepts — which means, to a coder, that assumptions and stream-cleanup are required. We can parse CSV into JSON, but the variances in format then surface further along in the process. We still lack schema validation, so we can only validate the JSON — both punctuation and schema — right at the VW4 import. Be prepared for some last-minute quick-fixes to import data.

Let’s consider a basic 2-column file:
50:0A:09:85:98:12:34:56, NetApp-123456
50:0A:09:86:98:12:34:56, NetApp-123456
10:00:00:00:c9:12:34:56, Oracle01
10:00:00:00:c9:12:34:57, Oracle01

We want to convert this to basic entities that can be aggregated to create upper-level entities. In the past, we’ve created the HBA Card as a way to offer WWPNs to an upper-level host, but within the version 1 JSON format, permission to create HBA Ports and HBA Cards changes. With version 4.0.2, we gained the ability to define a FibreChannel Port, still within version 1 of the JSON format. FCPorts are either HBA Ports or Storage Ports, and are available to the Entity-Creation Utility wherever an HBA Port or Storage Port is presented, so they’re the most efficient way to offer a Name/WWPN mapping to VirtualWisdom4.

We can convert this 2-column CSV into JSON by a number of means. Since VI is a Java company, and Java does indeed “run anywhere”, I tend to use Java for portability, in addition to its behavior as an early-bound language (symbols are verified during development, not last-minute on the user’s desktop as with scripting languages — but that’s a personal preference).

Let’s look at this flow:

[diagram: flow for converting a nickname CSV into JSON for VW4 import]

In our implementation, we’re going to use CSV as our common format and handle the conversion step with vw4tools (an open-source pure-Java parser library), so our flow looks like:

[diagram: the same flow, with vw4tools performing the CSV-to-JSON conversion]

Running Java

The hardest thing about this process is finding Java. If you don’t know how to run “java -version” for your platform, check that link.
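
If java isn’t on your PATH at all, the install often ships its own JRE (the Windows example in the next section uses exactly that); on non-windows you can hunt for it with the same trick we used for pg_hba.conf. The /usr/apps path is just an assumption borrowed from earlier, so adjust it to your install:

find /usr/apps -name java -type f 2>/dev/null    # non-windows; on windows, look under the install's jre64\bin directory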

Running vw4tools

VW4tools is run simply using the “-jar” option:

java -jar vw4tools.jar ...

If you have to use a full pathname to java, or you need to use a .exe, or both, this might look something like this:

"C:\Program Files\CMCNE 12.0.2\jre64\bin\java.exe" -jar vw4tools.jar

Please pay attention to the double-quotes: they’re needed to enclose paths containing spaces.

Running vw4tools with an Input File

VW4Tools works like an inhale/exhale action: it inhales files to process, may do things with that information, then exhales JSON files (or other debug files).

To give vw4tools an input file to process, pass each file with a “--nickname=” or “-N ” and a URL of the file. For example:

java -jar vw4tools.jar -N http://example.com/some/query?fabric=Neo&pill=red ...

If the file is a local file, then you don’t need the “file://” that would otherwise be required: the FibreChannel-Parsers library used by vw4tools automatically tries a local file if there’s no protocol (i.e. no “http://” nor “file://”). The formats and protocols supported by FibreChannel-Parsers are listed on Compatible File Formats. Without a protocol, a local file is checked, which we can leverage to be lazy:

java -jar vw4tools.jar -N switch45.csv ...

You’ll see the underlying parser library tell you how successfully it parses the data using a number of different methods:

(vw4tools) parsed 0 zones, 3 aliases via NicknameParser
(vw4tools) parsed 0 zones, 3 aliases via VW4InvalidAddedParser
(vw4tools) parsed 2 zones, 7 aliases via ShowZoneParser
(vw4tools) parsed 4 zones, 6 aliases via AliShowZoneParser
(vw4tools) parsed 4 zones, 6 aliases via ShowZone2Parser

This is normal, and it can signal that the parser you expected isn’t the most efficient one for the job. Of note: you don’t need to tell vw4tools what format the -N file is in. That’s intentional, to make it easier for jet-lagged engineers to get the job done.

To make it faster, we can do this with many files at once:

java -jar vw4tools.jar -N FabricA.csv -N FabricB.zoneshow -N FabricC.alishow -N FabricD.fcalias ...

This example shows files with “extensions” to hint at their content, but vw4tools is “the honey-badger of parsers”: it just don’t care. The file you feed it may be called whatever you like, it’s the stream inside it that matters.

Running vw4tools to an Output File

So that’s all well and good, but we want a usable JSON file for import. The “--nicknameout=” or “-o” option gets us there:

java -jar vw4tools.jar -N switch45.csv -o xyzcheese.json

Running this at the XYZCheeseFactory, that customer can immediately try the file against VW4’s validation before import. I’ll remind you that vw4tools doesn’t know what’s in your VW4 Appliance or OVA, and may give it things it doesn’t want to talk about yet, such as WWPNs it hasn’t yet discovered. This typically means additional steps are required, such as removing some hosts from the “inhaled” content, but we can cover that in a later article.

Conclusion

This should give a basic idea of how to convert a CSV into usable JSON, which is most of the way towards having fcport entities that can be aggregated into hosts and storage arrays.

By using a number of parsers at once, you don’t need to guess what format a file is in, just toss it into vw4tools to get a result.

You do need to make sure you have a “-o” to get the result “exhaled” for import.

As well, around December 2013, versions of vw4tools needed the “-o” right up against the output filename: “-oacme.json”, not “-o acme.json”. This has long since been fixed, but I have a habit of doing it the more cooperative way just to get things done.

Additionally, VW4 seems quite interested in the name of the file, so it must be a .json, not a .JSON. I don’t know how well it works if the file has no extension. This may signify that other import formats will some day be possible — for example, acme.xml. That, with a schema, would make the pre-deployment preparation much more predictable.