During a recent engagement, a particular client had an issue where they would bill for Lotus Notes applications hosted in their environment. (Charging for server disk space is a good way of ensuring that applications don't hang around on the servers after they've stopped being useful.) However, there was little faith in the catalog.nsf database containing all items, and little appetite for a globally replicated solution. So how do you do this?
- Start off by writing an agent which sends 'sh dir -xml' to each server's console from LotusScript. This returns the server directory as an XML stream, and is extremely fast. It also causes very little load on the target server, and very little network load.
- Parse this XML into memory (which in LotusScript is pretty horrible) and build up a memory structure (using Lists and Classes) which ties replicas of the same database together (keyed by replica ID).
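The real agent does this with LotusScript Lists and Classes, but the idea is easier to show in a short Python sketch. The XML element and attribute names below are assumptions for illustration; the actual 'sh dir -xml' schema may differ.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical sample of one server's 'sh dir -xml' output.
# Element/attribute names are assumed, not the real Domino schema.
SAMPLE = """<directory server="Hub1/Acme">
  <database path="mail/jsmith.nsf" title="Joe Smith" replicaid="852564B5006AB321"/>
  <database path="apps/crm.nsf" title="CRM" replicaid="852564B5006AB999"/>
</directory>"""

def group_by_replica(xml_text):
    """Parse one server's directory dump and key each database by replica ID,
    so the same application on several servers collapses into one entry."""
    root = ET.fromstring(xml_text)
    server = root.get("server")
    by_replica = defaultdict(list)
    for db in root.iter("database"):
        by_replica[db.get("replicaid")].append({
            "server": server,
            "path": db.get("path"),
            "title": db.get("title"),
        })
    return by_replica

replicas = group_by_replica(SAMPLE)
```

Feed every server's XML through the same function and the replica-ID keys merge the per-server listings into one picture of each application.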
- By this point we have a very up-to-date directory listing for all servers. We can now impose order. For instance:
- If it's on a mail server, and the database is in a 'mail*' directory, then it's a mail file.
- If it's on an application server, in the 'apps' directory, then it's an application.
- If an application sits in a second-level directory, then it's a complex application comprised of more than one database.
- If it's on a hub server, then it's a globally replicated application.
All of these rules are easy to write once the items are in memory.
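The rules above can be sketched as a single classification function. This is a hypothetical Python helper, not the client's LotusScript; the server-type labels and directory conventions are the ones described in the post.

```python
def classify(server_type, path):
    """Apply the ordering rules: server role plus directory layout
    decide what kind of database this is. server_type is assumed to be
    one of 'mail', 'app' or 'hub'."""
    parts = path.lower().replace("\\", "/").split("/")
    if server_type == "mail" and parts[0].startswith("mail"):
        return "mailfile"
    if server_type == "hub":
        return "globally replicated application"
    if server_type == "app" and parts[0] == "apps":
        # A second-level directory marks a multi-database application.
        return "complex application" if len(parts) > 2 else "application"
    return "unclassified"
```

Once the databases are in memory, running every entry through a function like this is trivial, which is the point of the post: the hard part is the data model, not the rules.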
- Now spit this out as an Excel spreadsheet. Ah. Only an insane person would load a copy of Excel onto the server just so a scheduled agent can construct an actual Excel spreadsheet. So spit out a CSV (Comma Separated Values) file instead; most Office/OpenOffice installations associate *.csv with the spreadsheet program.
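In the actual agent this would be plain LotusScript file output, but the shape of the CSV is easy to show in Python, where the csv module also handles quoting of titles that contain commas. The rows here are made-up examples.

```python
import csv
import io

# Hypothetical inventory rows: (server, path, title, classification).
rows = [
    ("Mail1/Acme", "mail/jsmith.nsf", "Joe Smith", "mailfile"),
    ("App1/Acme", "apps/crm.nsf", "CRM, Sales", "application"),
]

# Write to an in-memory buffer; a real agent would write report.csv to disk.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Server", "Path", "Title", "Class"])
writer.writerows(rows)
report = buf.getvalue()
```

Double-click the resulting file on a workstation and it opens straight into the spreadsheet program, which is all the client needed.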
And there we have it. Again: get the data model right, get it into memory if you can, and it's a pretty straightforward task.
I guess it boils down to having decent standards for your mail file and application deployments. How do you do yours?