In legacy payment systems, the 'databases' employed tend to be esoteric, closed, proprietary subsystems hard-wired into the application itself. For example, in ON/2 (running on the Stratus VOS platform) the database is an internal sub-system called 'DBI.' This makes sense: when ON/2 was first rolled out (1982!), SQL was in its infancy (I think Larry Ellison was still on his first wife) and OLTP vendors were 'rolling their own' to meet very specific database needs.
And let's be clear here: those internal inventions have stood the test of time. DBI is approaching 25 years old and is still serviceable. There are some serious shortcomings to the approach, though:
The transaction log - nominally a database table - is, in fact, just a log. That greatly and unnecessarily complicates (a personal opinion, admittedly) some of the most critical touchpoints of your financial switch: namely, how you implement 'multi-part' transactions such as reversals and completions (of pre-auths).
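To make that concrete: once the log is a real table, matching a reversal to its original authorization becomes a one-line query rather than a log-scanning exercise. Here's a minimal JDBC sketch; the table and column names (tranlog, rrn, stan, reversed) are hypothetical stand-ins, not the actual jPOS schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReversalLookup {
        /**
         * Locate the original authorization a reversal refers to.
         * Table and column names here are hypothetical, not the real jPOS schema.
         */
        public static Long findOriginal (Connection db, String rrn, String stan) throws Exception {
            String sql = "SELECT id FROM tranlog WHERE rrn = ? AND stan = ? AND reversed = 'N'";
            try (PreparedStatement ps = db.prepareStatement (sql)) {
                ps.setString (1, rrn);
                ps.setString (2, stan);
                try (ResultSet rs = ps.executeQuery ()) {
                    return rs.next () ? rs.getLong (1) : null;   // null => no matching original found
                }
            }
        }
    }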
The whole legacy approach is very esoteric and akin to a black art. The great thing about the SQL world is that everything is standards-based. Our clients have deep bench strength in the SQL vendor offering they've selected (e.g., Oracle, MS SQL Server, etc.) and they can contribute all of that expertise without needing internal knowledge of the jPOS application or even a general knowledge of payment systems. This is huge. A talented DBA is a very important part of a successful jPOS implementation. I know this from experience.
It completely wipes out any flexibility you have in terms of dropping that application (as is) onto another operating platform. Continuing with the 'DBI' example, that's a Stratus VOS construct pure and simple. You're tied to that thing like a boat anchor. With a SQL-based approach, you've got flexibility. Assuming your app is written in Java and 'talks' to a SQL DB via JDBC (that accurately describes jPOS), then you can be (as an example) Windows/SQL Server today and Linux/MySQL next week (we've actually done it on the same day). That's a pipe dream in the legacy world.
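Here's a minimal sketch of what that flexibility looks like in practice: the application only knows about JDBC, and the vendor-specific bits (driver class, URL) live in a properties file. The file name and property keys below are assumptions for illustration, not anything jPOS mandates:

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class DbConnector {
        /** Vendor specifics live in db.properties; the Java code never changes. */
        public static Connection connect () throws Exception {
            Properties cfg = new Properties ();
            try (FileInputStream in = new FileInputStream ("db.properties")) {
                cfg.load (in);
            }
            // e.g. jdbc.url=jdbc:sqlserver://prod1;databaseName=switch   (today)
            //      jdbc.url=jdbc:mysql://prod2/switch                    (next week)
            Class.forName (cfg.getProperty ("jdbc.driver"));
            return DriverManager.getConnection (
                cfg.getProperty ("jdbc.url"),
                cfg.getProperty ("jdbc.user"),
                cfg.getProperty ("jdbc.password"));
        }
    }

Switching platforms then becomes a configuration change plus a data migration, not a rewrite.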
So, now we have this jPOS world where we can take advantage of SQL databases. But, as a jPOS implementer, you have some important responsibilities that go along with that shift in deployment models. Here are the practices that you absolutely must get established:
Back up nightly: All defined tables ought to be backed up as part of an overnight operation.
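In practice this is usually a cron job or a vendor scheduler rather than application code, but just to show the shape of it, here's a rough sketch that assumes MySQL and its mysqldump utility; the paths, database name and credentials file are all placeholders:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class NightlyBackup {
        // Assumes MySQL's mysqldump is on the path; credentials come from a
        // separate defaults file so they never appear on the command line.
        public static void main (String[] args) throws Exception {
            String stamp = new SimpleDateFormat ("yyyyMMdd").format (new Date ());
            ProcessBuilder pb = new ProcessBuilder (
                "mysqldump",
                "--defaults-extra-file=/etc/backup.cnf",
                "--result-file=/backups/switch-" + stamp + ".sql",
                "switch");
            pb.inheritIO ();                 // surface mysqldump's own output and errors
            int rc = pb.start ().waitFor ();
            if (rc != 0)
                throw new IllegalStateException ("nightly backup failed, rc=" + rc);
        }
    }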
Replicate real-time: There is a subset of tables for which a nightly backup isn't sufficient protection against an unexpected loss of data. These tables are dynamic – they can be expected to change constantly throughout the day. They need to be replicated in real time to a copy located on a separate server. Ideally, all user and system tables in the primary database should be replicated to a failover database machine.
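Replication itself is configured at the database layer (log shipping, MySQL replication, and so on), but it pays to have an independent check that the failover copy isn't quietly falling behind. A rough sketch, with hypothetical JDBC URLs, credentials and a tranlog.date column standing in for whatever your most dynamic table is:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class ReplicationCheck {
        // Latest activity visible in a given copy of the database
        // (assumes the table is non-empty on both sides).
        static Timestamp lastActivity (String url) throws Exception {
            try (Connection c = DriverManager.getConnection (url, "monitor", "secret");
                 Statement s = c.createStatement ();
                 ResultSet rs = s.executeQuery ("SELECT MAX(date) FROM tranlog")) {
                rs.next ();
                return rs.getTimestamp (1);
            }
        }
        public static void main (String[] args) throws Exception {
            long lag = lastActivity ("jdbc:mysql://primary/switch").getTime ()
                     - lastActivity ("jdbc:mysql://replica/switch").getTime ();
            if (lag > 60000L)                // more than a minute behind: raise an alarm
                System.err.println ("WARNING: replica is " + (lag / 1000) + "s behind");
        }
    }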
Cull older days: In older ‘legacy’ systems, a new file was created to house transactions associated with each specific capture date. While this made file maintenance a nightmare, it had the positive effect of limiting the size of any file (and its associated indices) to a single day’s worth of transactions, meaning that performance of the transaction engine would remain relatively constant over time (assuming all other factors remained unchanged). In a jPOS-based approach, the authorization engine relies on SQL technology for its underlying DB requirements. The advantages of employing SQL technology are dramatic: development, test, support, reporting and offline integration efforts are probably 2 to 3x easier (and surely far less esoteric) with SQL underpinnings vs. the proprietary approaches of the past.
However, using SQL technology does raise the bar in terms of proactively ensuring that the application does not inexorably lose its performance edge over time. Namely, the transaction log houses all activity from all capture dates. This log must not be allowed to hold an ever-increasing number of days of activity. Otherwise, the high standards of performance demonstrated during user acceptance testing will slowly ebb away. It stands to reason that an online system that has to fight through indices reflecting (e.g.) one year’s worth of production traffic will be hard pressed to perform at an acceptable speed. The number of days to keep online (i.e., within the confines of the TranLog) will vary according to processing power and transaction volume. Older data ought to be moved to a separate table location, where it can be accessed for historical query purposes without encumbering online processing speed.
In conclusion, the size of the TranLog needs to be proactively managed each evening through automated, scheduled routines that cull each day as it falls off the end of the online range.
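A minimal sketch of that nightly cull, assuming a tranlog table with a date column and a tranlog_hist table of identical shape (hypothetical names; the real schema and the scheduler you wrap this in are up to you):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class TranLogCull {
        /**
         * Copy TranLog rows older than 'daysOnline' to a history table, then
         * remove them from the online table, all in one transaction.
         */
        public static void cull (Connection db, int daysOnline) throws Exception {
            Timestamp cutoff = new Timestamp (
                System.currentTimeMillis () - daysOnline * 86400000L);
            db.setAutoCommit (false);
            try (PreparedStatement copy = db.prepareStatement (
                     "INSERT INTO tranlog_hist SELECT * FROM tranlog WHERE date < ?");
                 PreparedStatement purge = db.prepareStatement (
                     "DELETE FROM tranlog WHERE date < ?")) {
                copy.setTimestamp (1, cutoff);
                purge.setTimestamp (1, cutoff);
                copy.executeUpdate ();
                purge.executeUpdate ();
                db.commit ();
            } catch (Exception e) {
                db.rollback ();
                throw e;
            }
        }
    }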
Defragment indices: From experience, we know that regular index defragmentation results in faster performance, more compact indices, and re-established ‘target’ fill factors.
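The actual maintenance statement is vendor-specific (e.g., ALTER INDEX ALL ON tranlog REBUILD on SQL Server, or OPTIMIZE TABLE tranlog on MySQL), so the sketch below simply fires whatever statements your DBA prescribes as part of the same nightly window:

    import java.sql.Connection;
    import java.sql.Statement;

    public class IndexMaintenance {
        /** Run the DBA-prescribed, vendor-specific index maintenance statements. */
        public static void rebuild (Connection db, String... statements) throws Exception {
            try (Statement s = db.createStatement ()) {
                for (String sql : statements)
                    s.execute (sql);       // e.g. "OPTIMIZE TABLE tranlog" on MySQL
            }
        }
    }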