You’re limping through the desert, dying of thirst, when you come upon an oasis with what appears to be a bottomless well. You can’t believe your luck. Then you drop the bucket in and discover that the rope tied to it is so knotted and twisted that it stops short of the water line. By the time you straighten it out so that you can take that desperately needed drink, it may be too late.
In the often unforgiving landscape in which financial service providers operate, Big Data is a lot like that well. In an age of cheap and copious storage capacity, banks can accumulate seemingly endless amounts of information (at least until all the silicon is used up). But limitations in the design and implementation of the systems in use at many firms mean that access to it is exceedingly slow and cumbersome, with accuracy often sacrificed along with velocity.
But gain access to it, they must. In this analogy, legislators and financial supervisors play the role of the sun relentlessly and mercilessly turning up the heat. This year, in fact, financial institutions are expected to have their own peculiar variety of global warming to cope with, or at least European warming, in the form of new sequels to the European Union’s Capital Requirements Directive and Capital Requirements Regulation.
CRD V and CRR II, which still must be enacted by the European Parliament, are coming along just three years after CRD IV and the original CRR were introduced. The revised versions, encompassing more than 500 pages of rules and regulations, are likely to keep banks and their compliance departments busy into the 2020s.
The more data you have, the more data they want
These revisions result in part from repeated rounds of competing innovations by supervisory authorities that demand more and better data, and information technology companies that contrive ever more sophisticated – and faster and cheaper – ways to provide it, with banks caught uncomfortably in the middle. The easier it becomes to generate bits of information, the more bits the authorities ask for, and on it goes.
And a new set of rules often requires more than just one set of bits. National authorities often exercise wide discretion, tinkering with an edict to make it fit local needs and market characteristics. This is especially true with the Analytical Credit Dataset, or AnaCredit, a European Central Bank program that will require banks to collect and report numerous details about each credit on the balance sheet.
The Big Data phenomenon is not limited to financial services, of course, but it has been felt there especially keenly. Large financial institutions must process hundreds of millions, if not billions, of customer transactions and continuously monitor and update the prices of assets and liabilities on their books. For it to be of any use, this raw data must be converted into a form that allows myriad interrelated variables to be calculated and analysed. When an input value changes for any one of them, the output values of countless others change too, along with the key risk and efficiency metrics derived from them.
Banks must process this information to manage their operations effectively, make the right decisions, take appropriate action across levels and functions, and plan for the future, so they have hardly been innocent victims as this technological and regulatory escalation has broadened. Their businesses have been benefiting, or at least should have been, from the enhanced capability to obtain and manipulate massive quantities of data.
But increased regulatory scrutiny has ratcheted up the need to produce it more frequently, often in real time and even in unreal time; stress testing using hypothetical dire scenarios is becoming more regular and elaborate, with greater consequences for failure. The expansion in the creation and manipulation of bits and bytes has been so sudden and immense that the term “Big Data” seems almost like a quaint relic. “Very Big Data” might be more apt today, and there’s a risk that the correct sobriquet tomorrow will be “Too Big Data.”
The hive mind is here
The need to keep systems from getting bogged down in the ever-growing quantity of data led to the development of the In-Memory Data Grid (IMDG). As the name hints, an IMDG is a group of servers configured so that their random-access memory (RAM) behaves as a single entity. This permits them to hold massive datasets collectively in RAM – datasets that otherwise would be available only on the conventional storage drives within each server.
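To make the idea concrete, here is a minimal sketch of storing data in such a grid. It assumes Hazelcast, one of several IMDG products on the market (the article endorses none in particular), and the map name, keys and values are invented purely for illustration.

```java
// A minimal sketch of putting data into an in-memory data grid. Hazelcast
// (5.x) is used here only as one illustrative IMDG product; the map name
// and keys are invented for this example.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class GridStorageSketch {
    public static void main(String[] args) {
        // Starting an instance joins (or forms) a cluster; the map below is
        // partitioned across the RAM of every member of that cluster.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();

        // To the caller this looks like an ordinary key-value map, but each
        // entry lives in memory on whichever member owns its key.
        IMap<String, Double> exposures = member.getMap("exposures");
        exposures.put("EUR-IRS-10Y", 1_250_000.0);
        exposures.put("USD-CDS-5Y", -480_000.0);

        // Reads are served from RAM on the owning member, not from disk.
        System.out.println("EUR-IRS-10Y: " + exposures.get("EUR-IRS-10Y"));

        member.shutdown();
    }
}
```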
It may be barely noticeable when you’re sitting at your workstation or in front of your laptop, but reading data from a whirring, humming hard drive takes an eternity, compared to reading it from RAM. Using an IMDG can allow a system to perform the task as much as 500 times faster – as long as a compatible processing capability is also in place.
Indeed, storing data is only half the battle, and the far easier half to win. At the risk of going to the well once too often, it might be worth thinking of an IMDG as an oasis in which the water is all around you on the surface rather than deep underground. Without the right vessel, though, taking a drink could still be slow and messy.
This is where an In-Memory Compute Grid (IMCG) comes in. An IMCG divides the processing of data, not just its storage, among the servers in the grid; the result is data that is retrieved with greater speed and agility and processed that way, too.
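Continuing the same hypothetical Hazelcast setup, the compute side might look roughly like this: a small task is serialised and shipped to a member of the grid, executed there, and only the result travels back. The executor name and the task itself are assumptions, not anything prescribed by the technology.

```java
// A companion sketch of the compute side of the grid: a task is serialised,
// shipped to a grid member and executed there, so only the small result
// crosses the network. The executor name and the task are hypothetical.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class GridComputeSketch {

    // Tasks must be serialisable because they travel to another member.
    static class RevaluePortfolio implements Callable<Double>, Serializable {
        @Override
        public Double call() {
            // A real task would read positions from the member's local map
            // partitions and reprice them; a placeholder value stands in here.
            return 42.0;
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance member = Hazelcast.newHazelcastInstance();
        IExecutorService grid = member.getExecutorService("risk-engine");

        // The work runs somewhere in the grid; the caller just collects the result.
        Future<Double> result = grid.submit(new RevaluePortfolio());
        System.out.println("Portfolio value: " + result.get());

        member.shutdown();
    }
}
```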
A few years ago, when IMCG technology was still new, it was common to try to bolt a processor onto an existing IMDG system. Connecting sets of servers using different methods for the storage and processing functions, and connecting those functions to each other, proved problematic and limited the effectiveness of the grids. But the technology has moved on, and it is rare nowadays to find an IMDG designed without an accompanying IMCG.
Fast is good; fast and adaptable is better
A system that has been upgraded so that an operation that once was processed in the blink of an eye now takes only a fraction of a sliver of a blink may seem gratuitously zippy. But as the demands on systems grow larger and more complex by orders of magnitude – in response to the greater demands on bankers to perform real-world and what-if analyses, provide efficient customer service and make the most prudent, effective decisions – so must the processing speed.
But as critical a feature of IMCG systems as speed is, they have more going for them that makes them an essential resource for financial institutions. IMCGs are scalable – if you can link some servers’ RAM, you can link some more – and flexible.
An IMCG works by distributing software code across the servers in the grid in the most efficient way. Once the grid has been designed and implemented, it becomes a comparatively easy matter to amend the processing method for use in any configuration of servers. This allows fast, two-way traffic between a firm’s core functions and its narrower silos, too, as the sketch below suggests.
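One hedged way to picture this, staying with the hypothetical Hazelcast setup above, is a task that is routed to whichever server currently owns the relevant data, so the same code works unchanged however the grid happens to be configured. The desk key and executor name are invented for illustration.

```java
// A further hypothetical sketch of "move the code to the data": the task is
// routed to the member that owns the given key, so it can read that desk's
// positions from local RAM instead of pulling them across the network.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class DataLocalSketch {

    static class RepriceDesk implements Callable<String>, Serializable {
        private final String desk;
        RepriceDesk(String desk) { this.desk = desk; }

        @Override
        public String call() {
            // Executes on the member that owns the "desk" key; in a real
            // system it would work on that member's local data partitions.
            return desk + " repriced locally";
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance member = Hazelcast.newHazelcastInstance();
        IExecutorService grid = member.getExecutorService("risk-engine");

        // submitToKeyOwner sends the callable to the partition owner of the
        // key, whatever the current configuration of servers happens to be.
        Future<String> result = grid.submitToKeyOwner(new RepriceDesk("rates-desk"), "rates-desk");
        System.out.println(result.get());

        member.shutdown();
    }
}
```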
The advantages, the necessity even, of such a system are obvious at a time when supervisors are demanding data derived through a variety of means, at a finer level of detail, more often.
With its speed, flexibility and scalability, IMCG is becoming the go-to technology for keeping financial organisations running at peak efficiency, especially large firms that engage in multiple business lines and in many jurisdictions. But regulatory and operational challenges keep expanding and evolving, and there is no end in sight.
Lawmakers and financial supervisors are unlikely to stop issuing new requirements anytime soon, after all – CRD VI, anyone? – and it would be foolish to expect customers and competitors to start settling for less, either. Banks and the personnel who make them run will therefore require ever more sophisticated tools, and innovation will have to continue apace. So is IMCG fated soon to become the went-to technology?
That’s highly unlikely. Speed is its most visible benefit and probably its biggest appeal for IT departments at financial institutions. But its ability to be adapted readily to new situations and requirements, perhaps even ones not yet envisioned, is what should allow IMCG to serve as the backbone of the technological infrastructure in financial services for far longer than the blink of an eye.