
How to achieve a competitive advantage in Open Banking


Anyone reading the industry press will undoubtedly have seen opinion pieces on Open Banking. That Open Banking is going to transform the industry is hardly news. When I read the different opinions, there is plenty of talk about new ways of working and the innovative propositions that individuals believe will deliver the best commercial return. One thing I find absent from the debate is how these new propositions will be launched, and where the operational expertise to bring them to market will be found. So I want to offer some thoughts for executives whose teams are working out how they will compete as the industry rapidly evolves.


Investment in Open Banking
This year Accenture produced a report entitled ‘The Brave New World of Open Banking’, which analysed the concept and how players in the industry need to adapt. It made a number of interesting points:

– By 2020, €61bn (7%) of the total banking revenue pool in Europe will be associated with Open Banking enabled activities.
– In a survey of executives at 100 large banks, 65% of respondents see Open Banking as more of an opportunity than a threat, 52% see it as a way to differentiate from their traditional competitors and 99% plan to make major investments in Open Banking initiatives by 2020.
– Banks in Europe (75%), North America (53%) and Asia (51%) see Open Banking as critical to their digital transformation.
– The number of banking APIs available for third parties to connect to exploded from double digits ten years ago to over 1,500 in 2017.
What is crystal clear is that there will be significant revenue opportunities. Banks are looking at ways to get involved, and APIs will be the basis on which these new propositions are built.

APIs are not a new invention. They have been used for many years to create defined integration points between applications, and have been at the heart of mobile payments, geolocation applications and financial management tools. However, the idea of working in an open ecosystem, where organisations can access technology usually protected behind a financial institution’s gated walls, is enough to leave risk managers reaching for the panic button or searching for more information before these propositions go live.


Competitive advantage
One of the fundamental questions all operators will need to answer is how they will compete. Accenture’s report highlights ‘first-mover’ advantage, the ability to create a partnership model, and establishing ‘API dominance’ early. They have this to say:

‘The proliferation of bank developer portals, where third parties can access APIs, speaks to the pressure that banks in these markets feel to export customer data, enable bank micro-services and build strategic partnerships that benefit their end customers. In Europe, BNP has partnered with the Open Bank Project to make a wide range of APIs available in a sandbox environment in which developers can experiment before going live with customer offerings that incorporate atomized banking services like identity management.’

They predict that the basis of competition will effectively shift from fully integrated banking solutions to banks competing through portals to entice more developers to use their component services. To do that, they will need to be able to rapidly and safely connect to different APIs.


Dynamic Implementation
APIs are undoubtedly a wonderful thing. They allow you to add features you don’t need to develop yourself. However, this makes it difficult to develop a robust test plan, and if you don’t control the “test system” then you are going to hit your first set of problems. Suddenly, implementation can look very different.

It may be expensive to connect to the system, or the system owner may only make it available during certain hours. In addition, the behaviour the system under test returns may be entirely out of your control. What do I mean by this? For example, you may wish to test dropping a communication channel halfway through a transaction, since people increasingly do their banking on the train to and from work, and tunnels tend to be signal-proof. Or you may be trying to exercise test cases for a stolen credit card when the API insists the card is fine. Such “negative” test cases are very hard to control with external test systems.
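
To make that concrete, here is a minimal sketch in Python of the kind of local stand-in that sidesteps the problem: a stub card issuer that reliably returns a “stolen card” decline on demand, something a shared sandbox may never do. The endpoint, test card number and response fields are illustrative assumptions, not any provider’s real contract.

# Minimal sketch (assumed endpoint and response fields, not a real provider API):
# a local stub stands in for the card issuer so a "stolen card" decline
# can be forced on demand.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubIssuer(BaseHTTPRequestHandler):
    # Test PANs mapped to forced outcomes; anything else is approved.
    FORCED = {"4000000000000002": {"decision": "DECLINED", "reason": "STOLEN_CARD"}}

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        outcome = self.FORCED.get(body.get("pan"), {"decision": "APPROVED"})
        payload = json.dumps(outcome).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 8099), StubIssuer)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The negative test: unlike a live sandbox, the stub reliably reports the card stolen.
req = urllib.request.Request(
    "http://127.0.0.1:8099/authorise",
    data=json.dumps({"pan": "4000000000000002", "amount": 1000}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    assert json.loads(resp.read())["reason"] == "STOLEN_CARD"
server.shutdown()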

If you look at payment systems, this can throw up problems, as the full end-to-end transaction might behave differently in a situation you have not been able to assess. The result is that you go to market with a system that is only partially tested. Hope is not a test strategy, yet in essence that is the approach people are taking if they don’t test accurately. The cost of getting it wrong, and experiencing an outage on a new product, is huge. Equally, the need to quickly and cohesively ‘regression test’ your platform after changes becomes even more essential if you’re exposing your organisation’s assets to third parties.

In my mind, this is going to become a source of greater concern over the coming years and I suspect it may be driven by some high-profile failures.


Confidence in Test and Launch
The approach to testing has changed radically over the last few years, but the vast majority of organisations remain stuck in processes that cause delays and result in launches of systems riddled with problems. The way around this is ‘service virtualisation’, which allows you to replicate an end-to-end test of every point in a transaction, just as it works in the real world.

Service virtualisation not only lets you test in isolation from the API provider, but also allows you to orchestrate calls to multiple APIs entirely under your own control. You can more easily perform the negative tests that the API may not allow, make the connection available to your whole test team 24/7, and allow access at zero cost. In short, it liberates you from many of the constraints, but more importantly it reduces cost and risk. It also allows you to start testing ahead of the “delivery” of an external API.
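
As an illustration of a negative test few live sandboxes will offer, the sketch below (again with an invented endpoint) virtualises a connection that drops halfway through a response, the “train in a tunnel” case from earlier, so the client’s recovery path can be proven rather than hoped for. Because the drop is scripted, the test is deterministic and can run 24/7 alongside the rest of the suite.

# Sketch of a negative test no real sandbox offers: a virtual endpoint that
# drops the connection mid-response. The endpoint is an assumption for illustration.
import socket
import threading
import urllib.request
from http.client import IncompleteRead

def flaky_endpoint(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    conn.recv(4096)  # consume the request
    # Promise 100 bytes, send 20, then hang up mid-body.
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n" + b"x" * 20)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 8098))
listener.listen(1)
threading.Thread(target=flaky_endpoint, args=(listener,), daemon=True).start()

# The code under test must treat a half-delivered response as a failed
# payment, never a successful one.
try:
    urllib.request.urlopen("http://127.0.0.1:8098/payments").read()
    raise AssertionError("client accepted a truncated response")
except IncompleteRead:
    print("truncated response detected -- the recovery path can now be tested")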

Through the use of virtualisation, you can take back control of API testing, modify the API in order to prototype, and massively scale tests for stress and performance analysis. With virtual models available 24 hours a day, you can schedule complex regression tests that run automatically however insignificant the change, anywhere in the transaction lifecycle. And when migrations between APIs occur, or a new API release lands, service virtualisation allows multiple “versions” to be tested in any combination.
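
A hedged sketch of that last point: one regression suite run against several virtual “versions” of an upstream API at once, here using pytest. The two response shapes are invented stand-ins for a pre- and post-migration contract, not any provider’s real one.

# Running the same regression suite against multiple virtual API "versions".
import pytest

# Each virtual model maps a request to the response shape that API version used.
VIRTUAL_MODELS = {
    "v1": lambda pan: {"status": "OK", "balance": 250_00},
    "v2": lambda pan: {"result": {"state": "OK", "balance_minor": 250_00}},
}

def normalise(version: str, response: dict) -> dict:
    """The adapter under test: shields the platform from version differences."""
    if version == "v1":
        return {"state": response["status"], "balance": response["balance"]}
    return {"state": response["result"]["state"],
            "balance": response["result"]["balance_minor"]}

@pytest.mark.parametrize("version", VIRTUAL_MODELS)
def test_balance_is_consistent_across_api_versions(version):
    response = VIRTUAL_MODELS[version]("4111111111111111")
    assert normalise(version, response) == {"state": "OK", "balance": 250_00}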


The way forward
I often see organisations investing substantial sums of money in their new propositions, yet taking them to market using a test methodology that is out of date. The technology and thinking that go into testing have advanced substantially, and those virtualising their testing before launch are able to go to market more quickly, more securely and more cheaply. This means they have the advantage of being able to evolve and stay ahead of the competition.

While the appeal of the new propositions will be one source of competitive advantage, the ability to stay ahead and adapt will be just as important.

This article, by our CEO Anthony Walton, was first published on Finextra.

Find out how t3 can change the way you test payments

Get in touch