Banking CIO Outlook

Critical Role of Vulnerability Management

Konráð Hall, Executive Director, Íslandsbanki [ISB: ICE]

Technical environments are getting ever more complex, and new tech stacks are added, most often without decommissioning previous solutions. With an increasing number of vulnerabilities being found, managing them is a task that must be taken very seriously. Automation is a prime candidate for reducing the effort that goes into vulnerability management.

Identifying and patching vulnerabilities is one of the foundations of a good security posture. Unpatched vulnerabilities are the red carpet for threat actors and should be taken very seriously, all the way from IoT devices to the mainframe. This is, however, a very challenging task. Firstly, technical environments have a tendency to grow, in terms of the number of applications, infrastructure devices, and tech stacks, and even though every new initiative comes with a promise to decommission old systems, we must admit that the success rate is nothing to write home about. Adding hybrid cloud and on-prem setups to the equation makes this even more difficult to manage. Secondly, the number of known vulnerabilities has increased over the last few years, and we can expect it to keep growing for the foreseeable future.

It is quite interesting to look at the inner workings of vulnerability tracking. Publicly known vulnerabilities are registered and catalogued within a program called CVE (Common Vulnerabilities and Exposures). Every vulnerability is assigned a unique CVE number and a severity score (CVSS). The program dates back to 1999, is the de facto catalogue for vulnerabilities, and is widely used by scanning and reporting platforms. Since 2016, the CVE Program has been actively expanding the number of organizations participating in it. So, all in all, there is an increase on all fronts: more code, more researchers, and more CVE numbers. But what is to be done?
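As a minimal sketch of how this catalogue can be consumed programmatically, the snippet below queries the public NVD CVE API (version 2.0 of the REST interface) for a single CVE number and prints its CVSS v3.1 base score and severity. The response field names are assumptions based on that API version and should be checked against the current documentation.

```python
# Minimal sketch: look up one CVE record and its CVSS score from the NVD API.
# Assumes the NVD CVE API 2.0 response layout; field names are best-effort
# and may need adjusting against the current API documentation.
import json
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cvss(cve_id: str) -> None:
    with urllib.request.urlopen(f"{NVD_API}?cveId={cve_id}") as resp:
        data = json.load(resp)

    for item in data.get("vulnerabilities", []):
        cve = item["cve"]
        # CVSS v3.1 metrics, if present; older records may only carry v2 data.
        for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
            cvss = metric["cvssData"]
            print(cve["id"], cvss["baseScore"], cvss["baseSeverity"])

if __name__ == "__main__":
    fetch_cvss("CVE-2021-44228")  # Log4Shell, used here purely as an example
```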

" Unpatched vulnerabilities are the red carpet for threat actors and should be taken very seriously, all the way from IoT devices to the mainframe "

A solid vulnerability management policy must be instituted and supported by upper management. In practical terms, that means a written threshold is put down, based on a given risk appetite. Some organizations might be satisfied when all high-severity vulnerabilities are patched within 90 days on internal networks and within 30 days on internet-facing networks. Others might want all medium-severity ones patched within the same timeframe or less. Writing the policy is, however, the easy part. The tricky part is executing and complying with the policy, because this work is tedious, and one can rest assured that it is never done. To begin, all assets have to be known; then they have to be scanned with a proper vulnerability scanning utility. Prioritise the work at hand based on the policy, using CVSS scoring and the placement of assets.
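To make such a policy operational, the thresholds can be expressed directly in code. The following is a hypothetical sketch, not a description of any actual tooling, that maps a finding's CVSS score and the asset's placement to a remediation deadline, mirroring the 30/90-day example above; the CVE number and cut-off values are placeholders.

```python
# Hypothetical policy sketch: map a finding's CVSS score and asset placement
# to a remediation deadline, mirroring the 30/90-day example in the text.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Finding:
    cve_id: str
    cvss_score: float       # CVSS base score, 0.0 - 10.0
    internet_facing: bool   # placement of the asset
    discovered: date

def remediation_deadline(f: Finding) -> date | None:
    """Return the date by which the finding must be fixed, or None if
    it falls below the written policy threshold."""
    if f.cvss_score >= 7.0:                       # high and critical severity
        days = 30 if f.internet_facing else 90
        return f.discovered + timedelta(days=days)
    return None  # below the threshold; still tracked, but no hard deadline

# Example: a high-severity finding on an internet-facing asset
# ("CVE-2024-0001" is a placeholder identifier)
f = Finding("CVE-2024-0001", 8.1, True, date(2024, 5, 1))
print(remediation_deadline(f))  # 2024-05-31
```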

At Íslandsbanki we include the vulnerability metrics in our Key Risk Indicators (KRI), and that has proved to be a good driver of involvement and understanding beyond the IT Operations team. That is a key ingredient in maintaining a healthy environment, because many vulnerabilities call for business decisions, such as decommissioning or upgrading EOL systems, refactoring code, and possibly additional investments. I use a basic process when it comes to patching:

1. Fix the vulnerability in a timely manner – Default action
2. Accept the risk associated with the vulnerability, either temporarily or permanently
3. Shut down the asset containing the vulnerability

If an asset cannot be patched for some reason, it should be shut down unless a risk acceptance is signed off by the relevant product team. A car without brakes and safety belts would never be allowed in traffic.
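As a sketch of how this decision logic could be encoded (the names are illustrative, not a description of our tooling): fixing is always the default, risk acceptance requires a signed sign-off, and shutdown is the fallback.

```python
# Hypothetical sketch of the three-option process: fix by default, accept
# risk only with a signed acceptance, otherwise shut the asset down.
from enum import Enum, auto

class Action(Enum):
    FIX = auto()          # default action: patch in a timely manner
    ACCEPT_RISK = auto()  # temporary or permanent, signed off by the product team
    SHUT_DOWN = auto()    # no patch and no signed acceptance

def decide(patch_available: bool, risk_acceptance_signed: bool) -> Action:
    if patch_available:
        return Action.FIX
    if risk_acceptance_signed:
        return Action.ACCEPT_RISK
    return Action.SHUT_DOWN

print(decide(patch_available=False, risk_acceptance_signed=False))  # Action.SHUT_DOWN
```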

All that has been described so far is, however, a reactive process. The next step is to prevent vulnerabilities from going into circulation, and I think that should be the main focus for all software and infrastructure delivery going forward. Here we can utilize the benefits that automation brings to the table, whether for traditional IT Operations or for software deployments via CI/CD pipelines. By using vulnerability scanners during the build process, quality gates can be implemented that stop the deployment when vulnerable code is detected. Developers are forced to revisit the code and make the necessary adjustments before it is committed again. The same thinking should be applied to IT Operations. It is amazing how many vulnerabilities can be found in a fairly recent version of almost any OS on the market. Rather than updating a golden image every three months, all server deployments should be done via a pipeline where the image is scanned and preferably patched automatically before being introduced into a live environment. This will make vulnerability management easier and an integrated part of daily operations.
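As a minimal sketch of such a quality gate, assuming a scanner that emits a JSON report with a severity field (the report path and field names are placeholders, not any specific product's format), a pipeline step can simply exit non-zero when blocking findings are present:

```python
# Hypothetical quality-gate step for a CI/CD pipeline: read the scanner's
# JSON report and fail the build if findings at or above the threshold exist.
# The report path and field names are placeholders for whatever scanner is used.
import json
import sys

SEVERITY_ORDER = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}
THRESHOLD = "HIGH"  # block the deployment at high severity and above

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)  # assumed: a list of {"id", "severity", ...}

    blocking = [
        f for f in findings
        if SEVERITY_ORDER.get(f.get("severity", "").upper(), 0)
           >= SEVERITY_ORDER[THRESHOLD]
    ]
    for f in blocking:
        print(f"BLOCKING: {f.get('id')} ({f.get('severity')})")
    return 1 if blocking else 0  # a non-zero exit code stops the pipeline

if __name__ == "__main__":
    sys.exit(gate("scan-report.json"))
```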
