When a firewall fails to defend a network…

While it is true that a firewall is still an important piece of networking equipment for logically separating networks, its functionality has shifted over the years from plain network filtering to application-level protocol inspection, intrusion prevention and more. As such it is a valuable asset that should be part of a broader security architecture, which all too often it is not. Time and budget for IT security are usually limited, which is quite logical if you're not running an IT security business but are simply trying to do business securely. And placing a firewall at an entry point in the network seems sufficient at first glance. As the (in)famous old saying goes: a little security is better than none at all. There is one big risk that I'd like to point out that should be quite obvious, yet a lot of people still seem to miss it: if a single-barrier defense mechanism fails to detect a security threat, it will not be able to prevent it. That, plus at least five other disadvantages.

When building a resilient network infrastructure you must assume that preventive measures can and will fail. Take the particular case of a single boundary firewall with an ACL that only filters inbound TCP traffic (no bells and whistles) in front of a vulnerable webserver: you know the firewall will not prevent malicious code from infecting it, because it simply allows port 80/tcp and does not inspect the application-layer protocol (HTTP in this case). Back around the turn of this century, the firewall didn't prevent Code Red from spreading, and once the infection succeeded, it also had to deal with the malicious traffic the worm generated. Today, such attempts containing code on port 80/tcp should be regarded as malicious traffic, and should stick out like a sore thumb with a properly installed (N)IDS. More modern, stateful firewall equipment goes a lot further than plain packet filtering, and although I'm pretty confident that Code Red wouldn't stand a chance today, the pattern I'd like to point out here is that you should not assume a single-barrier defense (insert random security product here) will suffice for a resilient network infrastructure. What was considered 'best practice' for network security back then – protect a web server with a firewall – failed miserably, regardless of the fact that Code Red could have been prevented by timely patch management. Facing unknown future attacks, you need high-quality intelligence from the white hat community to learn firsthand what the bad guys are up to, and to implement a 'better than best current practice' to keep the network free from malicious activity. If any security consultant ever tells you about a 'best practice', (s)he is talking about yesterday's technology, which I would treat as a lesson learned from the past and a guideline, but no more than that.
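To make that point concrete, here is a minimal sketch (hypothetical names, and a simplified stand-in for the Code Red signature) of why a port-only ACL waves such a request through while even a naive payload check would flag it:

```python
# Minimal sketch (hypothetical names): a port-only ACL happily passes a
# Code Red style request, while even a naive payload check would flag it.

CODE_RED_MARKER = b"default.ida?NNNNNNNN"   # simplified stand-in for the worm's GET request

def port_filter_allows(dst_port: int) -> bool:
    """Plain packet filter: decides on the TCP port alone, never sees the payload."""
    return dst_port == 80

def payload_looks_malicious(payload: bytes) -> bool:
    """Application-layer check a (N)IDS or proxy could do; the bare ACL cannot."""
    return CODE_RED_MARKER in payload

request = b"GET /default.ida?NNNNNNNN... HTTP/1.0\r\n\r\n"

print(port_filter_allows(80))            # True  -> the firewall lets it through
print(payload_looks_malicious(request))  # True  -> inspection would have caught it
```

The two checks look at completely different things, which is exactly why one box answering only the first question is not a defense by itself.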

Regarding single-barrier defenses, there are a number of risks that will always be with us, regardless of advances in defense technology:

1. Perfect trust is a recipe for complete failure

I’ve written about this earlier; No single product is perfect. Never rely on just one. Always continuesly check and test(!) that measurements taken are functioning correct and accurate/according to plan. Preferably, try to spread policies across equipment handling information of the same classification. Placing firewalls back to back makes not much sense, unless you have different teams that are responsible for them. If you plan on doing this and the budget allows it and the network operators can handle it, use equipment from different brands. So that vulnerabilities in product A will likely not be in product B and vice versa. This will increase the survivability of the entire network you’re trying to defend. Testing your equipment to comply to your security policies can be done in multiple ways, varying from red team/tiger team or security/vulnerability assessment. The fact that you do test your infrastructure makes the life of an auditor easier; You have thought about known unknowns 😉
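One cheap way to test part of this is on paper: export the rulebases of both devices into a common form and compare them against the written policy. A minimal sketch, with hypothetical zones and a deliberately simplified (src, dst, port, action) rule format:

```python
# Minimal sketch (hypothetical rule format): verify that two firewalls from
# different brands, exported to a common (src, dst, port) -> action form,
# both enforce the same written policy.

POLICY = {("internet", "dmz", 443): "allow",
          ("internet", "internal", 22): "deny"}

firewall_a = {("internet", "dmz", 443): "allow",
              ("internet", "internal", 22): "deny"}

firewall_b = {("internet", "dmz", 443): "allow",
              ("internet", "internal", 22): "allow"}   # drift: someone opened SSH

def violations(ruleset: dict) -> list:
    """Return every policy entry the ruleset contradicts or omits."""
    return [flow for flow, action in POLICY.items()
            if ruleset.get(flow) != action]

print(violations(firewall_a))  # []
print(violations(firewall_b))  # [('internet', 'internal', 22)] -> fix or explain
```

A real assessment obviously goes further than this, but even a comparison this crude catches the rule drift that creeps in between two "identical" devices.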

2. Magic boxes need great wizards

Unless the network never changes, a single firewall is a disaster in slow motion: over time its rulebase will become hideously complex. It will take great effort for administrators to make changes unless they have the equivalent of a PhD in computer science, and auditors will not be able to fully validate its purpose. Somewhere around 2001 I was part of a team that had to install eight or so Solaris boxes, running a bunch of different services, each with an additional network interface for management purposes. Although that was a fairly small network, specifying what traffic was allowed – including management traffic – turned out to be quite a task. It's very tempting to end a rule-base with a default deny-and-log rule. Be aware that this may cause packet amplification when a multicast packet hits the network and every box starts to send a "denied" syslog message to the log server. Keeping a log server "clean" – only registering known events – is nearly impossible; keeping rule-bases in sync with the upstream firewall even more so.
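The amplification is easy to underestimate, so here is a back-of-the-envelope sketch with made-up numbers, just to show the shape of the problem:

```python
# Minimal back-of-the-envelope sketch (made-up numbers): one noisy multicast
# source, a per-host default deny-and-log rule, and the syslog burst that results.

hosts_with_deny_log = 200          # hosts whose local rulebase ends in deny-and-log
multicast_packets_per_second = 50  # e.g. chatty discovery traffic on the segment
syslog_bytes_per_message = 250     # rough size of one "denied" log line

messages_per_second = hosts_with_deny_log * multicast_packets_per_second
print(messages_per_second)                                   # 10000 log lines/s
print(messages_per_second * syslog_bytes_per_message / 1e6)  # ~2.5 MB/s to the log server
```

One chatty device, multiplied by every host that logs the denial, and the log server is the first thing to fall over.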

This means that you need to keep track of changes. Document changes to configurations and infrastructure, not only what, but also why. Review proposed changes. Estimate the impact of a change: how much downtime it introduces, how difficult the process is, and the chances of success. Test changes somewhere without disrupting daily operations and, if all went well, apply them. This is a difficult and time-consuming process, but in the end well worth the time and trouble. If you've ever said or heard "Ask Jim, he did this last time" or "It was on the whiteboard just yesterday, don't you remember", you know for sure you need to take the time to document and read proposed changes.
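It doesn't matter much what tooling you use, as long as every change record carries the same handful of fields. A minimal sketch of such a record, with hypothetical field names:

```python
# Minimal sketch (hypothetical fields): a structured change record that captures
# the "what" and the "why", plus the impact estimate mentioned above.

from dataclasses import dataclass

@dataclass
class ChangeRecord:
    summary: str                     # what is being changed
    rationale: str                   # why the change is needed
    estimated_downtime_minutes: int  # impact estimate
    tested_outside_production: bool  # verified somewhere harmless first
    approved_by: str

change = ChangeRecord(
    summary="Open 443/tcp from DMZ proxy to internal app server",
    rationale="New intranet portal goes behind the reverse proxy",
    estimated_downtime_minutes=0,
    tested_outside_production=True,
    approved_by="change review",
)
print(change)
```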

3. Transparency of function is essential

Firewalls are complex, and complexity is a security risk. Sure enough, firewalls are difficult to master and to configure properly. Big rulebases can be reduced to simpler ones if they are spread out over multiple firewalls; virtualization may help here. I would rather have no firewall than one that's misconfigured, as a misconfigured firewall gives a false sense of security, and establishing that a firewall is misconfigured can be equally complex. As a rule of thumb, you should be able to explain the function and use of any host in a network diagram within a few minutes. If not, then there's something wrong. If you're missing clear network boundaries, or it appears that everything is talking to everything else, you are facing a security challenge.

As any IT security book will tell you: to protect a corporate network, some form of information classification should be established first, and the security equipment used should handle traffic accordingly. From a more practical point of view, information classification is a difficult subject, but it can be based on something as obvious as ownership of, and responsibility for, equipment and networks. Otherwise, traffic filtering will very likely be limited to arbitrarily filtering IP subnets and TCP/UDP destination ports. Once information classification has been considered, you're likely to end up with multiple network levels, each with its own set of security measures, instead of just one at the network's edge. The classical DMZ is a first example, where the exchange of information between different entities should happen in a well-defined way.
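In its simplest form this boils down to mapping zones to classification levels and putting a control point wherever a flow crosses levels. A minimal sketch, with hypothetical zone names and levels:

```python
# Minimal sketch (hypothetical zones and levels): map classification levels to
# network zones and flag flows that cross a classification boundary, i.e. the
# places where a filtering/inspection point belongs.

ZONE_LEVEL = {"internet": 0, "dmz": 1, "office": 2, "finance": 3}

def needs_boundary_control(src_zone: str, dst_zone: str) -> bool:
    """A flow between zones of different classification should pass a control point."""
    return ZONE_LEVEL[src_zone] != ZONE_LEVEL[dst_zone]

print(needs_boundary_control("internet", "dmz"))   # True  -> the classic DMZ boundary
print(needs_boundary_control("office", "office"))  # False -> same level, no boundary
print(needs_boundary_control("dmz", "finance"))    # True  -> definitely filter and inspect
```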

4. HTTP(S) is the new TCP/IP

Web traffic is everywhere. Products that are "web-enabled" work without cumbersome changes to firewall rulebases and policies, and the shift towards the web is not likely to slow down any time soon. In the old days, HTTP had something to do with the webserver; this is no longer the case. A single firewall routing different types of information tunneled over HTTP(S) can potentially mix information from different classification levels. If you think a boardroom conference call should be handled differently than a user listening to the radio or downloading a product brochure, then a single firewall will not suffice.

5. An unmanaged firewall is an insecure firewall

Buying a box and having an expert come over to (teach you how to) configure it in about half a day is just the beginning. Firewall management can be somewhat of a struggle if it's not your core business, but it is definitely not something to do "when the other work is done". It requires quite some technical knowledge of the network traffic, the applications and the way users interact with them, which also takes time and dedication to learn. It is essential to monitor traffic in real time, including related syslog messages, and to visualize traffic with netflow/sflow and SNMP where possible. Yet these days that is not enough to keep the bad guys out of your network: bad stuff finds its way in through invitations sent by email, such as phishing messages containing URLs to sites hosting malware, or malicious advertisements that blend in with normal/trusted traffic. Finding a way to stay ahead of these threats is not easy. As said, white hat intelligence may help here. For high-risk environments, virtual machines running nothing but a stripped-down web browser on a hardened OS, which users can only connect to with RDP, could be considered, next to (difficult) IPS and IDS systems. In such an infrastructure the firewall is, as it should be, part of a larger security architecture and not something that defends the network by itself. Failing to check events from any security device is just asking for trouble, especially if there's just one of them.
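"Checking events" doesn't have to start big. A minimal sketch (with a made-up log format) of pulling deny events out of a syslog stream and surfacing sources that keep hitting closed ports:

```python
# Minimal sketch (made-up log format): pull "denied" events out of a syslog
# stream and surface sources hitting multiple closed ports, as a first step
# towards actually looking at what the firewall logs.

import re
from collections import Counter

DENY_PATTERN = re.compile(r"DENY .* src=(\S+) dst=(\S+):(\d+)")

log_lines = [
    "Jun  3 10:12:01 fw1 DENY tcp src=203.0.113.7 dst=192.0.2.10:22",
    "Jun  3 10:12:02 fw1 DENY tcp src=203.0.113.7 dst=192.0.2.10:23",
    "Jun  3 10:12:05 fw1 ALLOW tcp src=198.51.100.4 dst=192.0.2.10:443",
]

denied_sources = Counter()
for line in log_lines:
    match = DENY_PATTERN.search(line)
    if match:
        denied_sources[match.group(1)] += 1

# A source hammering several closed ports deserves a closer look.
for src, count in denied_sources.items():
    if count > 1:
        print(f"{src} triggered {count} deny events - investigate")
```

Any real deployment would feed this from the central log server and run continuously, but the principle is the same: the logs are only useful if something, or someone, reads them.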

Regardless of single-barrier defenses, everybody generally needs some form of disaster recovery and continuity planning. Part of such a plan should cover what to do when confronted with a failure to control your network's availability, integrity and/or confidentiality. For example because of a DDoS attack or a massive network intrusion, such as a malware infection or botnet activity. In the physical world we're perfectly capable of understanding that earthquakes and floods can and will occur at any time, yet the concept of a true digital disaster still seems to baffle most people. Reasoning comes from past experience, or so it seems, and although we all understand that you cannot prevent an earthquake from happening, somehow we do expect to be able to prevent a digital disaster. I guess this is because all digital things are man-made: we'd like to think we have full control over them, and at the same time are able to fully control them all at once. It's not that "the systems" have taken over; it's that "the systems" have become part of a digital ecosystem in which there is no full control. The difference between digital disasters and naturally occurring disasters is that the former is initiated – intentionally or by accident – by a human being. For a lot of people digital disasters don't happen very often, but when they do, they make headlines. "Never attribute to malice what you can adequately explain with incompetence." Protecting the company's reputation is likely to be of some importance, and a plan to protect it, be it through education and/or hi-tech defense systems, should be on the agenda, including a budget for it.

Epilogue

I picked Code Red here because it has been more than a decade since it spread around the internet, but I could have picked just about any Internet worm. And although Code Red has been dead and buried for a long time, we must remember that a disaster on any scale may happen and that the event itself cannot always be prevented; we certainly can reduce its impact if we think about failures and about what it would mean if outages occur at any level. Cost and timely recovery will certainly affect anyone's business. There is nothing more frustrating than shutting down the network, and effectively stopping all related business processes, to start the recovery process. In the days that Code Red hit, I LMAO, mostly because we weren't using Microsoft products in our DMZ at the time, but also because I knew that even if we did, it couldn't do much damage thanks to our layered approach to handling any inbound connection from the internet.

Let me be very clear: a firewall is a great and vital piece of equipment for protecting networks. Just make sure it's not protecting the network all alone! Evaluate business information. Set up and make use of information classification. Create network segments according to that classification. Create clear boundaries in your network. Build a multi-layered defense for critical assets. Assume (catastrophic) failures will occur – including security events and incidents – when designing networks. If you can't detect security incidents, it is highly likely that you won't be able to prevent them. A 100% perfect defensive infrastructure focused on the prevention of incidents does not exist, and if it does, it will fail eventually. But even if you can't prevent security breaches, you certainly can try to limit their impact. Detecting unknown events takes a lot of expertise and experience; if that is beyond the budget, considering tolerable losses might be an option. Make continuity plans. Take care in configuring HIDS and NIDS/NIPS, and make sure they line up with firewall policies. Send logs to a central server. Examine logs thoroughly. Make sure you know normal from abnormal behaviour. Act upon suspicious or unknown events. Continuously test your infrastructure. Make disaster recovery plans. Test disaster scenarios and interview staff afterwards about having experienced a controlled disaster! Learn and adapt.
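Knowing normal from abnormal starts with measuring normal. A minimal sketch with made-up numbers: build a crude baseline of event volume per hour and flag the hours that deviate enough to deserve a human look.

```python
# Minimal sketch (made-up numbers): a crude baseline of "normal" event volume
# per hour, used to flag hours that deviate enough to deserve a human look.

from statistics import mean, stdev

# events/hour observed over a quiet baseline period (hypothetical data)
baseline = [120, 130, 118, 125, 140, 122, 128, 135, 119, 127]

avg, sd = mean(baseline), stdev(baseline)

def is_abnormal(events_this_hour: int, threshold: float = 3.0) -> bool:
    """Flag an hour whose event count is more than `threshold` stddevs from the baseline mean."""
    return abs(events_this_hour - avg) > threshold * sd

print(is_abnormal(131))  # False -> within normal variation
print(is_abnormal(480))  # True  -> investigate: scan, worm, or misconfiguration?
```

This is deliberately simplistic; the point is only that "abnormal" has to be defined against something you have actually measured, not against gut feeling.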

Last: with IPv6 on the horizon, network address translation (NAT) appears to be becoming a thing of the past. This will lead to the inevitable return of what once was known as the proxy. All that is old will be new (again).

PS. This post is "food" for bs bingo 😉