Here, in no particular order, are a few ideas on how taking a minimalist approach can improve the security of your systems:
Avoid multiple integration points
Integration is complex, and the more highly coupled a system is, the harder it is to develop, manage and maintain. From a security perspective, an integration point presents an additional set of challenges. The exchange of information and services will likely span trust domains, which calls for detailed analysis and specialist security precautions. If you can apply the enterprise integration principle of “consolidate first, integrate second”, you’ll substantially reduce the number of attack vectors.
Where integration can’t be avoided, try to employ a common approach and consider both your security and integration architecture up front. If you have to integrate with multiple systems or external entities, apply the same standards, patterns and practices to each interface. With this approach a common security model can be applied across the entire solution, and you’ll reap the rewards of a system that is both supportable and extensible.
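One way to apply a single security model across every interface is to route each integration endpoint through the same checkpoint. The sketch below is illustrative Python, not a specific framework; the token store, endpoint names and payload handling are all invented for the example:

```python
from functools import wraps

# Hypothetical shared credential store: in a real system this would be
# backed by a proper identity provider, not a dictionary.
VALID_TOKENS = {"partner-a-token": "partner_a", "partner-b-token": "partner_b"}

def secured(handler):
    """Apply one common security check to every integration endpoint."""
    @wraps(handler)
    def wrapper(token, payload):
        caller = VALID_TOKENS.get(token)
        if caller is None:
            raise PermissionError("unauthenticated integration call")
        return handler(caller, payload)
    return wrapper

# Every interface passes through the same decorator, so the security
# model is defined once rather than per integration point.
@secured
def receive_orders(caller, payload):
    return f"orders accepted from {caller}"

@secured
def receive_invoices(caller, payload):
    return f"invoices accepted from {caller}"
```

Because each interface shares the one wrapper, tightening the security model (for example, adding audit logging) is a single change rather than one per integration point.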
Build, don’t shrink, your machine configuration
Remembering our principle that less is more, build the simplest configuration of your host servers you can get away with. Start with everything either turned off or shut down, and then make sure every decision to enable a service or open a port is an informed one. Question all software installed on the server. The installation of any development tools on the server is a cause for concern, as is the installation of third-party products such as Adobe Reader and Java. Third-party products must be kept up to date with the latest patches if they are not to create chinks in your otherwise secure server’s armour.
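The “everything off by default” approach can be captured as a simple allowlist audit: anything running that isn’t on the approved list is, by definition, an uninformed decision. This is a minimal Python sketch; the service names and the hard-coded running set are hypothetical stand-ins for data you would pull from the host itself:

```python
# Hypothetical allowlist: every entry here should represent a documented,
# informed decision to enable that service.
APPROVED_SERVICES = {"sshd", "nginx"}

def audit_services(running):
    """Return the services running without an approval decision behind them."""
    return sorted(set(running) - APPROVED_SERVICES)

# In practice the running set would come from the host (e.g. a service
# manager query); it is hard-coded here for illustration.
running_now = ["sshd", "nginx", "cupsd", "telnetd"]
unapproved = audit_services(running_now)
```

Running such an audit regularly turns the default-deny principle into a checkable invariant rather than a one-off build decision.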
Restrict access on all fronts
Yes, restricting access is rather the point of security, but do you know who can access your business-critical systems? The question doesn’t just relate to system users but to IT staff, external support vendors, external agencies and other departments within your organisation, such as marketing and sales. Consider both online and physical access. All too often the focus is only on the registered users who log in every day and do so through the front door (i.e. the login page). Think outside the box for access controls. For example, who can enter the server room, what special privileges do call centre staff have, what happens to the backup tapes, and what is the disposal policy for obsolete media used by the system?
Apply the administrator role sparingly
This one should be self-evident, but unfortunately the spread of administrator rights can be insidious. It is usually the result of insufficiently granular security roles. Keep the assignment of administrator rights on the system or the underlying platform to an absolute minimum. The right number of administrators lies at the sweet spot between a single person in the role (and hence a single point of failure) and everyone being granted full rights by default (chaos). Use role-based security to assign users the minimum level of access they require to perform their jobs.
Clean up the firewall
Like loose change and junk in the garage, firewall rules tend to accumulate over time. Without regular housekeeping, the result can be a firewall that, from a security perspective, resembles a piece of Swiss cheese. Document your firewall rules and regularly review those that are in place. If a rule is no longer required, remove it.
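A lightweight rule register makes this review systematic: record an owner and a last-review date against each rule, and flag anything undocumented or overdue. The rule records and the one-year review window below are hypothetical; in practice you would export the rules from your firewall and annotate them:

```python
from datetime import date

# Hypothetical rule register entries, annotated with ownership and the
# date each rule was last justified.
rules = [
    {"id": 1, "allow": "443/tcp", "owner": "web team", "reviewed": date(2024, 1, 10)},
    {"id": 2, "allow": "23/tcp",  "owner": None,       "reviewed": None},
]

def stale_or_undocumented(rules, today):
    """Flag rules with no owner, or no review within the last year."""
    flagged = []
    for r in rules:
        if r["owner"] is None or r["reviewed"] is None:
            flagged.append(r["id"])
        elif (today - r["reviewed"]).days > 365:
            flagged.append(r["id"])
    return flagged
```

Any rule the register flags is a candidate for removal; if nobody will claim it, it probably shouldn’t be punching a hole in the firewall.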
Prefer a longer release cycle
You should strongly consider reducing your release frequency unless you can fully regression test each revision.
In general, smaller changes are less likely to see a full regression test run against the system. Often, the temptation is to focus testing efforts on the areas of the system known to be affected by the change. This leaves room for exploitable defects to slip through in the untested areas. Where larger, and hence less frequent, releases are undertaken, the scale of change often justifies the execution of a comprehensive regression suite.
The bottom line is that security risks are reduced when a full test is completed. If you aren’t in a position to undertake this scale of testing for smaller incremental releases, then a low frequency release cycle is likely the safer option.
Know the risks of shared services
Yes, consolidating multiple applications onto shared infrastructure does reduce your costs of ownership. However, be wary of the sociability problem. A poorly secured application on one server may create a significant vulnerability for a highly secure system using the same resources. If your IT infrastructure leverages shared resources then you need to take a holistic view of the system and its environment to appreciate the full range of security concerns.
Avoid multiple data replication channels
Business Intelligence platforms are always under pressure to provide analysts with timely information from multiple sources across the organisation. The solution to these demands is often a plethora of data feeds into a central data warehouse or a collection of data marts. This can raise security concerns, as we see a mixing of operational systems with the analytical. So while your operational system may be fully secure, it may still be leaking information out the backdoor to the data warehouse, where security controls may not be so rigorous.
Master data management is another problem often handled through data integration between systems. Again, the risk is for data to be replicated from a system of high security to one of a lower level.
Address these risks by applying the principles of information management and establish governance controls around the access, movement and duplication of potentially sensitive information. Although not always practical, endeavour to minimise the need for solutions that rely upon data duplication.
Keep your valuable information in one place
When it comes to removable media, most of us will have heard the line “I last had it with me on the bus/train…” The copying of data to removable devices, be they disks, phones, USB memory, DVDs and so on, is commonplace. With this practice, however, commercially sensitive data is all too easily moved off site, whereupon it becomes uncontrolled.
The operational teams of most companies will typically move backup media (tapes etc.) between sites in lockable containers. Despite this rigour in the server room, the same care is seldom taken when staff carry information home or to other offices.
Where an information disclosure risk is identified, reduce the movement of data by defining and enforcing appropriate security policies in this area. Where data has to be carried offsite, look to encrypt the information in order to minimise the risk should it be dropped in the car park (another common excuse).
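A policy of “sensitive data leaves site only when encrypted” can be enforced with a simple gate checked before any copy to removable media takes place. The classification labels and the rule itself are assumptions made for this sketch, not drawn from any particular standard:

```python
# Hypothetical classification scheme: these labels are illustrative.
SENSITIVE = {"confidential", "restricted"}

def may_copy_to_removable(classification, encrypted):
    """Permit off-site copies of sensitive data only when encrypted."""
    if classification in SENSITIVE and not encrypted:
        return False
    return True
```

Wiring a check like this into the point where copies are made turns the written policy into something enforced, so a drive dropped in the car park holds ciphertext rather than the quarterly figures.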
The exception doesn’t always prove the rule
Does a minimalist approach apply in all cases? Not quite. One example of where more really is better is in the involvement of multiple contributors to the development of the threat model. By holding frequent risk workshops and inviting a cross-section of staff to attend, you can have confidence that the potential threats captured represent a comprehensive (if not exhaustive) list of vulnerabilities within the system.
A common failing of these workshops is the list of invitees only extends to IT staff. Unsurprisingly, in these cases the threat model emphasises technology risks and neglects possible attack vectors via business channels. In short, the wider you can spread the net, the more complete the threat model.
I’ll look to cover threat modelling and the benefits you can gain by applying the technique in a future post. In the meantime, keep it simple and keep it secure.