For those looking to implement network security solutions (particularly new technologies), the evaluation process can be confusing, expensive, and prone to dragging on past project deadlines. Beyond referrals from trusted sources, finding a solution usually defaults to checking the proverbial box at the lowest possible cost. Whichever security solution can solve the issue (regulatory or policy), albeit at its bare minimum, and allow the organization to move on to other issues is often “good enough.”
However, good enough in network security often turns out to be less than adequate, which leads to yet another round of evaluation for a new solution, and the cycle begins anew. How can we break this cycle, or better yet, head it off from the start with a better system for evaluation?
Some evaluation metrics are more difficult to assess than others; levels of security, for example, are often tricky to differentiate between comparable products. In network security, it’s not always as simple as adding more characters and complexity to a password (although that can have its own issues) or more bits to an encryption algorithm. For example, how do you tell if one firewall is more “secure” than another?
Thankfully, there are some widely recognized best practices that can be used as guiding principles in evaluation. Security technology evaluation is a complex process, but it can generally be broken down into three primary categories:
- Assurance – A relative gauge that the solution provides the expected level of security and works as intended.
- Functionality – The solution needs to provide a basic level of features and usability to accomplish job functions.
- Cost – Short/long, direct/indirect, and ongoing costs of the solution.
Obviously, none of these three categories exists in a vacuum. They must be balanced against each other as part of a holistic evaluation of the product against the security requirements. In order to weight them properly for a specific use case, let’s break them down further into more specific criteria.
In assessing the assurance of various solutions, there are a number of questions that need to be answered. For example: Does the solution provide proper security for the assigned function? Does it cause issues with other systems? Does it work reliably? The better question may be, where to start? Security expert Bruce Schneier proposed a basic five-step process, with five seemingly trivial questions, in his paper, “Evaluating Security Systems: A Five-Step Process.”1 These questions are exactly that – a starting point – intended to spark discussion and thought about how to evaluate security systems. As such, let’s discuss each question and the potential issues that they raise.
1. What assets are you trying to protect?
Making no assumptions as to the solutions being evaluated, the starting point is always figuring out exactly which assets you are trying to protect. Whether it’s a database, a PLC, a network segment, or an entire facility, the closer you can get to specifically defining the scope of the security implementation, the better you will be equipped to evaluate and select the appropriate security solution.
Poorly defined scope can lead to spiraling expectations and looking for a “cure-all” or silver bullet to blanket a wide swathe of assets with a modicum of security, rather than providing the appropriate security to those higher risk assets that really need it. This so-called “scope creep” is a common problem that can lead to ballooning costs and frustrating wasted time and effort that can all be prevented by identifying, prioritizing, and executing on a well-defined security implementation strategy.
2. What are the risks to these assets?
In prioritizing the assets which require protection, the risk to those assets must be properly assessed. Generally, the level of risk for any given asset is defined as the severity of a possible issue combined with the value of the affected asset(s). For example, an asset that is very valuable but whose compromise would not cause a particularly severe issue may carry moderate to high risk, while an asset that is relatively low in value but absolutely critical to business processes may carry extreme risk.
As a side note, some folks will attempt to assess the “chance” of attack when considering risk. It should be mentioned that it’s nearly impossible to calculate the odds of an attack actually occurring. However, if you have experienced past attacks, have identified potential probes through SIEM applications, or have had significant turnover in the administration or security personnel overseeing the assets in question, there may be some legitimate added risk considerations. In general, it is best to take an objective perspective, assume all assets face an equal chance of attack – 100% – and prioritize from there.
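As a rough illustration of the severity-combined-with-value approach described above, here is a minimal scoring sketch. The 1–5 scales, asset names, and individual scores are all hypothetical assumptions, not values prescribed by any standard.

```python
# Illustrative risk scoring sketch: risk = issue severity x asset value,
# with every asset assumed equally likely (100%) to be attacked.
# Scales (1-5) and asset entries are hypothetical.

def risk_score(severity: int, value: int) -> int:
    """Combine issue severity and asset value (each 1-5) into one score."""
    return severity * value

assets = {
    "customer_database": {"severity": 4, "value": 5},
    "plc_controller":    {"severity": 5, "value": 2},  # low value, critical to process
    "test_server":       {"severity": 2, "value": 1},
}

# Rank assets from highest to lowest risk to drive prioritization.
ranked = sorted(assets.items(),
                key=lambda kv: risk_score(kv[1]["severity"], kv[1]["value"]),
                reverse=True)

for name, attrs in ranked:
    print(name, risk_score(attrs["severity"], attrs["value"]))
```

A ranking like this is only a starting point for discussion; the point is to force explicit, comparable judgments about severity and value rather than to produce a precise number.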
3. How well does the security solution mitigate those risks?
This may be the single most overlooked factor in selecting a security solution, despite being perhaps the most important. Whether it’s taking a dishonest vendor’s word for their product’s security capabilities or simply misunderstanding the technology, there are too many cases where organizations purchase a security solution that does not solve the security problem. It also may be difficult to assess the security actually provided by the solution. While functionality is manifested in features that are apparent and available to users, security assurance is often unrepresented by such mechanisms and is thus not usually visible to users.2
It also may be another case where the holistic view of the security problem and its potential impacts have not been properly considered. Think about the scope of security – sometimes the evaluated technology is not going to be the only piece required for the solution to function. A particular solution may be required but only solve part of the security problem, and additional solutions may be required to provide adequate security and meet the requirements. Sometimes these are add-ons that seem like they are included, so it’s important to document everything in advance before you end up with half of a security solution. In other cases, performance may not meet expectations. Benchmarks are great and proof-of-concepts are even better, but it is often difficult to tell how a particular solution will truly perform until it’s actually implemented. More on this in the Functionality section later in this document.
Failure to properly secure may also be a function of product failure. If the security solution is prone to frequent failure or is vulnerable to attack itself, it is not fulfilling its required role. It is especially vital in critical security situations, such as defense, nuclear energy, intelligence, and others, that the solution be extremely reliable and as immune to attack as possible. Even in a use case as seemingly benign as manufacturing, if a security solution fails, it may bring down production for an extended period of time, costing the company millions.
4. What other risks does the security solution cause?
It might seem paradoxical, but implementing a security solution can actually make systems less secure. It can take the form of less visibility, employing removable media, or even a software bug that opens a vulnerability in something like a firewall. While the issues created by a new security solution are not usually critical, as Bruce Schneier points out in his paper, “The trick is to understand the new problems and make sure they are smaller than the old ones.”
By thoroughly vetting potential security solutions, performing POCs and/or sandboxing, you can better understand these problems before they arise. Armed with this information, you can then select the solution that meets the security requirements with the least “side effects.” Alternatively, if budgetary or other constraints prevent implementing the least disruptive solution, then at least you can plan for the issues and apply any required workarounds or fixes before the solution goes live.
5. What costs and trade-offs does the security solution impose?
Security systems do not exist in a vacuum, and rarely (despite numerous vendor claims) can they be dropped into your network or system seamlessly without anyone even noticing. In fact, noticing the effects is often kind of the point with security. If you don’t notice it, it’s probably not doing its job.
If you’re lucky, security systems won’t cause more than a very minor disruption to normal processes and job functions. If you’re not so lucky, they may end up causing significant issues with data accessibility and productivity, or even bring down an entire system or application. While assessing the financial costs of the solution will be addressed later on in this document, there are other process “costs” and trade-offs to consider. For example, the solution may impose process delays or impede important job functions.
Security and usability are typically viewed as being at odds, in a sort of zero-sum trade-off. However, security and usability are not mutually exclusive; they are part of a larger balance that also includes the functionality included in the solution.
With very-high-security measures, there may be systems that are utilizing connections or data that suddenly become unavailable. Processes that were a simple mouse click away may now require multiple steps or logins, or even physically getting up and walking across the room. Users may be able to make some sacrifices for the greater good, but placing too many barriers between them and the systems and data they need to do their jobs will not only have a significant business impact, but they’ll often actively work to subvert your security measures in order to get back to full productivity.
While higher security and increased complexity can lead to significant issues, there may be other trade-offs for lower performing or less secure solutions. Privacy concerns, slower user/customer experiences, longer/more complicated audits, and more could result from inappropriate or inadequate security solutions. Sometimes driven by an underfunded budget and/or a regulatory requirement, these solutions are often replaced sooner or later and thrown away (to everyone’s grim satisfaction).
It may seem obvious, but it still happens all the time. Careless selection without proper evaluation and internal vetting just creates headaches and waste. It’s important to take a holistic view of potential impacts to users and systems to ensure you do not create more problems than you solve.
The functional aspects of any solution are typically fairly straightforward. Does it provide the needed features? Beyond these basic features, does the solution provide additional functionality that increases utility or usability? However, it’s not always easy to think beyond the assurance aspect – most companies just want to know if it’s secure. Below are the general functional aspects of security solutions that should be reviewed, in no particular order:
Among the functional aspects, performance is the only one that tends to get any consistent attention. Companies (particularly IT managers and admins) want to know: Will it slow us down, and by how much? The question of latency and/or throughput isn’t always just how a single solution will affect a particular process, but sometimes how a series or sequence of compounding delays will affect it. For example, there may be multiple network levels within a particular organization (manufacturing, banking, etc.) and in order to secure each, the organization will implement a DMZ with a firewall between each one. Now instead of data flowing freely between each level, it must be held up and scrutinized at each level, and at increasing intensity the further down into the critical systems it goes.
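To see how those delays compound, here is a minimal sketch of traffic crossing multiple secured network levels in sequence. The three level names and their per-level inspection latencies are purely hypothetical figures.

```python
# Hypothetical compounding-delay sketch: a packet crossing several
# firewalled DMZ levels pays each inspection cost in sequence, with
# scrutiny (and latency) increasing closer to critical systems.
per_level_latency_ms = {
    "enterprise_dmz": 2.0,   # light filtering at the outer boundary
    "operations_dmz": 5.0,   # deeper inspection mid-network
    "control_dmz":    12.0,  # most intensive scrutiny before critical systems
}

# Total added latency is the sum of every level the traffic traverses.
total_latency_ms = sum(per_level_latency_ms.values())
print(f"End-to-end added latency: {total_latency_ms} ms")
```

Even when each individual delay looks tolerable, summing them across the full path is what reveals the real impact on a latency-sensitive process.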
Sometimes, depending on the particular solution, performance concerns can be mitigated by things like additional processing power or simplified whitelisting rules. However, most modern organizations have their systems (databases, big data systems, etc.) operating at or near peak capacity all the time, and adding any processing overhead is not just inconvenient, but also potentially very costly. This is one of the reasons that cloud-based platforms have become so popular – the added power is available on demand and relatively cheap. The downside is that unless the performance gain is significant enough, the round trip to the cloud could add more latency than it saves, and it could also raise security concerns by sending data outside the organization’s network.
While it depends heavily on the type of solution being evaluated, it’s very helpful to assemble a list of features from competing solutions so that you can compare, apples to apples, the types of features that are available. Once the type of solution is selected (based on assurance or other factors), it is fairly straightforward to simply go down the list, prioritize the most important features, and select the solution that includes the most while staying within budget. In other cases, there may be solutions that are similar in nature but not the same technology, and where the features do not line up as cleanly.
The problem is typically how to prioritize the available functions and figure out what makes one function more desirable than another. This decision typically comes down to the feature sets that will most (or least) impact non-security operations and capabilities outside the scope of current requirements, including potential future requirements. Functions required for the system to operate will not (or at least should not) be optional. For that reason, it may make sense to hand some or all of the function prioritization process over to the end users (security personnel). They can assign a score to each function to help clarify which are important to them, and which are not.
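One way to run that user scoring exercise is sketched below. The feature names and 0–10 scores are hypothetical, and a simple average is just one possible way to aggregate; a team could equally weight certain users or features more heavily.

```python
# Sketch of user-driven feature prioritization: each end user scores
# each feature 0-10, and features are ranked by their average score.
# Feature names and scores are hypothetical.
from statistics import mean

user_scores = {
    "central_management": [9, 8, 10],
    "custom_reporting":   [4, 6, 5],
    "api_integration":    [7, 9, 8],
}

# Rank features from highest to lowest average user score.
priorities = sorted(user_scores.items(),
                    key=lambda kv: mean(kv[1]),
                    reverse=True)

for feature, scores in priorities:
    print(feature, round(mean(scores), 1))
```

The resulting ordering gives the evaluation team a defensible, user-backed basis for deciding which feature gaps are acceptable when no single solution offers everything.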
As an extension of features that are outside the scope of current requirements, flexibility can be very important for organizations experiencing significant growth, or those with rapidly shifting connections and priorities. While industrial or critical infrastructure organizations may have relatively static requirements that change maybe once or twice per decade, financial and technology organizations may be experiencing shifting requirements on a minute-by-minute basis. While complexity remains the enemy of security, the ability to add or change functions quickly and easily can be a highly valuable asset.
While security is important, once the assurance aspects have been settled, companies really just want their employees to be able to do their jobs. Any features that reduce the impact on (or better yet, help them to increase) productivity are often the deciding factor in selecting one solution over another.
For example, a car may feature anti-lock brakes, power steering, and a seven-speed gear box. If they are not significantly different than a comparable vehicle, these functional features may take a metaphorical back seat to the comfort of the seats, cabin headroom, and the number of cupholders.
This “cupholder” feature may be something as simple as a cleanly designed user interface or a push-button reset, but may be significant in the eyes of a user who has to use the solution every day. As such, it’s advisable to include the end users in the solution evaluation as they will be taking the brunt of any poorly designed or difficult-to-use products.
Assuming assurance and functionality are comparable, the final decision often (maybe too often) comes down to cost. Unfortunately, this often takes the form of asking, “What is the cheapest solution to meet our minimum requirements?” rather than “What can we afford to meet as many of our priorities as possible?” It’s also not in human nature to consider the full breadth of potential financial liabilities involved beyond the initial purchase. The cost can often be much higher than it appears, once all aspects are considered, so it can be well worth investigating them fully before moving forward with any solution.
+ Total Cost of Ownership
The total cost of ownership, or as it is commonly referred to, “TCO,” uncovers the lifetime costs associated with the solution at the time of purchase (direct costs), after purchase (indirect costs), and for the duration of its use (ongoing costs). While it can be challenging to fully document the litany of potential costs (some may never be incurred while others may change month to month or every year), as close as can be reasonably compared, it’s in every organization’s best interest to attempt to capture the TCO of any given solution, security or otherwise.
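A bare-bones TCO tally might look like the following sketch. Every figure and the five-year service life are hypothetical assumptions, and the actual cost categories will vary with the solution being evaluated.

```python
# Simplified TCO sketch over an assumed 5-year service life.
# Direct and indirect costs are one-time; ongoing costs recur annually.
# All figures are hypothetical placeholders.
direct = {"hardware": 40_000, "licenses": 15_000, "installation": 8_000}
indirect = {"training": 6_000, "network_changes": 10_000}
ongoing_per_year = {"maintenance": 5_000, "license_renewal": 3_000, "power": 1_200}

service_life_years = 5

# TCO = one-time direct + one-time indirect + (annual ongoing x lifetime).
tco = (sum(direct.values())
       + sum(indirect.values())
       + sum(ongoing_per_year.values()) * service_life_years)

print(f"5-year TCO: ${tco:,}")
```

Even a rough tally like this makes it clear how recurring line items that look small individually can rival or exceed the purchase price over the solution's lifetime.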
+ Direct Costs
The initial or “direct” costs of a solution are typically the most well-documented and straightforward. They are also what most often appear on the initial invoice, so they’re usually pretty hard to miss.
For software solutions, the direct costs are fairly simple – the included base software/OS, add-ons (including any software applications), and any other required one-time license fees for additional users or backup copies.
For hardware solutions, the direct costs include the device itself, or multiple devices for dynamic failover or load balancing, any accessories required, including wiring or cooling, and any physical accommodations, including space considerations (rack space, footprint). There may also be precautionary purchases of backup units or spare parts.
Beyond the costs of the base solution, there are also some potential upcharges for additional engineering, including customization. One concern of note with customization is the difficulty many organizations have with creating a truly optimized design. Underspec’d (or overspec’d) solutions risk becoming obsolete, providing insufficient protection, or resulting in excess capex spending on unused features. This is also where the flexibility aspect comes in, as the ability to modify or augment the solution with increased or additional capabilities can prevent wasteful spending on direct costs.
Installation costs are also included as part of the initial investment, unless the solution can be self-installed, and can be significant, depending on the scope of implementation. As such, unless it is otherwise unavoidable, it’s best to assess these services through a POC or smaller test installation first before moving forward with a broader implementation.
Lastly, shipping and taxes, depending on where the vendor is located and the size/weight/cost of the solution, can be a significant added cost and should be a consideration, if typically at the bottom of the list.
+ Indirect Costs
Where direct costs are up front and fairly obvious, indirect costs tend to be less visible and related to internal changes that need to be made in order to accommodate the solution. One of the first indirect cost considerations tends to be the complexity of implementation and speed to operational readiness. An overly complex implementation that extends beyond the planned window can also often mean unscheduled maintenance and downtime, which leads to loss of service and revenue.
Other indirect costs may also involve the aforementioned operational trade-offs. These include changes in the scope and scale of operations, changes to network architecture, and new workflows and processes. There may also be cascading impacts and changes to other systems, including legacy technology and solutions.
While they are sometimes perceived solely as ongoing costs, internal staffing and labor costs can also fall into the indirect cost category. For example, there may be one-time costs to find and/or train appropriate personnel with the skill sets required for the operation of a specialized security solution. As such, organizations have an inherent incentive to attempt to identify solutions that do not require additional training or highly specialized skills.
+ Ongoing Costs
While they aren’t usually significant in any individual line item, the cumulative, ongoing costs of any given security solution can easily surpass the combined direct and indirect costs, especially over time.
The most common ongoing cost for software-based solutions is the license fees associated with their ongoing use. As outright software purchases make way for software-as-a-service (SaaS) solutions, license fees have become not just accepted but the norm in enterprise security solutions, especially those with a cloud element. If it’s possible to make a one-time investment rather than sign an annual license agreement, that can often save a significant amount of money, although the software may not receive updates as frequently, and new features may have to wait on purchasing the newest version.
The most common ongoing cost for hardware-based solutions is maintenance, which can encompass a variety of services provided at a wide range of levels, from full-service support agreements to basic access to software updates. While it is nearly ubiquitous for hardware, it is also quite common for software license fees to include a maintenance/update component. The cost of maintenance varies so widely that it is definitely in the best interest of organizations to find out ahead of time and assess as best as possible the benefits provided. The reliability of the software or hardware may also incur additional ongoing costs with the frequency of reboots, reconfiguration, or updates. In the case of hardware solutions, in some cases it may be more cost-effective to carry a set of replacement devices rather than carrying a maintenance agreement.
Based on current personnel, operating labor may be considered a “baked-in” cost that adds to the overhead of security staff. However, the level of expertise required, combined with the frequency of systems management and maintenance, may actually create a much wider range of costs than anticipated. For example, a senior IT expert may be needed for daily firewall maintenance, while a junior IT administrator may only need to check on a data diode every few months.
Among the more hidden ongoing costs are utilities – paying for the power draw and/or cooling of servers or devices – and engineering related to complicating effects on change management and reconfiguration. While these may not add significantly to the bottom line, in larger scale deployments they could add up to much more noticeable costs.
One last point of consideration is the cost of system failure. From hardware repairs and replacement to the potential security risk due to a vulnerability or misconfiguration, security system failure costs are not isolated to an individual incident, and the effects could be on a scale anywhere from an individual device to the entire enterprise. If considering a hardware-based security solution, consider seeking out higher Mean Time Before Failure (MTBF) rates as those devices will be replaced less often. For software-based solutions, ensure (as much as possible) that there is a concrete plan for regular updates and patching to cure any identified vulnerabilities.
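As a rough sketch of how MTBF feeds into replacement costs, the following assumes continuous operation and approximates expected failures as lifetime divided by MTBF. The MTBF values and per-unit replacement cost are hypothetical.

```python
# Rough failure-cost sketch: a higher MTBF means fewer expected
# replacements over a deployment's lifetime. All figures hypothetical.

def expected_replacements(mtbf_hours: float, lifetime_hours: float) -> float:
    """Approximate expected failures as lifetime / MTBF."""
    return lifetime_hours / mtbf_hours

lifetime = 5 * 365 * 24  # 5 years of continuous operation, in hours
unit_cost = 2_500        # hypothetical replacement cost per device

for mtbf in (50_000, 200_000):
    n = expected_replacements(mtbf, lifetime)
    print(f"MTBF {mtbf:,} h -> ~{n:.2f} failures, ~${n * unit_cost:,.0f} in replacements")
```

This simple ratio ignores wear-out behavior and repair options, but it is enough to show why a device with four times the MTBF can justify a meaningfully higher purchase price.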
If you choose wisely, the security solution you select might save money in the long run, although you may never see any actual money returning to your company’s account. Return on investment (ROI) can take the form of savings where the solution replaces more expensive products or approaches, preventing a breach, lowering cyber insurance rates, or reducing management time and effort.
In other cases, the savings might take the form of simplified operations. For example, a telecommunications company may have a regular maintenance schedule for a remote tower. Ordinarily, a technician would have to get in a truck (or worse yet, a helicopter) and head out to the tower to check on its status. With a remote monitoring security solution such as a data diode, the company may be able to avoid sending the technician until work is actually required, saving both time and the costs associated with the round trip.
Rather than simply take the word of a vendor, it’s very common to ask for independent evaluations and testing to validate vendor claims. Most reputable vendors will already have performed some sort of third-party evaluation or assessment of their products for this very reason.
+ Evaluations & Certifications
As one of the original sources of evaluation criteria, the U.S. Department of Defense “Orange Book”3 was created in 1983 to provide some of the earliest guidance to government agencies and private entities looking to evaluate security solutions. The Orange Book was eventually merged into and superseded in 1998 by the “Common Criteria,” developed by a consortium of international organizations. Most organizations are now familiar with the Common Criteria Evaluation Assurance Level (EAL) certifications.
While EAL provides a strong and reliable assessment of the functional aspects of the evaluated solution, it is not intended to measure the “strength” of the security provided. Higher levels of certification only signify more rigorous testing, usually as required by specific (typically governmental) regulated standards. For example, if testing a safe, Common Criteria would only test the specifications provided by the manufacturer against the standard processes. The safe may have been manufactured flawlessly and operate exactly as expected, with the door opening only in response to a specific 10-digit code. However, the safe may also have a built-in back door that can be opened with a default universal code, which obviously puts a damper on its security potential.
As such, a higher EAL cannot guarantee that one security solution is more secure than another. Also beware simplicity in product certifications – simpler products with less functionality tend to be far easier to certify at higher levels. Indeed, simplifying the concept of a product down to a low enough level may eliminate the value and usefulness of evaluations completely. If the evaluated core component is not the only element of the required solution, then any evaluation that does not take into consideration the other elements is relatively worthless.
+ Validation & Penetration Testing
There are also plenty of reputable independent organizations that provide validation and penetration testing services to help evaluate security solutions. If the technology is new or relatively unknown, it may be in your best interest to enlist the services of these organizations, and to get the vendor to pay for it, if you can. Again, many reputable organizations will have already performed some of these tests, so in these cases you can simply ask to see the report or summary.
Evaluating a security system can be a painstaking task, but approaching it systematically can simplify the process. Once you have taken into consideration assurance, functionality, cost, and validation, the question then becomes: weighted together, is the solution really worth it? In other words, does (Assurance & Functionality) – (Trade-offs, Risks, & Costs) yield a positive result? While it isn’t exactly as simple as an elementary math formula, grading each category and comparing across competing solutions can provide significant assistance and help clarify which solution is the best fit for your requirements, budget, and organization as a whole.
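The closing comparison above can be sketched as a simple grading exercise. The category grades below are hypothetical, and in practice each category would likely carry a different weight depending on the organization's priorities.

```python
# Sketch of the closing "is it worth it?" comparison: grade each category
# 0-10 per candidate, then compute (assurance + functionality) minus
# (trade-offs + risks + costs). All grades are hypothetical.
solutions = {
    "solution_a": {"assurance": 8, "functionality": 7,
                   "tradeoffs": 3, "risks": 2, "costs": 4},
    "solution_b": {"assurance": 6, "functionality": 9,
                   "tradeoffs": 2, "risks": 3, "costs": 6},
}

def net_score(g: dict) -> int:
    """Positive result suggests the benefits outweigh the downsides."""
    return (g["assurance"] + g["functionality"]) - (g["tradeoffs"] + g["risks"] + g["costs"])

best = max(solutions, key=lambda name: net_score(solutions[name]))
for name, grades in solutions.items():
    print(name, net_score(grades))
print("best fit:", best)
```

The exact numbers matter less than the discipline of grading every candidate on the same categories, which exposes where a cheaper solution is quietly giving up assurance or imposing trade-offs.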
1 Schneier, Bruce. (2006). Evaluating Security Systems: A Five-Step Process. doi: 10.1007/1-4020-8090-5_20
2 National Research Council. (1991). “Criteria to Evaluate Computer and Network Security.” Computers at Risk: Safe Computing in the Information Age. Washington, DC: The National Academies Press. doi: 10.17226/1581
3 U.S. Department of Defense. (1985). Trusted Computer System Evaluation Criteria (“Orange Book”). DoD 5200.28-STD.