Security Concepts in High Performance Networks
Achieving the same security at a significantly higher throughput is one of the core objectives of this project. In this work package, we focus in particular on how systems can be tested, how intrusion detection systems can be adapted to the new requirements, and how DDoS attacks can be efficiently mitigated. Security in and through flexible networks, realized by Software Defined Networking (SDN), is another central research area in which we are involved.
Devices, services, protocols, and other network mechanisms require evaluation before they can be used in production. A very good test infrastructure for a new network system is the production network in which it has to function. However, tests in such an infrastructure have their downsides. For one, it is hard to gain access to these systems, as privacy regulations and policies understandably forbid the use of unproven and untested devices in the production network. Moreover, the infrastructure cannot be controlled, which leads to a lack of repeatability and reliability in network tests. The state of the art to work around this issue is to record, anonymize, and reuse network traces and, at least as a best practice, to publish these data sets along with the results of the analysis. As recordings of real network traffic, such traces accurately reflect features of the traffic that even the tester might not have thought of. The inherent limitation of this approach is that the traces only depict the properties of networks at their recording time. Traffic patterns may be specific to the network under observation, perhaps even to the time of recording. Newer protocols and changes in user behavior quickly render these traces obsolete; for instance, HTTP/2 and the QUIC protocol cannot be found in older data sets. As it is hard to gain access to newer traces or to ensure comparability with older tests, systems are often tested against old data sets that do not represent current network features, e.g. the DARPA Intrusion Detection Evaluation data set from 1999 is still in use today.

Another key point is to not only test in a realistic environment but to also test edge cases and push the network system to its limits. This cannot easily be done in a production network or with traffic recordings.
Therefore, specialized systems for network tests that can provide both a realistic and a controllable environment are important to prove the viability of new network systems. These systems should not only be able to reuse existing traffic traces but should also be able to produce new data. Within the scope of this project, we are therefore working on a framework for network tests (see the picture of the prototype). Furthermore, we are working on publishing self-recorded network data.
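To illustrate the repeatability requirement, the sketch below generates a synthetic packet trace from a seeded random generator, so every test run sees exactly the same traffic. The Poisson-arrival model, the parameter names, and the size bounds are illustrative assumptions, not the project's actual traffic model:

```python
import random
from dataclasses import dataclass

@dataclass
class SyntheticPacket:
    timestamp: float  # seconds since the start of the trace
    size: int         # packet size in bytes

def generate_trace(n_packets, mean_rate_pps, mean_size, seed=0):
    """Generate a reproducible synthetic trace: Poisson arrivals and
    exponentially distributed packet sizes (an illustrative model only)."""
    rng = random.Random(seed)  # fixed seed -> identical trace on every run
    t = 0.0
    trace = []
    for _ in range(n_packets):
        t += rng.expovariate(mean_rate_pps)  # Poisson inter-arrival times
        # clamp sizes to a plausible Ethernet payload range
        size = min(1500, max(64, int(rng.expovariate(1.0 / mean_size))))
        trace.append(SyntheticPacket(t, size))
    return trace
```

Because the generator is parameterized, edge cases (e.g. extreme packet rates) can be produced on demand, which is exactly what recorded traces cannot offer.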
Hardware-based IDS Acceleration
Intrusion Detection Systems (IDS) are important for security in networks to find and circumvent attacks. Recent attacks show how legacy systems in networks can be a liability and how intrusion detection can help limit the extent of an attack. However, network bandwidth grows much faster than the computing capabilities of hardware platforms. At the same time, detection mechanisms become more and more complex. From simple string matching over more complex regular expression matching to metadata analysis and machine learning algorithms for anomaly detection, intrusion detection has become increasingly resource demanding.

The core of any modern IDS is the string matching and regular expression matching engine. It is the most important and resource-intensive part of the system and therefore offers the biggest opportunity for improvement. CPUs offer medium performance for generic computation. Specialized processors, however, can be tailored to their operational area and optimized accordingly. A highly parallelized design of a use-case-specific processor can therefore, in theory, improve the matching capabilities of an IDS. In this project, we analyze three basic concepts for such processors: first, an FPGA-based system in which the regular expressions themselves are translated and molded into hardware; second, an FPGA-based co-processor with its own regular-expression-oriented assembly language that can also be turned into an ASIC; and third, graphics processors (GPUs). GPUs are optimized for massive parallelization. GPU cores are much smaller and less versatile than CPU cores, but for applications that share key features with graphics processing, moving the computation from the CPU to the GPU often enables far-reaching improvements.
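To make the matching-engine workload concrete, the following sketch implements Aho-Corasick, the classic multi-pattern string-matching automaton behind many software IDS engines; the hardware designs discussed above parallelize automata of this kind. This is a textbook reference implementation, not code from the project:

```python
from collections import deque

class AhoCorasick:
    """Multi-pattern matcher: scans the input once, reporting every
    occurrence of every pattern, independent of the number of patterns."""
    def __init__(self, patterns):
        self.goto = [{}]   # per-state transition table
        self.fail = [0]    # failure links
        self.out = [[]]    # patterns ending at each state
        for pat in patterns:               # build the pattern trie
            state = 0
            for ch in pat:
                nxt = self.goto[state].get(ch)
                if nxt is None:
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append([])
                    nxt = len(self.goto) - 1
                    self.goto[state][ch] = nxt
                state = nxt
            self.out[state].append(pat)
        queue = deque(self.goto[0].values())  # BFS to set failure links
        while queue:
            s = queue.popleft()
            for ch, nxt in self.goto[s].items():
                queue.append(nxt)
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                self.out[nxt] += self.out[self.fail[nxt]]

    def search(self, text):
        """Yield (end_index, pattern) for every match in a single pass."""
        state = 0
        for i, ch in enumerate(text):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for pat in self.out[state]:
                yield (i, pat)
```

The per-byte state transition is exactly the operation that FPGA and GPU designs replicate many times in parallel across streams or automaton partitions.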
Concept of the DDoS Mitigation Setup
The system we are looking into is a network-based mitigation system set up within the network infrastructure. Potential targets in the network infrastructure are known to the defenders but are neither in contact with nor controlled by the DDoS mitigation administrators or the mitigation service. The figure shows a simplified, schematic view of the environment in which the mitigation system is set up. The mitigation system itself is shown in red, while the gray parts represent the parts of the network infrastructure that are directly connected to it. On the left side, the data aggregation based on information from the core routers of the network is shown. The Baden-Württemberg extended LAN (BelWü) contains, among other components, several core routers connected to other ISPs (e.g. the Swiss research network SWITCH) and Internet Exchange Points (IXPs, e.g. DE-CIX in Frankfurt) as peering partners. Here we collaborate with the bwNetFlow project, another research project financed by the state of Baden-Württemberg, which focuses on realizing an interface to the core routers to collect flow information, establishing an automated processing platform, and detecting anomalies. The project exports the NetFlow data of the core routers, aggregates it, enriches it with additional information, and provides it to subscribers. On the right, the mitigation system close to the servers we want to defend (the attack target T) is shown. SDN-capable switches in front of the targets provide the necessary flexibility to realize an effective mitigation. An SDN controller controls the switches and can forward attack traffic to the observer for analysis or drop traffic identified as attack traffic. A CAPTCHA server can be used to whitelist legitimate clients during an attack.
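The controller's per-flow decision can be sketched as a simple policy function. The action names, the packets-per-second threshold, and the plain-set whitelist/blocklist are hypothetical simplifications for illustration, not the project's actual rules:

```python
from enum import Enum

class Action(Enum):
    FORWARD = "forward"           # regular traffic goes to the target
    OBSERVE = "mirror"            # suspicious: mirror to the observer
    DROP = "drop"                 # identified attack traffic
    CAPTCHA = "redirect_captcha"  # unknown client during an attack

def classify_flow(src_ip, pps, whitelist, blocklist,
                  under_attack=False, pps_threshold=10_000):
    """Toy SDN-controller decision for one flow, based on aggregated
    flow statistics (e.g. derived from exported NetFlow data)."""
    if src_ip in blocklist:
        return Action.DROP       # previously identified attacker
    if src_ip in whitelist:
        return Action.FORWARD    # client already proved legitimacy
    if pps > pps_threshold:
        return Action.OBSERVE    # unusually heavy flow: analyze first
    if under_attack:
        return Action.CAPTCHA    # unknown client must solve a CAPTCHA
    return Action.FORWARD
```

In the real setup, each returned action would be installed as a flow rule on the SDN-capable switch, so that subsequent packets of the flow are handled in the data plane.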
For testing purposes, a local test setup was implemented as shown in the figure on the left. It consists of the local setup of the mitigation system without the NetFlow data export, which is evaluated separately. Additionally, the setup comprises a web server functioning as the attack target in test runs, one machine simulating attacks, and one machine simulating regular clients.
Zero Trust Network Management
We are working on a platform for zero trust network management as part of the project extension (bwNET100G+ Extension). The currently predominant perimeter security model fails more and more often to provide sufficient protection against attackers. We analyze to what extent the zero trust model, which is popular in some commercial networks, can also be applied to the open and heterogeneous research network of a German university or to BelWü as a whole. The concept presented herein to implement such an identity-based network model focuses in particular on the components necessary for authentication and authorization. The feasibility of the model is demonstrated by a self-implemented prototype that implements the access control components.
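The core idea, authenticating and authorizing every request based on identity rather than network location, can be sketched as follows. The HMAC-signed token, the shared SECRET, and the role-based policy are illustrative stand-ins; a real deployment would rely on an identity provider and a proper PKI:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-key"  # placeholder; in practice issued by an IdP/PKI

def issue_token(identity, roles):
    """Sign an identity assertion (simplified stand-in for an IdP token)."""
    payload = json.dumps({"sub": identity, "roles": roles}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def authorize(token, required_role):
    """Zero trust check: verify the token on *every* request, no matter
    where in the network it originates, then evaluate the role policy."""
    try:
        payload_b64, sig = token.split(".")
        payload = base64.b64decode(payload_b64)
    except ValueError:
        return False  # malformed token -> deny by default
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # unauthenticated -> deny by default
    claims = json.loads(payload)
    return required_role in claims["roles"]
```

The decisive property is deny-by-default: a request from "inside" the network gets no more trust than one from outside, which is exactly where the perimeter model differs.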