Denial of Service Countermeasures:

Intelligence Development and Analysis at the Network Node Level


George Kostopoulos

Professor of Computer Engineering

Sonali Chandel

Instructor of Computer Science

Douglas Van Wieren

Associate Professor of Computer Science

Renxiang Gu 仁翔

Network Specialist

International College, New York Institute of Technology

Nanjing University of Posts and Telecomm

Nanjing, China




This paper presents the results of ongoing research in which denial of service (DoS) countermeasures were studied and simulated. The philosophy behind this research is that artificial intelligence at the network node level can recognize and avert threats in real time. Toward this objective, a firewall algorithm was designed that examines the header of each file-requesting packet. Packet requests are classified into accounts by their destination URL, and metadata are developed based on the volume of packet requests destined for the same server, as this volume compares to the average of such requests. Based on these two parameters – volume per unit of time and average volume over several units of time – an account's activity is deemed insignificant, significant or critical from the DoS threat viewpoint, and is identified as level green, yellow or red. At the green level, packets are propagated as expected. At the yellow level, packets are propagated along with a flag advising the next network node of a potential threat. Finally, at the red level, packets are either delayed or possibly blocked, and a warning is sent to surrounding network nodes and to the targeted server's URL. Although conducted at a very small scale, the computer simulation in this research demonstrated that artificial intelligence can be successfully deployed at the network node level, and that further research refining the designed firewall algorithm is merited.





Over the past decade, cyber warfare has moved from the domain of science fiction into the nightmares of CIOs and IT managers, and has become the daily fear of all who depend in any way on the Internet; and who doesn't? In communications, at least, there has been significant relief, thanks to cryptography, which has effectively countered a wide range of threats. "Nevertheless, there are security threats (such as Denial-of-Service, DoS) which cannot be prevented using cryptographic methods." [1]. Business continuity, vis-à-vis a cyber attack, was an insignificant issue a few decades ago, while now it is an utmost concern of practically every organization. Although cyber-pharmaceuticals have provided successful protection against viruses, DoS attacks appear to remain out of control. There is a wide variety of DoS attacks, but most fall into one of the following three categories [2]:


·         Flooding the server, network or terminal devices

·         Injecting Disconnect or Connect messages

·         Sending forged error messages


That is, DoS attacks aim at incapacitating or saturating one or more of the target's resources -bandwidth, computing power or storage. The research described in this paper addresses the first of the above three categories, which also happens to be the most common.


Modus Operandi


Over time, DoS attackers create botnets[1] under their control and program them so that they can later be used against a targeted server. Botnets are made of "...handler machines, each of which controls the agents (groups of Internet accessing computers) that actually carry out the (pre-programmed and often prescheduled) attack." [3]. Such attacks are referred to as Distributed Denial of Service, DDoS. Figure 1 illustrates the typical model of a DDoS topology.


The Internet was not designed with cyber criminals in mind, and is totally vulnerable to sophisticated attacks, to the point where "…even a single attacker is easily able to achieve a complete DoS." [4].


DoS attacks are not limited to hosted commercial websites or corporate databases; they also target Domain Name System (DNS)[2] servers. An attack on a DNS server may render thousands of websites inaccessible.


Figure 1. Basic Topology of a Distributed Denial of Service.



The time is overdue for the design and implementation of a comprehensive Internet SCADA[3] system that monitors network traffic for unusual and potentially harmful activities. Surprisingly, neither is such a system in place, nor is there any "…comprehensive method to protect against all known forms of DDoS attacks." [5].

Significant research activity has been in progress warning of the need for an Internet SCADA [6,7,8]. Such a system, however, has to be of well-defined scope, measurable effectiveness, controllable complexity and practical scalability, focusing on early DoS detection.


In DoS countermeasures, "…header analysis is the foremost method used in (the) detection…" [9] of potential DoS attacks. This analysis aims at identifying an origin-to-destination relationship using a variety of packet traceback[4] techniques [10,11,12].


DoS countermeasures start with the detection of a potential attack, proceed with an assessment of the attack’s potential, and end with the necessary actions that need to be taken. Such actions include delay or rejection of suspected packets, and forward and backward notifications of the pending DoS attack. 


The Proposed Concept


The Internet consists of two basic parts: the servers/databases that host the data/information, and the network, a global grid of routers that facilitates communication between Internet clients and Internet servers. The concept pursued in this study is that:


If artificial intelligence is embedded in the Internet routers, the routers, collectively, can create an Internet SCADA able to detect and prevent potential DoS attacks.


Based on statistical observations, the artificial intelligence, embedded in the routers, will be able to monitor packet traffic and dynamically establish criteria of innocence and criteria of guilt for large numbers of packets destined to the same URL destination.

In the router, the received packets will be mapped onto the statistical traffic of the immediate past. Based on the volume of similar packets in a sliding time-window, and on the density of arrivals, the artificial intelligence will decide on the integrity of any given packet.
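A minimal sketch of such per-destination monitoring, assuming a millisecond-granularity sliding window (the class, method names and parameters here are illustrative, not taken from the paper's source code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of per-destination packet counting: each arriving
// packet's destination URL is recorded with its arrival time, and only
// arrivals inside the sliding time-window are counted.
public class SlidingWindowCounter {
    private final long windowMillis;
    private final Map<String, Deque<Long>> arrivals = new HashMap<>();

    SlidingWindowCounter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Records one packet for a destination and returns the number of
    // packets seen for that destination within the current window.
    int record(String destinationUrl, long nowMillis) {
        Deque<Long> times =
            arrivals.computeIfAbsent(destinationUrl, k -> new ArrayDeque<>());
        times.addLast(nowMillis);
        // Expire arrivals that have fallen outside the window.
        while (!times.isEmpty() && nowMillis - times.peekFirst() > windowMillis) {
            times.removeFirst();
        }
        return times.size();
    }
}
```

The returned count is what would be compared against the historical average for that destination when judging a packet.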


The decision on a packet's integrity falls into one of the four cases illustrated in Table 1. While letting one malicious packet through may not be catastrophic, blocking a bona fide packet can be very undesirable, especially in e-commerce. In such cases, delaying a suspect packet may be preferable to blocking it.



Packet Classification (Packet Intent vs. Artificial Intelligence Perception)

                            Perceived Bona Fide              Perceived Malicious
Intent: Bona Fide           Packet Passed                    Packet Blocked (false alarm)
Intent: Malicious           Packet Passed (threat missed)    Packet Blocked

Table 1. Perception of Packet's Integrity.


The Algorithm


The developed algorithm, shown in Fig. 2, was programmed in Java and driven by a random number generator. The random numbers represented the destination URLs of the packets arriving at the router. To demonstrate the principle of this research, the algorithm assumed only 100 unique URL destinations. For an assumed unit of time, 2,000 random destination requests were generated, all corresponding to the 100 website URLs. A typical histogram – snapshot – of 2,000 requests in a unit of time is shown in Fig. 3.
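As a rough sketch of the simulation driver just described (the class and method names are ours, not reproduced from the paper's Java source), a snapshot of 2,000 random requests over the 100 assumed URL destinations can be generated and tallied like this:

```java
import java.util.Random;

// Sketch of the snapshot generator: 2,000 random destination requests
// are drawn over 100 assumed URL destinations and tallied into a
// histogram, one count per URL.
public class SnapshotGenerator {
    static final int URL_COUNT = 100;
    static final int REQUESTS_PER_SNAPSHOT = 2000;

    // Returns counts[i] = number of requests destined for URL i
    // during one unit of time.
    static int[] generateSnapshot(Random rng) {
        int[] counts = new int[URL_COUNT];
        for (int r = 0; r < REQUESTS_PER_SNAPSHOT; r++) {
            counts[rng.nextInt(URL_COUNT)]++;  // pick a random destination URL
        }
        return counts;
    }
}
```

Each such array corresponds to one histogram of the kind shown in Fig. 3.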

Figure 2. Simplified Flow Chart of the Developed Algorithm.

Figure 3. A Typical Histogram of a Snapshot of 100 URL Destinations.

The averaging of these snapshots over time, and in a rolling fashion, provides a dynamic average that represents the real-time demand for the 100 URLs. A database was created holding the momentary average requests for each URL destination.
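The rolling database of averages might be sketched as follows (an illustration under our own naming; the paper's actual code is not reproduced here). For each of the 100 URLs, the average request count over the most recent snapshots is maintained:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the averages database: the most recent
// snapshots are kept in a queue, and each URL's dynamic average is
// computed over that rolling window.
public class AverageDatabase {
    static final int URL_COUNT = 100;
    static final int WINDOW = 6;  // snapshots retained in the rolling window
    private final Deque<int[]> recent = new ArrayDeque<>();

    // Adds a snapshot, discarding the oldest once more than six are held.
    void addSnapshot(int[] counts) {
        recent.addLast(counts.clone());
        if (recent.size() > WINDOW) recent.removeFirst();
    }

    // Dynamic average of requests for one URL over the stored snapshots.
    double averageFor(int url) {
        if (recent.isEmpty()) return 0.0;
        long sum = 0;
        for (int[] snap : recent) sum += snap[url];
        return (double) sum / recent.size();
    }
}
```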


The developed algorithm uses these averages as a criterion in assessing the possible threat level in each snapshot.


The algorithm classifies each URL's demand level in the snapshots, and should the demand be disproportionately higher than the respective average, appropriate action is taken. A question arises as to what constitutes disproportionate. For the purpose of this research, more than twice the average is classified as a significant increase – yellow level – and more than three times the average as a critical increase – red level. The action at the yellow level may be notification of the neighbouring routers, while that at the red level may go a step further, delaying the forwarding of the requests or even blocking them.
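The green/yellow/red grading rule just described can be captured in a few lines (a minimal sketch; the class and enum names are ours, not the paper's):

```java
// Grading rule from the text: demand over twice the rolling average is
// a significant increase (yellow); over three times is critical (red).
public class ThreatClassifier {
    enum Level { GREEN, YELLOW, RED }

    // requests: demand for one URL in the current snapshot;
    // average:  that URL's rolling average over previous snapshots.
    static Level classify(int requests, double average) {
        if (average > 0 && requests > 3 * average) return Level.RED;
        if (average > 0 && requests > 2 * average) return Level.YELLOW;
        return Level.GREEN;
    }
}
```

For example, with an average of 20 requests per snapshot, a demand of 50 would grade yellow and a demand of 70 would grade red.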



Testing of the Algorithm


To test the algorithm, a seed database of averages was created, presumably representing the past six 10-second snapshots. After the start, the database holds the average of the past six snapshots. The traffic of the websites was assumed to fall into four groups. Fig. 4 illustrates the seed database of averages used in testing the algorithm.


The random numbers of Figure 3 were normalized to the distribution of Figure 4 creating the snapshot, shown in Figure 5. This is the initial snapshot of the testing process.


The algorithm ran ten times, iteratively performing the following operations:

- A new snapshot is received, representing the number of requests to each of the presumed 100 URLs. The numbers in the snapshots were all random.

- Each value is compared to the respective one in the averages database.

- A judgement is attached to each value in the snapshot as to the presence of a potential DoS threat.

- The database of averages is updated, taking into account the preceding six snapshots.

Figure 4. Seed Database of Averages, DA0, of Destination URLs.
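One iteration of the loop above can be sketched end to end as follows (all names are ours; the incremental update at the bottom is a simple stand-in for the six-snapshot rolling mean kept by the full algorithm):

```java
import java.util.Random;

// End-to-end sketch of one iteration: a random snapshot is received,
// each URL is graded against its average, and the averages are updated.
public class IterationStep {
    static final int URLS = 100, REQUESTS = 2000, WINDOW = 6;

    // Returns a grade per URL: 0 = green, 1 = yellow, 2 = red.
    // avg is updated in place with an incremental moving-average step.
    static int[] runOnce(Random rng, double[] avg) {
        int[] snap = new int[URLS];
        for (int r = 0; r < REQUESTS; r++) snap[rng.nextInt(URLS)]++;
        int[] levels = new int[URLS];
        for (int u = 0; u < URLS; u++) {
            if (avg[u] > 0 && snap[u] > 3 * avg[u]) levels[u] = 2;       // critical
            else if (avg[u] > 0 && snap[u] > 2 * avg[u]) levels[u] = 1;  // significant
            avg[u] += (snap[u] - avg[u]) / WINDOW;  // move average toward new value
        }
        return levels;
    }
}
```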


A test of the algorithm indicated that the average traffic of each URL destination was indeed adjusting dynamically. Table 2 shows the values of the continuously computed average for URL #82.




Previous Average        New Average

Table 2. Impact of New Values on Averages.


After the above iterations, the algorithm was tested again, this time with four DoS attacks introduced. Figure 6 illustrates part of a snapshot with DoS attacks.


The algorithm also computed the requests-to-average ratio, thus alerting the monitoring software or operator. Fig. 7 illustrates a snapshot normalized to the respective averages, showing the level of the potential threat.

Figure 7. Snapshot Normalized to the Respective Average, Showing the Level of the Potential Threat.



Figure 6. Partial Snapshot of Router's Traffic with Two Possible DoS Attacks Present.























The research described above has demonstrated the feasibility of building a SCADA system that can oversee the traffic passing through a router. Although the modelling performed was small in scale, the objective was met; namely, the creation of artificial intelligence that dynamically recognizes the traffic as it fluctuates over time, and that can discern aberrations from the normal that could imply a possible DoS attack. The next step in this research is to integrate this algorithm into a router's firewall and evaluate its online performance.
































References (The availability of the referenced URLs was confirmed on May 25, 2010)



[1] Granzer, Wolfgang, “Denial-of-Service in Automation Systems” (p.468)


[2] Schmidt, Andreas C., “Securing VoIP Networks using graded Protection Levels” (p.3)


[3] Karig, David and Lee, Ruby, “Remote Denial of Service Attacks and Countermeasures” (p.13)


[4] Zhou, Xing et al, “Evaluation of Attack Countermeasures to Improve the DoS Robustness of RSerPool Systems by Simulations and Measurements”  (p.11)


[5] Specht, Stephen M. and Lee, Ruby B., "Distributed Denial of Service: Taxonomies of Attacks, Tools and Countermeasures" (p.5)


[6] Daniels, Thomas E. and Spafford, Eugene H., "Network Traffic Tracking Systems: Folly in the Large?", Proceedings of the 2000 Workshop on New Security Paradigms, Feb. 2001


[7] Krugel, C., "Network Alertness: Towards an Adaptive, Collaborative Intrusion Detection System", PhD dissertation, Vienna University of Technology, 2002


[8] Wood, A. D. and Stankovic, J. A., "Denial of Service in Sensor Networks", IEEE Computer, 35(10):54-62, 2002


[9]  Xiong, Jin, “Analysis of Four Distributed Denial of Service Counter-measures” (p.1)


[10] Savage, S., et al., "Practical Network Support for IP Traceback", Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), pp. 295-306, August 2000


[11] Chao Gong, “Single Packet IP Traceback in AS-level Partial Deployment Scenario”


[12] Rohit Jain, "IP Traceback to Prevent Denial of Service Attack"




[1] Botnet (Robot Network). This term refers to groups of network resources that have been hijacked by a cyber criminal. The criminal's objective is to use them in a concerted attack against a targeted server, aiming at the server's saturation and consequent inability to respond to bona fide users.

[2] Domain Name System (DNS). An array of servers located throughout the Internet that convert domain names to numerical IP addresses.

[3]   SCADA (Supervisory Control And Data Acquisition). An information system that monitors and controls a critical process.

[4] Traceback. The process of attempting to determine the true origin of a DoS attack by soliciting the cooperation of all prior routers. A variety of reasons make such cooperation impractical.