In case of failure of a node or link along one of the streams, the affected traffic must be recovered. In the loopback approach, in case of a failure, a single stream 29 a is rerouted onto a backup channel 29 b. For any node- or edge-redundant graph, there exists a pair of node- or edge-disjoint paths between any two nodes that can be used for automatic protection switching (APS). Automatic protection switching over arbitrary redundant networks need not restrict itself to two paths between every pair of nodes, but can instead be performed with trees, which are more bandwidth efficient for multicast traffic.
For loopback protection, most of the schemes have relied on interconnection of rings or on finding ring covers in networks. Loopback can also be performed on arbitrary redundant networks. Referring briefly to FIG. , assume for purposes of illustration that node j is the attack source. If both nodes j and k are considered as individual failures by a network management system, then loopback will be performed to bypass both nodes j and k in a ring.
Thus, all traffic which passed through both nodes j and k will be disrupted, as indicated by path 31 in FIG. Traffic which traversed node j from node i is backhauled through node j.
Thus, by correctly localizing the source of an attack, the amount of traffic which is lost can be reduced. Briefly, and in general overview, work in the area of fault localization in current data networks can be summarized and categorized as three different sets of fault diagnosis frameworks: (1) fault diagnosis for computing networks; (2) probabilistic fault diagnosis by alarm correlation; and (3) fault diagnosis methods specific to all-optical networks (AONs). The fault diagnosis framework for computing networks covers those cases in which units communicate with subsets of other units for testing.
In this approach, each unit is permanently either faulty or operational. A test performed by one unit on another to determine whether the tested unit is faulty or operational is reliable only if the testing unit is operational. Necessary and sufficient conditions on the testing structure for establishing each unit as faulty or operational, as long as the total number of faulty elements is under some bound, are known in the art.
Polynomial-time algorithms for identifying faults in diagnosable systems have been used. Instead of being able to determine exactly the faulty units, another approach has been to determine the most likely fault set. All of the above techniques have several drawbacks. First, they require each unit to be fixed as either faulty or operational. Hence, sporadic attacks which may only temporarily disable a unit cannot be handled by the above approaches. Thus, the techniques are not robust.
Second, the techniques require tests to be carefully designed and sequentially applied. Moreover, the number of tests required rises with the possible number of faults. Thus, it is relatively difficult to scale the techniques. Third, the tests do not establish any type of causality among failures and thus the tests cannot establish the source of an attack by observing other attacks.
The techniques, therefore, do not allow network nodes to operate with only local information. Fourth, fault diagnosis by many successive test experiments may not be rapid enough to perform automatic recovery. The probabilistic fault diagnosis approaches for performing fault localization in networks typically utilize a Bayesian analysis of alarms in networks. In this approach, alarms from different network nodes are collected centrally and analyzed to determine the most probable failure scenario.
Unlike the fault diagnosis for computing networks techniques, the Bayesian analysis techniques can be used to discover the source(s) of attacks, thus enabling automatic recovery. Moreover, the Bayesian analysis techniques can analyze a wide range of time-varying attacks and thus these techniques are relatively robust. All of the above results, however, assume some degree of centralized processing of alarms, usually at the network and subnetwork level. Thus, one problem with this technique is that an increase in the size of the network leads to a concomitant increase in the time and complexity of the processing required to perform fault localization.
Another problem with the Bayesian analysis techniques is that there are delays involved with propagation of the messages to the processing locations. In networks having a relatively small number of processing locations, the delays are relatively small. In networks having a relatively large number of processing locations, however, the delays may be relatively long and thus the Bayesian analysis techniques may be relatively slow.
Thus the Bayesian analysis techniques may not scale well as network data rates increase or as the size of the network increases. If either the data rate or the span of the network increases, there is a growth in the latency of the network. The combined increase in processing delay and in latency implies that many bits may be beyond the reach of corrective measures by the time attacks are detected.
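To make the scaling concern concrete, a back-of-the-envelope sketch is useful; the data rate, detection delay, and propagation delay used below are illustrative assumptions, not values taken from this description:

# Illustrative only: all numbers are assumptions for the sake of the example.
data_rate_bps = 10e9        # assumed 10 Gb/s channel
detection_delay_s = 1e-3    # assumed time to collect and centrally correlate alarms
propagation_delay_s = 5e-3  # assumed latency over a long-haul span

exposed_seconds = detection_delay_s + propagation_delay_s
exposed_bits = data_rate_bps * exposed_seconds
print(f"bits beyond the reach of corrective measures: {exposed_bits:.1e}")  # 6.0e+07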
Therefore, an increase in network span and data rate would lead to an exacerbation of the problem of insufficiently rapid detection. For AONs, fault diagnosis and related network management issues have been considered. Some of the management issues for other high-speed electro-optic networks are also applicable.
The problem of spreading of fault alarms, which exists for several types of communication networks, is exacerbated in AONs by the fact that signals flow through AONs without being processed. To address faults due only to fiber failure, only the nodes adjacent to the failed fiber need to find out about the failure, and a node need only switch from one fiber to another. For failures which occur in a chain of in-line repeaters which do not have the capability to switch from one fiber to another, one approach is that, when a failure occurs, the alarm due to the failure is generated by the in-line repeater immediately after the link failure.
The failure alarm then travels down to a node which can perform failure diagnostics. The failure alarms generated downstream of the first failure are masked by using upstream precedence. Failure localization can then be accomplished by having the node capable of diagnostics send messages over a supervisory channel towards the source of the failure until the failure is localized and an alarm is generated at the first repeater after the failure.
These techniques require diagnostic operations to be performed by remote nodes and to have two-way communications between nodes. It would, therefore, be desirable to provide a technique for stopping an attack on a signal channel by a nefarious user which does not result in service degradation or denial.
It would also be desirable to provide a technique for localizing an attack on a network. It would further be desirable to provide a relatively robust, scalable technique which localizes rapidly the source of an attack in a network and allows rapid, automatic recovery in the network.
In accordance with the present invention, a distributed method for performing attack localization in a network having a plurality of nodes includes the steps of: (a) determining, at each of the plurality of nodes in the network, if there is an attack on the node; (b) transmitting one or more messages using local communication between first and second nodes, wherein a first one of the nodes is upstream from a second one of the nodes and wherein each of the one or more messages indicates that the node transmitting the message detected an attack at the message-transmitting node; and (c) processing messages received in a message-processing one of the first and second nodes to determine if the message-processing node is the first node to sustain an attack on a certain channel.
With this particular arrangement, a technique for finding the origin of an attacking signal is provided. By processing node status information at each node in the network and generating responses based on the node status information and the messages received by the node, the technique can be used to determine whether an attack is caused by network traffic or by failure of a network element or component.
In this manner, an attack on the network can be localized. By localizing the attack, the network maintains quality of service. Furthermore, while the technique of the present invention is particularly useful for localization of propagating attacks, the technique will also localize component failures which can be viewed as non-propagating attacks.
The technique can be applied to perform loopback restoration as well as automatic protection switching (APS). Thus, the technique provides a means for utilizing attack localization with a loopback recovery technique or an APS technique to avoid unnecessary service denial. The nodes include a response processor which processes incoming messages and local node status information to determine the response of the node. The particular response of each node depends upon a variety of factors including but not limited to the particular type of network and the particular type of recovery scheme (e.g., loopback or APS).
The foregoing features of the invention, as well as the invention itself, may be more fully understood from the following detailed description of the drawings. Before describing the apparatus and processes for performing fault isolation in communication networks, some introductory concepts and terminology are explained.
Thus, the networks may be used for communications systems, data transmission systems, information systems or power systems. In one embodiment, the network may be provided as an internet. The resources may be provided as optical signals such as power signals, information signals, etc.
A source node refers to a point of origin of a message and a destination node refers to an intended point of receipt of a message. Thus messages are transmitted between nodes on channels. An attack is a process which affects signal channels having signal paths or routes which share devices with a nefarious user's channel.
The same process can also pinpoint other nodes in the network which may experience a failure due to an attack but which are not the source of the attack. It should be noted that the techniques of the present invention have applicability to a wide variety of different types of networks and are advantageously used in those applications which provide relatively high-speed optical communications.
Thus, the techniques described herein find applicability in any network having a need for rapid service restoration. Each of the nodes 42 processes a predetermined number of communication channels.
Each of the nodes 42 includes a response processor 43 which processes incoming messages to the node (InMessages) and local node status information to determine the response of the node 42 which receives the incoming messages.
Each of the channels may terminate or originate at certain nodes 42 and each channel has a specific direction. Thus, with respect to a particular channel, nodes can be referred to as being upstream or downstream of one another. For example, in one communication channel the node N 1 is upstream of the node N 2 and the node N 1 is downstream of the node N 6.
In another communication channel, however, the node N 1 may be downstream of the node N 2 and the node N 1 may be upstream of the node N 6. Each node 42 is able to detect and recognize attacks being levied against it, receive and process messages arriving at it, and generate and transmit messages to nodes which are upstream or downstream of it on certain channels.
It should be noted that for the purposes of the present invention, a node may correspond to a single network component. Alternatively a single network component may be represented as more than one node. For example, a switch may be represented as several nodes, one node for each switching plane of the switch.
Likewise, in some applications it may be advantageous to represent a multichannel amplifier as a single node while in other applications it may be advantageous to represent the multichannel amplifier as multiple nodes. Alternatively still, a cascade of in-line amplifiers may be modeled as a single node because they have a single input and a single output.
After reading the techniques described herein, those of ordinary skill in the art will appreciate how to advantageously represent particular network components and when to represent multiple components as a single node and when to represent a single component as a network node.
In making such a determination, a variety of factors are considered including but not limited to the ability of a node or network element or component to detect a failure.
This depends, at least in part, on where the processing capability exists within a network. Depending upon the particular application, other factors may also be considered. Each of the nodes 42 has one or more inputs I ij and outputs O ij with corresponding directed connections denoted as (i, j) when the connection is made from node i to node j by a link. An undirected connection between nodes i and j is denoted herein as [i, j].
The notation T 12 indicates the time required to transmit a message on a channel between nodes 1 and 2 on which channel information flows in a direction from node 1 to node 2. Those of ordinary skill in the art will appreciate, of course, that in practical networks many of the nodes will have multiple inputs and outputs. Network 40 and the networks referred to and described herein below are assumed to be acyclic.
In general overview, the network 40 operates in accordance with the present invention in the following manner. Distributed processing occurs in the nodes 42 to provide a technique which can rapidly ascertain which one or ones of the nodes 42 are sources of an attack. It should be noted that the nodes 42 include some processing capability including means for detection of failures. The ability to provide the nodes with such processing capability is within the skill of one of ordinary skill in the art.
Thus, the nodes 42 can detect failures with satisfactory false positive and false negative probabilities. The ability to localize attacks in the network is provided in combination by the distributed processing which takes place in the network. The techniques of the present invention for attack localization are, therefore, distributed and use local communication between nodes up- and down-stream.
Each node 42 in the network 40 determines if it detects an attack. It then processes messages from neighboring nodes 42 to determine if the attack was passed to it or if it is the first node to sustain an attack on a certain channel. The first node affected by an attack is referred to as the source of the attack, even though the attack may have been launched elsewhere.
The global success of localizing the attack depends upon correct message passing and processing at the local nodes. In describing the processing which takes place at particular nodes, it is useful to define some terms related to the timing of such processing. Time delays for processing and transmission of messages at each of the nodes 42 are denoted as follows. In some instances described herein below, the time delays at all nodes are identical and thus the measurement and processing times are denoted as T meas and T proc without subscripts.
One concept included in the present invention is the recognition that, in order for a node to determine whether or not it is the source of an attack, it need only know whether a node upstream of it also had the same type of attack. For example, suppose that node 1 is upstream of node 2 on a certain channel 48 a which is ascertained as being an attacking channel and that both node 1 and node 2 ascertain that the attacking channel is channel 48 a.
Suppose further that both nodes 1 and 2 have processing times T meas and T proc. If node 1 transmits to node 2 its finding that the channel 48 a is nefarious, then the interval between the time when the attack hits node 2 and the time when node 2 receives notice from node 1 that the attack also hit node 1 is at most T meas, since the attack and the message concerning the attack travel together.
Moreover, the detection of the attack commences at node 2 as soon as the attack hits. It should be noted that this time is independent of the delay in the communications between nodes 1 and 2 because the attack and the message concerning the attack travel together, separated by a fixed delay. To illustrate the technique, it is useful to consider a relatively simple attack localization problem.
In this network, a node can have a status of either 1 (alarm) or O.K. Nodes monitor messages received from nodes upstream. Let the message be the status of the node. When an attack occurs in this network, the goal of the techniques set forth in accordance with the present invention is that the node under attack respond with an alarm and all other nodes respond with O.K.
During the processing, once an attack is detected at a node (node 2 in network 40, for example), node 2 initiates processing to ascertain the source of the attack by transmitting its own node status to other nodes and receiving the status of other nodes via messages transmitted to node 2 from the other nodes. It should be noted that the nodes from which node 2 receives messages may be either upstream or downstream nodes. In response to each of the messages received by node 2 which meets a predetermined criterion, node 2 provides a response.
It should be noted that in some embodiments the response can be to ignore messages. Similarly, each of the nodes 42 in network 40 receives messages and, in response to particular ones of the messages, provides information related to the identity of the source of the attack. The particular response messages will vary in accordance with a variety of factors including but not limited to the particular network application and side effects such as loopback, re-routing and disabling of messages.
In performing such processing, each of the nodes 42 receives and stores information related to the other nodes in the network. Thus, the processing to localize the attack source is distributed throughout the network. With the above distributed approach, if node 2 is downstream from node 1 and node 2 detects a crosstalk jamming attack on the first channel and node 2 has information indicating that node 1 also had a crosstalk jamming attack on a second, different channel, node 2 can allow node 1 to disconnect the channel subject to the attack.
Once node 1 disconnects the channel subject to the attack, the channel subject to the attack at node 2 ceases to appear as an offending channel at node 2. If node 2 did not have information from node 1 indicating that the channel at node 1 was subject to attack at node 1 then node 2 infers that the attacker is attacking node 2 on the channel on which it detected the attack.
Node 2 then disconnects the channel. It should be appreciated that node 2 sees no difference between the cases where channel 1 is the attacker at node 1 and where channel 2 is the attacker at node 2. In both cases, channel 2 appears as the attacker at node 2. Thus, by using knowledge from the operation of node 1 upstream of node 2 , node 2 can deduce whether the attack originated with channel 1 or channel 2 thereby avoiding the result of erroneously disconnecting a channel which is not the source of an attack.
Thus, the technique of the present invention allows the network to recover properly from attacks by identifying attacks carried out by network traffic and localizing those attacks. As mentioned above, each of the nodes 42 can detect an attack or fault within acceptable error levels. The type of faults detected are included in a set of fault types denoted F stored within a node storage device.
One of the fault types in F is always a status corresponding to a no-fault status, meaning that the node has not detected a fault. F is the set of all faults to which the status must belong. Messages can be sent upstream or downstream in the network. The upstream message from node j to node i at time t is denoted M t.
For particular network applications the information encoded in messages varies but typically includes the node status information. Generally, however, the message can include any information useful for processing. It is, however, generally preferred that messages remain relatively small for fast transmission and processing.
That is, for each application a particular maximum message length is defined, and each message should not exceed that length. The particular message length in any application is selected in accordance with a variety of factors including but not limited to the type of encoding in the message. Moreover, the number and lengths of messages should be independent of network size.
If messages whose size grows with the size of the network were utilized, the scalability characteristic of the invention would be lost because of the long processing times which would result. A response function, R, denotes processing of incoming messages and local status information to determine the response of the node which received the incoming messages.
The response function R will be discussed further below in the context of particular techniques implemented in accordance with the present invention. In accordance with the invention, it has been recognized that it is necessary to explicitly take into account the time taken by the different processes involved in the identification and localization of attacks.
The identification of an attack requires time for detection of the input and output signals and processing of the results of that detection. All the time required by these processes executed in sequence is referred to as the measurement time at the node. Thus, the measurement time at node i is denoted as T i meas. Messages from node i to node j take at most time T ij to transmit. Message transmission follows the transmission of the data itself, and does not usually add to the overall time of localizing the attack.
We denote the time required by this last set of events as T i proc. Thus, in accordance with the present invention, a network or network management system provides techniques for: (a) localization of the source of an attack to enable automatic recovery; (b) relatively fast operation (implying near-constant operational complexity); (c) scalability, meaning that the delay must not increase with the size and span of the network; and (d) robustness, meaning valid operation under any attack scenario, including sporadic attacks.
The rectangular elements (typified by element 50 in FIG. ), herein denoted processing blocks, represent processing steps, while the diamond shaped elements (typified by element 54 in FIG. ), herein denoted decision blocks, represent decision steps.
Alternatively, the processing and decision blocks represent steps performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming or design language.
Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown.
It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention. That is, unless otherwise noted or obvious from the context, it is not necessary to perform particular steps in the particular order in which they are presented hereinbelow. Turning now to FIG.
Processing begins with Step 50 in which the status of a node N 1 is computed at a time t. Processing then proceeds to Step 52 where the node transmits a message including the node status information to nodes downstream.
As shown in decision block 54, if the node status is not an alarm status, then processing ends. If in decision block 54 a decision is made that the status is an alarm status, then processing flows to decision block 56 where the node determines if any alarm messages have arrived at the node in a pre-determined time interval.
The predetermined period of time thus corresponds to T meas. If the node N 1 has not received any alarm messages in the pre-determined time interval, then processing flows to Step 58 where the node's status is set as alarm. If, on the other hand, the node received an alarm message within the pre-determined time interval, then processing flows to processing block 60 where the node status is set as okay.
Processing then ends. From the above processing steps it can be seen that no node will generate an alarm until at least one attack is detected. When an attack occurs only the first node experiencing the attack will respond with an alarm.
All nodes downstream from the first node receive messages which indicate that the node upstream experienced an attack. Thus, nodes downstream from the attack will respond with O.K. This network response achieves the goal of attack localization. While computing the node status, any faults in the node can be ascertained and reflected in the status.
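As a minimal sketch of this per-node rule (the function and argument names below are hypothetical, chosen only to restate the logic of the flow diagram):

def node_status(attack_detected_here, upstream_alarm_within_t_meas):
    # A node raises an alarm only when it detects an attack itself and no
    # upstream node has reported the same attack within T meas; otherwise it
    # reports O.K., since the attack was passed to it from upstream.
    if not attack_detected_here:
        return "OK"
    if upstream_alarm_within_t_meas:
        return "OK"      # attack originated upstream; the upstream node alarms
    return "ALARM"       # this node is the first to sustain the attack

For example, a node that detects an attack but has already heard an upstream alarm within the interval returns O.K., which is exactly the network response described above.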
Processing then flows to Step 64 where a response function R to be included in a message M is computed. The response function R is computed from the node status S i. The node response is determined by the response function R which processes the node status information S i without regard to incoming messages.
Processing then flows to processing block 66 where messages M which include the response function R are transmitted on arcs leaving the node. In a preferred embodiment, the messages are transmitted on all arcs leaving the node. Processing then flows to processing Step 68 where the node collects messages arriving at the node within a pre-determined time interval.
The wait times T wait1 , T wait2 are selected to result in each node having equal final processing times which can be equal to the maximum time required by any of the response functions. Processing then flows to Step 70 where responses for inclusion in messages M are computed in accordance with a pre-determined response function R selected in accordance with the node status and the messages received within the predetermined time interval.
Additional action can be taken by the node, such as switching the direction of communication. Resulting messages are then transmitted on arcs leaving the node. In a preferred embodiment the messages are transmitted on all arcs which leave the node.
The general processing technique is similar to the simple example of attack localization discussed above in conjunction with FIG. The goal of the algorithm can vary for different network examples. For example, the goal may be to raise an alarm as in the process of FIG.
A more complex goal may be to reroute traffic at the nodes immediately before and after the attacked node in the network. The techniques of the present invention are general enough to be suitable for a wide range of network goals. It should be noted that the particular processing steps performed in the nodes, such as the set of faults, the format of the messages and the node response to input messages, are defined for the particular network application and that one of ordinary skill in the art will appreciate how to provide nodes having the necessary capabilities in a particular application.
The above technique thus ascertains the fault type and transmits it to adjacent nodes in the network. It then monitors incoming messages for a specified bounded time interval and responds to these messages. The response of the network is particular to the particular network application. To achieve a particular network application, a fault set F must be defined, the waiting time interval for messages (i.e., T wait1 and T wait2) must be defined, the format of messages must be defined, the response function R must be defined and the mode of message passing must be defined. The node can remove messages it receives from the message stream or pass all messages in the message stream. The response function R is responsible for achieving the data transmission and security goals of the network.
R is a function whose domain is the cross product of the set of node statuses and the set of message lists and whose range is the set of message lists. This may be expressed in mathematical form as R: Status x MessageList -> MessageList, where MessageList corresponds to the set of message lists (0 or more messages in whatever format is being used). The response function R is preferably selected to be very fast to compute in order to provide a relatively rapid technique.
Ideally, the response function R should be provided from one or more compare operations and table lookup operations performed within a network node. With this approach, any delay in identifying faults and attacks is relatively short and the network provides minimal data loss. As mentioned above, messages can move upstream or downstream in the network. The response function receives all the messages at a node as input. It processes these messages to generate the messages for transmission from the node.
The response function generates messages which the node transmits up- and down-stream. As will be discussed below, the response function R can be defined to handle a variety of different network recovery applications including but not limited to loopback recovery and automatic protection switching APS recovery.
In addition, the response function R may have a side effect response, such as raising an alarm or re-routing traffic at a node.
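A sketch of such a response function, realized as a simple table lookup with an optional side effect; the statuses, message format, and table contents are placeholders for illustration, not the formats of any particular embodiment:

from typing import Callable, Dict, List, Tuple

Status = str                   # an element of the fault set F, e.g. "OK" or "FAULT"
MessageList = Tuple[str, ...]  # zero or more messages; the real format is application-defined

def make_response_function(
    table: Dict[Tuple[Status, MessageList], List[str]],
    side_effect: Callable[[Status, List[str]], None] = lambda s, out: None,
) -> Callable[[Status, MessageList], List[str]]:
    # R : Status x MessageList -> MessageList, implemented as one table lookup.
    # side_effect models actions such as raising an alarm or re-routing traffic.
    def R(status: Status, in_messages: MessageList) -> List[str]:
        out = table.get((status, in_messages), [])
        side_effect(status, out)
        return out   # messages to be transmitted up- and down-stream
    return R

Keeping R to a lookup plus a comparison, as suggested above, keeps the per-node response time small and independent of network size.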
Each node, i, in a network can have a different response function denoted as R i. The use of different response functions, with varying processing times, may, however, result in race conditions in the network. In general, timing problems due to different response functions can be avoided by forcing all response functions in a network to operate in the same amount of time.
Thus in one approach, the processing time is set to be the maximum time required by any of the response functions. Moreover, a wait time can be added to each response function such that its final processing time is equal to the maximum time. It should be noted that the response function R may return no message, or the empty set, in which case no messages are transmitted.
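The padding rule described above can be written in one line; the names are illustrative:

def wait_time(t_proc_this_node, t_proc_all_nodes):
    # Pad each node so every response function finishes in the same (maximum) time.
    return max(t_proc_all_nodes) - t_proc_this_node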
With reference to FIG. , recall that, for this problem, the nodes have two fault types, no fault and fault (i.e., F contains only these two types), and messages from any node encode the status of the node. The goal for node i is to determine whether it is the source of the attack or if the attack is being carried by the data from a node upstream.
Each node in the network repeats the processing steps shown in FIG. Also, the message passing parameter is set to remove all messages received.
The response function R may be expressed as described below in conjunction with FIG. If a fault is recognized, then the node status will reflect this.
Processing then flows to decision block 76 where a decision is made as to whether the response is to be based on this node's status only or on the status of this node together with messages received from the upstream node within a predetermined period of time. If processing the node status only, then processing flows to step 77 where the node status is returned and the processing ends. If processing received messages, then processing flows to decision block 78 where a decision is made as to whether the node status is a fault or a no-fault status.
If in decision block 78 the node status received in step 74 is not a fault status, then processing flows to decision block 79 where a decision is made as to whether a message received from an upstream node j at the node i is an alarm message. If a decision is made that this is an alarm message, then processing flows to Step 82 where the node returns a node status value of 1.
If a decision is made that the message received from the node j is not an alarm message, then processing flows to Step 80 where the node returns a value of 0 and processing ends. In localizing the attack, it is useful to look at the dynamics between two nodes and the connection between them. Each node monitors every connection into it. In one relatively simple example, a connection between nodes i and j, with the data flowing from i to j, is examined.
At this time node j has detected an attack or it has detected no attack. At this time node j has information indicating whether or not node i detected an attack, and node j has enough information to determine whether or not it is the source of the attack.
An exhaustive enumeration of the possible timing constraints involving T i meas, T j meas, T j proc, T ij and a length of the attack L shows that, owing to the technique of the present invention, node j is never in the wrong state, where the state is given with a delay.
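One such constraint can be checked directly; the numeric values below are assumed purely for illustration:

# Illustrative timing check (all values assumed, in milliseconds).
T_i_meas, T_ij = 0.5, 2.0            # detection time at node i; link delay i -> j
t_attack_at_i = 0.0

t_attack_at_j = t_attack_at_i + T_ij                # the attack travels with the data
t_notice_at_j = t_attack_at_i + T_i_meas + T_ij     # the message lags by the detection time

gap = t_notice_at_j - t_attack_at_j
assert gap <= T_i_meas        # the gap never exceeds T i meas, independent of T ij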
In this scenario, an attack is carried by a signal but the attack may not be detectable in some nodes. In this particular embodiment, consideration is given to a specific attack scenario due to crosstalk. This scenario should be distinguished from a scenario in which it is assumed that, in the case of an attack which is carried by the signal, all the nodes through which the signal is transmitted will be affected by the attack.
In the case where all nodes are affected by the attack, the basic attack localization technique described in connection with FIG.
In the scenario where the attack is not detectable in some nodes, as the signal traverses down the network it attacks some nodes, then reaches a node which it does not attack, and propagates through that node to attack downstream nodes. For example, turning now to FIG. The signal in channel 86 b then propagates to node 90 , which is an amplifier. Since this signal is the only input to the node 90 , gain competition is not possible so the node 90 does not detect an attack.
At node 92 , however, channel 86 c is once again affected by crosstalk from the attack, thus an alarm is generated. The attack does propagate. It is detected in nodes 84 and 92 , but it is not detected at the intermediate node 90. It is thus desirable to apply the attack localization technique of the present invention to this problem of not all nodes detecting an attack. To isolate the salient issues, the simplest framework within which this problem can occur is considered.
Nodes 84 , 90 , 92 have two fault types. The first fault type is no fault and the second is fault. The message simply contains a status: fault or no fault. The goal of the technique is unchanged: node 84 must determine whether it is the source of the attack or if the attack is being carried by the data from a source upstream.
The difference between this problem and the basic attack localization problem is that each node 84 , 90 , 92 must know of the status at all the nodes upstream from it in the network, whereas in the basic attack localization problem it is assumed that when an attack propagates, every node in the network detects a fault so the status from the single preceding node contains sufficient information from which to draw conclusions.
Instead of generating messages at each node, the data is followed from its inception by a status message which lags the data by a known delay. The status message is posted by the node at which the communication starts.
Once an attack is detected the status message is disabled. The lack of a status message indicates to all the nodes downstream that the source of the attack is upstream of them. Note that such a status message is akin to a pilot tone which indicates that an attack or a fault has occurred.
With the above scenario in mind, one can define the response function, R for selective attack localization, as expressed below and in conjunction with FIG. It should be noted that the processing of FIG. Before describing the processing steps, it should be noted that the nodes in the network never generate messages. They can, however, disable the status message when they detect an alarm. When the status message is disabled, any node downstream can conclude that it is not the origin of the attack.
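A minimal sketch of this mechanism at a single intermediate node (the function and flag names are hypothetical):

def relay(data, status_message_present, attack_detected_here):
    # The source posts a status message that trails the data; the first node to
    # detect the attack disables (drops) it. Every node downstream then knows,
    # from the absence of the status message, that the attack source is upstream.
    this_node_is_source = attack_detected_here and status_message_present
    forward_status = status_message_present and not attack_detected_here
    return data, forward_status, this_node_is_source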
Processing begins in Step 94 where data at the source node of the communication is generated for transmission to the one or more destination nodes. Processing then flows to Steps 96 and 98 where the data is first transmitted to the next nodes and then a status message is transmitted to the next nodes. It should be noted that the status message lags the data message by a pre-determined amount of time. Processing then flows to the step where the data is received at the nodes. Immediately upon receipt of the data, the node can begin processing the data, and it can conclude that an attack occurred prior to the processing step where the messages are received at the nodes.
It should be noted that the messages have a delay which is smaller than T meas. Processing then proceeds to a decision block where a decision is made as to whether an attack has been detected at two nodes. If a decision is made that two nodes have not detected an attack, then processing proceeds to a processing step where the node is determined not to be the source of the attack, and processing then flows to a decision block which will be described below. If, on the other hand, a decision is made that an attack has been detected at two nodes, then processing flows to a decision block in which it is determined whether the status message is enabled.
If a node determines there is an attack, the node disables the message. If the status message has already been disabled, then processing flows to a processing block. If, on the other hand, the decision is made that the status message is enabled, then processing flows to a step where the status message is disabled, thus indicating that this node is the source of the attack.
Processing then flows to a decision block where a decision is made as to whether this node is the destination node. If the node is not the destination node, then the data and the status message (if not disabled) are transmitted to the next nodes and processing returns to the earlier steps. These steps are repeated until the destination node receives the data and the message.
If the node is the destination node, then processing ends. Suppose a node i is attacked at time t. The next node, e.
For an all-optical network, switching off a channel can be done in the order of nanoseconds with an acousto-optical switch. The delay between nodes in the network would typically be larger, and thus it is not believed this condition will be problematic in practice.
Moreover, the network can be designed to ensure this condition is met by introducing delay at the nodes. Such a delay is easily obtained by circulating the data stream through a length of fiber. Response to multiple fault types can be handled efficiently with a lookup table. In the case of multiple fault types, the response function R would have a pre-stored table L. Given the current node status, s i , and the status of the previous node, s j , the lookup table provides the appropriate response r i for this node.
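A sketch of such a lookup; the fault set and the responses shown are invented for illustration only:

# Hypothetical pre-stored table L: (current status s_i, previous status s_j) -> response r_i.
L = {
    ("no_fault",  "no_fault"):  "ok",
    ("crosstalk", "no_fault"):  "alarm",   # this node is the first one hit
    ("crosstalk", "crosstalk"): "ok",      # the attack arrived from upstream
    ("degraded",  "no_fault"):  "alert",   # degradation only, not a full alarm
}

def response(s_i, s_j):
    return L.get((s_i, s_j), "ok")         # r_i = L(s_i, s_j)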
For some applications it is useful to have different lookup tables for the next node in the network, L n , and the previous node in the network, L p. Alarm recovery refers to the steps required to route around a node or physically send a person to fix the node, typically using manual techniques such as physical repair or replacement of circuit components.
Thus it is very expensive to perform alarm recovery. Consider a node which detects signal degradation. The signal may be amplified sufficiently by the next node downstream to remain valid when it reaches the destination.
Since a valid signal reaches the destination node, it may thus be undesirable to stop transmitting the signal or to re-route around the node that detected this problem. Instead, it may be preferable to continue network operation as usual and generate an alert signal, but not an alarm signal. There thus exist three possible response values: (1) no fault (O.K.); (2) an alert; and (3) an alarm. These issues may impact the effectiveness of the protocol and should be considered for a real-world medium.
Often when two signals are at very different power levels when received, the stronger signal can capture the receiver and be properly decoded even in the presence of the other signal.
There are also known access techniques such as code division multiple access (CDMA). These combinations may allow or even encourage multiple terminals to transmit at the same time. In addition, there are signal processing techniques such as multi-user detection (MUD) that permit multiple signals to share the medium even beyond what the capture effect and CDMA allow. But as the time needed to sense the medium and react to it increases (propagation time, packet sensing, acknowledgment time, etc.), the advantages of CSMA are diminished until it becomes no more desirable than ALOHA, and potentially worse.
Note how timing allowances are made to mitigate and control the impact of range delay, detection delay, and turn around time (TAT). The TAT is the maximum amount of time needed to switch a terminal from a transmit mode to a receive mode, or vice-versa, and is assumed to be identical for all terminals in this example, although it need not be in real systems.
Also, the time needed to switch from transmit to receive, versus the time to switch from receive to transmit, may differ. After the initial TAT, terminals start counting time slots. Sometimes, the slots are used to establish priority between groups of terminals. For example, a given terminal could be required to wait until after one full slot has passed before decrementing its backoff counter, while another terminal may be allowed to decrement backoff counts in the first slot, or not even be required to back off at all.
Such techniques are used in practicing the IEEE standards. Note, toward the bottom of FIG. , that each slot is large enough to include a maximum turn around time, a maximum range delay, and a maximum detection time. This ensures that all terminals will detect a transmission that starts during a given slot before the same slot ends. See Tobagi, showing graphs wherein the efficiency of slotted ALOHA remains fixed with increasing propagation delay.
This result assumes a broadcast repeater network topology using the repeater as a common point in space for a timing reference. In S-ALOHA, however, packets are timed by terminals to arrive within a slot at a time referenced to the repeater's position in space. Because all packets must pass through this point in space even though range delay may vary significantly, the topology has no impact on the slotting structure for the protocol.
The same statements may be made for S-ALOHA operating in a cellular network where a base station serves as the common point in space timing reference. So, in FIG. , since both packets arrive at the same time and are of the same size (assuming S-ALOHA), the slots require no additional overhead for range delay. Unfortunately, one of the packets (say Packet 2) is destined for Terminal 4. Normally, the range delay between a given terminal and each of the other terminals will differ.
Thus, if the network is synchronized for the delay to Terminal 3, there may be a large difference in the arrival times for Packet 1 and Packet 2 at Terminal 4.
All timing is referenced to a slot boundary. A turn around time (TAT) is allocated at the beginning of the slot. The TAT corresponds to the maximum time required for terminals to switch between receive and transmit modes as needed to contend for the slot. After the TAT, terminals may transmit packets. In this case, Terminal 4 is shown transmitting a packet during a Packet Transmission Time. Varying propagation delays are represented in FIG.
The figure shows that a range delay and a detection delay, sufficient to cover the maximum range and detection delays expected for the system, are allocated within the slot. With such allocations, it is ensured that transmissions in one slot will not interfere with transmissions made within another slot.
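A small sketch of the slot budget just described; the numbers are placeholders, not parameters from the text:

# A contention slot must absorb the worst-case turn-around, propagation (range)
# and detection delays so that a transmission starting in one slot is heard by
# every terminal before that slot ends. All values below are assumed.
tat_us       = 5      # maximum transmit/receive turn-around time
packet_tx_us = 1000   # packet transmission time
range_us     = 100    # maximum one-way propagation delay
detect_us    = 20     # maximum time to detect the start of a packet

slot_us = tat_us + packet_tx_us + range_us + detect_us
print(slot_us)        # 1125 microseconds per slot in this example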
As shown toward the bottom of FIG. , the number of contention opportunities (COs) in a given slot, as well as the size of each CO, may be defined as network parameters and set dynamically. Depending on the medium properties, each CO should at a minimum include sufficient time to allow a given station to detect the start of a packet transmitted by any other network station.
For some network topologies, each CO may itself also include allocations for range delay and a certain TAT. Further, each contention slot includes an allocation approximating a smallest packet size, corresponding to the greatest amount of time required to send a complete protocol message unit. Finally, an allocation for range delay is also defined as a network parameter that can be set dynamically. It is assumed that some form of network synchronization exists, for example, the Global Positioning System (GPS), beaconing, or distributed techniques. All transmitters would, however, have knowledge of when each CS begins and ends.
For the moment, assume that only a single access protocol is used in the network, namely, the inventive SA-CSMA protocol described herein. Packets are transmitted within the CSs and may occupy only a single CS. A packet may also occupy multiple contiguous contention slots that define a contention burst (CB) in FIG.
In particular, reservation techniques are coupled with contention techniques to establish reservations in a highly efficient manner. Each reservation zone comprises a set of contiguous reservation slots (RS), and each contention zone comprises a set of contiguous contention slots (CS). The RZ and the CZ may be allocated in a predetermined and alternating fashion, or they may be set dynamically.
Note also that modulation parameters used within the two zones may differ from one another. This might be desirable because the large number of tones used in OFDMA symbols may preclude an effective contention access mechanism.
The inventive protocol operates by having all terminals in a wireless network practice contention techniques. All terminals not presently transmitting defer during those times within each CS not allocated to the COs, and during the times of the RS and the RZ. Terminals may perform a backoff to resolve contention; terminals may perform a backoff to avoid collisions; terminals may perform a backoff to enforce fairness; terminals may perform a backoff to implement a certain quality of service (QoS) scheme; and
terminals may use the COs to implement certain priority schemes. In each figure, a series of contention slots is shown with respect to four network Terminals 1 to 4. The series of contention slots starts in FIG. It is assumed that a long period without transmissions precedes the first series of slots in FIG. A uniform random distribution function then selects the actual number of slots for the interval, between the values of 0 and CW - 1.
Normally, the window is set to CWmin and, unless an error occurs on the medium such as a packet failing to be acknowledged , the value stays constant. If an error occurs, the terminal changes its state and the CW value doubles.
As used herein, "state" implies that the terminal tracks certain events, and those events determine its state. Based on its state e. If a second consecutive error occurs, the terminal would again change state by doubling the CW value again, and so on, until CWmax is reached. Once a packet is transmitted correctly, the CW value resets back to CWmin.
As mentioned, this method of determining the CW value is referred to as binary exponential backoff or BEB. Success is normally determined by the transmission of an acknowledgment (ACK) message immediately from a destination terminal upon successful detection of a data packet from the originating terminal.
If the originating terminal detects an acknowledgment, the transmission is deemed successful. If an ACK packet is not received, the transmission is assumed not to have been successful. This protocol is termed "immediate acknowledgment" (immediate ACK) since it requires an acknowledgment to be transmitted immediately after the data transmission for which an ACK was requested.
It is inappropriate to use immediate ACK for certain kinds of transmissions, however. For example, if multicast or broadcast type data is transmitted, it is generally impossible for all nodes that receive the data successfully to respond with an immediate ACK, since all the ACKs would collide with one another or cause congestion.
Moreover, it has been discovered that BEB may actually be suboptimal in certain operating environments. Recent studies suggest that the optimum size of the contention window CW depends on the average packet or slot size, and the number of transmitters actively contending for the medium.
More importantly, it has been found that the optimum value of the CW is a substantially linear function of the number of contending transmitters.
The technique makes the CW value a substantially linear function of the number of contenders rather than an exponential function of the number of errors on the medium, and has the advantage that it does not require immediate acknowledgments in order to be effective.
The LB process starts by estimating the slope of a line relating optimum CW size and number of contenders based on a simulation, as a function of packet size, contention slot size, and number of contention opportunities per contention slot.
The exact value of K is not critical to the operation of the invention. While any reasonable value of K should work, preferred values may be determined by simulation. A value of 15 for K was found to be satisfactory for a variety of packet sizes in a particular topology of interest; other values for K may also be used. Optimal values can easily be determined via simulation in the same way, and custom C language simulations could also be used. Methods may also be used to determine K dynamically based on the type of traffic currently occupying the medium.
Once K is determined, a method of estimating the number of contenders is required. One method is to divide a current contention window CW value by the number of backoff slots (contention opportunities in SA-CSMA) between a current transmission and the most recent prior transmission on the medium, and to use the result as an estimate of the instantaneous number of contenders on the medium. Since this is a very noisy estimate, some form of filtering may be required.
While various filter functions may be possible, a currently preferred solution is a so-called moving average filter (MAF), the implementation of which is generally known to those skilled in the art. The MAF size may also be adaptive.
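A sketch of this estimator; the class name is hypothetical, and the four-entry window simply mirrors the four WO values shown in the figures:

from collections import deque

class ContenderEstimator:
    # Each raw estimate is the current CW divided by the number of backoff slots
    # (contention opportunities) between the last two transmissions; a moving
    # average filter (MAF) smooths these noisy instantaneous values.
    def __init__(self, window=4):
        self.maf = deque([1.0] * window, maxlen=window)

    def update(self, cw, slots_since_last_tx):
        raw = cw / max(1, slots_since_last_tx)       # very noisy raw estimate
        self.maf.append(raw)
        return sum(self.maf) / len(self.maf)         # filtered contender estimate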
In FIGS. , the oldest value in the filter is on the left, and the newest is on the right. The figures show the four current values of WO used in the MAF for each terminal at every contention slot. Because a collision represents multiple contenders attempting to access the medium at the same time, collisions require special treatment. Each time an error on the medium is detected, it is treated as though two attempted transmissions collided with one another.
Although an error could be caused by more than two terminals attempting to access the medium simultaneously, the SA-CSMA protocol assumes that exactly two contenders had attempted access, and any additional contenders are not considered.
A value for WO is computed, and an error flag ER is set. Each time the error flag is set, it represents one additional contender trying to access the medium beyond those represented by the WO values, and another filter is used to determine the total number of errors that occur (additional contenders) over a period of time. Again, many different filter functions can be used in the invention. A currently preferred filter is a summation filter (SUM) that sums all the values in the filter window.
Each time activity is detected on the medium (either a transmission or an error), the values of the SUM and the MAF filters are updated. Other values for the window length may also be used. The window size for ER need not be the same as that for WO and may be adaptive. Again, the values shown in FIGS. are the values for each terminal in each contention slot.
The value of the CW may then be computed as follows. The operator CEIL finds the smallest integer that is larger than the value it operates on.
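One plausible form of this computation, sketched under the assumption that CW is the ceiling of K times the estimated number of contenders (the MAF output for WO plus the SUM of the error flags), clamped to the configured window bounds; the exact expression and the clamping are assumptions, not quotations from this description:

import math

def compute_cw(k, wo_filter_values, er_filter_values, cw_min=4, cw_max=1024):
    # Assumed reconstruction: contenders ~= MAF(WO) + SUM(ER), CW = CEIL(k * contenders).
    contenders = sum(wo_filter_values) / len(wo_filter_values) + sum(er_filter_values)
    return max(cw_min, min(math.ceil(k * contenders), cw_max))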
For the example in FIGS. , values as small as 0, or larger values, could also be used. As shown in FIG. , since there has been no activity on the medium for a certain time, Terminal 4 immediately transmits the DATA packet. In this example, all terminals are assumed to hear one another except for Terminals 1 and 4, so a copy of the packet appears in dashed lines only at Terminals 2 and 3, which can hear the transmission. Some control fields are preferably embedded in the DATA packet. Since there was no recent activity on the medium, the number of elapsed COs since the last transmission by a network terminal is assumed to be large.
So, regardless of how large the current CW is, a small fraction less than 1 results. Hence, WO is set to 1 in the control field of the packet. This means that from 0 to 15 contenders can be represented. If WO were larger than 15, the value 15 would be used.
Larger or smaller fields may also be used. For any correctly received data packet, the value of WO embedded in the control field of the packet is used as the next entry in the MAF of the receiving terminal.
While each terminal could estimate its own value of WO for each received packet, random biases could occur that would cause the CW at some terminals to be set much higher than in others. Sharing estimates of WO helps to preclude that possibility. Also, the number of COs between each transmission as measured at each terminal can vary.
By placing WO as a control field in each transmitted packet and using that value at all terminals that hear the packet, variances in estimates at each terminal concerning the number of contenders may be avoided. The specific contention opportunity (CO) in which a packet transmission begins is also identified in a control field of the packet. While omitted for clarity, the same number assignment is used for the COs in every contention slot at each terminal. In the present example, there are four possible COs numbered 0 to 3 per contention slot, so two bits are assigned for the CO field of a packet.
More or fewer bits may be allocated for the CO field. This allows for a better estimate of delays, as explained later. The number of contention slots occupied by a transmission (or, alternatively, a duration value as in the IEEE standard) may also be indicated in a control field. Finally, if an acknowledgment (ACK) of the packet is required, such may be indicated by a corresponding ACK field in the packet that may be one bit in size.
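Collecting the control fields mentioned above in one sketch; the field widths follow the text where it gives them (WO in 0..15, CO in 0..3, a one-bit ACK flag), and everything else, including the structure itself, is illustrative:

from dataclasses import dataclass

@dataclass
class ControlFields:
    wo: int               # sender's contender estimate, 0..15 (four bits)
    co: int               # contention opportunity in which the transmission began, 0..3 (two bits)
    slots_occupied: int   # number of contention slots used (or a duration value)
    ack_requested: bool   # one-bit flag: immediate ACK required

    def validate(self):
        assert 0 <= self.wo <= 15 and 0 <= self.co <= 3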
During the data transmission by Terminal 4, a packet becomes available at Terminal 2 during CS1. Because the packet became available while the medium was occupied, Terminal 2 must first defer and then perform a backoff prior to transmission of the packet. And because the packet became available in CS1, the packet has an associated CW value of 4.
Assume Terminal 2 randomly selects a backoff value of 3 from the set of integers from 0 to CW-1, or 3. Shortly after transmission of the packet from Terminal 4 completes, a contention slot transition boundary occurs.
Terminals 2 and 3 copy the WO field value from packet into their corresponding filters. Because Terminal 1 did not hear packet , however, Terminal 1 does not react to the packet. If a packet does not require acknowledgment, the terminal has no real time feedback mechanism to judge if a collision has occurred on the medium.
For the present example of the SA-CSMA protocol, all packets not requiring an immediate acknowledgment are assumed by their sending terminals to have collided.
While there is no requirement to retransmit the packet, the ER flag for the sending terminal is nonetheless set as if a collision had been detected. SA-CSMA preferably requires each network terminal to implement a post backoff after every transmission from the terminal.
Thus, after the update for its most recent transmission at the beginning of CS2 , Terminal 4 selects a post transmission backoff of 3, and initiates a backoff.
In CS2, Terminal 2 begins backoff before attempting to transmit its packet. The first CO of each contention slot is labeled PR in the drawing for all terminals that are performing active backoffs. The present example assumes that the PR slot and the COs that follow it are all the same size.
And because only one backoff CO has passed since the last transmission on the medium (from Terminal 4), and its current CW value is 4, Terminal 3 computes WO as 4 and includes that value in the WO control field of its packet.
If there is still space in the slot, Terminal 3 may piggyback some of the data on its RTS so as to increase overall efficiency. The window covers not only a responsive CTS, but also a portion of the following contention slot that covers the beginning of a data packet, should Terminal 3 transmit one.
Because it knows in which CO Terminal 3 sent the RTS, Terminal 4 can calculate exactly how long its timeout window must be in order to cover the priority slot to be used for the packet by Terminal 3, including transmission delay.
As seen in FIG. , during the packet transmission from Terminal 3, a packet becomes available for transmission at Terminal 1. Terminal 1 then uses its current CW value of 4 and randomly draws a backoff value of 1 to associate with its packet transmission. Note that both Terminal 2 and Terminal 4 also defer their transmissions and backoff counts over the duration of the transmission from Terminal 3.
All terminals shift their current values of WO and ER to the left by one, and update WO with the value 4 used in the most recent transaction. Since no errors were detected, the new rightmost value for ER is 0. Moving to the left of FIG. , all four terminals have backoffs to perform. That is, Terminal 4 needs to complete its post backoff from its earlier transmission during CS1, Terminals 1 and 2 have pending packets requiring backoffs before their transmission, and Terminal 3 must conduct its post-transmission backoff.
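The boundary update just described is a shift-register operation over two fixed-length histories, one of WO values and one of ER flags. A minimal sketch follows; the class name is assumed, and how the CW is subsequently derived from the filter contents is deliberately left out, since this passage does not restate it.

```python
from collections import deque

class BackoffFilter:
    """Fixed-length history of WO values and error flags (length 4 in the example)."""

    def __init__(self, length: int = 4):
        self.wo = deque([0] * length, maxlen=length)   # filters start at zero
        self.er = deque([0] * length, maxlen=length)

    def update(self, wo_value: int, error: int) -> None:
        """Shift both histories left by one and append the newest observation."""
        self.wo.append(wo_value)   # a full deque drops its oldest (leftmost) entry
        self.er.append(error)

# After the transaction above, every terminal that heard it would call:
# filt.update(wo_value=4, error=0)
```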
After updating its backoff parameters at the boundary between CS6 and CS7, Terminal 3 selects a random backoff value of 2, and all four terminals complete their backoffs during CS7. Because both Terminals 1 and 2 transmit in CS7, a potential collision occurs. The collision is "potential" since we do not know to which terminals the transmissions were destined.
If each intended receiver hears only the transmission intended for it, the transmissions can be processed normally and no collision has effectively occurred. But assume Terminal 3 hears both transmissions simultaneously and is unable to resolve them. Several ways are known to detect failed transmissions or collisions. For example, if a received signal strength indicator (RSSI) rises above a certain level but no valid data packet has been detected, an error is assumed.
It is also rare for packets to coincide completely. For example, it is possible to detect a synchronization preamble, as commonly used, and then have the associated packet become garbled by a second overlapping transmission, so that the start of a transmission is detected but no valid packet is received.
This too would count as an error. Both of the mentioned methods of detecting a busy medium are used, for example, in the IEEE 802.11 standard. Here, Terminals 1 and 2 are unable to determine that their packets collided, because both terminals were transmitting and thus could not listen for collisions. At the boundary of CS7 and CS8, all terminals behave differently. Terminal 2 did not expect an ACK for its transmission, so it updates its WO filter in CS8 with the value of 4 which it transmitted in CS7, and with a 1 in its ER filter, since Terminal 2 must assume an error when sending a packet for which no immediate ACK will be received.
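Stepping back briefly, the two busy-medium error heuristics described above (energy without a decodable packet, and a detected preamble followed by a garbled packet) can be combined into one predicate. This is only a sketch; the argument names and the example RSSI threshold are assumptions, not values from the text.

```python
def medium_error_detected(rssi: float, preamble_seen: bool, valid_packet: bool,
                          rssi_threshold: float = -82.0) -> bool:
    """Return True if the medium was busy but no valid packet was recovered.

    Two heuristics from the text:
      1. RSSI rises above a certain level but no valid data packet is detected.
      2. A synchronization preamble is detected but the packet is then garbled
         by an overlapping transmission.
    """
    energy_without_packet = (rssi > rssi_threshold) and not valid_packet
    garbled_after_preamble = preamble_seen and not valid_packet
    return energy_without_packet or garbled_after_preamble
```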
Terminal 2 then picks a random backoff value between 0 and 6. Note that Terminal 2 is unaware of the ACK window implemented by Terminal 1, since Terminal 2 did not hear the packet transmitted from Terminal 1. The same is true for Terminals 3 and 4.
Terminal 3 detected a collision error on the medium during CS7. Note also that if a packet becomes available at a terminal while the terminal is performing a post-transmission backoff, the packet becomes associated with the ongoing backoff. On the other hand, if a post-transmission backoff has already been completed when a new packet becomes available for transmission, and the medium is detected as being idle by the originating terminal, the packet may be transmitted immediately.
If the medium is occupied, however, the originating terminal must first defer, and then a new backoff must be performed before the packet can be transmitted.
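The arrival-time rules in the preceding two paragraphs can be summarized in a single dispatch routine. The following is a minimal sketch under assumed state names (backoff_in_progress, cw); the document fixes only the three behaviors listed in the comments.

```python
import random

def on_packet_arrival(terminal, medium_idle: bool) -> str:
    """Decide how a newly queued packet is handled in SA-CSMA (sketch).

    - An ongoing (post-transmission) backoff is inherited by the new packet.
    - With no backoff in progress and an idle medium, transmit immediately.
    - With no backoff in progress and a busy medium, defer, then draw a
      fresh backoff before transmitting.
    """
    if terminal.backoff_in_progress:
        return "associate with ongoing backoff"
    if medium_idle:
        return "transmit immediately"
    terminal.backoff = random.randint(0, terminal.cw - 1)
    return "defer, then back off before transmitting"
```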
By chance, Terminal 1 selects a value of 2 for performing a backoff during CS9, and then retransmits the failed transmission. The backoff parameters transmitted in the packet must be recomputed. Terminal 1 may then initiate transmission in CO1 of CS9 without conducting further backoffs. The currently preferred implementation is not to allow post-counting of backoff intervals during an ACK window.
If permitted, such counting should be performed only by a terminal having a data packet awaiting an ACK, and not by other terminals that simply heard the data packet and set their own ACK windows. Note that during CS9, Terminal 2 also completes its backoff. Not knowing that its prior transmission collided, and having no other packets for transmission, Terminal 2 takes no further action, however. Here, all terminals update their filter parameters with respect to the transaction completed on the medium during CS10.
We also see that Terminal 1 draws a post-transmission backoff of 3, and continues its backoff over CS11 and the following contention slot. Preferably, in all terminals a timer is constructed and arranged so that if no activity occurs on the medium, the backoff filter parameters are reset.
For example, the timer may count a preset number of COs (excluding priority COs or slots) during which no intervening transmission or error is detected on the medium. In general, the reset window can be set to any value believed appropriate. A reset window is indicated in FIGS. for purposes of illustration. In addition to the above-described control fields in each of the packets, it may be desired to add two single-bit fields to designate more data pending at the source (MS) and more data pending at the destination (MD).
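Before turning to the MS and MD fields, the idle-reset behavior just described might be captured by a simple counter over non-priority COs. This is only a sketch under assumptions: the class and argument names are invented, and the 64-CO window length is an arbitrary illustrative choice, since the text leaves the reset window value open.

```python
class IdleResetTimer:
    """Reset backoff filter parameters after a run of idle, non-priority COs (sketch)."""

    def __init__(self, reset_filter, window_cos: int = 64):
        self.reset_filter = reset_filter    # callable that zeroes the WO/ER histories
        self.window_cos = window_cos        # assumed length of the reset window, in COs
        self.idle_cos = 0

    def on_co(self, activity_detected: bool, is_priority_co: bool) -> None:
        if is_priority_co:
            return                          # priority COs are excluded from the count
        if activity_detected:
            self.idle_cos = 0               # a transmission or error restarts the window
            return
        self.idle_cos += 1
        if self.idle_cos >= self.window_cos:
            self.reset_filter()
            self.idle_cos = 0
```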
The MS and MD bits may be monitored along with the MAC addresses associated with them. In a responsive CTS, an additional bit (MD) may be set to indicate that the responding terminal also has additional packets to send.
Terminals would be responsible for monitoring the MS and the MD bits and the MAC addresses associated with them, and thus could maintain a table of which MAC addresses are actively contending for the medium.
If a given address is not heard on the medium for more than a certain timeout period, the address would be deleted from the table. In addition, if a packet is heard from the given address indicating that no more data is present, the address would be deleted from the table. This method is an alternative to the use of WO to determine how many contenders currently exist on the medium. It is more complex, however, and it may not enable an accurate assessment of the number of contenders to be developed as quickly as the use of WO; it also requires additional means to predict how many contenders exist that have not yet announced their presence on the medium.
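The address-table alternative just described might look like the following sketch. The class name, the use of wall-clock timeouts, and the 5-second timeout value are all assumptions for illustration; the text specifies only that addresses announcing more data are tracked, and that they are dropped after a timeout or after signaling that no more data is pending.

```python
import time

class ContenderTable:
    """Track MAC addresses that have announced pending traffic via MS/MD bits (sketch)."""

    def __init__(self, timeout_s: float = 5.0):    # timeout value assumed
        self.timeout_s = timeout_s
        self.last_heard = {}                        # MAC address -> time last heard

    def on_packet(self, mac: str, more_data: bool) -> None:
        if more_data:
            self.last_heard[mac] = time.monotonic()     # address is actively contending
        else:
            self.last_heard.pop(mac, None)              # no more data: drop the address

    def active_contenders(self) -> int:
        now = time.monotonic()
        # Drop addresses not heard within the timeout period.
        self.last_heard = {m: t for m, t in self.last_heard.items()
                           if now - t <= self.timeout_s}
        return len(self.last_heard)
```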
A transmission scenario corresponding to the one shown in FIGS. is now considered for S-CSMA. S-CSMA normally assumes that all terminals listening during a given backoff slot can detect the start of a transmission within that slot. This means that the slot must be large enough to account for range delay, detection time, and turn-around time, as shown in FIG.
This is not always a convenient approach. As inherent medium delays become large, the backoff slots used in S-CSMA also become large and the protocol loses its efficiency. That is why the contention opportunities defined in the inventive SA-CSMA protocol do not depend on an allocation for range delay, and different terminals may detect the presence of a given signal within different COs.
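The slot-sizing constraint that makes S-CSMA inefficient over long links can be written as a simple sum. The function and the numeric values below are illustrative assumptions only; they show how a growing range delay inflates every backoff slot, which is the overhead that SA-CSMA's COs avoid.

```python
def s_csma_slot_us(range_delay_us: float, detect_time_us: float, turnaround_us: float) -> float:
    """Minimum S-CSMA backoff slot: every listener must be able to detect a
    transmission started anywhere in the slot before the slot ends."""
    return range_delay_us + detect_time_us + turnaround_us

# Illustrative only: a long-range link with, say, 50 us of propagation delay
# forces a slot of 50 + 5 + 2 = 57 us, whereas SA-CSMA's COs include no
# allocation for range delay.
print(s_csma_slot_us(50.0, 5.0, 2.0))
```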
Note that the timing in FIGS. is based on IEEE 802.11, in which several different inter-frame spaces are defined. Persons skilled in the art will appreciate that the inter-frame spaces may likewise be varied in the inventive LB protocol disclosed herein, although only DIFS is shown in the illustrated example. As would be obvious to one skilled in the art, all such spacings can be practiced with the inventive protocol described here.
CWmin is set to 8, while typical values in the IEEE 802.11 standard are larger. Similar larger values could be used with LB, but are believed to be excessive for most practical networks. In practice, the average packet size would likely be larger than in the illustrated example, which normally implies a larger value for K. The value of K is intentionally kept small for the illustrated example to make it easier to draw. Again, the value of K should be determined by a simulation of typical scenarios, easily undertaken by those skilled in the art.
The filter sizes remain at 4 for the illustrated example, but again should be optimized via simulation. As in FIG. , the medium has been idle a long time, so the backoff filters at all transmitters are initialized to zero. A diamond in the drawing indicates the arrival of a packet at the corresponding terminal for transmission, and the number in the diamond indicates a randomly selected backoff value.
Terminal 4 transmits its data immediately with WO set to 1, since the medium has been idle for some time. IEEE 802.11 timing is assumed. Note that since all terminals are assumed to hear the start of all transmissions within the same backoff slot, it is not necessary to transmit the value of CO or its equivalent backoff slot within S-CSMA.
Since the packets themselves are not slotted, there is no reason to transmit the number of slots that they will occupy, as is done in SA-CSMA. Instead, transmissions in the IEEE 802.11 standard carry a duration value. While Terminal 4 is transmitting, a packet arrives at Terminal 2.
Since the medium is busy, a backoff is drawn using the current CW value of 8. A backoff value of 5 is selected, as indicated in the diamond at the left in FIG. . After Terminal 4's transmission completes, all terminals receiving the packet from Terminal 4 update their backoff filters. Also note that backoffs for newly arrived packets, when a backoff is not already in progress, are drawn when the packet is queued for transmission, in either case using the CW at that moment.
As in the FIG.