In addition, they operate on a per-link basis since the
visibility of each
router is limited to its neighbors.
Moreover, new problems arise, such as a lack of control over the data.
In an alternative configuration, the tunnels are left unencrypted to avoid the computation time required to generate encrypted packets and thus increase performance.
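As a rough, purely illustrative sketch of that trade-off (assuming Python with the third-party cryptography package; the packet count, packet size, and key length are arbitrary choices), the following compares the per-packet cost of AES-GCM encryption with that of simply copying the payload:

```python
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party "cryptography" package

PACKETS = 10_000            # arbitrary number of packet-sized payloads
SIZE = 1400                 # arbitrary payload size in bytes

payloads = [os.urandom(SIZE) for _ in range(PACKETS)]
aead = AESGCM(AESGCM.generate_key(bit_length=128))

# Secure tunnel: one AES-GCM encryption per packet.
t0 = time.perf_counter()
for seq, data in enumerate(payloads):
    aead.encrypt(seq.to_bytes(12, "big"), data, None)   # unique 96-bit nonce per packet
t_encrypted = time.perf_counter() - t0

# Unencrypted tunnel: the payload is forwarded as-is (modelled here as a plain copy).
t0 = time.perf_counter()
for data in payloads:
    bytes(data)
t_plain = time.perf_counter() - t0

print(f"encrypted: {t_encrypted:.3f} s   plain: {t_plain:.3f} s")
```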
The increased reliability comes at a cost.
This technique incurs a large overhead, since the same traffic, sent multiple times, interferes with itself and congests the routers.
This technique reduces the congestion but can aggravate the out-of-order delivery problem.
Both techniques share the common overhead of creating the multiple paths.
For instance, the overhead of MPTCP may outweigh its advantages when transferring small files.
Unlike TCP, UDP is not a reliable transport protocol, i.e., it leaves the handling of dropped, out-of-order, and duplicated packets to the application.
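To make these issues concrete, here is a minimal Python sketch (hypothetical loopback addresses and port numbers) that duplicates each datagram over two UDP "paths" and lets the receiver use an application-level sequence number to detect duplicated, out-of-order, and missing packets, i.e., exactly the handling that UDP leaves to the application:

```python
import socket
import struct

# Two "paths", modelled as two destination ports on the loopback interface (hypothetical values).
PATHS = [("127.0.0.1", 5001), ("127.0.0.1", 5002)]

def send_duplicated(payloads):
    """Send every payload on both paths, prefixed with a 32-bit sequence number."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, data in enumerate(payloads):
        pkt = struct.pack("!I", seq) + data
        for path in PATHS:                      # duplication: the same packet on every path
            sock.sendto(pkt, path)
        # Splitting variant: sock.sendto(pkt, PATHS[seq % len(PATHS)]) instead of the loop.
    sock.close()

def classify_received(port, count):
    """Report duplicated and out-of-order datagrams; remaining gaps indicate losses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(1.0)
    seen, highest = set(), -1
    try:
        for _ in range(count):
            pkt, _ = sock.recvfrom(2048)
            seq = struct.unpack("!I", pkt[:4])[0]
            if seq in seen:
                print(f"duplicate     seq={seq}")
            elif seq < highest:
                print(f"out of order  seq={seq} (already saw {highest})")
            seen.add(seq)
            highest = max(highest, seq)
    except socket.timeout:
        pass                                    # stop waiting; anything unseen was lost
    print("lost:", sorted(set(range(highest + 1)) - seen))
```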
The main factors that cause packet loss are link congestion, device performance problems (e.g., buffer overflows in routers, switches, etc.), software issues on network devices, and faulty hardware.
This can be limited by several factors, including latency, packet loss, and the transport protocol in use.
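As a rough illustration of how these factors interact for TCP, the well-known Mathis approximation bounds steady-state throughput by (MSS / RTT) * C / sqrt(p); the short Python sketch below evaluates it with arbitrary example values:

```python
import math

def mathis_throughput(mss_bytes, rtt_seconds, loss_rate):
    """Mathis et al. approximation of steady-state TCP throughput, in bytes per second.
    UDP flows are not governed by this bound, since they perform no congestion control."""
    c = math.sqrt(3 / 2)                        # constant of the Mathis model
    return (mss_bytes / rtt_seconds) * c / math.sqrt(loss_rate)

# Arbitrary example: 1460-byte MSS, 50 ms RTT, 0.1% packet loss.
print(f"{mathis_throughput(1460, 0.050, 0.001) / 1e6:.2f} MB/s (approx.)")
```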
Although this is a simple
network topology, enterprises may require more complex topologies to interconnect their offices.
Also, the number of flows is limited by the capacity of the CSPs, which is the bandwidth of each connection.
This results in each direction of a tunnel potentially having different characteristics (e.g., in terms of latency or bandwidth).
However, some applications and/or protocols may suffer from this asymmetry between the two flow directions.
This restriction stems from the limited number of transport ports that the endpoint exposes publicly, since open ports raise security concerns.
Running a VM on each device may not be desirable for some companies, which prefer to use a firewall to secure their LANs.
While running VMs on all devices provides end-to-end QoE, running them only at the egress/ingress points of the LANs may result in performance uncertainties, because QoE is guaranteed only between those two points.
Furthermore, CSPs inside the company likely share most of their links since the internal routing options are typically limited.
This fact can
impact performance due to cross-interference and congestion.
Despite this change, it is not guaranteed that the “new” tunnel has different properties than the discarded or active tunnels.
First, routers are not obliged to answer, and when they do, they may not report correct information about themselves.
While this may be done on purpose, it is generally not the case.
Also, the traceroute technique assumes that successive probes follow the same path, an assumption that is not always satisfied in packet-switched networks.
In this case, packets with incrementally increasing TTLs do not identify the routers along a single path, which could lead to incorrect link identifications.
However, this technique has drawbacks that affect all traceroute techniques: (i) NAT address rewriting in private networks, and (ii) ISP routers not responding when a TTL expires.
A drawback peculiar to the modified traceroute technique is the possibility of not receiving any answer from the last hop, since the 5-tuple is a valid tuple that is forwarded to a listening service on that port (e.g., the OpenVPN service).
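A minimal sketch of one possible realization of such a modified traceroute is given below, assuming Python on Linux with root privileges (a raw socket is needed to read the ICMP replies); the addresses and ports are placeholders, with the destination port set to the tunnel's port so that the probes carry the tunnel's 5-tuple:

```python
import socket

DST = "203.0.113.10"    # placeholder: remote tunnel endpoint
DST_PORT = 1194         # placeholder: the tunnel's port (e.g., OpenVPN)
SRC_PORT = 40000        # placeholder: fixed source port, so probes reuse the tunnel's 5-tuple
MAX_HOPS = 30

def modified_traceroute():
    # Raw socket to receive the ICMP Time Exceeded replies (requires root privileges).
    icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    icmp.settimeout(2.0)
    for ttl in range(1, MAX_HOPS + 1):
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        udp.bind(("", SRC_PORT))               # fixed 5-tuple: same source port for every probe
        udp.sendto(b"probe", (DST, DST_PORT))
        try:
            _, addr = icmp.recvfrom(512)       # reply from the router where the TTL expired
            print(f"{ttl:2d}  {addr[0]}")
            if addr[0] == DST:
                break
        except socket.timeout:
            # No answer: a non-responding router, NAT rewriting, or, at the last hop,
            # the probe being delivered to the listening service instead of generating ICMP.
            print(f"{ttl:2d}  *")
        finally:
            udp.close()

if __name__ == "__main__":
    modified_traceroute()
```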
For instance, a waiting CSP may offer high performance during some periods and poor performance during others.
This topology could increase the complexity of the policies.
A shorter interval provides finer granularity at the cost of a large overhead in both processing and transmission.
Thus, there is a trade-off between accuracy and resource usage when controlling the system's granularity.
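A hypothetical sketch of such a periodic monitoring loop (Python, with the probe and report functions left abstract) is shown below; the point is simply that both the processing and the transmission overhead grow inversely with the chosen interval:

```python
import time

def monitor(csps, interval_s, probe, report):
    """Probe every CSP once per interval and report the measurements.
    Halving `interval_s` roughly doubles both the probing (processing)
    and the reporting (transmission) overhead."""
    while True:
        start = time.monotonic()
        measurements = {csp: probe(csp) for csp in csps}   # e.g., latency/loss probes
        report(measurements)                               # e.g., send to the control layer
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))
```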
The control in this case is off-loaded from the data layer due to
scalability issues.
Another more restrictive situation would be to use only a specific active CSP.
Here, the data layer can hardly optimize anything because the policy constrains the available behaviors.
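The effect of such a policy can be sketched as follows (hypothetical names and metrics): when the policy pins a single CSP, the selection function ignores the measured metrics entirely, leaving the data layer nothing to optimize.

```python
def select_csp(active_csps, metrics, pinned_csp=None):
    """Pick the CSP for a flow. `metrics` maps each CSP to a measured score
    (higher is better). A pinning policy overrides the measurements entirely."""
    if pinned_csp is not None:        # restrictive policy: only one specific active CSP
        return pinned_csp
    return max(active_csps, key=lambda csp: metrics[csp])

# Unconstrained: the data layer picks the best-performing CSP.
print(select_csp(["csp_a", "csp_b"], {"csp_a": 0.9, "csp_b": 0.4}))            # -> csp_a
# Policy-constrained: csp_b is used regardless of its poor measurements.
print(select_csp(["csp_a", "csp_b"], {"csp_a": 0.9, "csp_b": 0.4}, "csp_b"))   # -> csp_b
```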
The amount of data to consider is large and varies greatly over time, which poses significant challenges to the architecture.