ECF: An MPTCP Path Scheduler to Manage
Heterogeneous Paths
Yeon-sup Lim
IBM Research
y.lim@ibm.com
Don Towsley
University of Massachusetts Amherst
towsley@cs.umass.edu
Erich M. Nahum
IBM Research
nahum@us.ibm.com
Richard J. Gibbens
University of Cambridge
richard.gibbens@cl.cam.ac.uk
ABSTRACT
Multi-Path TCP (MPTCP) is a new standardized transport
protocol that enables devices to utilize multiple network in-
terfaces. The default MPTCP path scheduler prioritizes paths
with the smallest round trip time (RTT). In this work, we
examine whether the default MPTCP path scheduler can pro-
vide applications the ideal aggregate bandwidth, i.e., the sum
of the available bandwidths of all paths. Our experimental re-
sults show that heterogeneous paths cause under-utilization
of the fast path, resulting in undesirable application behav-
iors such as lower streaming quality in a video than can be
obtained using the available aggregate bandwidth. To solve
this problem, we propose and implement a new MPTCP path
scheduler, ECF (Earliest Completion First), that utilizes all
relevant information about a path, not just RTT. We compare
ECF with both the default and other MPTCP path schedulers,
using both an experimental testbed and in-the-wild measure-
ments. Our results show that ECF consistently utilizes all
available paths more efficiently than other approaches under
path heterogeneity, particularly for streaming video. In Web
browsing workloads, ECF also does better in some scenarios
and never does worse.
1 INTRODUCTION
One significant factor that affects MPTCP performance is the
design of the path scheduler, which distributes traffic across
available paths according to a particular scheduling policy.
The default path scheduler of MPTCP is based on round trip
time (RTT) estimates, that is, given two paths with available
congestion window space, it prefers to send traffic over the
path with the smallest RTT. While simple and intuitive, this
scheduling policy does not carefully consider path hetero-
geneity, where available bandwidths and round trip times
of the two paths differ considerably. This path heterogene-
ity is common in mobile devices with multiple interfaces
[3, 6, 8, 14, 21] and can cause significant reorderings at the
receiver-side [1, 3, 4, 12, 24]. To prevent this, MPTCP includes
opportunistic retransmission and penalization mechanisms
along with the default scheduler [18]. In long-lived flows,
e.g., a single very large file transfer, MPTCP is able to en-
hance performance using these mechanisms. However, a
large number of Internet applications such as Web browsing
and video streaming usually generate traffic which consists
of multiple uploads/downloads for relatively short durations.
We find that in the presence of path heterogeneity, the de-
fault MPTCP scheduler is unable to efficiently utilize some
paths with such a traffic pattern. In particular it does not
take full advantage of the highest bandwidth paths, which
should be prioritized to achieve the highest performance and
lowest response time.
In this work, we propose a novel MPTCP path scheduler
to maximize fast path utilization, called ECF (Earliest Com-
pletion First). To this end, ECF monitors not only subflow
RTT estimates, but also the corresponding bandwidths (i.e.,
as embodied in the congestion windows) and the amount of
data available to send (i.e., data queued in the send buffer). By
determining whether using a slow path for the injected traffic
will cause faster paths to become idle, ECF more efficiently
utilizes the faster paths, maximizing throughput, minimizing
download time, and reducing out-of-order packet delivery.
This paper makes the following contributions:
• We provide an analysis of the performance prob-
lems in MPTCP caused by path heterogeneity when
using the default scheduler (§3). Using a streaming
adaptive bit rate video workload, we illustrate how
it does not utilize the aggregate available bandwidth
and thus can lead to lower resolution video playback
than is necessary.
• Based on this insight, we design a new path sched-
uler, Earliest Completion First (ECF), which takes
path heterogeneity into account (§4). We provide an
implementation of our scheduler in the Linux kernel.
• We evaluate ECF against the default MPTCP path
scheduler and two other approaches, BLEST [4] and
DAPS [12], in an experimental testbed (§6), across a
range of bandwidths and round-trip times. We use
multiple workloads: video streaming under fixed
bandwidth (§6.2); video streaming under variable
bandwidth (§6.3); simple file downloads (§6.4); and
full Web page downloads (§6.5). We show how ECF
improves performance by up to 30% above the other
schedulers in heterogeneous path environments, im-
proving fast path utilization and reducing out-of-
order delivery, while obtaining the same performance
in homogeneous environments.
• To see how ECF works in real networks, we com-
pare ECF against the default scheduler in the wild
using the Internet (§7). We show improvements of
16% increased bit rates in video streaming (§7.2) and
26% reduction in completion times for full-page Web
downloads (§7.3), while reducing out-of-order delay
by up to 71%.
The rest of this paper is organized as follows: Section 2
provides the context for our work. We describe the problem
of path under-utilization with the default scheduler in Sec-
tion 3. Section 4 presents the design of the ECF scheduler.
Experimental results using the testbed are given in Section 6,
while results measured over the Internet are provided in
Section 7. Related work is reviewed in Section 8, and we
conclude in Section 9.
2 BACKGROUND
2.1 Multi-path TCP
MPTCP splits a single data stream across multiple paths
known as subflows, which are defined logically by all end-
to-end interface pairs. For example, if each host has two
interfaces, an MPTCP connection consists of four subflows.
These subflows are exposed to the application layer as one
standard TCP connection.
Since ordering is preserved within a subflow, but not
across them, MPTCP must take care to combine subflows into
the original ordered stream. MPTCP appends additional in-
formation called the data sequence number as a TCP header
option to each packet. Based on the data sequence numbers,
MPTCP merges multiple subflows properly and delivers in-
order streams at the connection level.
When an MPTCP sender has data to send, it must choose
a path over which to send that data. This is the task of the
scheduler. The default MPTCP path scheduler selects the
subflow with the smallest RTT for which there is available
congestion window (CWND) space for packet transmission.
In addition, to mitigate performance degradation with path
heterogeneity, MPTCP includes opportunistic retransmission
and penalization mechanisms, which can reinject unacknowl-
edged packets from a slow subflow over a fast subflow and
decrease the CWND of the slow path.
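For concreteness, the minRTT selection just described can be sketched in a few lines of C. This is an illustrative sketch, not the kernel's actual scheduler code; the subflow structure below is a simplified stand-in for the real per-subflow socket state.

#include <stddef.h>

/* Simplified model of the per-subflow state the scheduler consults. */
struct subflow {
	unsigned int srtt_us;    /* smoothed RTT estimate (microseconds) */
	unsigned int cwnd;       /* congestion window, in packets */
	unsigned int in_flight;  /* packets sent but not yet acknowledged */
	int established;         /* subflow usable for data? */
};

/* minRTT policy: among established subflows with free CWND space,
 * return the one with the smallest smoothed RTT (NULL if none). */
struct subflow *minrtt_select(struct subflow *sub, int n)
{
	struct subflow *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (!sub[i].established)
			continue;
		if (sub[i].in_flight >= sub[i].cwnd)  /* no CWND space left */
			continue;
		if (best == NULL || sub[i].srtt_us < best->srtt_us)
			best = &sub[i];
	}
	return best;
}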
Figure 1: Example Download Behavior in Netflix
Resolution    Bit Rate (Mbps)
144p          0.26
240p          0.64
360p          1.00
480p          1.60
720p          4.14
1080p         8.47

Table 1: Video Bit Rates vs. Resolution
2.2 Dynamic Adaptive Streaming over
HTTP
Dynamic Adaptive Streaming over HTTP (DASH) [22] is
the mechanism by which most video is delivered over the
Internet. To stream videos with a bit rate appropriate for
the available bandwidth, a DASH server provides multiple
representations of a video content encoded at different bit
rates. Each representation is fragmented into small video
chunks that contain several seconds of video. Based on mea-
sured available bandwidth, a DASH client selects a chunk
representation, i.e., bit rate, and requests it from a DASH
server; this is called adaptive bit rate (ABR) selection.
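As a simplified illustration of this rate-selection step, the sketch below picks the highest bit rate from Table 1 that does not exceed the measured bandwidth; production ABR algorithms, including the one used later in Section 3 [9], also take playback-buffer occupancy into account, so this is only a toy throughput-based rule.

/* Toy throughput-based ABR rule: choose the highest representation whose
 * bit rate fits within the measured available bandwidth. */
static const double bitrates_mbps[] = { 0.26, 0.64, 1.00, 1.60, 4.14, 8.47 };
#define NUM_REPS (int)(sizeof(bitrates_mbps) / sizeof(bitrates_mbps[0]))

int select_representation(double measured_bw_mbps)
{
	int i, chosen = 0;  /* fall back to the lowest bit rate */

	for (i = 0; i < NUM_REPS; i++) {
		if (bitrates_mbps[i] <= measured_bw_mbps)
			chosen = i;
	}
	return chosen;  /* index into the manifest's representations */
}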
A DASH client player starts a streaming session with an
initial buffering phase during which the player fills its play-
back buffer to some prescribed maximum level. During this
phase, once the buffer reaches a second, lower threshold sufficient for playback,
the player starts playing the video, and continues to retrieve
video chunks until the initial buffering completes. After com-
pleting the initial buffering phase, the player pauses video
download until the buffer level falls below the prescribed
maximum level. If the playback buffer level falls below a pre-
scribed minimum required to play out the video, the player
stops playback and fills its buffer until it has a sufficient
amount of video to begin playback again, which is called the
rebuffering phase.
This can lead to an ON-OFF traffic pattern where the
player downloads chunks for a period of time and then waits
until a specific number of chunks are consumed [19]. Figure 1
shows an example of client player download behavior when
a mobile device fetches Netflix streaming video. This trace
was collected using an Android mobile handset (Samsung
Galaxy S3) while watching Netflix through WiFi on May
2014. During the OFF periods, the connection can go idle,
causing CWND resets, as we will discuss in Section 3.
Figure 2: Ratio of Measured vs. Ideal Bit Rate Using
MPTCP Default Path Scheduler (darker is better)
Figure 3: Send Buffer Occupancy (0.3 Mbps WiFi and
8.6 Mbps LTE; including in-flight packets)
3 MOTIVATION
3.1 The Effect of Heterogeneous Paths
We first examine the effect of heterogeneous paths on appli-
cation performance using adaptive video streaming, since
it is currently one of the dominant applications in use over
the Internet [20]. We measure the average video bit rate
obtained by an Android DASH streaming client while lim-
iting the bandwidth of the WiFi and LTE subflows on the
server-side using the Linux traffic control utility tc [13] (full
details of our experimental setup are given in Section 6.1).
The streaming client uses a state-of-the-art adaptive bit rate
(ABR) selection algorithm [9]. The choice of ABR does not
significantly affect the results in this experiment as we use
fixed bandwidths for each interface.
Table 1 presents the bit rates corresponding to each reso-
lution. We choose bandwidth amounts slightly larger than
those listed in Table 1, i.e., {0.3, 0.7, 1.1, 1.7, 4.2, 8.6} Mbps, to
ensure there is sufficient bandwidth for that video encoding.
Figure 2 presents the ratio of the average bit rate achieved
versus the ideal average bit rate available, based on the band-
width combinations, when using the default MPTCP path
scheduler. The figure is a grey-scale heat map where the
darker the area is, the closer to the ideal bit rate the stream-
ing client experiences. The closer the ratio is to one, the
better the scheduler does in achieving the potential avail-
able bandwidth. The values are averaged over five runs. In
a streaming workload, we define the ideal average bit rate
as the minimum of the aggregate available bandwidth and the
bit rate required for the highest resolution (1080p, 8.47 Mbps).
For example, in the 8.6 Mbps WiFi and 8.6 Mbps LTE
pair (the upper right corner in Figure 2), the ideal average
bit rate is 8.47 Mbps, since the ideal aggregate bandwidth
(8.6 + 8.6 = 17.2 Mbps) is larger than the required bandwidth
for the highest resolution of 1080p (8.47 Mbps). Since the full
bit rate is achieved, the value is one and the square is black.
Figure 2 shows that, when paths are significantly hetero-
geneous, the streaming client fails to obtain the ideal bit
rate. For example, when WiFi and LTE provide 0.3 Mbps and
8.6 Mbps, respectively (the upper left box in Figure 2), the
streaming client retrieves 480p video chunks, which requires
only 2 Mbps, even though the ideal aggregate bandwidth
is larger than 8.47 Mbps. Thus, the value is only 25% of the
ideal bandwidth and the square is light grey. This problem
becomes even more severe when the primary path (WiFi)
becomes slower (compare the 0.3 Mbps & [0.3 – 8.6] Mbps
and 8.6 Mbps & [0.3 – 8.6Mbps] pairs), as shown by the grey
areas in the upper left and lower right corners.
Note that we observe similar performance degradation re-
gardless of the congestion controller used (e.g., Olia [11]). In
addition, the opportunistic retransmission and penalization
mechanisms are enabled by default. This result shows that
even with these mechanisms, the MPTCP default path sched-
uler does not sufficiently utilize the faster subflow when
paths are heterogeneous.
3.2 Why Does Performance Degrade?
In this section, we identify the cause of the performance
degradation when paths are heterogeneous. We investigate
the TCP send buffer behavior of the faster subflow in the
traces of the streaming experiments. Figure 3 shows the
send buffer occupancy (measured in the kernel) of the WiFi
and LTE subflows when bandwidths are 0.3 and 8.6 Mbps,
respectively. As can be seen, the streaming sender applica-
tion periodically pauses to queue data into the LTE subflow,
which has significantly higher bandwidth and lower RTT
than the 0.3 Mbps WiFi subflow, and the LTE send buffer
quickly empties due to acknowledgements. The streaming
sender also pauses to use the WiFi subflow, i.e., the sender
has no packet to send, but the sender is still transferring data
over the slow WiFi subflow while the fast LTE subflow is idle.
This shows that the application does not have any packet to
send at that moment; the 8.6 Mbps LTE subflow completes
its assigned packet transmissions much earlier than the 0.3
Mbps WiFi subflow and stays idle until the next download
request is received.
Figure 4: Case When Fast Subflow Becomes Idle
Figure 5: Time Difference of Last Packets
Figure 4 presents a timing diagram to show how a fast sub-
flow becomes idle, waiting until a slow subflow completes its
assigned packet transmissions (here, subflow 1 is faster than
subflow 2). To validate whether such an idle period really
happens, we investigate the CDF of the time difference be-
tween the last packets over WiFi and LTE for four regulated
bandwidth pairs. As shown in Figure 5, as paths become more
heterogeneous, the time differences increase. In particular,
the pause period (around 1 sec) in Figure 3 appears as the
time difference of last packets. Note that this problem is due
to the lack of packets to send, and not because of head of line
blocking or receive window limitation problems discussed
in [18].
Simple scheduling policies based solely on RTTs, e.g., allo-
cating traffic to each subflow inversely proportional to RTT
[12], cannot prevent this problem. For example, consider two
subflows where the RTTs are 10 ms and 100 ms, respectively,
and the CWNDs of both subflows are 10 packets. Suppose
the sender has 11 packets remaining to transmit. If a sched-
uler splits these 11 packets based on RTT, the fast subflow
will complete 10 packet transmissions in one RTT (10 ms)
and the slow subflow one packet in 100 ms. This results in
a completion time of 100 ms, where the faster subflow is
idle for 90 ms. In contrast, waiting for the 10 ms subflow to
become available would result in completion time of just 20
ms. This shows that we must not only consider RTT, but also
bandwidth and outstanding data on the subflow.
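The arithmetic in this example can be checked with a few lines of C; the sketch below simply recomputes the two completion times quoted above, using the numbers from the example rather than any measurement.

#include <stdio.h>

/* Reproduces the arithmetic of the example above: two subflows with
 * CWND = 10 packets, RTTs of 10 ms and 100 ms, and 11 packets to send. */
int main(void)
{
	double rtt_fast = 10.0, rtt_slow = 100.0;   /* ms */
	int cwnd = 10, backlog = 11;                /* packets */

	/* Split by RTT: 10 packets finish on the fast path in 10 ms, but the
	 * packet sent on the slow path takes 100 ms, so completion is 100 ms. */
	double t_split = rtt_slow;

	/* Keep everything on the fast path: 10 packets in the first RTT and
	 * the 11th once the window frees up, i.e., ceil(11/10) = 2 RTTs. */
	int rounds = (backlog + cwnd - 1) / cwnd;
	double t_fast_only = rounds * rtt_fast;

	printf("split by RTT:   %.0f ms\n", t_split);      /* 100 ms */
	printf("fast path only: %.0f ms\n", t_fast_only);  /*  20 ms */
	return 0;
}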
The performance degradation caused by these idle periods
becomes more severe when an MPTCP connection is used for
multiple object downloads. This is because the congestion
Figure 6: Fraction of Traffic Allocated to Fast Subflow
using Default Scheduler in Streaming
controller resets the CWND to the initial window value and
restarts from the slow-start phase if a connection is idle for
longer than the retransmission timeout [7]. Since MPTCP
congestion controllers such as coupled [23] and Olia [11]
are designed to adapt a subflow CWND as a function of all
the CWNDs across all subflows, resetting the CWND of a
fast subflow because of an idle period can result in the fast
subflow not being fully utilized for consecutive downloads.
Figure 6 presents the average fraction of traffic allocated
to the fast subflow during the streaming experiments and the
ideal fraction given the bandwidth pairs and corresponding
measured average RTTs. As can be observed, the default
scheduler places a smaller fraction of the traffic onto the
fast subflow than the ideal model suggests. Together with
the idle period of the fast subflow, this causes the aggregate
throughput to degrade, resulting in a lower streaming quality
selection than is possible given the available bandwidth.
4 APPROACH
To solve the performance degradation problem with path
heterogeneity, we propose a new MPTCP path scheduler,
called ECF (Earliest Completion First). ECF utilizes RTT esti-
mates, path bandwidths (in the form of congestion window
sizes), and the size of the send buffer at the connection-level.
An MPTCP sender stores packets both in its connection-
level send buffer and in the subflow level send buffer (if the
packet is assigned to that subflow). This means that if the
number of packets in the connection level send buffer is
larger than the aggregate number of packets in the subflow
level send buffers, there are packets in the send buffer that
need to be scheduled to the subflows.
Assume that there are k packets in the connection level
send buffer, which have not been assigned (scheduled) to any
subflow. If the fastest subflow in terms of RTT has available
CWND, the packet can simply be scheduled to that subflow. If
the fastest subflow does not have available space, the packet
needs to be scheduled to the second fastest subflow.
We denote the fastest and the second fastest subflows as x_f
and x_s, respectively. Let RTT_f, RTT_s and CWND_f, CWND_s
be the RTTs and CWNDs of x_f and x_s, respectively. If the
sender waits until x_f becomes available and then transfers
k packets through x_f, it will take approximately
RTT_f + (k/CWND_f) × RTT_f, i.e., the waiting and transmission
time of the k packets. Otherwise, if the sender sends some packets
over x_s, the transmission will finish after RTT_s, with or without
completing the k packet transfers. Thus, as shown in Figure 7, in
the case of

    RTT_f + (k/CWND_f) × RTT_f < RTT_s,

using x_f after it becomes available can complete the transmission
earlier than using x_s at that moment. If

    RTT_f + (k/CWND_f) × RTT_f ≥ RTT_s,

there is a sufficient number of packets to send, so that using
x_s at that moment can decrease the transmission time by
utilizing more bandwidth than by using x_f alone.
Based on this idea, we devise the ECF (Earliest Completion
First) scheduler. Algorithm 1 presents the pseudo code for
ECF. Note that the inequality uses RTT estimates and CWND
values, which can vary over time. To compensate for this
variability, we add a margin δ = max(σ_f, σ_s), where σ_f and
σ_s are the standard deviations of RTT_f and RTT_s, respectively,
in the inequality for the scheduling decision:

    (1 + k/CWND_f) × RTT_f < RTT_s + δ
This inequality takes into account the case in Figure 7, in
which waiting for the fastest subflow completes transfer
earlier than using the second fastest subflow. To more strictly
assure this case, ECF checks an additional inequality, which
validates that using the second fastest subflow with its CWND
(which takes (k/CWND_s) × RTT_s to finish the transfer) does not
complete earlier than waiting for the fastest subflow (which takes
at least 2RTT_f for the transfer):

    (k/CWND_s) × RTT_s ≥ 2RTT_f + δ
Here, we also use δ to compensate for RTT and CWND
variabilities.
If these inequalities are satisfied, ECF does not use the
second fastest subflow x_s and instead waits for the fastest
subflow x_f to become available. ECF uses a different inequal-
ity for switching back to using x_s after deciding to wait for
x_f:

    (1 + k/CWND_f) × RTT_f < (1 + β)(RTT_s + δ).

This adds some hysteresis to the system and prevents it
from switching states (waiting for x_f or using x_s now) too
frequently.
ECF can be adapted to more than two subflows: although
it compares only two subflows, x_f and x_s, at every sched-
uling decision, the outcome is justified by the following
proposition.
Figure 7: The case for waiting for the fast subflow
Algorithm 1 ECF Scheduler
// This function returns a subflow for packet transmission
Find fastest subflow x_f with smallest RTT
if x_f is available for packet transfer then
    return x_f
else
    Select x_s using MPTCP default scheduler
    n = 1 + k / CWND_f
    δ = max(σ_f, σ_s)
    if n × RTT_f < (1 + waiting × β)(RTT_s + δ) then
        if (k / CWND_s) × RTT_s ≥ 2RTT_f + δ then
            // Wait for x_f
            waiting = 1
            return no available subflow
        else
            return x_s
        end if
    else
        waiting = 0
        return x_s
    end if
end if
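The decision logic of Algorithm 1 can also be expressed in C. The sketch below is a simplified user-space rendering, not the kernel patch; it assumes k, the CWNDs, the RTT estimates, and the stored waiting flag have already been collected from the sockets as described in Section 5, and uses β = 0.25 as in our experiments.

/* Simplified C rendering of Algorithm 1's scheduling decision. */
struct ecf_path {
	double rtt;      /* smoothed RTT (seconds) */
	double rttvar;   /* RTT deviation */
	double cwnd;     /* congestion window (packets) */
	int has_space;   /* CWND space available? */
};

enum ecf_choice { USE_FAST, USE_SECOND, WAIT_FOR_FAST };

enum ecf_choice ecf_decide(const struct ecf_path *xf, const struct ecf_path *xs,
			   double k /* unscheduled packets */, int *waiting)
{
	const double beta = 0.25;  /* hysteresis factor, as in Section 6 */
	double delta = xf->rttvar > xs->rttvar ? xf->rttvar : xs->rttvar;
	double n = 1.0 + k / xf->cwnd;

	if (xf->has_space)
		return USE_FAST;

	if (n * xf->rtt < (1.0 + *waiting * beta) * (xs->rtt + delta)) {
		if ((k / xs->cwnd) * xs->rtt >= 2.0 * xf->rtt + delta) {
			*waiting = 1;      /* hold back and wait for x_f */
			return WAIT_FOR_FAST;
		}
		return USE_SECOND;
	}
	*waiting = 0;
	return USE_SECOND;
}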
Proposition 1. If ECF decides to wait for the fastest subflow
x_f rather than using x_s, using other available subflows cannot
contribute to earlier transfer completion.

Proof. x_f is expected to be available in RTT_f, which is
the smallest RTT, and completes the transfer in (k/CWND_f) × RTT_f.
The other subflows x_i not selected as x_s have a larger RTT (RTT_i)
than RTT_s. Note that ECF waits for x_f if it results in an earlier
transfer completion than RTT_s. Any packet transfer over x_i
takes at least RTT_i, which is larger than RTT_s and RTT_f.
Therefore, any other subflow x_i not considered as x_s must
satisfy the inequality RTT_f + (k/CWND_f) × RTT_f < RTT_s < RTT_i
that x_s does. This means that using x_i at that moment cannot
shorten the transfer completion time compared to waiting
for x_f, which shows that ECF yields optimal decisions. □
5 IMPLEMENTATION
We implement the ECF scheduler in the Linux Kernel using
MPTCP code revision 0.89 from [15]. To obtain the required
information for ECF, we utilize the smoothed mean and devi-
ation of the RTT estimates and the send buffer information
in the standard TCP kernel implementation.
MPTCP uses two types of sockets to manage an MPTCP
connection: connection-level and subflow-level. Thus, by
comparing the send buffer information between the con-
nection and subflow sockets, we can determine the number
of packets in the connection-level send buffer not assigned
to subflows. We exploit the sk_wmem_queued field in the
struct sock, which is the number of bytes queued in the
socket send buffer that are either not yet sent or not yet
acknowledged. By subtracting the sum of sk_wmem_queued
of the subflow sockets from that of the connection socket,
we can calculate the number of bytes not yet allocated to
the subflows. However, MPTCP preserves packets in the
connection-level send buffer unless those packets are ac-
knowledged at the connection level. That is, the number of
in-flight packets in the connection socket can be larger than
the sum of in-flight packets in the subflow sockets. Since
with the simple subtraction those packets are also counted
as packets not assigned to the subflows, we should subtract
the number of bytes in the connection socket that are al-
ready acknowledged in the subflow sockets. Therefore, we
utilize the packets_out field in struct tcp_sock, which
is the number of in-flight packets of the TCP socket. Since
packets_out is denominated in packets, not bytes, we as-
sume that all packets are the same size, that of the maximum
segment size (MSS) of the socket.
Let meta_sk and sk_i be the connection and subflow i
sockets, respectively. Denote the TCP sockets correspond-
ing to meta_sk and sk_i by meta_tp and tp_i, i.e., meta_tp
= tcp_sk(meta_sk) and tp_i = tcp_sk(sk_i). Then we esti-
mate k (in bytes) as follows:

    k = meta_sk->sk_wmem_queued − Σ_i sk_i->sk_wmem_queued
        − (meta_tp->packets_out − Σ_i tp_i->packets_out) × MSS
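A kernel-style sketch of this computation follows. It is illustrative rather than our actual patch: mptcp_for_each_sk() is assumed to be the subflow iterator of the MPTCP v0.89 tree, and mss_cache stands in for the fixed per-packet size assumed above.

/* Sketch: estimate bytes queued at the connection level that have not
 * yet been assigned to any subflow, following the formula above. */
static u32 ecf_unscheduled_bytes(struct sock *meta_sk)
{
	struct tcp_sock *meta_tp = tcp_sk(meta_sk);
	struct sock *sk;
	s64 queued = meta_sk->sk_wmem_queued;    /* bytes held at the connection level */
	s64 inflight = meta_tp->packets_out;     /* connection-level in-flight packets */

	mptcp_for_each_sk(meta_tp->mpcb, sk) {   /* assumed v0.89 subflow iterator */
		queued -= sk->sk_wmem_queued;        /* bytes already assigned to a subflow */
		inflight -= tcp_sk(sk)->packets_out; /* packets still unacked on a subflow */
	}

	/* Data acked at the subflow level but not yet at the connection level
	 * remains in the meta send buffer; do not count it as unscheduled. */
	queued -= inflight * meta_tp->mss_cache;

	return queued > 0 ? (u32)queued : 0;
}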
To collect RTT estimates, we use srtt and rttvar in
struct tcp_sock,[1] which are the smoothed round trip time
and the maximal deviation over the last RTT periods, respectively,
i.e., RTT_i = tp_i->srtt and σ_i = tp_i->rttvar.

[1] Note that our implementation is based on MPTCP 0.89, forked from kernel
3.14.33, in which RTT estimates are kept in jiffies. More recent kernels, such as
3.18.34, maintain RTT estimates in microseconds, e.g., srtt_us.
To estimate CWND, we utilize snd_cwnd in struct tcp_sock,
which is the CWND in terms of packets. We assume that
a scheduling decision usually happens after a congestion
controller enters congestion avoidance phase and, thus, use
the value of snd_cwnd at that time to evaluate the inequal-
ity in the algorithm. However, a wrong scheduling decision
or application traffic pattern can cause a subflow or con-
nection to become idle, which can trigger a CWND reset
when the idle period is longer than the retransmission time-
out in RFC 2861 [7]. When the CWND is reset on the subflow,
ECF would use an unnecessarily small CWND and go
through slow-start again. To avoid this behavior, ECF records
the largest value of snd_cwnd in rec_snd_cwnd right before
a CWND idle-reset event. ECF resets rec_snd_cwnd to zero if
the current snd_cwnd becomes larger than rec_snd_cwnd.
ECF uses the maximum of the current snd_cwnd and rec_snd_cwnd
as CWND:

    CWND = max(tp->snd_cwnd, tp->rec_snd_cwnd).
Note that this CWND value is used only for ECF decisions;
ECF does not change the current CWNDs that the conges-
tion controller uses (tp->snd_cwnd). Thus, our actions are
consistent with RFC 2861.
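The CWND bookkeeping described above can be sketched as follows. rec_snd_cwnd is a field our scheduler adds to struct tcp_sock; the exact hook point at which the pre-reset window is recorded is an assumption of this sketch rather than a description of the actual patch.

/* Called just before the stack performs an idle restart of the congestion
 * window (cf. RFC 2861), so that the pre-reset window is remembered. */
static void ecf_record_cwnd_before_idle_reset(struct tcp_sock *tp)
{
	if (tp->snd_cwnd > tp->rec_snd_cwnd)
		tp->rec_snd_cwnd = tp->snd_cwnd;
}

/* CWND value used in ECF's inequalities; the congestion controller's own
 * tp->snd_cwnd is never modified here. */
static u32 ecf_cwnd(struct tcp_sock *tp)
{
	/* Forget the recorded value once the live window has grown past it. */
	if (tp->snd_cwnd > tp->rec_snd_cwnd)
		tp->rec_snd_cwnd = 0;

	return tp->snd_cwnd > tp->rec_snd_cwnd ? tp->snd_cwnd : tp->rec_snd_cwnd;
}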
6 EVALUATION IN A CONTROLLED LAB
In this section, we evaluate the ECF scheduler in a controlled
lab setting. This lets us evaluate performance across a wide
range of workloads and network configurations.
6.1 Experimental Setup
In our lab setting, we examine performance using three work-
loads: adaptive streaming video over HTTP, simple download
activity using wget, and Web-browsing.
We use an Android mobile device (Google Nexus 5) as
the client. Videos are played on the device using ExoPlayer
[5]. The mobile device communicates with the server over
the Internet using a WiFi access point (IEEE 802.11g) and
an LTE cellular interface from AT&T. Note that MPTCP
requires a default primary interface with which to initiate
and receive transfers. While the choice of interface to use
as the primary is a complex one [3], we use WiFi as the
primary interface since that is the default in Android. The
opportunistic retransmission and penalization mechanisms
are enabled throughout all experiments.
For the server, we use a desktop running Ubuntu Linux
12.04 with the MPTCP 0.89 implementation deployed [15].
We use Apache 2.2.22 as the HTTP server while enabling
HTTP persistent connections with the default Keep Alive
Timeout (5 sec).
Figure 8: Ratio of Measured Average Bit Rate vs. Ideal Average
Bit Rate (darker is better): (a) Default, (b) ECF, (c) DAPS, (d) BLEST
For DASH content, we select a video clip from [10] that is
1332 seconds long and encoded at 50 Mbps by an H.264/MPEG-
4 AVC codec. The original resolution of the video is 2160p
(3840 by 2160 pixels). We configure the streaming server
to provide six representations of the video with resolutions
varying from 144p to 1080p (just as YouTube does). We re-
encode the video file at each resolution and create DASH
representations with 5 second chunks. Recall Table 1 in Sec-
tion 3 presents the bit rates corresponding to each resolution.
The ECF hysteresis value β is set to 0.25 throughout our
experiments (other values for β were examined but found
to yield similar results, not shown due to space limitations).
We compare ECF to the following schedulers:[2]
• Default: The default scheduler allocates traffic to a
subflow with the smallest RTT and available CWND
space. If the subflow with the smallest RTT does not
have available CWND space, it chooses an available
subflow with the second smallest RTT.
• Delay-Aware Packet Scheduler (DAPS) [12]: DAPS
seeks in-order packet arrivals at the receiver by de-
ciding the path over which to send each packet based
on the forward delay and CWND of each subflow:
DAPS assigns traffic to each subflow inversely pro-
portional to RTT.
• Blocking Estimation-based Scheduler (BLEST) [4]:
BLEST aims to avoid out-of-order delivery caused by
sender-side blocking when there is insufficient space
in the MPTCP connection-level send window. When
this send window is mostly filled with packets over
a slow subflow, the window does not have enough
space, and the sender cannot queue packets to an
MPTCP connection. To avoid this situation, BLEST
waits for a fast subflow to become available, so that
the fast subflow can transmit more packets during
the slow subflow's RTT, freeing up space in the
connection-level send window.
BLEST and ECF are similar in that both can decline oppor-
tunities to send on the slow subflow when it has available
[2] For DAPS and BLEST, we use the implementation from
https://bitbucket.org/blest_mptcp/nicta_mptcp [4].
Bandwidth (Mbps)   WiFi RTT (ms)   LTE RTT (ms)
0.3                969             858
0.7                413             416
1.1                273             268
1.7                196             210
4.2                 87             131
8.6                 40             105

Table 2: Avg. RTT with Bandwidth Regulation
CWND space, but this decision is based on different design
goals. BLEST’s decision is based on the space in MPTCP
send window and minimizing out-of-order delivery, whereas
ECF’s is based on the amount of data queued in the send
buffer and with the goal to minimize completion time. We
will show in Section 6.2.4 that ECF better preserves the faster
flow’s CWND and thus performs better.
6.2 Video Streaming with Fixed Bandwidth
We begin by investigating whether ECF improves the per-
formance of streaming applications compared to the other
schedulers, while keeping bandwidth fixed for the duration
of the experiment.
6.2.1 Measured Bit Rate. We first compare the sched-
ulers based on achieved bit rate using our streaming work-
load. Figure 8 presents the ratio of the average bit rate of the
default, ECF, DAPS and BLEST schedulers, normalized by
the ideal average bit rate. Each experiment consists of five
runs, where a run consists of the playout of the 20 minute
video. The entries in Figure 8 are based on the average taken
over the five runs. Table 2 shows the average RTT of each
interface measured at sender-side based on the bandwidth
configurations. Note that with the same bandwidth regu-
lation, WiFi yields smaller RTTs than LTE, since the WiFi
network is located in our campus network and incurs lower
delays than the AT&T LTE cellular network.
Figure 8(b) shows that ECF successfully enables the stream-
ing client to obtain average bit rates closest to the ideal av-
erage bit rate, and does substantially better than the default
when paths are not symmetric.
Comparing Figure 8(c) with Figure 8(a), DAPS does not
improve streaming performance; it yields an even worse stream-
ing bit rate than the default scheduler under some bandwidth
Figure 9: Ratio of Measured vs. Ideal Bit Rate of
Default and ECF Scheduler when using 4 subflows
(darker is better)
Figure 10: Fraction of Traffic Allocated to Fast Subflow
in Streaming Workload – Fixed Bandwidth
configurations, e.g., 4.2 Mbps for both WiFi and LTE. Com-
paring Figure 8(d) with Figure 8(a), BLEST slightly improves
streaming performance with 1 Mbps WiFi and [1..10] Mbps
LTE pairs, but does not improve the average bit rate for other
configurations.
6.2.2 More Subflows. To validate whether ECF works
for more than two subflows, we compare the performance
of the default and ECF scheduler for bandwidth pairs of 0.3
Mbps & [0.3-8.6] Mbps using four