author		Eric Dumazet <edumazet@google.com>	2018-10-31 08:39:13 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-11-28 18:28:54 +0100
commit		f4e7ca091d480c10a4eaaa72b500c1eee7948c02 (patch)
tree		5365132ec1799e4e024465153e947a3fed539164 /net/core
parent		bcb7267a915d800954e2412f44224a0b458329a8 (diff)
net: do not abort bulk send on BQL status
[ Upstream commit fe60faa5063822f2d555f4f326c7dd72a60929bf ]

Before calling dev_hard_start_xmit(), upper layers tried to cook an
optimal skb list based on the BQL budget.

Problem is that GSO packets can end up consuming more than the BQL
budget.

Breaking the loop is not useful, since requeued packets are ahead of
any packets still in the qdisc.

It is also more expensive, since the next TX completion will push these
packets later, while the skbs are not in cpu caches.

It is also a behavior difference with TSO packets, which can break the
BQL limit by a large amount.

Note that drivers should use __netdev_tx_sent_queue() in order to have
optimal xmit_more support, and avoid useless atomic operations, as
shown in the following patch.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
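For context: drivers only get the full benefit of this change if their
ndo_start_xmit path uses __netdev_tx_sent_queue(). Below is a minimal
sketch of such a transmit path; the helper is real, but the driver-side
names (my_tx_ring, my_post_descriptor, my_ring_doorbell) are
hypothetical placeholders, not part of this patch. On kernels of this
era the "more packets coming" hint is skb->xmit_more; newer kernels use
netdev_xmit_more() instead.

/* Hypothetical driver xmit path. __netdev_tx_sent_queue() performs the
 * BQL byte accounting and returns true when the doorbell must be rung
 * now: either no further packets are coming, or the queue has been
 * stopped. When it returns false, the MMIO doorbell write and the
 * atomic BQL operations are deferred and batched across the skb list.
 */
static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;
	struct my_tx_ring *ring;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
	ring = my_tx_ring(dev, skb);		/* hypothetical ring lookup */

	my_post_descriptor(ring, skb);		/* hypothetical descriptor setup */

	if (__netdev_tx_sent_queue(txq, skb->len, skb->xmit_more))
		my_ring_doorbell(ring);		/* flush posted descriptors to HW */

	return NETDEV_TX_OK;
}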
Diffstat (limited to 'net/core')
-rw-r--r--	net/core/dev.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 547b4daae5ca..c6fb7e61cb40 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2997,7 +2997,7 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *first, struct net_device *de
 		}
 		skb = next;
-		if (netif_xmit_stopped(txq) && skb) {
+		if (netif_tx_queue_stopped(txq) && skb) {
 			rc = NETDEV_TX_BUSY;
 			break;
 		}
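The one-word change is the whole fix: netif_xmit_stopped() tests both
the driver's stop bit and BQL's stack stop bit, while
netif_tx_queue_stopped() tests only the driver's. Paraphrasing the
helpers from include/linux/netdevice.h of this era (a from-memory
sketch, not a verbatim copy):

static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue)
{
	/* Set by the driver when its TX ring is genuinely full. */
	return test_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}

static inline bool netif_xmit_stopped(const struct netdev_queue *dev_queue)
{
	/* QUEUE_STATE_ANY_XMIT also covers __QUEUE_STATE_STACK_XOFF,
	 * the bit BQL sets when its byte budget is exhausted.
	 */
	return dev_queue->state & QUEUE_STATE_ANY_XMIT;
}

After the patch, dev_hard_start_xmit() keeps draining the skb list when
only the BQL bit is set, and bails out with NETDEV_TX_BUSY only when
the driver itself has stopped the queue.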