mirror of https://github.com/lkl/linux.git
net: add atomic_long_t to net_device_stats fields
Long standing KCSAN issues are caused by data-races around some dev->stats changes.

Most performance critical paths already use per-cpu variables, or per-queue ones. It is reasonable (and more correct) to use atomic operations for the remaining slow paths.

This patch adds a union for each field of net_device_stats, so that we can convert paths that are not yet protected by a spinlock or a mutex.

netdev_stats_to_stats64() no longer has an #if BITS_PER_LONG==64 special case. Note that the memcpy() we were using on 64-bit arches had no provision to avoid load-tearing, while atomic_long_read() provides the needed protection at no cost.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
committed by David S. Miller
parent 68d268d089
commit 6c1c509778
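For context before the diff, here is a minimal sketch of the pattern this commit introduces. It is reconstructed from the commit description rather than quoted from the tree: the DEV_STATS_INC()/DEV_STATS_ADD() names appear in the diff below, while the NET_DEV_STAT helper and the exact netdev_stats_to_stats64() body are paraphrases, abbreviated to two fields.

/*
 * Each counter is a union, so the same storage can be read and
 * written as a plain unsigned long on paths that are already
 * serialized (per-cpu, per-queue, or under a lock), or via
 * atomic_long_t on unprotected slow paths.
 */
#define NET_DEV_STAT(FIELD) \
	union { \
		unsigned long FIELD; \
		atomic_long_t __##FIELD; \
	}

struct net_device_stats {
	NET_DEV_STAT(rx_packets);
	NET_DEV_STAT(rx_bytes);
	/* ... every other field follows the same pattern ... */
};

/* Helpers for update paths not protected by a spinlock or a mutex */
#define DEV_STATS_INC(DEV, FIELD) \
	atomic_long_inc(&(DEV)->stats.__##FIELD)
#define DEV_STATS_ADD(DEV, FIELD, VAL) \
	atomic_long_add((VAL), &(DEV)->stats.__##FIELD)

/*
 * With every field readable through atomic_long_read(), one loop
 * serves both 32-bit and 64-bit kernels, and the old 64-bit
 * memcpy() path (which could tear loads) is gone.
 */
void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
			     const struct net_device_stats *netdev_stats)
{
	size_t i, n = sizeof(*netdev_stats) / sizeof(atomic_long_t);
	const atomic_long_t *src = (atomic_long_t *)netdev_stats;
	u64 *dst = (u64 *)stats64;

	for (i = 0; i < n; i++)
		dst[i] = (unsigned long)atomic_long_read(&src[i]);
	/* fields present only in rtnl_link_stats64 are zeroed */
	memset((char *)stats64 + n * sizeof(u64), 0,
	       sizeof(*stats64) - n * sizeof(u64));
}

The hunk below is from include/net/dst.h: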
@@ -356,9 +356,8 @@ static inline void __skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
 
 static inline void skb_tunnel_rx(struct sk_buff *skb, struct net_device *dev,
 				 struct net *net)
 {
-	/* TODO : stats should be SMP safe */
-	dev->stats.rx_packets++;
-	dev->stats.rx_bytes += skb->len;
+	DEV_STATS_INC(dev, rx_packets);
+	DEV_STATS_ADD(dev, rx_bytes, skb->len);
 	__skb_tunnel_rx(skb, dev, net);
 }
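With DEV_STATS_INC()/DEV_STATS_ADD(), the old "TODO : stats should be SMP safe" comment can simply be dropped: concurrent receives on different CPUs now update rx_packets and rx_bytes with atomic read-modify-write operations instead of the plain, racy increments that KCSAN was flagging.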