net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
[ Upstream commit 66600fac7a984dea4ae095411f644770b2561ede ]

If the non-paged data of a SKB carries both the protocol header and the
protocol payload, and the DMA AXI address width of the platform is
configured to 40-bit/48-bit, or if the size of the non-paged data is
bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI address width
is configured to 32-bit, then this SKB requires at least two DMA
transmit descriptors to serve it.

For example, three descriptors are allocated to split one DMA buffer
mapped from one piece of non-paged data:
    dma_desc[N + 0],
    dma_desc[N + 1],
    dma_desc[N + 2].
Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold
extra information to be reused in stmmac_tx_clean():
    tx_q->tx_skbuff_dma[N + 0],
    tx_q->tx_skbuff_dma[N + 1],
    tx_q->tx_skbuff_dma[N + 2].
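
For intuition, here is a minimal standalone sketch (illustration only,
not driver code) of the splitting rule behind that allocation, assuming
the stmmac convention that a single TSO descriptor covers at most
TSO_MAX_BUFF_SIZE bytes:

    #include <stdio.h>

    #define TSO_MAX_BUFF_SIZE ((16 * 1024) - 1) /* 16K - 1, as in stmmac_main.c */

    /* Count the descriptors one DMA-mapped buffer consumes: the same
     * loop shape as stmmac_tso_allocator(), which peels off up to
     * TSO_MAX_BUFF_SIZE bytes per descriptor until the buffer is done.
     */
    static int tso_desc_count(int total_len)
    {
            int n = 0;
            int tmp_len;

            for (tmp_len = total_len; tmp_len > 0; tmp_len -= TSO_MAX_BUFF_SIZE)
                    n++;
            return n;
    }

    int main(void)
    {
            /* e.g. a 40000-byte buffer needs dma_desc[N + 0..2] */
            printf("%d descriptors\n", tso_desc_count(40000)); /* prints 3 */
            return 0;
    }
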
Now we focus on tx_q->tx_skbuff_dma[entry].buf, which holds the DMA
buffer address returned by the DMA mapping call. stmmac_tx_clean() will
try to unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf is
a valid buffer address.
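
For reference, the cleanup path's unmap condition looks approximately
like this (abridged from stmmac_tx_clean() in stmmac_main.c; the
surrounding completion logic is elided and details vary by kernel
version):

    if (likely(tx_q->tx_skbuff_dma[entry].buf &&
               tx_q->tx_skbuff_dma[entry].buf_type != STMMAC_TXBUF_T_XDP_TX)) {
            if (tx_q->tx_skbuff_dma[entry].map_as_page)
                    dma_unmap_page(priv->device,
                                   tx_q->tx_skbuff_dma[entry].buf,
                                   tx_q->tx_skbuff_dma[entry].len,
                                   DMA_TO_DEVICE);
            else
                    dma_unmap_single(priv->device,
                                     tx_q->tx_skbuff_dma[entry].buf,
                                     tx_q->tx_skbuff_dma[entry].len,
                                     DMA_TO_DEVICE);
            tx_q->tx_skbuff_dma[entry].buf = 0;
            tx_q->tx_skbuff_dma[entry].len = 0;
            tx_q->tx_skbuff_dma[entry].map_as_page = false;
    }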

The expected behavior, which saves the DMA buffer address of this
non-paged data to tx_q->tx_skbuff_dma[entry].buf, is:
    tx_q->tx_skbuff_dma[N + 0].buf = NULL;
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
    tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
    tx_q->tx_skbuff_dma[N + 1].buf = NULL;
    tx_q->tx_skbuff_dma[N + 2].buf = NULL;

On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the
DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is obviously a valid buffer
address, so the DMA buffer will be unmapped immediately.
In a rare case, the DMA engine may not have finished the pending
dma_desc[N + 1] and dma_desc[N + 2] yet. Now things go horribly wrong:
the DMA engine is going to access an unmapped/unreferenced memory
region, so corrupted data will be transmitted or an IOMMU fault will be
triggered :(

In contrast, the for-loop that maps SKB fragments behaves exactly as
expected, and that is how the driver should handle both non-paged data
and paged frags, as the abridged loop below shows.
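
This is abridged from the fragment loop in stmmac_tso_xmit() (recent
kernels): stmmac_tso_allocator() advances tx_q->cur_tx past every
descriptor it fills, so the subsequent assignment lands on the last
descriptor of the fragment, which is exactly the behavior this patch
brings to the non-paged data path:

    /* Prepare fragments */
    for (i = 0; i < nfrags; i++) {
            const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

            des = skb_frag_dma_map(priv->device, frag, 0,
                                   skb_frag_size(frag),
                                   DMA_TO_DEVICE);
            if (dma_mapping_error(priv->device, des))
                    goto dma_map_err;

            stmmac_tso_allocator(priv, des, skb_frag_size(frag),
                                 (i == nfrags - 1), queue);

            /* tx_q->cur_tx now indexes the last descriptor of this frag */
            tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
            tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_frag_size(frag);
            tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = true;
            tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
    }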

This patch corrects the DMA map/unmap sequence by fixing the array
index used for tx_q->tx_skbuff_dma[entry].buf when assigning the DMA
buffer address.

Tested and verified on DWXGMAC CORE 3.20a

Reported-by: Suraj Jaiswal <[email protected]>
Fixes: f748be5 ("stmmac: support new GMAC4")
Signed-off-by: Furong Xu <[email protected]>
Reviewed-by: Hariprasad Kelam <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)
@@ -4215,11 +4215,6 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (dma_mapping_error(priv->device, des))
 		goto dma_map_err;
 
-	tx_q->tx_skbuff_dma[first_entry].buf = des;
-	tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
-	tx_q->tx_skbuff_dma[first_entry].map_as_page = false;
-	tx_q->tx_skbuff_dma[first_entry].buf_type = STMMAC_TXBUF_T_SKB;
-
 	if (priv->dma_cap.addr64 <= 32) {
 		first->des0 = cpu_to_le32(des);
 
@@ -4238,6 +4233,23 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
 
+	/* In case two or more DMA transmit descriptors are allocated for this
+	 * non-paged SKB data, the DMA buffer address should be saved to
+	 * tx_q->tx_skbuff_dma[].buf corresponding to the last descriptor,
+	 * and leave the other tx_q->tx_skbuff_dma[].buf as NULL to guarantee
+	 * that stmmac_tx_clean() does not unmap the entire DMA buffer too early
+	 * since the tail areas of the DMA buffer can be accessed by DMA engine
+	 * sooner or later.
+	 * By saving the DMA buffer address to tx_q->tx_skbuff_dma[].buf
+	 * corresponding to the last descriptor, stmmac_tx_clean() will unmap
+	 * this DMA buffer right after the DMA engine completely finishes the
+	 * full buffer transmission.
+	 */
+	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf = des;
+	tx_q->tx_skbuff_dma[tx_q->cur_tx].len = skb_headlen(skb);
+	tx_q->tx_skbuff_dma[tx_q->cur_tx].map_as_page = false;
+	tx_q->tx_skbuff_dma[tx_q->cur_tx].buf_type = STMMAC_TXBUF_T_SKB;
+
 	/* Prepare fragments */
 	for (i = 0; i < nfrags; i++) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
