lightningnetwork/lnd, Pull Request #9609: Fix inaccurate `listunspent` result

/itest/flakes.go

package itest

import (
        "time"

        "github.com/lightningnetwork/lnd/lntest"
)

// flakePreimageSettlement documents a flake found when testing the preimage
// extraction logic in a force close. The scenario is,
//   - Alice and Bob have a channel.
//   - Alice sends an HTLC to Bob, and Bob won't settle it.
//   - Alice goes offline.
//   - Bob force closes the channel and claims the HTLC using the preimage via
//     a sweeping tx.
//
// TODO(yy): Expose blockbeat to the link layer so the preimage extraction
// happens in the same block where it's spent.
func flakePreimageSettlement(ht *lntest.HarnessTest) {
        // Mine a block to trigger the sweep. This is needed because the
        // preimage extraction logic in the link is not managed by the
        // blockbeat, which means the preimage may be sent to the contest
        // resolver after it's launched, in which case Bob would miss the
        // block to launch the resolver.
        ht.MineEmptyBlocks(1)

        // Sleep for 2 seconds to make sure the mempool has the correct tx.
        // The block mined above can cause an RBF if the preimage extraction
        // has already finished before the block is mined - in that case Bob
        // would have created the sweeping tx, and mining another block would
        // cause the sweeper to RBF it.
        time.Sleep(2 * time.Second)
}
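
// A minimal usage sketch, not actual itest code - the test name and the
// surrounding steps are hypothetical. A force-close test would call the
// helper after Bob force closes and before asserting his preimage sweep:
//
//      func testBobClaimsHtlcViaPreimage(ht *lntest.HarnessTest) {
//              // ... Alice goes offline, Bob force closes the channel ...
//
//              // Work around the preimage-extraction race before checking
//              // the mempool for Bob's sweeping tx.
//              flakePreimageSettlement(ht)
//
//              // ... assert Bob's sweeping tx is in the mempool ...
//      }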

// flakeTxNotifierNeutrino documents a flake found when running force close
// tests using the neutrino backend, which is a race between two notifications
// - one for the spending notification, the other for the block which contains
// the spending tx.
//
// TODO(yy): remove it once the issue is resolved.
func flakeTxNotifierNeutrino(ht *lntest.HarnessTest) {
        // Mine an empty block for the neutrino backend. We need this step to
        // trigger Bob's chain watcher to detect the force close tx. Deep
        // down, this happens because the notification system for neutrino is
        // very different from the others. Specifically, when the block
        // containing the force close tx is notified, these two calls,
        // - RegisterBlockEpochNtfn, will notify the block first.
        // - RegisterSpendNtfn, will wait for the neutrino notifier to sync to
        //   the block, then perform a GetUtxo - by the time the spend details
        //   are sent, the blockbeat is already considered processed in Bob's
        //   chain watcher.
        //
        // TODO(yy): refactor txNotifier to fix the above issue.
        if ht.IsNeutrinoBackend() {
                ht.MineEmptyBlocks(1)
        }
}
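
// A minimal usage sketch, not actual itest code - the surrounding steps are
// hypothetical. Since the helper is a no-op on non-neutrino backends, a
// force-close test can call it unconditionally right after the block
// containing the force close tx is mined:
//
//      // ... mine the block containing Bob's force close tx ...
//
//      // Nudge Bob's chain watcher when running on the neutrino backend.
//      flakeTxNotifierNeutrino(ht)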

// flakeInconsistentHTLCView documents a flake where the `ListChannels` RPC
// can give inaccurate HTLC states, which is found when we call
// `AssertHTLCNotActive` after a commitment dance is finished. Suppose Carol
// is settling an invoice with Bob; from Bob's PoV, a typical healthy
// settlement flow goes like this:
//
//        [DBG] PEER brontide.go:2412: Peer([Carol]): Received UpdateFulfillHTLC
//        [DBG] HSWC switch.go:1315: Closed completed SETTLE circuit for ...
//        [INF] HSWC switch.go:3044: Forwarded HTLC...
//        [DBG] PEER brontide.go:2412: Peer([Carol]): Received CommitSig
//        [DBG] PEER brontide.go:2412: Peer([Carol]): Sending RevokeAndAck
//        [DBG] PEER brontide.go:2412: Peer([Carol]): Sending CommitSig
//        [DBG] PEER brontide.go:2412: Peer([Carol]): Received RevokeAndAck
//        [DBG] HSWC link.go:3617: ChannelLink([ChanPoint: Bob=>Carol]): settle-fail-filter: count=1, filter=[0]
//        [DBG] HSWC switch.go:3001: Circuit is closing for packet...
//
// Bob receives the preimage, closes the circuit, and exchanges commit sig and
// revoke msgs with Carol. Once Bob receives the `CommitSig` from Carol, the
// HTLC should be removed from his `LocalCommitment` via
// `RevokeCurrentCommitment`.
//
// However, in the test where `AssertHTLCNotActive` is called, although the
// above process is finished, `ListChannels` still finds the HTLC. Also note
// that the RPC makes a direct call to the channeldb without any locks, which
// should be fine as the struct `OpenChannel.LocalCommitment` is passed by
// value, although we need to double check.
//
// TODO(yy): In order to fix it, we should make the RPC share the same view of
// our channel state machine. Instead of making DB queries, it should use
// `lnwallet.LightningChannel` to stay consistent.
//
//nolint:ll
func flakeInconsistentHTLCView() {
        // Perform a sleep so the commitment dance can finish before we call
        // ListChannels.
        time.Sleep(2 * time.Second)
}
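
// A minimal usage sketch, not actual itest code - the node and argument names
// are illustrative. A test asserting the HTLC state right after a settlement
// would call the helper first:
//
//      // ... Carol settles the invoice with Bob ...
//
//      // Give the commitment dance time to finish before reading the HTLC
//      // state via ListChannels.
//      flakeInconsistentHTLCView()
//      ht.AssertHTLCNotActive(bob, chanPoint, payHash)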

// flakePaymentStreamReturnEarly documents a flake found in tests which rely
// on a given payment being settled before testing other state changes. The
// issue is that the payment stream created from the RPC `SendPaymentV2` gives
// a premature settled event for a given payment, which shows up as,
//   - if we force close the channel immediately, we may get an error because
//     the commitment dance is not finished.
//   - if we subscribe to HTLC events immediately, we may get extra events,
//     which is also related to the above unfinished commitment dance.
//
// TODO(yy): Make sure we only mark the payment as settled once the commitment
// dance is finished. In addition, we should also fix the exit hop logic in
// the invoice settlement flow to make sure the invoice is only marked as
// settled after the commitment dance is finished.
func flakePaymentStreamReturnEarly() {
        // Sleep 2 seconds so the pending HTLCs will be removed from the
        // commitment.
        time.Sleep(2 * time.Second)
}
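
// A minimal usage sketch, not actual itest code - the surrounding steps are
// hypothetical. A test that force closes the channel or subscribes to HTLC
// events right after a payment settles would call the helper first:
//
//      // ... the SendPaymentV2 stream reported the payment as settled ...
//
//      // Wait out the commitment dance before force closing the channel or
//      // subscribing to HTLC events.
//      flakePaymentStreamReturnEarly()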

// flakeRaceInBitcoinClientNotifications documents a bug where the
// `ListUnspent` RPC gives inaccurate results. Specifically,
//   - an output is confirmed in block X, and is in the process of being
//     credited to our wallet.
//   - `ListUnspent` is called during the above process, returning an
//     inaccurate result, causing the sweeper to think there's no wallet utxo.
//   - the sweeping will fail at block X due to not having enough inputs.
//
// Under the hood, the RPC clients created for handling wallet txns and block
// notifications are independent. The block notification, which is registered
// via `RegisterBlockEpochNtfn`, is managed by `chainntnfs`, which is hooked
// to a bitcoind client created at startup. The wallet, on the other hand,
// uses another bitcoind client to receive online events. Although they share
// the same bitcoind RPC conn, these two clients are acting independently.
// With this setup, there's no coordination between the two systems -
// `lnwallet` and `chainntnfs` can disagree on the latest onchain state for a
// short period, causing an inconsistent state which leads to the failed
// sweeping attempt.
//
// TODO(yy): We need to adhere to the SSOT principle, and make the effort to
// ensure the whole system only uses one bitcoind client.
func flakeRaceInBitcoinClientNotifications(ht *lntest.HarnessTest) {
        // Mine an empty block so the sweeper retries once the wallet has
        // finished crediting the confirmed output.
        ht.MineEmptyBlocks(1)
}
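
// A minimal usage sketch, not actual itest code - the surrounding steps and
// the exact call site are assumptions. A sweeping test hitting this race
// would call the helper once the output funding the sweep has confirmed,
// before asserting the sweeping tx:
//
//      // ... the wallet output funding Bob's sweep has just confirmed ...
//
//      // Let the wallet catch up so ListUnspent reflects the confirmed utxo
//      // before the sweeper builds its tx.
//      flakeRaceInBitcoinClientNotifications(ht)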