bytespersigop prevents bare multisig in v0.12 #8079

issue rubensayshi opened this issue on May 20, 2016
  1. rubensayshi commented at 4:46 pm on May 20, 2016: contributor

    in v0.12, #7081 (by @luke-jr) was merged, adding -bytespersigop (with a default of 20).

    this is preventing simple bare multisig transactions from being accepted into the mempool when the transaction size is < 400 bytes.
    e.g. https://gist.github.com/rubensayshi/ac4f617207f7a50559e85d61c05800be

    from 1-of-7 onwards they are accepted, because the bytes:sigops ratio is high enough.
    having 2 or more inputs also works, because it again tips the bytes:sigops ratio over the threshold.

    the main reason is that GetLegacySigOpCount calls GetSigOpCount with fAccurate=false.

    It looks like changing GetLegacySigOpCount to use fAccurate=true would cause side effects in a few other places (ConnectBlock, CheckBlock, CreateNewBlock), and simply adding fAccurate as an argument to GetLegacySigOpCount would affect the value stored in CTxMemPoolEntry.sigOpCount, which in turn affects a bunch of other things.
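
    A minimal standalone sketch of the behaviour described above (illustrative names and constants, not the actual Bitcoin Core code, where the check lives in AcceptToMemoryPool): under legacy counting a single bare multisig output is charged 20 sigops, so at 20 bytes per sigop the transaction must be at least ~400 bytes to pass.

    ```python
    # Standalone illustration of legacy vs. "accurate" sigop counting and a
    # simplified form of the -bytespersigop check added in #7081.

    MAX_PUBKEYS_PER_MULTISIG = 20
    DEFAULT_BYTES_PER_SIGOP = 20

    def bare_multisig_sigops(n_keys, accurate):
        # Legacy counting (fAccurate=false) charges every CHECKMULTISIG the
        # worst case of 20 sigops; accurate counting charges the key count.
        return n_keys if accurate else MAX_PUBKEYS_PER_MULTISIG

    def rejected_by_bytespersigop(tx_size, sigops, bytes_per_sigop=DEFAULT_BYTES_PER_SIGOP):
        # Simplified: reject when the transaction carries more sigops than its
        # size allows at the configured bytes-per-sigop ratio.
        return sigops > tx_size // bytes_per_sigop

    # A ~250-byte transaction with a single 1-of-3 bare multisig output:
    size = 250
    print(rejected_by_bytespersigop(size, bare_multisig_sigops(3, accurate=False)))  # True  (20 > 12)
    print(rejected_by_bytespersigop(size, bare_multisig_sigops(3, accurate=True)))   # False (3 <= 12)
    ```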

  2. luke-jr commented at 9:41 pm on May 20, 2016: member
    I’m not aware of any legitimate use case for bare multisig with under 15 keys.
  3. jonasschnelli added the label Mempool on May 21, 2016
  4. ScroogeMcDuckButWithBitcoin commented at 9:56 pm on May 21, 2016: none
    Is our (Counterparty) fix just a matter of requiring more inputs to the transaction then? Does this not just make everyone’s life harder, including @luke-jr ’s?
  5. luke-jr commented at 3:50 pm on May 23, 2016: member
    Just use OP_RETURN (as you claimed to be doing years ago) rather than abuse fake multisigs.
  6. ScroogeMcDuckButWithBitcoin commented at 5:28 pm on May 23, 2016: none
    OP_RETURN is only useful to indicate to the network that we wish to be pruned, and we all know that.
  7. petertodd commented at 10:31 pm on June 9, 2016: contributor

    @rubensayshi One possible response could be to make use of data publishing in scriptSigs; I have a demo of that technique here: https://github.com/petertodd/python-bitcoinlib/blob/master/examples/publish-text.py

    In general though, I’d recommend that new protocols be designed such that their transactions appear identical to standard Bitcoin transactions whenever possible, e.g. by using protocols that rely on commitments to data passed around elsewhere. This requires a lot more engineering effort, and in some cases commitment techniques are insufficient as you really do need proof-of-publication of arbitrary data.

  8. mbarulli commented at 11:25 am on June 19, 2016: none
    @petertodd Would a transaction built using your scheme (i.e. putting data in scriptSigs) qualify as a standard transaction?
  9. petertodd commented at 4:41 pm on June 19, 2016: contributor
    @mbarulli Yes - I’d suggest you try out that script and see for yourself.
  10. dexX7 commented at 10:11 am on June 30, 2016: contributor

    I’m not aware of any legitimate use case for bare multisig with under 15 keys.

    Interesting that you say this, given that m-of-n bare multisig transactions with n > 3 are also non-standard.

    Given the sentiment and hostility against some use cases as demonstrated with #5231, one could imagine the limit in #7081 was intentionally chosen to prevent bare-multisig data encoding.

    Were you aware of this effect, @luke-jr?

  11. sipa commented at 1:08 pm on June 30, 2016: member

    I can personally promise you that I was not aware of the effect of making bare multisig nonstandard when this change was proposed. I believe this is a bug, and it should be fixed.

    I do, in addition, think that bare multisig is something that should be made non-standard over time, because there is no security benefit to storing full public keys in the UTXO set over simply a hash. That’s a personal opinion, and open to discussion. It is not something that should happen as an unintentional side-effect of an unrelated change.

  12. luke-jr commented at 3:58 pm on June 30, 2016: member
    No, I was not. The limit was chosen as the largest value believed to have no effect on real-world transactions. The problem is not the limit itself, but that the sigops are being counted the old-fashioned way, with 20 sigops per multisig rather than only the number of keys. I intend to fix this when I find time, and at the same time re-propose making bare multisig relaying disabled by default (but not as a side effect of this policy). I’m still pondering what a good approach would be to discourage p2pkh spam: maybe a policy to allow larger OP_RETURN outputs provided the fee would have covered the equivalent p2pkh spam?
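
    For reference, a simplified sketch of the distinction described above: accurate counting reads the key count from the OP_N pushed immediately before OP_CHECKMULTISIG, while legacy counting always charges the 20-key worst case. The opcode handling below is illustrative, not the actual CScript::GetSigOpCount implementation.

    ```python
    # Illustrative sigop counting over a decoded script. A small integer stands
    # in for an OP_1..OP_16 push; other elements are opaque data pushes.

    OP_CHECKSIG = "OP_CHECKSIG"
    OP_CHECKMULTISIG = "OP_CHECKMULTISIG"
    MAX_PUBKEYS_PER_MULTISIG = 20

    def count_sigops(script, accurate):
        sigops = 0
        prev = None
        for op in script:
            if op == OP_CHECKSIG:
                sigops += 1
            elif op == OP_CHECKMULTISIG:
                if accurate and isinstance(prev, int) and 1 <= prev <= 16:
                    sigops += prev                      # number of keys actually present
                else:
                    sigops += MAX_PUBKEYS_PER_MULTISIG  # worst-case legacy charge
            prev = op
        return sigops

    # 1-of-3 bare multisig: OP_1 <key1> <key2> <key3> OP_3 OP_CHECKMULTISIG
    script = [1, "key1", "key2", "key3", 3, OP_CHECKMULTISIG]
    print(count_sigops(script, accurate=False))  # 20
    print(count_sigops(script, accurate=True))   # 3
    ```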
  13. hoffmabc commented at 4:14 pm on June 30, 2016: none
    Many “spammers” using p2pkh don’t want OP_RETURN because it’s easily prunable. I don’t think this incentive will help prevent that as they are most likely willing to pay as much as necessary to force their spam into the un-prunable dataset.
  14. pstratem commented at 11:24 pm on June 30, 2016: contributor
    @hoffmabc Can you explain why they do not want to be pruned?
  15. ScroogeMcDuckButWithBitcoin commented at 3:07 pm on July 1, 2016: none
    Because they want their data to be immutable. @pstratem, will you host my data for me? https://github.com/ScroogeMcDuckButWithBitcoin/dropzone-lib
  16. petertodd commented at 6:11 pm on July 1, 2016: contributor

    @ScroogeMcDuckButWithBitcoin Note that with TXO commitments [1], even unspent UTXOs can be pruned; the owner of the UTXO is responsible for storing the data.

    [1] https://petertodd.org/2016/delayed-txo-commitments
  17. pstratem commented at 11:43 pm on July 1, 2016: contributor

    @ScroogeMcDuckButWithBitcoin

    Let me ensure that I have this right.

    You wish to store data which is globally accessible. This data has nothing to do with bitcoin (or more likely is part of a competing system). You expect a bunch of bitcoin users to store this data for you free of charge forever.

    Would you say that is an accurate description of your expectations?

  18. gmaxwell commented at 0:03 am on July 2, 2016: contributor

    @dexX7 the discussion about the prior change was all open and public. The size picked was believed to be a no-op and was more permissive than a straight scaling would suggest.

    I was advocating a different approach where the transaction size computed for fee purposes is taken to be max(size, sigops * 1000000 / 20000), effectively scaling transactions to their share of the capacity they consume.

    That said, storing bitcoin-unrelated data isn’t an “application” Bitcoin Core should support, nor one I expect Bitcoin users to tolerate in the long run, and one should not count on it being reliable.
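
    A rough worked example of the alternative described above, assuming the pre-segwit limits of 1,000,000 bytes and 20,000 sigops per block (so the scale factor is 50 bytes per sigop):

    ```python
    # Charge fees on the larger of actual size and the transaction's
    # proportional share of the block sigop budget:
    #   max(size, sigops * 1000000 / 20000)
    MAX_BLOCK_SIZE = 1000000   # bytes
    MAX_BLOCK_SIGOPS = 20000   # sigops per block

    def size_for_fee_purposes(tx_size, sigops):
        return max(tx_size, sigops * MAX_BLOCK_SIZE // MAX_BLOCK_SIGOPS)

    # A 250-byte transaction carrying 20 legacy sigops pays fees as if it were 1000 bytes:
    print(size_for_fee_purposes(250, 20))  # 1000
    # A sigop-light transaction is unaffected:
    print(size_for_fee_purposes(250, 2))   # 250
    ```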

  19. ScroogeMcDuckButWithBitcoin commented at 0:21 am on July 2, 2016: none

    @pstratem Well, sounds like you get it. These transactions impose an externality cost on the network. @petertodd and I discussed this in his recent interview on Bitcoin Uncensored. This resource that is ‘bitcoin’ has some intrinsic value for storing censorship-prone data. You can tell me that bitcoin ‘shouldn’t’ be used for that, and I respect that point. However, as things currently stand, the interface that’s been provided is nonetheless conducive to this goal. The Core team can have a debate over whether or not to remove this intrinsic value.

    I will suggest that, in light of the mining subsidy approaching a halving, it could be asked how we can replace some of that subsidy. Considering that Backpage ads go for as much as US$70 per ad, Bitcoin transactions may be valuable to a similar degree.

    I recognize that I’m an antagonist here, but I think this is a debate worth having, and I respect all of your opinions on the matter. It’s a tough issue. I may write an article in Coindesk on this …

  20. petertodd commented at 9:23 pm on July 5, 2016: contributor
  21. dexX7 commented at 12:30 pm on July 7, 2016: contributor
    @luke-jr: to resolve this issue, how about accurately counting the sigops in the context of the mempool only?
  22. luke-jr commented at 3:16 pm on July 7, 2016: member

    I think that may have DoS issues: someone could construct transactions that fill the mempool and get accepted under the lower sigop counting, but will never get mined in practice. We also no longer have mining code that checks for such a distinction in the policy, so it would add overhead there.

    Probably just need to calculate it twice when accepting to the mempool.
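
    One possible reading of “calculate it twice”, sketched with invented helper names (this is not a concrete patch against Bitcoin Core): keep the legacy, consensus-style count for the mempool entry and DoS limits, but apply the bytes-per-sigop ratio against the accurate count.

    ```python
    # Hypothetical mempool-admission logic computing both counts. The value
    # 4000 is assumed here for the standard sigop limit, for illustration only.

    def accept_to_mempool(tx_size, legacy_sigops, accurate_sigops,
                          bytes_per_sigop=20, max_standard_sigops=4000):
        # The mining/DoS-related limit keeps using the legacy count...
        if legacy_sigops > max_standard_sigops:
            return False
        # ...while the bytes-per-sigop ratio uses the accurate count, so small
        # bare multisig transactions are no longer charged 20 sigops per output.
        if accurate_sigops > tx_size // bytes_per_sigop:
            return False
        return True

    print(accept_to_mempool(250, legacy_sigops=20, accurate_sigops=3))  # True
    ```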

  23. petertodd commented at 3:36 pm on July 7, 2016: contributor
    @dexX7 To expand on @luke-jr’s reply: “accurate” counting of sigops in the mempool doesn’t change the fact that the consensus code counts 20 sigops per bare CHECKMULTISIG output, which means accepting such transactions risks producing undersized blocks: the block runs out of sigops prematurely, sigops that could otherwise have been used to mine other transactions.
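
    A back-of-the-envelope illustration of that mismatch, assuming each transaction is ~250 bytes with a single bare CHECKMULTISIG output counted at the consensus rate of 20 sigops:

    ```python
    # Under the pre-segwit limits, sigop-heavy transactions exhaust the
    # 20,000-sigop budget long before the 1,000,000-byte size limit.
    MAX_BLOCK_SIGOPS = 20000
    SIGOPS_PER_BARE_MULTISIG = 20   # consensus count, regardless of key count
    TX_SIZE = 250                   # bytes, illustrative

    txs_until_sigop_limit = MAX_BLOCK_SIGOPS // SIGOPS_PER_BARE_MULTISIG
    print(txs_until_sigop_limit)            # 1000 transactions hit the sigop limit...
    print(txs_until_sigop_limit * TX_SIZE)  # ...in only ~250,000 of the 1,000,000 bytes
    ```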
  24. ScroogeMcDuckButWithBitcoin commented at 8:31 pm on July 7, 2016: none
    FWIW - this is … related. I get that UTXOs and TXOs are used a bit interchangeably here, but consider the audience: http://www.coindesk.com/immutability-extraordinary-goals-blockchain-industry/
  25. dexX7 commented at 4:15 pm on July 11, 2016: contributor

    @luke-jr: Probably just need to calculate it twice when accepting to the mempool.

    Are you suggesting to continue using GetTransactionSigOpCost for the CTxMemPoolEntry, and to count the sigops for the non-standard check separately and accurately?

    Thanks for the further information @petertodd! I’m aware of the actual cost, in terms of sigops allowed in a block, so I’m wondering whether we can get away with counting the sigops twice.

  26. f139975 commented at 12:10 pm on July 18, 2016: none
    Will this be resolved in 0.13?
  27. laanwj commented at 1:25 pm on July 18, 2016: member

    Will this be resolved in 0.13?

    Doesn’t seem so; there has been a lot of talk, but no technical solution to merge.

  28. laanwj closed this on Jul 26, 2016

  29. MarcoFalke locked this on Sep 8, 2021
