CIP Proposal: P2SH data encoding

NOTE: this is a proposal for a CIP; it hasn’t been accepted / assigned a CIP number yet!

  Title: P2SH data encoding; at least it's not P2PKH
  Authors: Ruben de Vries
  Type: Standards


Counterparty currently has the following data encoding schemes:

  • opreturn: the best solution, doesn’t pollute the UTXO set and is the cheapest, but limited to a maximum of 80 bytes
  • multisig: the 2nd best solution, only temporarily pollutes the UTXO set, has no upper limit
  • pubkeyhash: the worst solution, permanently pollutes the UTXO set, but also has no upper limit

For (literally) 99% of Counterparty transactions opreturn is enough (though some people forcefully use multisig encoding).
Previously, for the rest, we’d fall back to multisig encoding since it had no real drawbacks.

Unfortunately since Bitcoin Core v0.12.1 there’s a DoS protection (called bytespersigop) that has essentially made bare multisig non-standard.

So we need an alternative for larger amounts of data and we want to avoid pubkeyhash encoding because it pollutes the UTXO set too much.

With the EVM coming there will also be more transactions with larger amounts of data, so we need this more than ever!


It’s possible to embed data in P2SH redeemScripts by doing 2 transactions,
where the first one sets up P2SH outputs with redeemScripts that allow us to put the data in the scriptSig of the spending transaction.

With this method we can easily fit a lot more data than with the other methods, it does not pollute the UTXO set,
and the signature data can be pruned by Bitcoin nodes that wish to do so.

The original proof of concept, by Peter Todd, of this can be found here:


The Basics

This process requires 2 transactions: the first sets up 1 or more P2SH outputs, and the second spends those P2SH outputs, placing the data we wish to embed in the scriptSig.

The Redeem Script

The data is split into chunks of at most X bytes, then for each chunk we create a redeemScript as follows:

redeemScript = {{ data }} OP_HASH160 {{ hash160(data) }} OP_EQUALVERIFY {{ pubkey }} OP_CHECKSIGVERIFY {{ n }} OP_DROP OP_DEPTH 0 OP_EQUAL

dissecting this into pieces we have:

{{ data }} OP_HASH160 {{ hash160(data) }} OP_EQUALVERIFY

this is where we place the data; the OP_HASH160 {{ hash160(data) }} OP_EQUALVERIFY part ensures the data can’t be changed.

{{ pubkey }} OP_CHECKSIGVERIFY

this ensures the output needs to be signed by its owner; this part can be replaced by other scripts, such as a multisig or CLTV or similar things that were previously already put into P2SH outputs.

{{ n }} OP_DROP

n is an incrementing number to ensure that each output is unique even when the data chunks aren’t.

OP_DEPTH 0 OP_EQUAL

this prevents scriptSig malleability.
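As a sketch, the chunking and per-chunk redeemScript construction could look like this in Python (the helper names are illustrative, and the script is rendered in human-readable form rather than serialized to script bytes):

```python
import hashlib

MAX_CHUNK = 520  # MAX_SCRIPT_ELEMENT_SIZE: the largest single push allowed

def hash160(data: bytes) -> bytes:
    # RIPEMD160(SHA256(data)), the digest computed by OP_HASH160
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def chunk_data(data: bytes, chunk_size: int = MAX_CHUNK) -> list:
    # split the payload into pushable chunks of at most chunk_size bytes
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def redeem_script(chunk: bytes, pubkey_hex: str, n: int) -> str:
    # one redeemScript per chunk; n makes each script unique even
    # when two data chunks happen to be identical
    return (f"{chunk.hex()} OP_HASH160 {hash160(chunk).hex()} OP_EQUALVERIFY "
            f"{pubkey_hex} OP_CHECKSIGVERIFY {n} OP_DROP OP_DEPTH 0 OP_EQUAL")
```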

The Output Script

The output script placed in the first transaction is then:

outputScript = OP_HASH160 {{ hash160(redeemScript) }} OP_EQUAL
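In byte form this is the standard 23-byte P2SH scriptPubKey; a minimal sketch (0xa9 is OP_HASH160, 0x14 is a 20-byte push, 0x87 is OP_EQUAL):

```python
def p2sh_output_script(redeem_script_hash: bytes) -> bytes:
    # OP_HASH160 <20-byte hash160 of the redeemScript> OP_EQUAL
    assert len(redeem_script_hash) == 20
    return b'\xa9\x14' + redeem_script_hash + b'\x87'
```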

The real deal

Below we’ll describe how a Counterparty send would look.

Transaction 1
  • 1 + n UTXOs from the source address to have enough BTC to pay the fees for both the first and the second transaction.
  • 1 output to source with enough value to pay the fee for the second transaction.
  • 1 + n P2SH outputs following the above method with DUST value.
  • 1 change output to send any excess BTC back to source (optional; but in practice always there)
Transaction 2
  • 1 input spending the source output from Transaction 1
  • 1 + n inputs spending the P2SH outputs, including the data in the scriptSig
  • 1 output to specify the destination of the Counterparty transaction (in some types of transactions this is omitted) with DUST value.
  • 1 opreturn output encoding the data 'CNTRPTY' + 'P2SH' to signal that the data is found in the P2SH inputs, with 0 value.
Fees & Coin Selection

The first transaction has to send enough BTC to the second transaction so that the second transaction can pay its own fee without having to add extra inputs.
So we calculate the value of the source output in the first transaction to be:

estimated_size_of_tx2 = 10  # base size of a TX
estimated_size_of_tx2 += 181  # for the source input
estimated_size_of_tx2 += 29 * count(destination_outputs)  # for the destination outputs
estimated_size_of_tx2 += sizeof(data)  # for the data
estimated_size_of_tx2 += count(data_p2sh_outputs) * (181 + 9)  # overhead for each data output being spent
estimated_fee_for_tx2 = (estimated_size_of_tx2 / 1000) * fee_per_kb
source_output_value = count(data_p2sh_outputs) * DUST + count(destination_outputs) * DUST + estimated_fee_for_tx2

This means the amount of BTC to require when doing coinselection is:

estimated_size_of_tx1_without_inputs = 10  # base size of a TX
estimated_size_of_tx1_without_inputs += 29  # for source output
estimated_size_of_tx1_without_inputs += count(data_p2sh_outputs) * 29  # for P2SH data outputs

inputs = []
for utxo in coinselection:
   inputs.append(utxo)
   estimated_size_of_tx1 = estimated_size_of_tx1_without_inputs + count(inputs) * 181
   estimated_fee_for_tx1 = (estimated_size_of_tx1 / 1000) * fee_per_kb

   # source_output_value already includes estimated_fee_for_tx2
   if sum(inputs) >= estimated_fee_for_tx1 + source_output_value + count(data_p2sh_outputs) * DUST:
       break  # enough BTC collected, stop adding inputs
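A runnable sketch of the estimation above (the size constants are the ones from the formulas in this proposal; `select_coins`, the DUST value and the UTXO representation as plain satoshi amounts are illustrative, not counterparty-lib’s actual API):

```python
DUST = 5430  # satoshis; illustrative dust threshold

def estimate_fee(size_bytes, fee_per_kb):
    # linear fee, as in the (size / 1000) * fee_per_kb formulas above
    return size_bytes * fee_per_kb // 1000

def select_coins(utxo_values, n_data_outputs, n_dest_outputs, data_size, fee_per_kb):
    """Pick UTXOs until tx1 can pay its own fee, fund the P2SH dust
    outputs and a source output large enough to cover tx2."""
    # estimated size and fee of the second (spending) transaction
    size_tx2 = 10 + 181 + 29 * n_dest_outputs + data_size + n_data_outputs * (181 + 9)
    fee_tx2 = estimate_fee(size_tx2, fee_per_kb)
    # the source output of tx1 carries everything tx2 needs to spend
    source_output_value = (n_data_outputs + n_dest_outputs) * DUST + fee_tx2

    size_tx1_no_inputs = 10 + 29 + n_data_outputs * 29
    inputs = []
    for value in utxo_values:
        inputs.append(value)
        fee_tx1 = estimate_fee(size_tx1_no_inputs + len(inputs) * 181, fee_per_kb)
        needed = fee_tx1 + source_output_value + n_data_outputs * DUST
        if sum(inputs) >= needed:
            return inputs, source_output_value
    raise ValueError("insufficient funds")
```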

The Counterparty API

In practice this means the create_* API calls will have to start returning a list of 1 or more transactions which the client signs and broadcasts,
so clients need to adapt to this new style and always assume they need to sign and broadcast N transactions.

Pre-segwit we won’t be able to have the second transaction spend from the first transaction until the first transaction has been signed,
this means the client actually needs to do 2 API calls, where the second has the txId of the signed first transaction as param.

Once we can use segwit this problem is gone! However not all clients will straight away be able to sign segwit transactions (requires upgrades of the libraries they use).

Because of this being quite a hassle we propose to have 2 node configs / API params to control all of this: segwit and old_style_api.


When old_style_api = True the API will continue to function as normal, returning a single transaction, as string.

Once old_style_api = False the API will (across all create_* transactions) return a list of 1+ transactions (even when it’s just 1).

While old_style_api = True P2SH encoding will not be used unless explicitly set (so - the default - encoding=auto will not use P2SH encoding).


When segwit = False the first API call will return [tx_hex, None] signalling that it needs a second call with p2sh_pretx_txid added as param,
which should be the txId of the signed first transaction.
The second API call will then return [None, tx_hex].

When segwit = True the API call will return [tx_hex, tx_hex] and will no longer require a second API call.
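A hypothetical client-side sketch of this flow; the `create`, `sign`, `broadcast` and `txid_of` callables stand in for the client’s own RPC wrapper, signer and broadcaster and are not part of the proposed API:

```python
def run_p2sh_flow(create, sign, broadcast, txid_of):
    # first API call; a pre-segwit node answers [tx1_hex, None]
    tx1_hex, tx2_hex = create()
    signed_tx1 = sign(tx1_hex)
    if tx2_hex is None:
        # second call, passing the txid of the *signed* first transaction
        _, tx2_hex = create(p2sh_pretx_txid=txid_of(signed_tx1))
    # broadcast in order: tx2 spends outputs of tx1
    broadcast(signed_tx1)
    broadcast(sign(tx2_hex))
```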

EVM Transactions

Because EVM transactions by nature carry larger amounts of data,
the EVM API calls (create_publish and create_broadcast) will require old_style_api == False.

Child pays for Parent

The current scheme ensures both transactions pay the fee for their own size. In the (near) future, when the ‘Child Pays for Parent’ functionality
that has been added to Bitcoin Core is widely adopted, we can change this so that the second transaction pays a larger portion of the total fee (it still needs to pay enough to be relayed).

This will ensure that the first transaction is never mined without the second transaction.

Backwards Compatibility

This is a new encoding that will be completely unrecognised by older clients; any clients who don’t upgrade would lose consensus with the nodes that did upgrade.
It also affects how the Counterparty API works (see above).


This document is placed in the public domain.

already have a fully working implementation:

needs a bit of polishing here and there and more test coverage, but it works.

+1 on this. Great job. I have some minor nits but will save those for once it’s up on github.

Looks great. You can label this CIP 6 and start a pull request with status Draft when you are ready.

Am I correct in reading Peter Todd’s python-bitcoinlib repo that the max data chunk size (MAX_SCRIPT_ELEMENT_SIZE) in a single redeemScript is 520 bytes? That’s quite the increase over OP_CHECKMULTISIG encoding.

but let’s keep the discussion here unless it’s nits about the text :wink:

the consensus max on 1 script is 10k, but the isstandard check is a bit more restrictive:

    // Biggest 'standard' txin is a 15-of-15 P2SH multisig with compressed
    // keys. (remember the 520 byte limit on redeemScript size) That works
    // out to a (15*(33+1))+3=513 byte redeemScript, 513+1+15*(73+1)+3=1627
    // bytes of scriptSig, which we round off to 1650 bytes for some minor
    // future-proofing. That's also enough to spend a 20-of-20
    // CHECKMULTISIG scriptPubKey, though such a scriptPubKey is not
    // considered standard)

“remember the 520 byte limit on redeemScript size” means the max size of 1 element in a script.

peter’s PoC actually splits the data into chunks in the same input like this: OP_HASH160 hash160(datachunk1) OP_EQUAL OP_HASH160 hash160(datachunk2) OP_EQUAL OP_HASH160 hash160(datachunk3) OP_EQUAL.

but I think we should just do 1 chunk per output/input for simplicity and sanity.

also the max isstandard for 1 tx is 100kb

there’s 2 things that - at the very least - need to be discussed.

1. arc4
I’d like to make it so that the first transaction of the pair can be regarded as a plain btc-only transaction and doesn’t have to be parsed at all.
that means only the second transaction really gets parsed.

we normally arc4 the data with the txId of the first input of the transaction,
so that would be the first input of the second transaction (because I don’t want to parse the first transaction).
however that actually is the txId of the first transaction … which we can never know before encrypting the data …

so if we stick to my plan to ignore the first transaction that means we have 3 options:

  1. take something else that is present in the second transaction and already known when the first transaction is created. I think the only option on that front would be the source.
  2. simply arc4 with a fixed string ("COUNTERPARTY" or something)
  3. stop doing arc4 entirely (only for p2sh encoding for now)

Afaik the purpose of arc4 is to obfuscate that it’s a Counterparty transaction, to the point where checking for one at the very least requires doing arc4 decryption instead of a simple pattern match.
arc4 encryption is a lot like a simple XOR: the plaintext is XORed with a keystream derived only from the key,
so when 2 messages are encrypted with the same key, any bytes that are the same between the 2 input strings are actually also the same in the 2 ciphertexts!
for example the plaintexts b'434e5452505254590000000000000000000039380000000002faf080' and b'434e5452505254590000000000000000000039390000000002faf080' differ in a single byte, so encrypted with the same key their ciphertexts also differ in only that byte.
and since we start all our data with a CNTRPRTY prefix you could easily just filter on that, so that’s why the txId is used as seed: to force someone who wants to blacklist Counterparty transactions to expend CPU on decrypting.
so option 2 isn’t really a useful option.
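the XOR-like behaviour is easy to demonstrate with a toy RC4 implementation (a standard textbook KSA/PRGA sketch, not Counterparty’s actual code):

```python
def arc4(key: bytes, data: bytes) -> bytes:
    # key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation (PRGA): XOR the data with the keystream
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# two plaintexts sharing an 8-byte prefix, encrypted with a fixed key,
# yield ciphertexts sharing the same 8-byte prefix: trivially filterable
c1 = arc4(b'COUNTERPARTY', b'CNTRPRTY' + b'\x01' * 8)
c2 = arc4(b'COUNTERPARTY', b'CNTRPRTY' + b'\x02' * 8)
assert c1[:8] == c2[:8] and c1[8:] != c2[8:]
```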

I think the only available data for option 1 would be the {{ pubkey }} used in the data P2SH script, but that would also have a high likelihood of often being the same (and again result in very similar, easier to filter data).

writing all this down, maybe we should just fetch the prev TX and use the txId from the first input of it…

2. P2SH source
the P2SH data outputs still contain a {{ pubkey }} OP_CHECKSIGVERIFY to secure the data output from being spent by others; when the source is a P2SH address we have 2 options for this part:

option 1: specify 1 pubkey to be used for this (eg; in a multisig the person who constructs the initial TX chooses one (his most likely))
option 2: allow the ‘user’ to specify this part of the redeemScript, he needs to make sure that part leaves the stack empty though (so using OP_CHECKSIGVERIFY not OP_CHECKSIG).

for now I’ll leave the implementation with only option 1; it’s only securing DUST in value, and the complexity of adding option 2 is quite high; it can be added at a later stage if necessary.

In looking over this CIP, it made me think about IPFS again. I don’t want to hijack this thread, so I started another one here:

Merged this in draft status.

I wish there was a way to do this in 1 transaction, but I can’t think of a way to do it and I know you (Ruben) put a lot of thought into it as well.

Not sure this method is necessary anymore…

hmm a normal counterparty send with multisig encoding will need to pay about +300% extra fee following that PR to get mined.

I have to go over the calculations again to make sure I didn’t make any mistakes, but here’s an attempt at showing the cost increase for different sizes:

it seems that at around 500 bytes of data, the overhead of needing 2 TXs for P2SH becomes smaller than the cost of paying for 20 sigops worth of size with multisig encoding.

We should also consider the “cost” of needing two txs vs one tx. This was a sticking point when I proposed a simple method for integrating subassets via asset descriptions without any changes to consensus related code. In that case, the general feeling was a change in consensus code was preferred to using two txs to accomplish a subasset issuance.

You’re right, but it’s not that complicated looking at the code required in counterparty-lib.
And if we want to eventually unleash the EVM on mainnet I think this is a necessity.

Though considering we can continue using multisig for now (with some proper fee estimation code added) we should probably delay P2SH encoding until EVM and until segwit has activated.
Because with segwit it at least won’t require 2 API calls, it will just be 1 API call that returns 2 TXs to sign and broadcast.

Also keep in mind the Bitcoin Core devs really dislike bare multisig!
Even though there are absolutely no negative effects for bitcoin in letting it live on, I wouldn’t be surprised if they at some point make it non-standard under the “we should encourage best-practice P2SH” banner.

And most likely there won’t be many people opposed, because everyone will think it’s a good move against data embedding, even though the above CIP clearly proves there are alternatives and that killing bare multisig won’t do anything in their fight against data embedding.

I will update the CIP and implementation later to restrict it to segwit only

Does counterparty-lib force the second transaction to spend the output of the first transaction so they are confirmed in order? If so, that isn’t too bad. We can just submit them both (in order) and watch for the second one to confirm.

I think that’s the plan. The problem is, prior to segwit, you can’t be sure of the output of the first transaction until it is confirmed.

yea, the counterparty-lib API provides you with an unsigned TX; prior to segwit its txid will change when you sign it, so it’s impossible to construct both of the transactions at the same time.

with segwit we can construct them both because the txid of a segwit TX won’t change after signing, which means we can construct and return both at the same time and the ‘user’ can sign and broadcast both at the same time.

Can we use multiple OP_RETURNs to encode more than 80 bytes of data?

I believe it is allowed by the bitcoin protocol but it is considered a non-standard transaction. Wouldn’t this be the most efficient way of encoding data?

Perhaps we can lobby bitcoin core to allow multiple OP_RETURNS as standard. If it is more efficient, then why wouldn’t they do it?

because they’re opposed to embedding data in the blockchain, 80 byte opreturn is already a compromise from their perspective since they feel all you need is hashes (and even that is a compromise).

@rubensayshi - Can we lose the extra output on the setup transaction? That would save space.

In other words, change:

Transaction 1 Outputs
1 output to source with enough value to pay the fee for the second transaction.
1 + n P2SH outputs following the above method with DUST value.
1 change output to send any excess BTC back to source (optional)

To this instead:

Transaction 1 Outputs
1 + n P2SH outputs following the above method with DUST + enough to pay fees for the second transaction.
1 change output to send any excess BTC back to source (optional)

I don’t see the reason for the extra output since the pubkey is already required in the redeem script. Am I missing something?