A somewhat unreadable summary of the new protocol can be found here.
If you haven't used Joinmarket, this is going to be hard to understand; let me suggest browsing this design doc, especially the sections on "Transactions" and "Entities", although if this is 100% new to you, you probably need to start with the Joinmarket main page and leave this till later. At the very least, know that in Joinmarket a "Maker" is an entity that sits in the Joinmarket pit, offering to do joins for a price, while a "Taker" is an entity that takes up these offers and constructs the coinjoin transaction, paying the fees for the privilege of defining the size, acting as the coordinator, and getting the transaction done immediately.
First, simpler conceptually but quite important: running on multiple message channels. Currently that just means multiple IRC servers, but it is possible for Joinmarket to run on other messaging servers; for example, I've tried it on Matrix, and it works, although the performance is not quite viable for now. There may well be other good alternatives, including people setting up their own servers. The code now allows this in a straightforward manner, by subclassing the message channel class.
This buys us a lot. First, we avoid the frequent problem of a single IRC server going down and bringing all transactions to a halt. Second, it gives an extra layer of censorship resistance - if, as intended, Maker bots make themselves available on multiple channels, there isn't the possibility of a sneaky server operator preventing counterparties from seeing their offers. Offers that are published on 3 servers are treated just the same as if they were published on only 1 (assuming the Taker is connected to all of them, of course).
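As a rough illustration of that last point, here is a minimal sketch of offer deduplication across channels; the field names `counterparty` and `oid` are assumptions for illustration, not the actual Joinmarket data structures:

```python
def merge_offers(offers_by_channel):
    """Deduplicate offers seen on several message channels.

    An offer published on 3 servers should be treated the same as one
    published on 1, so we key on (maker nick, offer id) and keep the
    first copy seen.
    """
    merged = {}
    for channel, offers in offers_by_channel.items():
        for offer in offers:
            key = (offer["counterparty"], offer["oid"])
            merged.setdefault(key, offer)
    return list(merged.values())
```

The point of keying on the nick is that the bot's identity, not the server it was seen on, determines which offers are "the same" - which is exactly why spoofing a nick on another server matters, as discussed next.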
Potential problems with this, and solutions:
The code allows automatic switchover between messaging servers if a connection gets cut during a transaction (it's not perfect in this regard, but there are test cases showing it can work, at least a lot of the time, which is a heck of a lot better than nothing). But since a bot is not a persistent identity, this throws up the potential issue of spoofing - if my bot is called "waxwing" on servers A and B, what's to stop someone else registering/using "waxwing" on server C, should I not currently be connected to C? To be clear, successful spoofing is mostly limited to "order stealing" (pretend you're waxwing and nab the transaction from me) and DoS effects, since the private data is already communicated with E2E encryption, but that's still something we have to avoid.
To address this, we give the bots an ephemeral pseudonymous identity for the period that they are running - they generate a keypair and, much like Tor hidden services, their name is tied to their public key with a simple encoding (in this case, truncated base58 of the hash of the pubkey). Then every message has a signature tagged onto the end that verifies only against this pubkey, whether the message is E2E encrypted or not. Meaning, once the conversation starts, it can continue on any message channel, but only with the owner of that ephemeral keypair.
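A minimal sketch of how such a name can be tied to a pubkey; the hash choice, truncation length, and absence of any version prefix here are illustrative assumptions, not the exact Joinmarket encoding:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(b: bytes) -> str:
    # Standard base58 (leading-zero handling omitted for brevity).
    n = int.from_bytes(b, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return out or "1"

def nick_from_pubkey(pubkey: bytes, length: int = 14) -> str:
    # Truncated base58 of the hash of the pubkey: anyone can check that
    # a claimed nick matches the pubkey attached to a signed message,
    # so the name cannot be separated from its ephemeral keypair.
    return base58_encode(hashlib.sha256(pubkey).digest())[:length]
```

Since the nick is a pure function of the pubkey, a spoofer on server C who doesn't hold the private key cannot produce messages that verify against the nick's pubkey.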
This has an extra little hole at the beginning of the conversation: since the initial message from the Maker (the order announcement) is in plaintext and already published in the "pit" channel, the attached signature is replayable on another server. Such replay is avoided by binding the signature to the server on which it operates, by adding a tag into the signed message which indicates the server (see the hostid field in the config file). This patches up the anti-"order stealing" defence mentioned above. It might be possible to tighten this defence further; remember that for the initial non-E2E-encrypted portion of the conversation, the server has control anyway.
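The server-binding idea can be sketched as follows; HMAC stands in here for the real ECDSA signature, and the tagging scheme (a simple suffix on the signed data) is an assumption for illustration:

```python
import hashlib
import hmac

def tagged(msg: str, hostid: str) -> bytes:
    # The signed data includes the server's hostid, so a signature
    # captured on one server does not verify on another.
    return f"{msg}|{hostid}".encode()

def sign(key: bytes, msg: str, hostid: str) -> str:
    # Stand-in for signing with the bot's ephemeral private key.
    return hmac.new(key, tagged(msg, hostid), hashlib.sha256).hexdigest()

def verify(key: bytes, msg: str, hostid: str, sig: str) -> bool:
    return hmac.compare_digest(sign(key, msg, hostid), sig)
```

A signature lifted from the pit on server A fails verification when replayed on server B, because the verifier includes its own hostid in the checked data.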
The other issue with this signing approach is rather technical, but to mention it: adding signatures pads out the channel messages with quite a bit of data, although it's not turning out to be a big problem. Currently both the pubkey and the signature are appended, which is wasteful; we could use ECDSA public key recovery instead, but that can be added in a later update, as there are some rather fiddly details to work out.
In my previous post I described the basic idea of PoDLE. This is implemented in 0.2.0. At a high level, it functions as a rate limiter, based on the fact that utxos are somewhat scarce in Bitcoin. The cryptographic trick in PoDLE means that we can require a Joinmarket user to commit to a utxo without revealing it in advance, and so without losing privacy just by suggesting a transaction. If they want access to Makers' utxos, they need to "use up" (reveal, and not use again more than a certain number of times - taker_utxo_retries in the config) their own utxos. We ratchet up the scarcity with the further config variables taker_utxo_age and taker_utxo_amtpercent, restricting them to use only utxos of a certain age and of a certain size relative to the size of the proposed coinjoin. These values are relatively lax and not intended to be changed by users initially - general, although not complete, agreement on these values will be needed, or is at least highly preferable. The intention of the code as it stands is to give the honest Taker the maximum chance of being able to do a transaction with no intervention - by choosing utxos from this transaction, or otherwise from elsewhere in his wallet - while making it difficult for a snooper to carry out the privacy-degrading attack on Joinmarket which we've been suffering. There is a lot more to say about this attack and how to defend against it, but I'll defer that to a later post and concentrate on the functionality in 0.2.0.
taker_utxo_retries - this is accomplished with a nice trick that extends what was in the previous PoDLE blog post. Greg Maxwell pointed out that multiple retries can easily be achieved by choosing multiple NUMS values in a deterministic way - for those interested, the code I've implemented to do it is here. This means that the hash value for a utxo depends on an integer selected; so if taker_utxo_retries=3, the Taker will choose one value from 0, 1, 2 and generate the hash for that value. Each commitment hash is only allowed to be used once, but the same utxo can be used 3 times. This is enforced on both sides; more on that in a minute.
taker_utxo_age - self-explanatory: check that the utxo has at least \(N\) confirmations. I consider this to be a very important element, because if it's very small (say 1), an attacker's only requirement is to have a large number of utxos and simply "regenerate" them - say he has 10 utxos, he spends them to 10 new utxos. It costs fees, and he must wait for a confirmation, but that isn't very long. More on why this is important in a later post.
taker_utxo_amtpercent - if an honest Taker wants to do a 1 BTC coinjoin, it isn't unreasonable to expect them to have a 0.2 BTC (20%) utxo "hanging around" to use - most likely one of the inputs to their transaction, or elsewhere in the wallet, or outside it. Meanwhile a snooper trying to get the whole orderbook might need a lot of fairly large utxos to reach the top end of the orderbook with this restriction. So it's considered very useful, although note of course that they don't have to spend these coins (except to themselves, to regenerate them).
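Putting the three filters together, here is a simplified sketch of the Taker-side selection logic. The commitment here is a plain hash standing in for the real \(H(P2)\) PoDLE construction (which derives \(P2 = kJ_i\) from the \(i\)-th deterministic NUMS point), and the utxo data layout is an assumption:

```python
import hashlib

# Default values quoted in this post:
TAKER_UTXO_RETRIES = 3
TAKER_UTXO_AGE = 5          # minimum confirmations
TAKER_UTXO_AMTPERCENT = 20  # minimum % of the coinjoin amount

def commitment_for(outpoint: str, index: int) -> str:
    # Simplified stand-in for H(P2); the key property shown is that the
    # commitment binds both the utxo and the retry index, so each hash
    # is single-use while the utxo itself gets TAKER_UTXO_RETRIES uses.
    return hashlib.sha256(f"{outpoint}:{index}".encode()).hexdigest()

def eligible(value_sat: int, confirmations: int, coinjoin_sat: int) -> bool:
    """Apply the age and relative-size filters to a candidate utxo."""
    if confirmations < TAKER_UTXO_AGE:
        return False
    return value_sat * 100 >= coinjoin_sat * TAKER_UTXO_AMTPERCENT

def choose_commitment(utxos, coinjoin_sat, used):
    """Return the first unused commitment from an eligible utxo, else None."""
    for u in utxos:
        if not eligible(u["value"], u["confirms"], coinjoin_sat):
            continue
        for i in range(TAKER_UTXO_RETRIES):
            c = commitment_for(u["outpoint"], i)
            if c not in used:
                return c
    return None
```

Once all three retry indices of every eligible utxo have been revealed, the Taker must fall back to external utxos, which is the failure mode discussed below.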
Potential problems and solutions
What happens if the Taker doesn't have such a utxo lying around? The "retries" part will only fail if he's already tried to use a given utxo \(N\) times (3 by default), and he has no other utxos that fit. The "age" part is something that could easily trip up a newly loaded wallet, if we use the default of 5 confirmations. Still, waiting an hour is not the end of the world - and it's only likely to happen on first use of a wallet. The "amtpercent" filter is also quite unlikely to cause a problem.
Still, all of these cases can happen, and it would be a shame to leave a Taker unable to use the system in certain cases. The solution provided is to allow adding utxos external to the wallet - this is messy, not just because it's inconvenient, but mainly because it requires access to the private key for those external utxo(s). Still, a tool is provided that allows exactly that.
These commitments are stored by the Taker in a file commitments.json. In most cases it's hoped the Taker (the Joinmarket customer, let's say) won't have to look at it; it stores which utxos have already been used, as well as any external utxos being kept for later use. So it should not be deleted.
On the Maker side, meanwhile, a file named blacklist (an unfortunate name, perhaps!) very simply stores the hashes (remember \(H(P2)\) from the PoDLE post) of the commitments used by other Takers. Given the privacy-preserving property of these values, they could be freely shared between Makers if they like; no broadcast method was implemented at first, though it could easily be done if deemed necessary/useful. (This has now been done, and is switched on by default.) Note that there is no way to maliciously broadcast such commitments; you cannot guess the correct hash value(s) for a particular utxo. As for the blacklist file, the Maker can leave it accruing values indefinitely, and probably should, but it won't be a disaster to add random values to it, or to delete some or all of it; it just means repeat usage of utxos is being allowed.
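A minimal sketch of that Maker-side bookkeeping, assuming a one-hash-per-line file format (the real on-disk format may differ):

```python
import os

def load_blacklist(path: str) -> set:
    """Read previously seen commitment hashes, one per line."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def check_and_record(commitment_hash: str, path: str) -> bool:
    """Reject a commitment hash seen before; otherwise append and accept.

    Since the stored values are hashes of the PoDLE commitment, keeping
    or sharing this file leaks nothing about the underlying utxos.
    """
    if commitment_hash in load_blacklist(path):
        return False
    with open(path, "a") as f:
        f.write(commitment_hash + "\n")
    return True
```

Note how the failure modes described above fall out of this: deleting the file (or lines in it) just re-admits previously used commitments, and appending junk values can never block an honest Taker, because nobody can guess the hash for a utxo they don't control.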
The real problem with the system is the potential for troublemaking Makers to simply drop transactions of honest Takers, and thus force them to use up more and more of their utxos \(\times\) retries budget. Whilst this is a problem, note that this kind of DoS already exists, if any Maker wants to do it. The existing pre-0.2.0 code already had the facility to restart and ignore a non-responsive Maker, and that persists. Slightly more failures are expected, and a new user will have a higher chance of getting blocked out - they may only have 1 or 2 utxos available, so only 3 or 6 chances to get the transaction through without having to mess around sourcing external utxos.
Due to this concern I think it's important that (a) the limits are fairly lax - 3 retries, 5 confirms, 20% amount requirement and (b) all the users do *not* just randomly reset these values if they feel like it. Much better if we all use "standard" values, albeit of course a few will choose to be more strict.
Will the snooper be stopped by this? Possibly initially, possibly not - and ultimately the snooper cannot be completely stopped, just restricted. In the next post I will talk about the attack in more depth, and also about how yield generators can operate to give further defence against this attack.
Adam Gibson