It appears the recently disclosed Spectre attack will require binaries to be recompiled with mitigations in place. Do we know yet what the implications are for Bitcoin Core? Will we just need to recompile release binaries with a compiler that supports the mitigations?
-
jameshilliard commented at 6:43 PM on January 4, 2018: contributor
-
bolekC commented at 10:49 PM on January 4, 2018: none
As far as I understand, patches are needed for operating system kernels. Recompiling our software won't help if the attacker uses another compiler and the kernel is not updated.
Advise users to update their operating systems ASAP.
-
jameshilliard commented at 12:05 AM on January 5, 2018: contributor
I thought patches were possibly needed for userspace applications in addition to kernels.
-
bolekC commented at 12:32 AM on January 5, 2018: none
If I understand correctly, no recompilation of user-space applications is needed. The problem is in the CPU, so only the kernel can mitigate it. Recompiling an application can probably help against some application-level faults, but I think we want to keep using CPU speculative execution, as it is a good feature.
I will think a bit more :)
-
whiteboy19787 commented at 12:37 AM on January 5, 2018: none
ok
-
jameshilliard commented at 12:43 AM on January 5, 2018: contributor
From here:
While our initial focus has been the protection of operating system and hypervisor-type targets, there are classes of user application for which this coverage is valuable.
-
luke-jr commented at 1:49 AM on January 5, 2018: member
While this might make sense for the wallet, it seems unlikely to be useful for the node itself, and the performance impact might be seriously harmful there. Time to finally split the wallet out?
-
jameshilliard commented at 2:33 AM on January 5, 2018: contributor
- laanwj added the label Build system on Jan 5, 2018
-
bolekC commented at 9:57 PM on January 5, 2018: none
It is quite a complicated issue and I don't have a 100%-sure opinion after another day of reading around. Performance testing is for sure a good option.
Moving the "wallet" part out to a separate application does not help much, as the attacker will then attack that new application. And of course there are other wallet applications :-)
- The sensitive data we should protect are the private keys. (Any others?)
- The attack is done on memory only (no problem for keys stored on disk)
- As far as I understand, reading memory with this bug is quite slow
Maybe we should limit the time we keep private keys in memory to just ~1ms:
- always read private keys from file before using them
- immediately clear memory after using private keys (overwrite all places where private keys were stored)
Anyway, these modifications will be quite difficult, I suppose.
-
luke-jr commented at 10:01 PM on January 5, 2018: member
Moving the wallet out means we can build that with the mitigations while leaving the node fully performant.
-
bolekC commented at 10:20 PM on January 5, 2018: none
Yeah, but I don't understand how recompilation can help. I need to study more.... The main direction of attack with this bug is from user processes towards kernel-protected memory addresses. That is not a problem for us. If someone gets into our process space (e.g. via a shared library), they can read anything they want without tricks. Have I missed anything?
-
luke-jr commented at 11:04 PM on January 5, 2018: member
I think you might be confusing Meltdown for Spectre?
-
bolekC commented at 11:42 PM on January 5, 2018: none
Could be, a bit. I'm trying to understand the issue as much as possible.... It's not easy :-) We don't know all possible variants of these attacks. New, not-yet-described ones may exist.... more likely they certainly exist and someone will find them. Anyway, I think the best protection is not to keep private keys in memory when it is not needed.
-
achow101 commented at 11:46 PM on January 5, 2018: member
@bolekC From my understanding of Spectre, the attack is against a specific process and its memory. The examples given in the paper target the kernel, but AFAIU it is not limited to just the kernel. In fact, the PoC given in the paper does not touch the kernel at all.
The basic premise is to give a process some untrusted input, on which it performs validation before allowing execution to proceed. The attack abuses the speculative execution performed before the input is fully validated in order to leak data from that process's memory. So no, it is not an attack on the kernel, and it does require user applications to have mitigations in place to avoid having their secrets leaked.
Spectre requires knowing the specific memory locations of data in the application, having memory in the victim application that the attacking application can access and time, and knowing exactly what the code is doing. The third part is trivially achievable with open source applications.
-
jwilkins commented at 1:30 AM on January 6, 2018: none
There are LLVM improvements for spectre coming: http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20180101/513630.html
Looks like 5.0.2 will be the version the mitigations arrive in.
-
achow101 commented at 7:13 AM on January 6, 2018: member
As part of mitigating this we should avoid keeping private keys unencrypted in memory for long periods of time. However, `walletpassphrase` allows the keys to be unencrypted in memory for basically an arbitrary amount of time. Maybe we should put a hard upper bound on that, to avoid having keys in memory for too long and to mitigate against these and other vulnerabilities which may allow someone to read a process's memory.
-
bolekC commented at 11:15 PM on January 6, 2018: none
I've been reading these papers for another day and still have no good answer. I suppose there are still undescribed/unpublished ways of using these CPU vulnerabilities.
I think we should go for not keeping private keys in memory; always read the private key from storage. Any ideas for a quick fix?
-
luke-jr commented at 11:21 PM on January 6, 2018: member
I wonder if it would be safest to perform the signing without decrypting the key at all (I think that's possible in theory, not so sure about practice).
-
bolekC commented at 9:22 PM on January 7, 2018: none
Anyway, if you keep data in memory that can be used to sign a message (with or without decrypting the key), then the attacker can do the same if he gets all that data. I think the best protection would be not to keep private keys in memory when they are not in use.
-
luke-jr commented at 11:45 PM on January 7, 2018: member
Yes, there is no perfect one-size-fits-all solution. Only trade-offs either way.
-
gmaxwell commented at 9:04 PM on January 8, 2018: contributor
Compiler mitigations aren't useful for us: The attack requires an indirect jump that runs inside the target address space which can be precisely stimulated by the attacker, any indirect jump. If Bitcoin were compiled with special flags, that would still leave libraries-- the whole system would need to be compiled with it. [I'm also skeptical that these mitigations will have acceptable performance too-- speculation across indirect jumps is a big part of why they perform well enough that they're acceptable]
Moving signing into another process would likely have the effect of denying an attacker the ability to stimulate an indirect jump on demand, so it would be useful. The slowness of spectre is probably irrelevant, a few thousand bits per second would read a 256 bit private key pretty darn quickly-- even if it has to read a half dozen addresses first to find it.
Capping decryption time would be penny wise and pound foolish, then parties that want long decryption times would just not encrypt at all, creating insecurity for common cases in exchange for maybe being somewhat more secure in uncommon cases. Please no. Keep in mind that local exploit vectors are MUCH MUCH MUCH more common than remote ones; if an attacker can run enough local code to use spectre then you're probably already in trouble.
Keeping the keys themselves out of memory might be a worthwhile mitigation without much cost. They can be kept out of memory even if the wallet is unlocked.
-
kyle-shank commented at 12:35 AM on January 19, 2018: none
Luke, care to elaborate on signing without decryption? Are there any concepts going around with this?
-
jameshilliard commented at 9:01 PM on January 21, 2018: contributor
If Bitcoin were compiled with special flags, that would still leave libraries-- the whole system would need to be compiled with it.
I figured distributions would likely recompile libraries with mitigations at some point. What libraries do we use and not statically link that would be problematic?
Keep in mind that local exploit vectors are MUCH MUCH MUCH more common than remote ones; if an attacker can run enough local code to use spectre then you're probably already in trouble.
I would expect this is largely an increased risk for systems with browsers since that's the most widespread way that users run untrusted code on their systems.
Is it possible to localize mitigations such as retpoline to code that specifically handles key material/signing (i.e. generate retpoline instructions only for wallet/signing code paths) to avoid the performance impact on normal validation code paths, or would retpoline mitigations need to be applied to the full binary to be effective?
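For reference, per-object application of the flags could be sketched in a build fragment like this (file names are hypothetical; the flags themselves are real: GCC 8+ spells the option `-mindirect-branch=thunk`, Clang uses `-mretpoline`) — though, as gmaxwell notes above, indirect branches elsewhere in the address space would remain unmitigated:

```makefile
# Illustrative Makefile fragment: retpoline flags on signing objects only.
RETPOLINE_FLAGS := -mindirect-branch=thunk -mindirect-branch-register  # GCC 8+
# Clang equivalent: -mretpoline

wallet_sign.o: wallet_sign.cpp
	$(CXX) $(CXXFLAGS) $(RETPOLINE_FLAGS) -c $< -o $@

validation.o: validation.cpp        # hot validation path left unmitigated
	$(CXX) $(CXXFLAGS) -c $< -o $@
```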
-
laanwj commented at 2:20 PM on February 11, 2018: member
Solving this seems to be in the domain of operating system and CPU vendors. I'm a bit skeptical as to what user-space applications can do here. Are there other open source projects (apart from browsers, which run untrusted code in sandboxes) that people know of that have special mitigations here, for example around key handling?
- laanwj added the label Upstream on Feb 11, 2018
-
jameshilliard commented at 3:10 PM on February 11, 2018: contributor
Solving this seems to be in the domain of operating system and CPU vendors.
Everything I've read so far indicates that full protection will require changes to userspace applications.
I'm a bit skeptical as to what user-space applications can do here.
They can be recompiled with mitigations like retpoline in place (to protect against the Spectre variant 2 attack); from what I've read, alternative mitigations via microcode would have a significantly higher performance impact for the variant 2 attack.
Are there other open source projects (apart from browsers, which run untrusted code in sandboxes) that people know of, that have special mitigations here, for example around key handing?
Looks like openssh has begun including mitigations when supported by the compiler.
- MarcoFalke added the label Brainstorming on Mar 24, 2022
-
fanquake commented at 11:23 AM on August 10, 2022: member
I'm going to close this for now, as I don't think there is any concrete action for the project to take at this stage.
- fanquake closed this on Aug 10, 2022
- bitcoin locked this on Aug 10, 2023