To me the most interesting thing here is "Rolling UTXO set hashes". Bitcoin has a (theoretical future) scaling problem because the blockchain keeps growing without bound. No matter what you think about Moore's law, everyone knows it can't go on forever, so an infinitely growing blockchain is not sustainable. With UTXO set hashes included in blocks, ancient blockchain history can be discarded on a rolling basis, turning the unbounded blockchain into a fixed-size rolling chain.
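To make the "rolling" part concrete, here's a minimal Python sketch of an incrementally updatable set hash. The actual proposals (MuHash / elliptic-curve multiset hashes) use a much larger modulus and a careful element encoding; the toy modulus and encoding below are purely illustrative:

    # Toy sketch of a rolling (incrementally updatable) UTXO set hash.
    # Not Bitcoin Core's actual construction: each UTXO is hashed to a
    # nonzero element mod a prime, and the set commitment is the running
    # product. Creating a UTXO multiplies it in; spending one multiplies
    # in the modular inverse, so the commitment is maintained block by
    # block without rehashing the whole set.
    import hashlib

    MODULUS = 2**127 - 1  # toy prime modulus, far too small for real use

    def element_hash(utxo: bytes) -> int:
        h = int.from_bytes(hashlib.sha256(utxo).digest(), "big")
        return (h % (MODULUS - 2)) + 2  # map into [2, MODULUS - 1]

    class RollingUtxoHash:
        def __init__(self):
            self.acc = 1  # commitment to the empty set

        def add(self, utxo: bytes):
            self.acc = (self.acc * element_hash(utxo)) % MODULUS

        def remove(self, utxo: bytes):
            self.acc = (self.acc * pow(element_hash(utxo), -1, MODULUS)) % MODULUS

        def digest(self) -> bytes:
            return self.acc.to_bytes(16, "big")

A node that trusts the commitment in a sufficiently buried block can then drop history older than that block and still detect any divergence in the UTXO set it rebuilds or receives.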
This doesn't solve the whole problem as the UTXO set itself also grows, but I suspect appropriate incentives could manage this problem (fees for increasing UTXO set size and/or bounties for reducing it).
With these problems solved, Bitcoin would theoretically be able to operate indefinitely without issue. Of course many practical scaling problems remain, the most important being the transaction rate cap.
Because a 2TB drive is so cheap? People wanting to run a full node can easily afford one. But if BTC stays with small blocks and becomes a settlement layer, nobody will even care about running a node in the first place. Why would I want to run a node to validate payments between corporations?
Increasing the block size increases the load on nodes somewhat, but it keeps the system much more peer-to-peer, since transaction fees won't end up as high.
There's no obvious easy solution, but making the network expensive and congested is certainly not a real solution.
The problem would still be maintaining the UTXO set. Fortunately, by using a two-party dynamic authenticated data structure, it is possible to have only miners (and powerful full nodes willing to hold the full state, like block explorers) store the whole set, while everyone else checks proofs of set transformations included in blocks, storing just 32 bytes. For details, see the introduction to the paper https://eprint.iacr.org/2016/994 (and the blockchain-processing benchmark graph, second half of p. 14).
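To illustrate the 32-byte-verifier idea (the paper itself uses authenticated AVL+ trees with batched modification proofs, so this is just the general pattern, not their exact scheme): the verifier keeps only a root digest and checks any proof a full node hands it by recomputing that digest, e.g. a Merkle-style inclusion proof:

    # Sketch of proof verification against a 32-byte commitment.
    # The verifier never stores the set, only the root digest.
    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(root: bytes, leaf: bytes, proof: list) -> bool:
        # proof is a list of (sibling_hash, side) pairs, side in {"L", "R"},
        # ordered from the leaf up to the root.
        node = sha256(leaf)
        for sibling, side in proof:
            node = sha256(sibling + node) if side == "L" else sha256(node + sibling)
        return node == root  # compare against the 32 bytes the verifier stores

Batched insertion/deletion proofs work the same way, except that replaying the proof also yields the new root digest, which the verifier adopts as its next 32-byte state.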
> We also implemented support for the sha-native instruction support. […] The new AMD rizon [sic] stuff contains this instruction, though, and the use of it gives another 10% speedup in initial block download for those AMD systems.
Wow, using native SHA256 hashing improved performance that much? I suppose Bitcoin does use an awful lot of hashes.
Every single signature hashes a few kilobytes of data, on average, and there are ~10k signatures per block (more with SegWit). That's tens of megabytes of hashing per block, roughly 10-100 MB, even without considering adversarial transactions that purposefully inflate those numbers. On my CPU, openssl benchmarks at about 180 MB/s for SHA-256, so 50-500 ms of hash time per block. That's not insignificant.
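For anyone who wants to sanity-check those figures on their own machine, here's a quick back-of-the-envelope script (the per-signature and per-block counts are the ones quoted above, not measurements):

    # Measure local SHA-256 throughput and translate it into per-block
    # sighash time, assuming ~10,000 signatures per block and a few KB
    # hashed per signature.
    import hashlib, time

    def sha256_throughput_mb_s(total_mb: int = 100, chunk_kb: int = 64) -> float:
        chunk = b"\x00" * (chunk_kb * 1024)
        iterations = (total_mb * 1024) // chunk_kb
        start = time.perf_counter()
        for _ in range(iterations):
            hashlib.sha256(chunk).digest()
        return total_mb / (time.perf_counter() - start)

    if __name__ == "__main__":
        mb_s = sha256_throughput_mb_s()
        sigs_per_block = 10_000
        kb_per_sig = 3  # "a few kilobytes" per signature
        block_mb = sigs_per_block * kb_per_sig / 1024
        print(f"SHA-256 throughput: {mb_s:.0f} MB/s")
        print(f"Estimated sighash data per block: ~{block_mb:.0f} MB")
        print(f"Estimated hashing time per block: ~{1000 * block_mb / mb_s:.0f} ms")

Whether Python's hashlib picks up the SHA extensions depends on the OpenSSL build, so treat the result as a rough lower bound rather than what Bitcoin Core's hand-tuned code achieves.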
SHA-256 is the core hashing algorithm the Bitcoin blockchain is built on. I'm actually surprised it didn't have more impact.
This was on purpose and an example of good engineering. Novice users trying to send funds to SegWit addresses before the network activated could have lost funds. The plumbing was all there to activate (and make use of via the command line); however, users will get access via the UI in the next update.
> however, users will get access via the UI in the next update.
Update after next: we'll be doing a short release for SegWit support right after 0.15.
Feature freeze for 0.15 (the next release) was scheduled for July 16th ( https://github.com/bitcoin/bitcoin/issues/9961 ), and SegWit lock-in wasn't until August 9th. We'd shortcut the process for a serious emergency, but not for something short of one.