CreateNewBlock: Check that the produced CBlock is acceptable (except for proof-of-work and merkletree, since those need to be provided later)
This throws an exception from CreateNewBlock otherwise, which is not safe without #1245!
This is bugged. ConnectBlock requires the block to be on the disk already :(
In fact, this might even corrupt the blkindex.dat somehow, so if you tried it, I suggest rebuilding with -loadblock :/
OK, this seems to work now.
Eligius has been running this from block 179513 (56 blocks found) and EclipseMC from 180573 (11 blocks), totalling 67 valid blocks with no problems found.
Now includes tests for CreateNewBlock
```diff
@@ -1337,7 +1346,11 @@ bool CBlock::ConnectBlock(CTxDB& txdb, CBlockIndex* pindex)
     bool fStrictPayToScriptHash = (pindex->nTime >= nBIP16SwitchTime);

     //// issue here: it doesn't know the version
-    unsigned int nTxPos = pindex->nBlockPos + ::GetSerializeSize(CBlock(), SER_DISK, CLIENT_VERSION) - 1 + GetSizeOfCompactSize(vtx.size());
+    unsigned int nTxPos;
+    if (fJustCheck)
+        nTxPos = 1;
+    else
+        nTxPos = pindex->nBlockPos + ::GetSerializeSize(CBlock(), SER_DISK, CLIENT_VERSION) - 1 + GetSizeOfCompactSize(vtx.size());
```
Why can't this formula be used in the justcheck case as well?
Position (1,1,1) is "magic" in FetchInputs: it refers to the memory pool. Any other position makes it look for the transaction on disk.
Ok, can you add a comment to explain that?
```diff
-    pblock->hashMerkleRoot = pblock->BuildMerkleTree();
     pblock->UpdateTime(pindexPrev);
     pblock->nBits = GetNextWorkRequired(pindexPrev, pblock.get());
     pblock->nNonce = 0;

+    pblock->vtx[0].vin[0].scriptSig = scriptDummy;
```
Why the indenting change? And is this part always necessary?
The end-brace releasing the main/mempool lock scope is removed so that the lock stays held until ConnectBlock finishes checking the block. The code in between could be re-indented to make the scope clearer, but the diff is easier to read this way.
And is this piece of the code necessary in non-fJustCheck mode?
This is part of CreateNewBlock, which always calls ConnectBlock with fJustCheck. A dummy scriptSig is needed to pass the "scriptSig must be between 2 and 100 octets long" rule.
Of course - you always want to check whether the created block is valid.
```diff
+    tx.vin[0].prevout.hash = txFirst[0]->GetHash();
+    tx.vin[0].prevout.n = 0;
+    tx.vin[0].scriptSig = CScript() << OP_1;
+    tx.vout[0].nValue = 4900000000;
+    script = CScript() << OP_0;
+    tx.vout[0].scriptPubKey.SetPayToScriptHash(script);
```
Added comments per sipa's request, rebased on master, and fixed a bug in diskless BDB that I found while testing.
getmemorypool's caching mostly makes testing this impossible, but I did some rough performance testing by disabling it (so every call goes to CreateNewBlock):

- git master: 1063 transactions/sec
- checknewblock: 473 transactions/sec
- checknewblock with signature checking skipped: 782 transactions/sec
IMO, even the worst-case scenario of 0.00321 seconds per transaction processed is worth the additional safety checking, especially when considering the extra caching getmemorypool and getwork both do to minimize CreateNewBlock calls.
ACK
Conflicts:
src/main.cpp
```diff
@@ -103,12 +104,48 @@ bool CDBEnv::Open(boost::filesystem::path pathEnv_)
         return error("CDB() : error %d opening database environment", ret);

     fDbEnvInit = true;
+    fMockDb = false;
```
This is a very un-C++-ish way of doing this. The C++ Way would be an abstract base CDBEnv class, with concrete subclasses for unit testing (CMockDbEnv or CMemoryDbEnv) and records-to-disk.
But the use of the global bitdb variable is also very un-c++-ish, and if Mike Hearn's experiments with LevelDB and Pieter's experiments with a new wallet storage format go well this whole subsystem might get junked anyway....
Does not compile on 32-bit:
```
test/miner_tests.cpp:89: error: integer constant is too large for ‘long’ type
test/miner_tests.cpp:109: error: integer constant is too large for ‘long’ type
test/miner_tests.cpp:131: error: integer constant is too large for ‘long’ type
test/miner_tests.cpp:139: error: integer constant is too large for ‘long’ type
test/miner_tests.cpp:161: error: integer constant is too large for ‘long’ type
test/miner_tests.cpp:178: error: integer constant is too large for ‘long’ type
```
Explicitly made the literals long long. Apparently "long long" was not a standard type until C++11, and compilers implementing it as an extension did not necessarily accept bare numeric literals bigger than long (the C++11 standard requires such literals to be handled properly). GCC 4.5 accepts both forms. In any case, being explicit doesn't hurt, so I've added the suffix. Let me know if it still doesn't work.