@@ -1221,7 +1221,7 @@ inline NodeRef<Key> Parse(Span<const char> in, const Ctx& ctx)
             // n = 1 here because we read the first WRAPPED_EXPR before reaching THRESH
             to_parse.emplace_back(ParseContext::THRESH, 1, k);
             to_parse.emplace_back(ParseContext::WRAPPED_EXPR, -1, -1);
-            script_size += 2 + (k > 16);
+            script_size += 2 + (k > 16) + (k > 0x7f) + (k > 0x7fff) + (k > 0x7fffff);
Reviewed and built locally. Checks all pass.
This hex notation is new to me. I read that the 0x7FFF notation is much clearer about potential over/underflow than decimal notation.
Is that why it is used here? Is it a safer representation of an int?
Also, this might not be the right place to ask; should I take these questions to Stack Exchange instead?
I use the hex notation here because it's much more obvious: people might not recognize 32767 as the largest 16-bit signed integer, or 8388607 as the largest 24-bit signed integer, but in hex notation this is immediately clear.
It's simply more readable for reviewers.
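For context, those thresholds line up with the byte boundaries of Bitcoin's minimal signed little-endian number encoding (CScriptNum): k values up to 16 have a dedicated opcode, values up to 0x7f fit in one payload byte, up to 0x7fff in two, up to 0x7fffff in three, and anything larger needs four. Below is a minimal standalone sketch of that correspondence; the `PushSize` helper is hypothetical and not part of the PR.

```cpp
// Sketch only: illustrates why the cutoffs in the diff are 16, 0x7f, 0x7fff
// and 0x7fffff, assuming CScriptNum-style minimal encoding (signed,
// little-endian). A set high bit in the top payload byte would flip the
// sign, so e.g. 0x80 already needs a second payload byte.
#include <cassert>
#include <cstdint>

// Bytes needed to push the non-negative number k onto the stack:
//   k <= 16       -> 1 byte  (single opcode OP_0..OP_16)
//   k <= 0x7f     -> 2 bytes (length byte + 1 payload byte)
//   k <= 0x7fff   -> 3 bytes (length byte + 2 payload bytes)
//   k <= 0x7fffff -> 4 bytes (length byte + 3 payload bytes)
//   otherwise     -> 5 bytes (length byte + 4 payload bytes)
int PushSize(uint32_t k) {
    if (k <= 16) return 1;
    return 2 + (k > 0x7f) + (k > 0x7fff) + (k > 0x7fffff);
}

int main() {
    assert(PushSize(16) == 1);        // OP_16
    assert(PushSize(17) == 2);        // 0x01 0x11
    assert(PushSize(0x7f) == 2);      // 0x01 0x7f
    assert(PushSize(0x80) == 3);      // 0x02 0x80 0x00 (extra sign byte)
    assert(PushSize(0x7fff) == 3);
    assert(PushSize(0x8000) == 4);
    assert(PushSize(0x7fffff) == 4);
    assert(PushSize(0x800000) == 5);
    return 0;
}
```

Read that way, the `2 + (k > 16) + ...` term in the diff appears to be `PushSize(k)` plus one extra byte for the opcode that consumes the pushed count.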