> On the other hand, detecting integer overflow in software is extremely expensive, increasing both the program size and the execution time considerably,
Most languages don't care about integer overflow. Your typical C program will happily wrap around.
If I really want to detect overflow, I can do this:
```
add t0, a0, a1
blt t0, a0, overflow
```
Which is one more instruction, which is not great, not terrible.
From what I’ve read, most natively compiled code doesn’t really check for overflows in optimised builds, but this is more of an issue for JavaScript et al., where the runtime may detect the overflow and switch the underlying type? I’m definitely no expert on this.
A bit more reading shows there's a three-instruction general-case version for 32-bit additions on 64-bit RISC-V. I'm not familiar with RISC-V assembly and they didn't provide an example, but I _think_ it's as easy as doing the addition twice, once with the 64-bit `add` and once with the 32-bit `addw`, and branching if the results differ, since the 64-bit add wouldn't match the 32-bit overflowed add.
Neither x86-64 nor RISC-V is implemented by literally executing one instruction at a time as written. High-performance implementations of both recognize patterns in the code and translate them into micro-ops. On high-performance chips like Rivos's (now Meta's) I doubt there'd be any difference in the amount of work done.
Code size is a benefit for x86-64 however - no one is disputing that - but you have to trade it against the difficulty of instruction decoding.
I thought the main distinction of RISC-V (and MIPS before it, along with RISCs in general) is that the instructions are themselves of equivalent complexity (or lack thereof) to x86 uops. E.g. x86 can add a register to memory, which splits into three load/add/store uops, whereas a RISC would execute those three instructions directly.
The main distinction now is that RISC-descended designs use a load/store instruction set with all ALU operations being register-register, and consequently have a lot more (visible) registers than CISC-descended ISAs (mostly just x86, really).
Historically RISC instructions were 1:1 with CPU operations, in theory allowing the compiler to better optimise logic, but this isn't really true anymore. High performance ARM CPUs use µOPs and macro-op fusion, though not to the extent of x86 CPUs.
Except it isn't. Code isn't one single pattern repeating again and again; on large enough bodies of code, RISC-V is the most dense, and it's not even close.
Decades of demoscene productions beg to differ. That just means compilers are awful, as they usually are.[1] x86 has far more optimisation opportunities than any RISC.
If I recall my lectures correctly, which were 20-odd years ago now:
CISC ISAs were historically designed for humans writing assembly so they have single instructions with complex behaviour and consequently very high instruction density.
RISC was designed to eliminate the complex decoding logic and replace it with compiler logic, using higher throughput from the much reduced decoding logic (or in some cases no decoding at all) to offset the increased number of instructions. Also the transistors that were used for decoding could be used for additional ALUs to increase parallelism.
So RISC by its nature is more verbose.
Does the tradeoff still make sense? Depends who you ask.
That is not the correct way to test for integer overflow.
The correct sequence of instructions is given in the RISC-V documentation and it needs more instructions.
"Integer overflow" means "overflow in operations with signed integers". It does not mean "overflow in operations with non-negative integers". The latter is normally referred as "carry".
The 2 instructions given above detect carry, not overflow.
Carry is needed for multi-word operations, and these are also painful on RISC-V, but overflow detection is required much more frequently, i.e. it is needed at any arithmetic operation, unless it can be proven by static program analysis that overflow is impossible at that operation.
It's one more instruction only if you don't fuse those instructions in the decoder stage, but as the pattern is the one expected to be generated by compilers, implementations that care about performance are expected to fuse them.
I have no idea or practical experience with anything this low-level, so I don't know how much the following matters; it's just someone from the crowd offering unvarnished impressions:
It's easy to believe you're replying to something that has an element of hyperbole.
It's hard to believe that "just do 2x as many instructions" and "ehhh who cares [i.e. your typical C program doesn't check for overflow]", coupled with a seemingly self-conscious repetition of a quip from the television series Chernobyl that is meant to evoke sticking your head in the sand, retires the issue from discussion.
The sequence of instructions given above is incorrect, it does not detect integer overflow (i.e. signed integer overflow). It detects carry, which is something else.
The correct sequence, which can be found in the official RISC-V documentation, requires more instructions.
Not checking for overflow in C programs is a serious mistake. All decent C compilers have options for enabling overflow checking (e.g. GCC's `-ftrapv`, or `-fsanitize=signed-integer-overflow` in GCC and Clang). Such options should always be used, except in functions the programmer has analyzed carefully and concluded that integer overflow cannot happen.
For example with operations involving counters or indices, overflow cannot normally happen, so in such places overflow checking may be disabled.