I'm somewhat of an old fart, but rsync(1) has worked well for me. It integrates well with ssh on both sides, using an ssh channel to execute the rsync binary on the server side and transport data back and forth.
It isn't suited for millions of files, but neither is scp.
It has handled as many files as I've ever thrown at it, often in the millions. The great thing about rsync is that it's restartable, and the source and destination don't both have to be local.
The biggest weakness of rsync is that it's single-threaded. A single-threaded copy of millions of small files is painfully slow; rsync is as good as one thread can be, but you just need more threads.
I think that's what a sibling comment is getting at with the "better to tar files and send them over ssh in some cases" suggestion. And yes, you can hack parallelism in after the fact with xargs and the like, but it's clunky compared to having native multithreading the way rclone does.
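For the record, both tricks mentioned above look roughly like this (host and paths are illustrative):

```shell
# One sequential tar stream over a single ssh channel, instead of
# per-file round trips -- the "tar over ssh" approach:
tar -C /data -cf - . | ssh user@remote 'tar -C /data -xf -'

# The clunky after-the-fact parallelism: one rsync per top-level
# directory, four running at a time via xargs -P:
ls /data | xargs -n1 -P4 -I{} rsync -a /data/{}/ user@remote:/data/{}/
```

The xargs variant only parallelizes across top-level directories, so it helps little if one directory holds most of the files.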
Got any benchmarks or write-ups on the subject? I did a bit of testing myself a long time ago, and the answer basically ended up being to use rsync, because any differences were marginal. That said, I didn't test with millions of files.
I think this was perhaps a bigger issue back in the day, when we were using rotating hard disks. In those days a seek was much slower than a write.
Today seeks are nearly instant, so maybe my experience is no longer valid.
Your experience is still valid today, but for a different reason: if you're comparing millions of tiny files, there's a lot of back and forth per file. If you're streaming a single archive, rsync only checks whether that one file has been modified.
Like everyone else here I have no benchmarks, but I have been burned trying to rsync around too many small files.
We've put rsync in an HPC scheduler and used it (or really some tooling on top of it) to copy billions of files for a largish HPC compute cluster with many petabytes of data.
Also, rsync can create more faithful copies than scp. For example, scp can't copy a symlink as a symlink; instead it follows the symlink and copies the target file or directory.