Hey, there are some details about this scattered throughout. The answer really depends on the technique. For DDP you can fairly easily get per-GPU throughput close to single-GPU throughput (we were getting ~80% GPU utilization across multiple nodes, IIRC), as long as all the workers are getting similarly sized batches — otherwise fast ranks stall waiting for slow ones at each gradient sync.
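For reference, a minimal DDP sketch — run here as a single process on CPU with the `gloo` backend to show the API; a real multi-GPU run would launch one process per GPU via `torchrun` with `nccl`. The model, shapes, and hyperparameters are placeholders:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process stand-in for the usual torchrun-launched process group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(16, 4)
ddp_model = DDP(model)  # gradients are all-reduced across workers on backward()
opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

# Every rank should see a same-sized batch; mismatched batch sizes make
# the all-reduce a straggler-bound barrier and hurt utilization.
x = torch.randn(8, 16)
loss = ddp_model(x).pow(2).mean()
loss.backward()
opt.step()

dist.destroy_process_group()
```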
Once you move to training really large models like Llama 405B with FSDP and use things like CPU offloading, throughput drops quite a bit because every forward/backward pass pays for data transfers between CPU and GPU memory. If your cluster is large enough that the sharded model fits in GPU memory without CPU offloading, you can keep throughput much higher.
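A sketch of what that offloading config looks like with PyTorch's FSDP — `CPUOffload(offload_params=True)` parks sharded parameters (and their grads) in host RAM between uses, which is exactly the CPU/GPU transfer cost mentioned above. The model and process-group setup here are illustrative placeholders, and the wrap is guarded so it only runs where a GPU exists:

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, CPUOffload

# Trade throughput for memory headroom: offloaded shards are copied
# host<->device around each use instead of living in GPU memory.
offload = CPUOffload(offload_params=True)

if torch.cuda.is_available():
    import os
    import torch.distributed as dist

    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29501")
    dist.init_process_group("nccl", rank=0, world_size=1)

    model = torch.nn.Linear(1024, 1024).cuda()  # placeholder model
    fsdp_model = FSDP(model, cpu_offload=offload)

    dist.destroy_process_group()
```

With enough GPUs, dropping `cpu_offload` entirely (the default) avoids those copies, which is why bigger clusters recover throughput.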