Yes - it matters a lot who is in control of the code execution. The traditional way was to run the code from or as a child process of the webserver process. FPM, WSGI, etc. move that into a completely separate and unrelated process. Crashes, hangs and security issues are a much smaller problem in the latter case.
> Considering nginx is really just good at serving static content(maybe CDN uses it a lot) and the rest are all proxying
The English is a bit broken in the top-level comment, but both static content and proxying are clearly mentioned as "nginx things", which is fair. If you don't consider proxying to fall under the term "serving" (a reasonable and not uncommon distinction), then the claim holds: the only kind of "serving" nginx is good at is static files. The rest of what it's good at isn't "serving" but proxying to other servers that render dynamic content - PHP-FPM is one such server (it just happens not to speak standard HTTP).
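To make the split concrete, here is a minimal sketch of an nginx server block (paths and socket location are hypothetical) where nginx serves static files itself and hands PHP requests to a PHP-FPM socket over FastCGI rather than HTTP:

```nginx
server {
    listen 80;
    root /var/www/example;

    # nginx handles static assets directly
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # dynamic requests go to PHP-FPM over FastCGI, not HTTP
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

The `fastcgi_pass` directive is the "proxying" half of the picture: nginx never executes PHP itself, it just speaks FastCGI to a separate process.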
I don't see why people always get mad when this gets brought up - I've always considered it a good architectural choice for nginx. Running application code in the webserver process isn't a good idea anymore, so the focus on strong static-file and proxy performance is, I think, what ultimately made it "win" over Apache - the industry moved on from the old ways and Apache fell behind.
I think that Apache2 + mod_php (or modules for other runtimes) doesn't measurably impact most web applications, and this setup is a bit simpler, since you only need to run and configure Apache instead of Nginx + PHP-FPM. If you are working with containers, it means you only need one container instead of two (if you follow the idea that a container should only do one thing).
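For comparison with the nginx + FPM split, a minimal sketch of the mod_php approach (module path and docroot are hypothetical): one Apache process tree serves both static files and PHP, with no external interpreter process to configure.

```apache
# Load the PHP interpreter directly into Apache
LoadModule php_module modules/libphp.so

DocumentRoot "/var/www/example"

# Any .php file is executed in-process by mod_php
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
```

This is the "one container" case: a single config, a single process to supervise, at the cost of running application code inside the webserver process.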
Additionally, if you are behind a CDN or just plain Varnish, static assets would only be hit once or twice, and the majority of requests would be processed by PHP anyway, so there is no benefit in putting an additional pipe between the proxy (Nginx) and the PHP interpreter (FPM). Especially with smaller page responses, Apache can easily win, since it doesn't have to communicate with an external process.
Raw performance is only one aspect. Splitting up the processes means that a security exploit in one component has basically no way of endangering the other. In a shared hosting environment, you can also run separate FPM processes for each tenant, allowing custom configuration and even entirely different PHP versions per tenant.
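The per-tenant setup described above can be sketched as separate php-fpm pools (tenant names, sockets, and limits here are made up for illustration) - each pool runs as its own Unix user, so one tenant's code can't read another's files even if it is compromised:

```ini
; /etc/php/fpm/pool.d/tenant_a.conf
[tenant_a]
user = tenant_a
group = tenant_a
listen = /run/php/tenant_a.sock
pm = dynamic
pm.max_children = 10

; /etc/php/fpm/pool.d/tenant_b.conf
[tenant_b]
user = tenant_b
group = tenant_b
listen = /run/php/tenant_b.sock
pm = dynamic
pm.max_children = 5
```

Running different PHP versions per tenant goes one step further: each version is a separate php-fpm master process with its own pool files, and the webserver just points each vhost at the right socket.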
Just wait, we're going to come back around full circle once someone embeds a WebAssembly runtime into nginx/openresty, and then someone else makes a framework for defining "edge functions" to run inside that runtime.
This is basically what CloudFlare Workers does already.