Not a good reference for Windows -- I use HuggingFace APIs on cog/Docker deployments on Linux. I needed the `PYTORCH_NO_CUDA_MEMORY_CACHING=1` and `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` env vars to eliminate memory errors on the 3090s. When I run on the Mac there is enough memory that no shenanigans are required. It runs approximately as fast as the 3090s, but the 3090s heat my basement and the Mac heats my face.
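For anyone wanting to replicate: a minimal sketch of how those two env vars get set, either exported directly or passed through `docker run -e` (the image name and script below are placeholders, not my actual setup):

```shell
# Disable PyTorch's CUDA caching allocator and allow expandable segments,
# which together avoided the OOM errors I hit on the 3090s.
export PYTORCH_NO_CUDA_MEMORY_CACHING=1
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Equivalent in a docker/cog deployment (placeholder image and command):
# docker run --gpus all \
#   -e PYTORCH_NO_CUDA_MEMORY_CACHING=1 \
#   -e PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
#   my-inference-image python predict.py

echo "$PYTORCH_NO_CUDA_MEMORY_CACHING $PYTORCH_CUDA_ALLOC_CONF"
```

Note that `PYTORCH_NO_CUDA_MEMORY_CACHING=1` trades allocator speed for lower fragmentation, so expect some throughput cost.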