I've had extremely bad luck running go-ipfs at any scale. GC is braindead (it literally deletes the whole cache on a timer), networking is slow and drops tons of packets (apparently too much UDP?), and by default it stores each object 2 or 3 times. I'm sure it'll work fine for people using http://dweb.app, and go-ipfs will probably be okay for super casual browsing, but as soon as someone tries to download any substantial IPFS dataset, expect to run into resource limits.
Yup, I had a (tiny, spinning-rust) home server that slowed nearly to a halt. SSH logins would take minutes even with IPFS limited to 2GiB of the 16GiB of RAM. Stopped go-ipfs and it was instantly snappy again.
My impression of the IPFS project is that the goals are excellent and the core protocol is quite good, but they rewrite the higher-level layers far too frequently (for example, they have deprecated UnixFS, which seems to be the most-used format, and they keep switching between JSON, Protocol Buffers, and CBOR), and go-ipfs seems to be a pretty garbage codebase.
Any chance that, by limiting RAM usage, you forced your application to heavily swap, clogging the disk and making your machine slow?
I have run a public gateway on 2GB of RAM (later 4GB, because it was subject to very heavy abuse), and it was perfectly possible. Perhaps it is a matter of knowing how to configure things and how not to inflict self-pain with the wrong settings.
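For what it's worth, the knob that mattered most for me was the connection manager. Something like this (the `Swarm.ConnMgr` keys are from the go-ipfs config reference; the numbers are just illustrative, not a recommendation):

```shell
# Cap open connections so the daemon stops chewing through RAM and
# file descriptors on a small box.
ipfs config --json Swarm.ConnMgr.Type '"basic"'
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200
ipfs config Swarm.ConnMgr.GracePeriod 20s
# Restart the daemon for the settings to take effect.
```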
Yes, there is definitely a chance, but it was wayyyy worse when I gave it more RAM, or no limit at all. At least this way the machine was operational most of the time. I don't think it was swapping, but since the limit I applied also affected the page cache, it was likely reading its data from disk a lot more often than it would have if it could own the whole page cache. But that's basically the same effect as swapping.
Maybe there is a Goldilocks value I could find, but I didn't really need IPFS running that much so I just removed it.
It does not delete the whole cache on a timer. It deletes blocks that are not pinned, either periodically or when the repo reaches a high-water mark. It also does not store each object 2 or 3 times. First, it doesn't refer to anything as an "object" but rather blocks, and a block is only stored once. A block is only replicated if you're running a cluster, in which case replication is the point.
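For reference, the relevant settings live under `Datastore` in the go-ipfs config, and GC doesn't run at all unless the daemon is started with `--enable-gc` (key names are from the config docs; the values here are illustrative):

```shell
ipfs config Datastore.StorageMax "10GB"             # target repo size
ipfs config --json Datastore.StorageGCWatermark 90  # GC kicks in at 90% of StorageMax
ipfs config Datastore.GCPeriod "1h"                 # periodic sweep interval
ipfs daemon --enable-gc                             # GC stays off unless enabled
```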
I don't really know anything about the Golang GC, but I would not be surprised if the process of scanning for unpinned blocks results in a lot of memory accesses. If too many useful things get evicted from the cache during that process, then I can see why GP is saying it deletes the whole cache.
Why would you ever start anything with, "I don't really know anything about the Golang GC but..." IPFS GC is separate from the Golang GC. IPFS GC deletes blocks on the local node that are not pinned. I'm not sure what you mean by "too many useful things". If it's not pinned it's evicted.
Golang has only one tunable GC parameter (GOGC) by design, so it can be too inflexible for certain loads, but I learned that putting a ballast in RAM fixes the too-frequent GC sweeps.
If you have any specific complaints or experiences you'd like to share, I would be interested in hearing about them, but "it's a bit of a trash fire" is unhelpful.
It was slow and buggy when it first released; understandably so, and I waited a few years. I tried again recently, now that it's popular, and it's still an impractical proof of concept.
Yes, that has roughly been my experience. The application throughput offered by IPFS is quite low while the packet throughput is very high. I was seeing 5-15% packet loss on my internet connection while running IPFS. I'm not sure whether a bandwidth limit would even help, or whether it's related to the number of connections.
There are different profiles you can select from. You might have the server profile enabled. The low-power one probably consumes the least, and you can opt out of sharing altogether. But the entire point is p2p.
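Concretely, profiles are applied via `ipfs config` (profile names from the go-ipfs docs; each bundles a set of config changes, and the daemon needs a restart afterwards):

```shell
ipfs config profile apply lowpower   # fewer connections, less background work
# or:
ipfs config profile apply server     # disables local-network discovery (good for VPS hosts)
# a profile can also be chosen at init time:
# ipfs init --profile=lowpower
```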