
I've had extremely bad luck running go-ipfs at any scale. GC is braindead (it literally deletes the whole cache on a timer), networking is slow and drops tons of packets (apparently too much UDP?), and by default it stores each object 2 or 3 times. I'm sure it'll work fine for people using http://dweb.app, and go-ipfs will probably work okay for super casual browsing, but as soon as someone tries to download any substantial IPFS dataset, expect to run into resource limits.


Yup, I had a (tiny, spinning-rust) home server that slowed to nearly a halt. SSH logins would take minutes even when limiting IPFS to 2GiB of the 16GiB of RAM. Stopping go-ipfs made it instantly snappy again.

My impression of the IPFS project is that the goals are excellent and the core protocol is quite good, but they rewrite the higher-level layers far too frequently (for example, they have deprecated UnixFS, which seems to be the most-used format, and they keep switching between JSON, Protocol Buffers, and CBOR), and go-ipfs seems to be a pretty garbage codebase.


Any chance that, by limiting RAM usage, you forced your application to heavily swap, clogging the disk and making your machine slow?

I have run a public gateway on 2GB of RAM. Later 4GB because it was subject to very heavy abuse, but it was perfectly possible. Perhaps it is a matter of knowing how to configure things and how to not inflict self-pain with wrong settings.


Yes, there is definitely a chance, but it was wayyyy worse when I gave it more or unlimited RAM. At least this way the machine was operational most of the time. I don't think it was swapping, but since the limit I applied also affected the page cache, it was likely reading its data from disk a lot more often than it would have if it could own the whole page cache. But that is basically the same effect as swapping.

Maybe there is a Goldilocks value I could find, but I didn't really need IPFS running that much so I just removed it.


It does not delete the whole cache on a timer. It deletes blocks that are not pinned, either periodically or when the repo reaches a high-water mark. It does not store each object 2 or 3 times. First, it doesn't refer to anything as an "object" but rather as blocks, and a block is only stored once. A block will only be replicated if you're running a cluster, in which case replication is the point.
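The repo GC described here amounts to a sweep over the pin set: anything not pinned goes. A toy sketch in Go (names and types are illustrative, not the actual go-ipfs API):

```go
package main

import "fmt"

// gcUnpinned deletes every block whose CID is not in the pin set — a toy
// model of the pin-based repo GC described above. Real go-ipfs also walks
// the DAG so blocks reachable from a pin survive; this sketch skips that.
func gcUnpinned(blocks map[string][]byte, pinned map[string]bool) int {
	removed := 0
	for cid := range blocks {
		if !pinned[cid] {
			delete(blocks, cid) // deleting during range is safe in Go
			removed++
		}
	}
	return removed
}

func main() {
	blocks := map[string][]byte{
		"QmPinnedA": []byte("kept"),
		"QmLooseB":  []byte("collected"),
		"QmLooseC":  []byte("collected"),
	}
	pinned := map[string]bool{"QmPinnedA": true}
	n := gcUnpinned(blocks, pinned)
	fmt.Printf("removed %d blocks, %d remain\n", n, len(blocks))
	// → removed 2 blocks, 1 remain
}
```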


I don't really know anything about the Golang GC, but I would not be surprised if the process of scanning for unpinned blocks results in a lot of memory accesses. If too many useful things get evicted from the cache during that process, then I can see why GP is saying it deletes the whole cache.


Why would you ever start anything with "I don't really know anything about the Golang GC, but..."? IPFS GC is separate from the Golang GC: IPFS GC deletes blocks on the local node that are not pinned. I'm not sure what you mean by "too many useful things". If it's not pinned, it's evicted.


Hahaha oh wow. How embarrassing. I thought the original comment was talking about Golang's GC, as they did specifically mention go-ipfs.

I suppose that's your answer! Simple misunderstanding.


Golang has only one tunable GC parameter by design, so it can be too opaque for certain loads, but I learned that putting a ballast in RAM fixes overly frequent GC sweeps.

https://eng.uber.com/how-we-saved-70k-cores-across-30-missio...
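The ballast trick from that post can be sketched in a few lines: with GOGC=100 (the default, and the one tunable) the GC triggers roughly when the heap doubles over the live set, so a large long-lived allocation raises the trigger point and GC cycles become rarer. Sizes below are illustrative; Uber used a 10 GiB ballast.

```go
package main

import (
	"fmt"
	"runtime"
)

var sink []byte // package-level sink so churn allocations aren't optimized away

// gcCountAfterChurn allocates ~32 MiB of short-lived garbage and reports
// how many GC cycles ran, optionally while holding a ballast. The ballast
// inflates the live heap, pushing the GOGC-based trigger point far above
// what the churn can reach.
func gcCountAfterChurn(ballastSize int) int {
	runtime.GC() // start from a clean slate
	var before runtime.MemStats
	runtime.ReadMemStats(&before)

	ballast := make([]byte, ballastSize)
	for i := 0; i < 500; i++ {
		sink = make([]byte, 64<<10) // 64 KiB of garbage per iteration
	}
	runtime.KeepAlive(ballast) // keep the ballast live until here

	var after runtime.MemStats
	runtime.ReadMemStats(&after)
	return int(after.NumGC - before.NumGC)
}

func main() {
	without := gcCountAfterChurn(0)
	with := gcCountAfterChurn(128 << 20) // 128 MiB ballast
	fmt.Printf("GC cycles without ballast: %d, with ballast: %d\n", without, with)
}
```

With the ballast, the same 32 MiB of churn typically triggers far fewer collections, which is the whole effect the Uber post relies on.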


IPFS node garbage collection is not related to Golang GC.


Yep, I made an IPFS pinning service (the second one to exist, IIRC, and the first usable one), and I wish I hadn't. It's a bit of a trash fire.


If you have any specific complaints or experiences you'd like to share, I would be interested in hearing about them, but "it's a bit of a trash fire" is unhelpful.


hey there stavros, are you from pinata?



No, I'm not.


just checking


Same experience here. It's a real shame. I have the feeling IPFS is trying to do too much and became a bit of a bloated mess.

I love the idea of decentralized content-addressed storage and wish there were a more lightweight way to get there.


It was slow and buggy when it first released, which was understandable, so I waited a few years. I tried again recently, now that it's popular, and it's still an impractical proof of concept.


IPFS is especially heavy on bandwidth

if you plan to host IPFS at home and meanwhile do things on the internet then IPFS isn’t for you

although it’d be a good excuse to upgrade your home network


That seems like an insane usability trade-off that would limit adoption quite heavily.


> if you plan to host IPFS at home and meanwhile do things on the internet then IPFS isn’t for you

Is it possible to limit the bandwidth and queue depth for IPFS?

I bet you could also lower its limit dynamically whenever web traffic is seen.


"Implement bandwidth limiting" https://github.com/ipfs/go-ipfs/issues/3065

Going on six years now. You can use external tools (like "trickle") or your OS knobs.


nope, the only real possibility (to lower bandwidth usage) is to disable peering

without p2p IPFS is nearly useless


So you're saying it's somehow impossible to deploy QoS management in your network to limit IPFS the same way you would limit anything else?

Presumably one could also run IPFS (or Brave) in its own VM, container, or hardware server and rate-limit traffic in and out of it.


IPFS content loads slow enough without rate limiting

rate-limiting will only make matters worse and again render IPFS nearly useless

they haven't managed to solve these issues for 6 years now


Am I understanding this correctly?

1. IPFS's bandwidth is so low that it is unusable.

2. IPFS's bandwidth usage is so high that it makes the network unusable.


Yes, that has roughly been my experience. The application throughput offered by IPFS is quite low while the packet throughput is very high. I was experiencing 5-15% packet loss over my internet connection while running IPFS. I'm not sure if a bandwidth limit would even help or if it is related to number of connections.


There are different profiles you can select from. You might have the server profile enabled. The low-power profile probably consumes the least, and you can opt out of sharing altogether. But the entire point is p2p.



