If you ever need a backend for storing the edited PDFs, FilePost (https://filepost.dev) could handle that. One API call to upload and you get a permanent CDN URL back. Could be a good complement for a "save and share" feature.
Nice idea, image upload from mobile is underserved. If you ever need a backend for it, FilePost (https://filepost.dev) could work well as the upload target. Single POST with the image, instant CDN URL back. Would pair nicely with a terminal workflow.
Cool setup. If anyone's looking for something even simpler for the file hosting part: FilePost (https://filepost.dev) handles upload + CDN serving in a single API call. No S3 config, no Caddy reverse proxy, files served via Cloudflare edge. Obviously less control than self-hosting, but way less to maintain.
Interesting project. I built something in a similar space: FilePost (https://filepost.dev). Different approach though: API-first, one POST request gives you a permanent CDN URL via Cloudflare. Caps at 500MB per file but focused on developer workflows and automation rather than large single-file transfers. How are you handling delivery on the backend?
this is why i pin every dependency hash in my python projects. pip install --require-hashes with a locked requirements file catches exactly this: if the package hash changes unexpectedly, the install fails. surprised this isn't the default in the npm ecosystem
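For anyone who wants to try it, a minimal sketch of that workflow using pip-tools (the package name in the lockfile excerpt is illustrative and the digest is abbreviated, not a real hash):

```shell
# Generate a lockfile where every dependency carries its expected hash
# (requires pip-tools: pip install pip-tools):
pip-compile --generate-hashes requirements.in -o requirements.txt

# requirements.txt then contains entries like (digest abbreviated):
#   requests==2.32.3 \
#       --hash=sha256:...

# Install in hash-checking mode; any unexpected hash aborts the install:
pip install --require-hashes -r requirements.txt
```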
Npm and the other JavaScript package managers do generate and check lockfiles with hashes by default. This was a new release, not a republishing of an old version (which isn’t possible on the npm registry anyway).
i wasn't aware npm lockfiles check hashes by default now. my concern is more about the initial install before a lockfile exists, like in CI from a fresh clone without a committed lockfile. but you're right, once the lockfile is there the hash mismatch would be caught.
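For anyone else who didn't know: the check lives in the lockfile itself. Each entry in package-lock.json carries an `integrity` field with a subresource-integrity hash that npm verifies against the downloaded tarball. A sketch of one entry (digest elided):

```json
"node_modules/left-pad": {
  "version": "1.3.0",
  "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
  "integrity": "sha512-…"
}
```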
i run fastapi APIs on linode with cloudflare in front and honestly the simplicity is underrated. predictable billing, docs that match reality, no surprise platform regressions. for a straightforward API workload the hyperscaler tax doesn't make sense unless you genuinely need their scale
i guess the difference is i chose my hyperscalers à la carte instead of getting the all-in-one bundle. at least when cloudflare breaks something i can still ssh into my linode and debug it directly
interesting that most scrapers are still just regex-searching for @ in raw bytes. on the receiving side i've been dealing with a different angle of the same problem, blocking disposable/temp email signups. a domain blocklist catches 90% but the clever ones use random alias domains that all point their MX records to the same disposable mail infrastructure. checking where MX records actually resolve catches those too
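A minimal sketch of that MX check, assuming you've already resolved the domain's MX records (e.g. with dnspython's dns.resolver); the hostnames and blocklist below are made-up examples:

```python
# Hypothetical MX hosts known to belong to disposable-mail infrastructure.
DISPOSABLE_MX_SUFFIXES = {
    "mail.tempmailbox.example",
    "mx.throwaway.example",
}

def uses_disposable_mx(mx_hosts: list[str], blocked=DISPOSABLE_MX_SUFFIXES) -> bool:
    """True if any of the domain's MX hostnames sits on known disposable infrastructure."""
    for host in mx_hosts:
        # Normalize: DNS answers are often absolute names ending in "." and mixed-case.
        h = host.rstrip(".").lower()
        if any(h == s or h.endswith("." + s) for s in blocked):
            return True
    return False

# A random alias domain whose MX points at the shared infrastructure is caught,
# even if the domain itself is not on any blocklist:
print(uses_disposable_mx(["mx1.mail.tempmailbox.example."]))  # True
print(uses_disposable_mx(["aspmx.l.google.com."]))            # False
```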
the JSON functions are genuinely useful even for simple apps. i use sqlite as a dev database and being able to query JSON columns without a preprocessing step saves a lot of time. STRICT tables are also great; they caught a bug where I was accidentally inserting the wrong type and it just silently worked in regular mode
> caught a bug where I was accidentally inserting the wrong type and it just silently worked in regular mode
Typically one would design one's DTOs to catch such errors right in the application layer, well before the data ever reaches the DB.
Different people call this serialization-deserialization layer different things (DTO is probably the most ubiquitous), but in general one programs it to catch structural issues (age is "O" instead of 0).
The DB is then left to catch unchanging ground truths and domain-consistency issues (someone signing up with a DOB in the future, or a person with no email or address at all when the business requires one).
In your case it's great that the DB caught gaps in the application layer, but ideally those would be handled well before the data ever got there.
The way I think about DB types is:
1. unexpected gaps/bugs: "What did I miss in my application layer?"
2. expected and unchanging constraints: "What are some unchanging ground truth constraints in my business?", "What are the absolute physical limits of my data?" - while I check for a negative age in my DTOs to provide fast errors to the user, I put these constraints in the DB because it's an unchanging rule of reality.
Crucially, by keeping volatile business rules out of the database and restricting it only to these ground truths, I avoid being dragged down by constant DB migrations in a fast-evolving business.
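The split above can be sketched in a few lines; all names here are hypothetical, just to illustrate the two layers:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SignupDTO:
    age: int
    dob: date
    email: str

    @classmethod
    def parse(cls, raw: dict) -> "SignupDTO":
        # Structural check in the app layer: age must actually be a number
        # (catches "O" where 0 was meant), and gives the user a fast error.
        try:
            age = int(raw["age"])
        except (ValueError, TypeError):
            raise ValueError(f"age is not a number: {raw['age']!r}")
        if age < 0:
            raise ValueError("age cannot be negative")
        return cls(age=age, dob=date.fromisoformat(raw["dob"]), email=raw["email"])

# The DB then enforces only unchanging ground truths, e.g. (SQLite sketch):
#   CREATE TABLE users (
#     email TEXT NOT NULL,                            -- the business needs an email
#     dob   TEXT NOT NULL CHECK (dob <= date('now'))  -- no future birthdays
#   ) STRICT;
```

Volatile rules ("minimum age is 18 this quarter") stay in the DTO, so tightening them never requires a schema migration.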
yeah fair point, validation should ideally happen at the app layer first. in my case it was a quick prototype where i skipped the proper serialization step and STRICT caught it before it became a real problem. definitely not a substitute for proper DTOs in production
for cloud/VPS hosting hetzner has been solid for me. their object storage is S3-compatible and way cheaper than AWS. the only downside is the region options are limited to the EU, which is actually a feature if you're targeting EU users. for the directory question, euro-stack.com linked above looks more maintained than european-alternatives.eu. i also just search "X alternative EU" whenever i need something specific
Thanks! I'm also a happy Hetzner customer (for 4 or 5 years now?) and can vouch for their object storage, box storage, and cloud VPS. I also loved learning about euro-stack.com, and I end up doing the same as you (searching for EU alternatives).