> Tell us your hopes and dreams for a Cloudflare-wide CLI
It'd be great if the Wrangler CLI could display the required API token permissions upfront during local dev, so you know exactly what to provision before deploying. Even better if there were something like a `cf permissions check` command that tells you which permissions are missing from, or unneeded on, an API token.
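The core of such a check is just a set diff. A minimal sketch (no such `cf` command exists today, and the scope names here are invented for illustration, not real Cloudflare permission identifiers):

```python
def permissions_diff(required: set[str], granted: set[str]) -> dict[str, list[str]]:
    """Report scopes a token lacks and scopes it carries but never uses."""
    return {"missing": sorted(required - granted),
            "unneeded": sorted(granted - required)}

# Illustrative scope names only.
required = {"workers:write", "kv:read"}    # what local dev actually exercised
granted = {"workers:write", "zones:read"}  # what the token was provisioned with
print(permissions_diff(required, granted))
# {'missing': ['kv:read'], 'unneeded': ['zones:read']}
```

The hard part in practice would be the `required` side: the CLI would have to record which API calls local dev actually made and map them back to token scopes.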
That would be glorious! If ChatGPT doesn't get the permissions right on the first try, I know that I'm going to have to spend the next few hours reading the documentation or trying random combinations to get a token that works.
Why is "on the first try" so important? What's wrong with telling it your end goal and letting it figure out the exact right combo while you go off and work on something else?
I think this boils down to more discoverability of the entire API. While I'm not necessarily a fan of GraphQL, it does provide the tools for very robust LLM usage, largely because of the discoverability and HATEOAS principles it actually follows, compared to most "REST" APIs. I would love it if LLMs could learn everything they need to about an API just by following links from its root entry point. That drastically cuts down on the "ingested knowledge" and documentation reads (of the wrong version) they need to perform. Outdated documentation can often be worse than no documentation if the tool has the capability of helping you "discover" its features and capabilities.
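A toy sketch of that link-following discovery, with a simulated HAL-style API (the payload shape, paths, and actions are all invented for illustration; real hypermedia APIs vary):

```python
# Simulated responses keyed by path, standing in for HTTP GETs.
FAKE_API = {
    "/": {"_links": {"tokens": {"href": "/tokens"}, "zones": {"href": "/zones"}}},
    "/tokens": {"_links": {"self": {"href": "/tokens"}}, "actions": ["create", "verify"]},
    "/zones": {"_links": {"self": {"href": "/zones"}}, "actions": ["list"]},
}

def discover(path="/", seen=None):
    """Walk every resource reachable by following links from the root."""
    seen = seen if seen is not None else {}
    if path in seen:          # avoid cycles like "self" links
        return seen
    doc = FAKE_API[path]
    seen[path] = doc.get("actions", [])
    for link in doc.get("_links", {}).values():
        discover(link["href"], seen)
    return seen

print(discover())
# {'/': [], '/tokens': ['create', 'verify'], '/zones': ['list']}
```

The point is that the client starts with exactly one URL and no other baked-in knowledge, which is what makes stale external docs unnecessary.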
There are plenty of LED strips with audio controllers that work pretty well. I've used them in a few projects. Just go look at Amazon, you can get them for pretty cheap.
Seeing this kind of friction makes me more confident in VeraCrypt. The tools that never seem to run into trouble with platform gatekeepers are the ones I'd worry about.
Well look at something like ANOM. The FBI encouraged its use. Because it was run by the FBI and they could see all the private messages.
If VeraCrypt were a honeypot, the powers that be would go out of their way to make it as easy to use as possible. They'd instantly sack whoever made this decision, and reverse it.
The biggest risk in encryption software is that you lose access to your data. You seem to be ignoring that risk completely and focusing on something else entirely.
Web browser is a sandbox by default. Worst a sketchy site does is eat a tab, less if you run an adblocker. Native app? Background processes, hardware ID shenanigans, your contacts, location. The whole buffet.
So I take it this is a security concern. How do you feel about the fact that when you open a webapp in your browser, you re-download that app code every time? That the server can send you a backdoor every single time, made just for you, and nobody else will ever know? And that you can't check the "hash" of the webapp, like you can with an app?
On the other hand, an app is sandboxed, too (on mobile OSes like Android and iOS). When you download it, you can check a hash that you can (if you want to) compare with a friend to see if they got the same app. With an app, there is an intermediary (the "app store") that would need to collude with the developers to send a backdoor just for you, and even then you would still have the app binary as proof.
That's always a question I have with "secure" web services: if you use ProtonMail, you trust that Proton doesn't send you a web page that leaks your key. But if you trust Proton for that, what's the point of the end-to-end encryption? When you use the Signal app, the whole idea is that you don't have to trust Signal for the end-to-end encryption, at all.
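To make that trust question concrete, here's a toy sketch (invented function names and a throwaway XOR "cipher", nothing resembling Proton's or Signal's actual code) of why the delivered client code is the trust boundary:

```python
def honest_client(plaintext: bytes, key: bytes, send) -> None:
    # Encrypts locally; only ciphertext ever leaves the device.
    send(bytes(p ^ k for p, k in zip(plaintext, key)))

def backdoored_client(plaintext: bytes, key: bytes, send) -> None:
    # Same signature, same "encryption", but a tampered script served to one
    # targeted user could just as easily ship the plaintext along too.
    ct = bytes(p ^ k for p, k in zip(plaintext, key))
    send(ct + b"||" + plaintext)

wire = []                      # everything the "server" gets to see
key = b"\x01\x02\x03\x04\x05"  # fixed toy key so the demo is deterministic
honest_client(b"hello", key, wire.append)
backdoored_client(b"hello", key, wire.append)
print(b"hello" in wire[0], b"hello" in wire[1])  # False True
```

Both clients look identical from the outside, which is exactly why it matters whether the client is a pinned, hash-checkable binary or re-fetched code.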
I think the question is: where should the information barrier exist? A web browser puts a barrier between your OS and the company, while an app (potentially) puts a barrier between the client and the server.
For security minded and source-available apps like Signal, the latter is the right choice. For low trust companies with no expectation of app/server separation, the former seems right.
One thing is that on mobile OSes (iOS and Android), the apps are sandboxed. It is wrong to say that they are not; I don't know why people believe otherwise. Programs are typically not sandboxed on desktop OSes (they can be, but the user has to set that up), but on mobile they most definitely are. That's part of the reason why the security models of iOS and Android are better than those of desktop OSes.
Just like you don't have to give access to your filesystem to a webapp (but you can), you don't have to give this access to an app.
The reason to like webapps better than mobile apps is, IMO, not security (again, IMO it's worse in terms of security). The reason could be that they want to rely on an open source tech stack (which iOS does not provide, but Android does!). But really my feeling is that it's often either uninformed or political (i.e. it feels like a strong statement against Google to refuse Android apps?). Which again is weird to me, because Google controls browser development (via Chromium) just as much as it controls the Android core (AOSP). People who are happy with Chromium should be happy with GrapheneOS, I would say.
Open source helps, but if you didn't build it yourself, you'll need to trust whoever did. F-Droid reproducible builds help in that you only need to trust either F-Droid or the developer, not both.
The browser tends to be safer because it has a stronger sandbox than native apps on a mobile OS. It's meant to be able to run potentially malicious code with a very limited blast radius.
>That the server can send you a backdoor every single time, made just for you, and nobody else will ever know?
There is no "backdoor" when the browser is sandboxed. "Backdoor" is a specific thing; I think you need to read up on it before you keep using it incorrectly.
>On the other hand, an app is sandboxed, too (on mobile OSes like Android and iOS). When you download it, you can check a hash that you can (if you want to) compare with a friend to see if they got the same app.
That isn't what "sandboxed" means, it has nothing to do with checking hashes. And no, mobile apps are not really sandboxed, they have full access to your mobile device once you install it and give it access - and let's be real, most people are just going to blindly click "allow" for anything the app requests after installing an app.
>With an app, there is an intermediary (the "app store") that would need to collude with the developers to send a backdoor just for you, and even then you would still have the app binary as proof.
You keep referring to "backdoor", and I don't think you really know what that means.
>That's always a question I have with "secure" web services: if you use ProtonMail, you trust that Proton doesn't send you a web page that leaks your key. But if you trust Proton for that, what's the point of the end-to-end encryption? When you use the Signal app, the whole idea is that you don't have to trust Signal for the end-to-end encryption, at all.
That isn't how any of this works. The main value proposition of Signal is that we do trust its end-to-end encryption. Protonmail sending a "web page" that "leaks your key"? WTF?
It's obvious what GP meant - we can verify that the apps we download are the apps everyone else downloads.
We can't do this with Proton, where our mail is supposedly end-to-end encrypted. They can easily view our mail if they serve us different code when we load their site.
> That isn't what "sandboxed" means, it has nothing to do with checking hashes. And no, mobile apps are not really sandboxed
Apps ARE somewhat sandboxed, and GP didn't mean that sandboxing == checking hashes. Those were two sentences appearing one after the other.
>We can't do this with Proton, where our mail is supposedly end-to-end encrypted. They can easily view our mail if they serve us different code when we load their site.
That isn't a problem with how the web works vs how apps work, that's a problem with you trusting Protonmail.
If you really wanted to be secure sending an email or any communication, you wouldn't trust any third party, be it an app or a website. You would encrypt your message on an air-gapped system (preferably a minimal, known-safe Linux installation), move the encrypted file to a USB drive, insert that USB into a system with network access, and then send the encrypted file to your destination through any service out there; even plain old unencrypted HTTP would work at that point, because your message is already encrypted.
The second you give your unencrypted message to any third party on any device with an input box and a network connection is the moment you've made it public. If I had to be extremely sure that my message isn't read by anyone else, typing it into a mobile app or a web browser isn't where I'd start; it would only be done as a last resort.
That is a problem with you not understanding how security works.
> If you really wanted to be secure
There is no such thing as "being really secure". There are threat models, and implementations that defend you against them. Because you can't prevent a bulldozer from destroying your front door does not mean that it is useless to ever lock it.
Even your air-gapped example is wrong, because it means that you have to trust that system (unless you are capable of building a computer from scratch in your garage, which I doubt).
Sending an encrypted message over the Signal app is a lot more secure than sending an email over the ProtonMail website, which itself is more secure than sending it in a non-secret Telegram channel. It's a gradient; it can be "more" or "less" secure, it doesn't have to be "all or nothing" as you seem to believe.
>That is a problem with you not understanding how security works.
That's hilariously wrong.
>There is no such thing as "being really secure".
Sure there is. "Being really secure" isn't what I said at all, and it's a vague statement to make. You're reaching to create an internet argument, and I'm frankly bored of this, you're out of your depth.
>Even your air-gapped example is wrong, because it means that you have to trust that system
I'd trust a system that I set up. I'm not going to do it on a system that you set up, that much is for certain.
> (unless you are capable of building a computer from scratch in your garage, which I doubt).
I still have an EPROM burner, so yes, I could, and I have.
>Sending an encrypted message over the Signal app is a lot more secure than sending an email over the ProtonMail website
If you really think that, then nobody should be taking security advice from you.
I'm really tired of this pointless internet interaction. Goodbye.
Well, you can verify that the code that you downloaded is the same that everyone else downloaded. Even if it contains webviews.
Now if it contains webviews, it brings the security issue of... the webapps, of course.
Personally, I want an open source app. You can audit an open source app and even compile it yourself. You can't really do that with a website. And I don't mean just mobile apps, that applies to desktop apps, too. I wouldn't run a web-based terminal, for instance (do people actually do that?).
>Well, you can verify that the code that you downloaded is the same that everyone else downloaded. Even if it contains webviews.
Not impossible to do with websites, if the need to do it was there. It would take about 15 minutes to create a browser extension that could make a hash of all the files loaded, to compare with other users with the extension installed - but honestly that's just not needed because if you're connecting via HTTPS, then you're getting the files that are intended to be served, presumably not malicious if you trust the source. And if you don't trust the source, then why are you loading it to begin with??
>Now if it contains webviews, it brings the security issue of... the webapps, of course.
Web applications are sandboxed in the web browser. Very little issue with that, outside of browser bugs/exploits, but bugs and exploits are found in every system ever.
>I wouldn't run a web-based terminal, for instance (do people actually do that?).
AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
> And if you don't trust the source, then why are you loading it to begin with??
I trust that Proton (for example) has implemented E2EE in their services. I wouldn't trust them to handle my unencrypted data - I wouldn't trust anyone for that. I don't trust that their security is perfect - no one's security is. So if they're breached, they could serve me malicious JS. I don't trust they're impervious to government pressure or blackmail. By making sure the files served to me are the same as the files served to anyone else, I can be relatively sure I'm not targeted personally. People could also review those files to make sure they're not malicious.
> It would take about 15 minutes to create a browser extension that could make a hash of all the files loaded, to compare with other users with the extension installed
You completely underestimate it. I am absolutely certain that you cannot create a browser extension that meaningfully solves this problem in 15 minutes.
> Web applications are sandboxed in the web browser. Very little issue with that
Except that when we are talking about end-to-end encryption, the sandbox has nothing to do with it. The sandbox defends against something else, not the server serving you an end-to-end encryption program abusing it.
> AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
I genuinely can't see if you just don't understand the point being discussed at all, or if you keep saying off-topic things as a way to divert the discussion.
>You completely underestimate it. I am absolutely certain that you cannot create a browser extension that meaningfully solves this problem in 15 minutes.
You are absolutely wrong. I write browser extensions, I can spin up a new one in a minute, and the code to monitor and hash all resources loaded by a webpage is trivially easy to do. It would be simple to set up a server to allow comparing the hashes, in a POC. I'm not talking about making this a robust service that everyone can use, I'm only talking about how easy it is to do in a general way. It's far easier than you think it is.
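For what it's worth, the core of the hashing idea can be sketched in a few lines (this is a simulation with inlined resource bodies, not an actual WebExtension; the comparison infrastructure, and telling tampering apart from a normal deploy, isn't shown):

```python
import hashlib

def page_fingerprint(resources: dict[str, bytes]) -> str:
    """Order-independent fingerprint of every file a page loaded."""
    h = hashlib.sha256()
    for url in sorted(resources):
        h.update(url.encode())
        h.update(hashlib.sha256(resources[url]).digest())
    return h.hexdigest()

# Inlined stand-ins for the bodies an extension would capture off the network.
alice   = {"/app.js": b"console.log('app')", "/style.css": b"body{}"}
bob     = {"/app.js": b"console.log('app')", "/style.css": b"body{}"}
mallory = {"/app.js": b"steal(key)",         "/style.css": b"body{}"}

print(page_fingerprint(alice) == page_fingerprint(bob))      # True
print(page_fingerprint(alice) == page_fingerprint(mallory))  # False
```

Note that a legitimate update and targeted tampering both change the fingerprint, so hashing alone only detects "different", not "malicious".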
>>>I wouldn't run a web-based terminal, for instance (do people actually do that?).
>> AWS has a web-based terminal for EC2 instances. It's not a problem, a lot of people use it.
>I genuinely can't see if you just don't understand the point being discussed at all, or if you keep saying off-topic things as a way to divert the discussion.
You're right, I certainly don't understand the nonsense you're trying to convey.
I'm also tired of this pointless internet interaction. Goodbye.
> I'm not talking about making this a robust service that everyone can use
Right. So you cannot do it. Thank you.
> I'm also tired of this pointless internet interaction. Goodbye.
Seems to me that you don't enjoy discussing with people who behave like jerks (which I admittedly did, just for you). You may not have realised it, but you started it. I am happy to disagree in a respectful tone, but you broke it first. Maybe that's something to think about in your next totally meaningful internet interaction, though it sounds like you like telling others that you know better because you are older.
AlBugdy and the person you are replying to are literally right re: server-delivered backdoors. Using E2EE applications in a browser moves the trust from the client back to the server.
> That isn't how any of this works. The main value proposition of Signal is that we do trust its end-to-end encryption. Protonmail sending a "web page" that "leaks your key"? WTF?
Yes, and it's that you also have to trust the client, with a server that dynamically delivers code: you have no way of knowing fully what payload it's sending you. An example of this vulnerability was discussed when it was pointed out that 1P, Bitwarden, and others were susceptible to server-side backdoors if used from the web, in that research study that came out last month and was posted here.
> And no, mobile apps are not really sandboxed, they have full access to your mobile device once you install it and give it access - and let's be real, most people are just going to blindly click "allow" for anything the app requests after installing an app.
This is genuinely just not true, even if you click allow for all permissions on Android and iOS. An application on a non-rooted device doesn't have "full access."
>This is genuinely just not true, even if you click allow for all permissions on Android and iOS. An application on a non-rooted device doesn't have "full access."
You're wrong.
Facebook has been repeatedly accused and caught using unallowed, hidden, or deprecated APIs to bypass user privacy settings and platform restrictions, particularly on iOS and Android.
Dude, I was here to talk about security, not to be judged on the quality of my English. What I get from your take is that your English is better than mine, but not your security knowledge.
> That isn't what "sandboxed" means, it has nothing to do with checking hashes.
I didn't say it had anything to do with it. I meant that NOT ONLY is it sandboxed, but ON TOP OF THAT you can check that you received the same code.
> You keep referring to "backdoor", and I don't think you really know what that means.
The only explanation I see for you not understanding what I mean by "backdoor" for the end-to-end encryption is that you have no idea how it works. If you're just being condescending about my language, go for it. Tell me I can't speak your language. But don't tell me I don't understand security, you have absolutely no idea what I know.
> Protonmail sending a "web page" that "leaks your key"? WTF?
You obviously don't understand how it works if this surprises you. I would gladly elaborate with anyone who is not a jerk, but that does not seem to be the case here.
"backdoor" isn't really an English thing, it's a tech thing. If you want to talk about tech, you need to know the terminology. This is not something like the difference between "there" and "their" and "they're". I did not correct your English grammar, I corrected your tech terminology.
>I would gladly elaborate with anyone who is not a jerk, but that does not seem to be the case here.
I was not "a jerk". You didn't seem to understand what a "backdoor" is in terms of tech, and I still don't think you do.
This pointless internet interaction is over. Goodbye.
Now it only ensures that Cloudflare doesn't tamper with the WhatsApp Web code they serve, you still have to trust Meta.
I feel like reaching the same level as "checking the hash for the app" would be very hard in practice. I.e. the web is not built around doing that. Your extension would have to scan all the files you download when you reach a page, somehow make a hash of them, somehow compare that to... something, and then tell the difference between "tampered with" and "just a normal update".
Also you just can't "download the sources, audit them and compile them yourself" with a webapp. If you do that, it's just "an app built with web tech", like Electron, I guess?
I think it's more than that. It's a walled garden. If you want to leave go somewhere else, it's further away than just a tab. That increases stickiness.
For example, let's say I'm an airline. I don't want you in the browser, where you're going to have my competitors in the adjacent tabs. I want you in my app, where all you see is my version of the world. (I mean, yes, you can have multiple apps open, too, and switch between them. It's still a bit more friction than moving between tabs. Or maybe that's just my mental model, and young people see apps as just another kind of tab?)
I'd argue it's absolutely ludicrous to give _other people's information_ up to an app (or website). Your contacts contain names, phone numbers, potentially photos and addresses of _other people_.
As long as the application is made aware of the permissions and can prevent functioning when they get denied, that doesn't really help much. It's the choice between getting mugged or never leaving the house.
The ability to deny permissions without the app noticing, or to feed it fake data, doesn't exist on either system.
The weather app I used sent location data from pretty much everyone who didn't manually go through the effort to opt out to some shady American data broker that got hacked. Most people using the app gave it location permissions because of its ability to warn for rain coming to your precise location with decent accuracy.
Nobody wanted to share their location with these data brokers, but thanks to underfunded privacy watchdogs, you have no idea what happens to any app that you give any kind of permission.
One of the most enraging things about life since 2005-ish is that no matter how private and careful I am, it doesn't even matter because every other inconsiderate fool I know and interact with will HAPPILY let some random company have access to THEIR contacts--which includes me--in order to play Farmville for a month until they get bored of that and offer up my private information to the next bullshit ad company that asks for their contacts.
It used to frustrate me that people didn't care about their own privacy, because I genuinely didn't want evil people to hurt them. But, it's even more angering that people don't have the common decency to consider whether their friends and family would want them sharing their phone numbers, email addresses, photos of them, etc.
Not without my knowledge or your knowledge, sure. But I'd bet there's a significant percentage of the population that is tired of thinking about permission popups and just hits yes, yes, YES to get the app started. Especially if it forces retries before going forward.
I think they're counting on these popups wearing people out.
After GDPR made these incessant annoying cookie popups mandatory, I just robotically click any button to dismiss it as fast as possible. Some website could probably write "Give root access" in that box and I'd probably click it without thinking.
As has been said before, sites that don't use unnecessary cookies don't need to have a cookie banner. Having the banner is often just malicious compliance (or, for a non-compliant banner, malicious non-compliance).
bias disclosure: i used to do Android dev and kinda hate the browser personally.
i don’t get this take. “Web browser is sandbox by default”. sure, it has to do the rail grind with a rake to access system calls, but in a modern system apps are also sandboxed, especially on a smartphone or when downloaded with a managed app service. the OS gives you the ability to specify permissions, although to what degree depends on your provider. your browser _obviously_ also has the permissions you’re talking about. and now we have introduced yet more vectors in the form of cookies, where web _applications_ can track activity _between applications_ with that just kinda being part of the spec, and it totally neuters the protections that the OS gives you: once you configure Firefox to get your location for Open Maps, you’ve given control of your location permissions for _all web apps_ to yet another corporate-driven point of failure.
don’t even get me started on the UI mess.
my tinfoil hat theory is that the browser is pushed by mostly bad actors trying to get data, while anyone providing a real user experience has a nice native app.
Good night, sweet reputation and flights of angels sing thee to thy rest.
Seriously though, I appreciate this perspective. While I prefer using a browser whenever possible, I'm well aware of modern fingerprinting techniques. But I didn't know about permission "sharing" between apps in the same browser. Thanks!
Privacy and security have always been a game of cat and mouse. Doesn't seem like that's going to change anytime soon.
Exactly. The only app-specific abuse I can think of is apps that wake in the background (Apple said this isn't the case, but it is), Android apps getting push by default, or apps that just hope the user will grant broad permissions that the web can't request.
Apps have to request your permission for contacts and location. iOS is really good about not giving bad permissions to apps without user being asked for consent.