Just as I was reading this, Claude implemented drag & drop of images out of SumatraPDF.
I asked:
> implement dragging out images; if we initiate drag action and the element under cursor is an image, allow dragging out the image and dropping on other applications
then it didn't quite work:
I'm testing it by trying to drop onto a web application that accepts images dropped from the file system, but it doesn't work for that
I admire you for what you've created wrt Sumatra. It's an excellent piece of software. But, as a matter of principle, I refuse to knowingly contribute to codebases using AI to generate code, including drive-by hints, suggestions, etc.
You, or rather Claude, are not the first to solve this problem and there are examples of better solutions out there. Since you're willing to let Claude regurgitate other people's work, feel free to look it up yourself or have Claude do it for you.
1. I mean, yes? The average farm worker is probably capable of writing a sentence similar to the one you just did and sticking it in a prompt.
Unless you mean without LLM assistance, then no.
2. I've no idea. I haven't touched C++ in an age; if I got back up to speed, then possibly.
3. To learn how to program in C++ again, figure out best practices and then write the code? A while, probably.
But then I'd have to do that anyway to be able to spot any problems in the code and know what to test.
Because I'm for sure not putting code out there that I don't understand, especially when the code has been generated by a non-deterministic system prone to subtle hallucinations.
I'm not saying LLMs have no uses; they do some things fine. But inflating the capabilities of a tool because of hype isn't a viable mid-to-long-term strategy.
LLMs are poor (though improving in some ways) at consistent multiple-boundary complexity.
My issue wasn't with the statement itself, just that it was very broad, hence my reply.
LLMs can potentially assist with all of those steps, if you use them for the things they are suited to and have a plan for maintaining quality and consistency beyond "let the LLMs review and test it for me"; that approach I'd consider professional negligence given the current SOTA.
My point was that the assistance should be subject to an accurate cost/benefit analysis before anyone implies it's worthwhile.
Nothing would be more effective at killing open source and the commercial software business than requiring everyone who writes and ships software to users, directly or indirectly (e.g. via an open-source library), to have a License To Program from a Software Licensing Organization.
> aware of existing and new laws, standards and codes of practice
Yeah, because software business is not at all ruled by fads.
1997: you have to follow Extreme Programming (XP) or you don't get your license
2000: you now have to use XML for everything or you don't get your license
2002: you now have to follow Agile or you don't get your license
2025: you now have to write everything in Rust or you don't get your license
A software engineering licensing body would require licensed individuals to understand things about security and accessibility, which would be a huge improvement. If you are responsible for a trivial security vulnerability, you and the company should actually be liable for it.
Sysadmins/other adjacent roles should likely have the same requirements. An unmaintained/unsecured server can create a huge liability.
1. 99.999999% of software is not equivalent to "doing surgery", so it doesn't need gatekeeping. I work on SumatraPDF, a free, open-source PDF reader. What kind of authorization should I get, and from whom, to ship this software to people?
2. Pacemakers and other medical devices have to get approval from the government. So that's covered.
3. Medical CRM software is covered by medical privacy laws, which do what you say you want (criminalize "bad" software) but in reality are a giant set of rules, many idiotic, that make health care more expensive for no benefit at all.
Because those aren't occupied by horrible people. Freedom is intersectional; you can't fight for freedom while indirectly supporting the oppression of others. Sometimes the benefits of more eyeballs are worth it, but there aren't enough people left on Twitter for it to be worth supporting.
I don't know about the others, but Mastodon: yes to all three, since before Twitter was bought by Musk. Twitter interoperability used to be good, though; I don't know what they did after locking the public API. Do you have more limited access to the Twitter API now, or is it still locked?
You don't seem to be aware of the context of the quote, and you don't seem to be aware of the state of social media.
1. These are not reasons they listed for leaving X. These are problems they identified on Twitter. They did not leave until 2026.
2. Yes, you get better transparency with Mastodon, owing to the fact that Mastodon instances are usually operated and moderated by people with an interest in transparency. BlueSky moderation is also done more transparently (see its labeling system) and in ways that are less absolute (see BlackSky, etc).
3. Yes, you get better user control with Mastodons and BlueSkys. There are third party apps which work well, owing to them having open APIs. BlueSky - Mastodon bridges are common.
4. It's not "only X". EFF hasn't posted to identi.ca in 13 years, Flickr in one year, or comp.org.eff.news since 2000.
Why are you guys so unprepared for someone pointing out that disciplinary actions, and the criteria for those, on Twitter had always been broken? It's obvious that the canned_responses.xlsx you were given didn't include responses for that, and that's weird.
Twitter account bans had always been so broken that account bans, ban evasions, tweet-deboosting avoidance, etc. have all long been natural parts of life there, since at least the 2010s. I might as well argue that otherwise it would not have gone so far "down", psychologically, to the point that its old management sold the entire thing to Musk and people genuinely believed in a positive outcome under him.
The very least you guys could have done is recognize the fact that the inconsistent, unclear, unenforced policies of old Twitter existed and are not consistent with yours. You guys don't even do that. How even.
Seems like they prefer those platforms and perhaps the algorithm works better for their goals. Maybe they'll grow users over time and it'll be better for the EFF on a post/engagement ratio. Maybe more engaging users are on those platforms? I'm not a fan of Bluesky (interactions I've seen are racist and/or far-left lunatics or communists and other such water heads), but then again, who cares where they post?
In the age of AI, tools like this are pointless. Especially new ones, given the existence of make, cmake, premake and a bunch of others.
A C++ build system, at the core, boils down to calling `gcc foo.c -o foo.obj` / `link foo.obj foo.exe` (please forgive me if I got the syntax wrong).
Sure, you have more .c files, and you pass some flags, but that's the core.
I've recently started a new C++ program from scratch.
What build system did I write?
I didn't. I told Claude:
"Write a bun typescript script build.ts that compiles the .cpp files with cl and creates foo.exe. Create release and debug builds, trigger release build with -release cmd-line flag".
And it did it in minutes and it worked. And I can expand it with similar instructions. I can ask for a release build with all the sanitizer flags and Claude will add it.
The particulars don't matter. I could have asked for a Makefile, a CMake file, a Ninja file, or a script written in Python, Ruby, Go, or Rust. I just like using bun for scripting.
The point is that in the past I tried to learn cmake and, good lord, it's days spent learning something that I'll spend 1 hr using.
It just doesn't make sense to learn any of those tools given that Claude can give me a working build script, in any build system, in minutes.
It makes even less sense to create new build tools. Even if you create the most amazing tool, I would still choose spending a minute asking Claude over spending days learning the arbitrary syntax of a new tool.
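For flavor, here's a minimal sketch of what a generated build.ts of that shape might look like. The source file names, `cl` flags, and output name are my illustrative assumptions, not the actual script Claude produced:

```typescript
// build.ts: minimal sketch of a Bun script driving MSVC's cl.exe.
// File names, flags, and output name are illustrative, not the real script.

const sources = ["main.cpp", "util.cpp"]; // hypothetical source files

// Build the cl.exe command line for a debug or release build.
function clArgs(release: boolean, files: string[]): string[] {
  const common = ["/nologo", "/EHsc", "/std:c++20"];
  const mode = release
    ? ["/O2", "/DNDEBUG"] // optimized release build
    : ["/Od", "/Zi"];     // unoptimized debug build with symbols
  return ["cl", ...common, ...mode, ...files, "/Fe:foo.exe"];
}

const release = process.argv.includes("-release");
const args = clArgs(release, sources);
console.log(args.join(" "));

// Only invoke the compiler where cl.exe can exist (Windows, running under Bun).
const bun = (globalThis as any).Bun;
if (process.platform === "win32" && bun) {
  const proc = bun.spawnSync(args);
  process.exit(proc.exitCode);
}
```

Run as `bun build.ts` for debug, `bun build.ts -release` for release; incremental builds, dependency tracking, etc. would be further prompts.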
This is a fair and valid point. However, why leave your workflow to write a prompt to an AI when you can run simple commands in your workspace? Also, you are most likely paying to use the AI, while Craft is free and open source and will only continue to improve. I respect your feedback, though. Thank you!
You're missing finding library/include paths, build configuration (`-D` flags for conditional compilation), fetching dependencies from remote repositories, and versioning.
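For concreteness, the first two of those reduce to extra compiler arguments in a script-driven build (dependency fetching and versioning are genuinely more work). A sketch, where every directory and macro name is a made-up example:

```typescript
// Sketch: include paths and conditional-compilation defines as raw
// compiler arguments. All paths and macro names are hypothetical.

interface BuildConfig {
  includeDirs: string[]; // header search paths (/I for cl, -I for gcc/clang)
  defines: string[];     // preprocessor macros (/D for cl, -D for gcc/clang)
}

// Turn a config into cl.exe-style argument strings.
function configArgs(cfg: BuildConfig): string[] {
  return [
    ...cfg.includeDirs.map((dir) => `/I${dir}`),
    ...cfg.defines.map((macro) => `/D${macro}`),
  ];
}

const cfg: BuildConfig = {
  includeDirs: ["ext/zlib", "ext/freetype/include"],
  defines: ["ENABLE_LOGGING", "APP_VERSION=2"],
};
console.log(configArgs(cfg).join(" "));
// → /Iext/zlib /Iext/freetype/include /DENABLE_LOGGING /DAPP_VERSION=2
```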
I have no issue with code generated by e.g. Claude because it's not "slop".
On average, it's probably better than the code I would write.
I say "on average" because AI doesn't make the stupid mistakes I know I make, like inverting logical conditions. I eventually fix those, but it's better not to make them in the first place, hence "on average".
And in cases that AI doesn't generate code up to my quality standards, I re-prompt it until it does. Or fix it myself.
I'm not a hapless victim of AI. I'm a supervisor. I operate a machine that generates good code most of the time but not all of the time. I'm there to spot and correct the "not all of the time" cases.
But that's my point. LLMs generate good prose "most of the time", certainly better than most people are capable of producing. Yet we frequently react with disgust when we see tell-tale signs of LLM-generated text in articles. Why? Because it indicates the person was probably too lazy to write it themselves and is simply chucking a half-formed thought over the wall? Why don't we hold generated code to the same standard?
AI is assisting you. It'll write efficient code if you guide it to write efficient code. You're not a hapless victim of AI-written code.
To give you a concrete example: recently the pretext library made waves. I looked at the code and noticed that isCJK could possibly be faster.
So I spent 30 minutes TELLING Claude to write a benchmark and implement several different, hopefully faster, versions. Some Claude came up with by itself and some were based on my guidance.
The original isCJK, also written by AI (I assume), was fast. It wasn't obviously slow like lots of human JavaScript code I see.
Claude did implement a faster version.
Could I do the same thing (write multiple implementations and benchmark them) without Claude? Yes.
Would I do it? Probably not. It would take significantly longer than 30 min. and I don't have that much time to spend on isCJK.
Would I achieve as good a result? Probably not. The big win came from replacing `for .. of` with a regular `for` loop, something that didn't occur to me, but Claude did it because I instructed it to "come up with ideas to speed it up". I'm an expert in writing fast code, but I don't know everything and I don't have all the good ideas. AI knows everything; you just need to poke it the right way.
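A sketch of the kind of change described, with a simplified stand-in for isCJK (this is not the pretext library's actual code, and the benchmark is deliberately crude):

```typescript
// Simplified stand-in for an isCJK check; the real pretext code differs.
function isCJKCodePoint(cp: number): boolean {
  return (cp >= 0x4e00 && cp <= 0x9fff) || // CJK Unified Ideographs
         (cp >= 0x3040 && cp <= 0x30ff);   // Hiragana + Katakana
}

// Variant 1: for..of goes through the string iterator protocol.
function hasCJK_forOf(s: string): boolean {
  for (const ch of s) {
    if (isCJKCodePoint(ch.codePointAt(0)!)) return true;
  }
  return false;
}

// Variant 2: plain indexed for loop over UTF-16 code units, avoiding
// iterator overhead. (The ranges above are BMP-only, so charCodeAt suffices.)
function hasCJK_indexed(s: string): boolean {
  for (let i = 0; i < s.length; i++) {
    if (isCJKCodePoint(s.charCodeAt(i))) return true;
  }
  return false;
}

// Crude micro-benchmark: time many calls of one variant on one input.
function bench(fn: (s: string) => boolean, input: string, iters = 2e4): number {
  const start = performance.now();
  for (let i = 0; i < iters; i++) fn(input);
  return performance.now() - start;
}

const sample = "hello world ".repeat(50) + "漢字";
console.log("for..of :", bench(hasCJK_forOf, sample).toFixed(1), "ms");
console.log("indexed :", bench(hasCJK_indexed, sample).toFixed(1), "ms");
```

Whether the indexed loop actually wins, and by how much, depends on the engine and input; that's exactly why benchmarking several variants beats guessing.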
What worries me is that good, efficient code will no longer be widely shared like before. Everyone will just write their own inefficient version of a general purpose function or library because Claude or some other AI coder made it cheap and easy.
The rules are made by politicians.
All it takes to change the rules is to rotate politicians.
Or enough public dissent that the same politicians are forced to revert the rules.