Hacker News: quietbritishjim's comments

They specified the geometric mean.

The arithmetic mean (what you're thinking of) of 1 and 100 is 50.5.

The geometric mean of 1 and 100 is 10. It gives a sense of the average magnitude.
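For concreteness, this works out the same by hand and via the standard library (`statistics.geometric_mean` has been in Python's stdlib since 3.8):

```python
import math
import statistics

values = [1, 100]

# Geometric mean by definition: exp of the arithmetic mean of the logs.
by_hand = math.exp(sum(math.log(v) for v in values) / len(values))

# Same thing via the standard library.
builtin = statistics.geometric_mean(values)

print(round(by_hand, 6), round(builtin, 6))  # 10.0 10.0
```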


They edited the comment, previously it did not mention geometric mean.

The geometric mean seems to be the natural mean for relative comparisons between lengths, because the mean of (Planck length, observable universe) is clearly very different from the mean of (house, observable universe).

It's just a programming font ligature. If you copy and paste it you'll see the actual characters e.g.

   auto main() -> int {
(It's also modern C++ trailing return type.)

I enjoy that: because I have my browser's monospace font set to one that also has those ligatures, your comment isn't enlightening at all (I set it up that way, though, so it's not a problem for me :P )

Why would anybody think more words more better?

    int main() {

The trailing return type pattern was added to the standard, IIRC, to make it easier for templated functions to have return types that depend on the types of the arguments, such as in this example:

    template <typename A, typename B>
    auto multiply(A a, B b) -> decltype(a * b) {
        return a * b;
    }
It's easier for the compiler to parse everything if `decltype(a * b)` occurs _after_ the declarations of `a` and `b`. Once this pattern was added and people started using it for that purpose, people also started using it for all functions, for consistency.

Yes, in that case I completely agree. Using it everywhere is a mistake IMHO. I know there might be a stylistic reason for using it everywhere, but I believe less code is better, unless more code makes it easier to understand.

Sounds like any per-user detection wouldn't have worked in this case.

I don't think that's true. Often, to come up with a proof of a particular theorem of interest, it's necessary to invent a whole new branch of mathematics that is interesting in its own right e.g. Galois theory for finding roots of polynomials. If the proof is automated then it might not be decomposed in a way that makes some new theory apparent. That's not true of a simple calculation.


> I don't think that's true. Often, to come up with a proof of a particular theorem of interest, it's necessary to invent a whole new branch of mathematics that is interesting in its own right e.g. Galois theory for finding roots of polynomials. If the proof is automated then it might not be decomposed in a way that makes some new theory apparent. That's not true of a simple calculation.

Ya, so? Even if automation only works well on the well-understood stuff, mathematicians can still work on mysteries; they will simply have more time and resources to do so.


This is literally the same thing as having the model write well factored, readable code. You can tell it to do things like avoid mixing abstraction levels within a function/proof, create interfaces (definitions/axioms) for useful ideas, etc. You can also work with it interactively (this is how I work with programming), so you can ask it to factor things in the way you prefer on the fly.


>This is literally the same thing as

No.

>You can

Not right now, right? I don't think current AI automated proofs are smart enough to introduce nontrivial abstractions.

Anyway, I think you're missing the point of the parent's posts. Math is not proofs. Some time ago, the four color theorem "proof" was very controversial, because it was a computer-assisted exhaustive check of every possibility, impossible for a human to verify. It didn't bring any insight.

In general, on some level, proofs are not that important to mathematicians. For example, proofs of the Riemann hypothesis or P ≠ NP would be groundbreaking not because anyone seriously doubts that P ≠ NP, but because we expect the proofs to be enlightening and to use some novel technique.


Right, in the same way that programs are not opcodes. They're written to be read and understood by people. Language models can deal with this.

I'm not sure what your threshold for "trivial" is (e.g. would inventing groups from nothing be trivial? Would figuring out what various definitions in condensed mathematics "must be" to establish a correspondence with existing theory be trivial?), but I see LLMs come up with their own reasonable abstractions/interfaces just fine.


The first line of the post is:

> I'm the engineer who got PyPI to quarantine litellm.

I'm guessing they used a tool other than Claude Code to send the email.


"got" can be read as "indirectly, via a blog post, which I think they reacted to"


I've updated the timeline to clarify that I did in fact email them. I’m not yet at the point of having Claude write my emails for me; in fact, it was my first one sent since joining the company 10 months ago!


Wait, what? You sent a single email being in a company for ten months?? Or was it the first external email?


I'm not a Swift user, but I can tell you from C++ experience that this logic doesn't mitigate a complex programming language.

* If you're in a team (or reading code in a third-party repo) then you need to know whatever features are used in that code, even if they're not in "your" subset of the language.

* Different codebases using different subsets of the language can feel quite different, which is annoying even if you know all the features used in them.

* Even if you're writing code entirely on your own, you still end up needing to learn about more language features than your code actually uses, so that you can make an informed decision about what goes in "your" subset.


Using structured concurrency [1] as introduced in Python Trio [2] genuinely does help write much simpler concurrent code.

Also, as noted in that Simon Tatham article, Python makes choices at the language level that you have to fuss over yourself in C++. Given how different Trio is from asyncio (the async library in Python's standard library), it seems to me that making some of those basic choices wasn't actually that restrictive, so I'd guess that a lot of C++'s async complexity isn't that necessary for the problem.

[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

[2] https://trio.readthedocs.io/en/stable/


After I wrote the comment below, I realized that it really is just ‘um, actually…’-ing about discussing using concurrency vs implementing it. It’s probably not needed, but I do like my wording, so I’m posting it for personal posterity.

In the context of an article about C++’s coroutines for building concurrency, I think structured concurrency is out of scope. Structured concurrency is an effective and reasonably efficient idiom for handling a substantial percentage of concurrent workloads (which, in light of your parent’s comment, is probably why you brought it up as a solution); however, C++ coroutines are pitched several levels of abstraction below where structured concurrency is implemented.

Additionally, there are implementation requirements for Trio-style structured concurrency to function. I’m almost certain a garbage collector is not required, so that probably isn’t an issue, but the nurseries and their associated memory management are independent machinery that C++ will almost certainly never impose as a base requirement for having concurrency. There are also some pretty effective cancellation strategies presumed in Trio which would also have to be positioned as requirements.

Not really a critique of the idiom, but I think it’s worth mentioning that a higher-level solution is not always applicable given a lower-level language feature’s expected usage, particularly when implementing concurrency (as with C++ coroutines) versus using concurrency (as with Trio).


Python's stdlib now supports structured concurrency via task groups[1], inspired by Trio's nurseries[2].

[1] https://docs.python.org/3/library/asyncio-task.html#id6

[2] https://github.com/python/cpython/issues/90908


Good point. I did carefully say that Trio "introduced" structured concurrency, partly due to this (and also other languages that now use it e.g. Swift, Kotlin).

I will say that it's still not as nice as using Trio. Partly that's because it has edge-triggered cancellation (calling task.cancel() injects a single cancellation exception) rather than Trio's level-triggered cancellation (once a scope is cancelled, including the scope implicit in a nursery, it stays cancelled so future async calls all throw Cancelled unless shielded). The interaction between asyncio TaskGroup and its older task API is also really awkward (how do I update the task's cancelled count if an unrelated task I'm waiting on throws Cancelled?). But it's a huge improvement if you're forced to use asyncio.


I don't see any reason to suggest the HN submitter is the same as the article author, especially considering the high volume of submitted articles by the submitter.


History doesn't necessarily make it clear when a war might have started but didn't because of some specific factor. Mainly you see the wars that did happen. (It has a strong survivorship bias in the sense that a war "survived" history if it went ahead for real rather than being considered and decided against.)


I think their point was: the comment they were replying to ("Beating is a normal English idiom") was being disingenuous.

Saying something like "the benchmarks took a beating in the new version" would be inoffensive but "flowers after the beating" is much more specifically about abuse in a relationship.

I don't think "Whether or not you think it's appropriate" was meant to say, don't worry it's fine. I think it just meant, let's not justify by pretending that it's about something different than it obviously is.


Thanks, I get it now. I'm not sure if the comment was necessarily disingenuous but it's clearly not used as an idiom.

