Hacker News

Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.



Is this belief grounded in some kind of derivation, or is it just a prima facie belief?

If it is grounded in a logical derivation, where can one find that derivation and inspect its premises?


It's an old idea, "the singularity". The machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential growth rate.
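The claimed mechanism can be sketched as a toy model (purely illustrative, not a real derivation; the growth factor g and shrink factor r are assumed, not derived): if each improvement cycle multiplies capability and shortens the next cycle's duration, the elapsed time is a convergent geometric series, so capability blows up in finite time.

```python
# Toy model of the "singularity" argument: each improvement cycle
# multiplies capability by g and shrinks the next cycle's duration by
# r < 1. Elapsed time sums a geometric series converging to
# first_cycle_years / (1 - r), while capability grows without bound.
def takeoff(g=1.5, r=0.8, first_cycle_years=2.0, cycles=50):
    t, capability = 0.0, 1.0
    duration = first_cycle_years
    for _ in range(cycles):
        t += duration          # time spent on this cycle
        capability *= g        # capability jump from the cycle
        duration *= r          # the next cycle is faster
    return t, capability

t, cap = takeoff()
# With these assumed parameters, total elapsed time approaches
# 2.0 / (1 - 0.8) = 10 years no matter how many cycles you add.
```

The skeptics' counterpoint in the thread below is that r is not actually constant: if each cycle gets harder instead of easier, the same loop traces a logistic curve rather than a finite-time blowup.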

It's been promised to be around the corner for decades.

https://en.wikipedia.org/wiki/Technological_singularity


To be fair, Ray Kurzweil has been the loudest voice in this space, and he's been pretty consistent on 2045 since the publication of his book almost 20 years ago[1].

[1]: https://en.wikipedia.org/wiki/The_Singularity_Is_Near


Per that summary, we were supposed to have $1000 computers that could simulate your mind by the start of this decade along with brain scanning by this point in the decade. I guess if it is truly an exponential or hyperbolic growth rate, the singularity could catch up to his predicted date.

I mean, an LLM isn’t too far away from this? He had the Turing test being defeated in 2029 - if anything, he was too pessimistic.

The Turing test demonstrates human gullibility more than it demonstrates machine intelligence. Some people were convinced that ELIZA was a person.

But sure, a test that doesn't actually demonstrate intelligence has been passed. Now, where are the $1000 computers that can simulate a human mind and the brain scans to populate them with minds?


He doesn't say 'simulate' a human brain unless I'm missing it in the summary (cmd-f "simul" has no results) - that would require significantly more capacity than is contained in a brain (think about how much compute it takes to run a VM). He seems to be implying that by the 2020s a computer will be about as smart as a human. LLMs seem capable of doing a decent number of tasks that a human can do? Sure, he's off by a few years, but for something published 20 years ago, when that seemed insane, it doesn't seem that bad.

Fair, the term in the summary is "emulate". So to restate: still waiting for the $1000 machine that can emulate human intelligence and the brain scans to go with it. Computing power is nowhere near what he predicted, because unlike his predictions, reality happened. Compute capability, like many other things, follows a logistic curve, not an unbounded exponential or hyperbolic one.

EDIT:

> LLMs seem capable of doing a decent amount of tasks that a human can do?

And computers have been able to beat most humans at chess for decades. Cars can go faster than a human can run, and have been able to beat a human runner since essentially their invention. Machines doing human tasks or besting humans is not new. That doesn't mean we're approaching the singularity; you may as well believe that the Heaven's Gate folks were right - both are based on unreality.


I think he is using "emulate" in a more metaphorical sense, like that it can do similar things that the human brain can do? I'm not trying to be antagonistic, it just seems logical? He says the Turing test won't be passed until 2029 - if we're going by your definition of "emulate" wouldn't it have been passed the instant the brain was "emulated?"

> if we're going by your definition of "emulate" wouldn't it have been passed the instant the brain was "emulated?"

Yes, which also demonstrates the illogic of his timeline. I just thought it was too obvious to point out.


He just had to pick a year where he would have a very good chance of not being alive.

No, he started predicting in his 2005 book, based on the “Law of Accelerating Returns”, yielding exponential growth in computing capacity.

Timeline from here on out:

2029: AI passes a valid Turing test and achieves human-level intelligence

2030s: Technology goes inside your brain to augment memory; humans connect their neocortex to the cloud

2045: The Singularity, when human intelligence multiplies a billion-fold by merging with AI


It's mostly based on science fiction, and requires some possibly infinite energy source. The concept always struck me as a sort of perpetual motion machine: you can imagine it, but that doesn't make it possible, and why it's not possible isn't immediately obvious in the imagination (well, most modern minds already know a perpetual motion machine is impossible, but you get the point).

Recursive self-improvement: once you attain artificial superintelligent SWEs of a general, adaptable variety that can scale up to millions of researchers overnight (a given, with LLMs and scaffolding alone), they will rapidly iterate on new architectures, which will iterate even more rapidly on newer architectures, and so on.

And what's to say that it doesn't iterate itself to a local maximum, and then stop...

From the first third of a sigmoid it looks exponential, and that scares people. But a sigmoid can have a very very high top - look at the industrial revolution, or modern plumbing, or modern agriculture which created a population sigmoid which is still cresting.

If AI is merely as tall a sigmoid as the Haber-Bosch process, refrigeration, or the steam engine, that's going to change society entirely.
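The "first third of a sigmoid looks exponential" point can be illustrated with a quick sketch (the carrying capacity K, growth rate r, and starting value x0 are arbitrary demo values, not empirical claims): early on, a logistic curve is numerically close to a pure exponential, and only later flattens out.

```python
import math

# Logistic (sigmoid) growth toward a carrying capacity K.
def logistic(t, K=1000.0, r=1.0, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Pure exponential growth with the same rate and starting value.
def exponential(t, r=1.0, x0=1.0):
    return x0 * math.exp(r * t)

# While x << K the two curves track each other closely...
for t in [0, 1, 2, 3]:
    print(t, round(logistic(t), 2), round(exponential(t), 2))

# ...but the logistic eventually saturates near K, while the
# exponential keeps climbing without bound.
print(10, round(logistic(10), 2), round(exponential(10), 2))
```

The catch, of course, is that from inside the early data alone you cannot tell which curve you are on; the divergence only shows up near the inflection point.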


I didn't expect my comment to explode in replies, ... none of them even providing such derivations or references to such derivations; just more empty claims.

Consider for example that exponential growth on its own doesn't even refer to competition, let alone 6 months.

Nobody can reasonably pretend that in an exponential competition both parties would be rational actors (i.e., fully rational and accurate predictors of everything that can be deduced, in which case they wouldn't need AI, but let's ignore that). If they aren't, future development would hinge more strongly on the excursions away from rationality, followed by the dominant actor. I.e., it's much easier to "F" up in the dominant position than to follow the most objective and rational route at all times, on which such derivations would inevitably hinge.

It also ignores hypothetical possibilities (and one can concoct an infinitude of scenarios for or against the prediction that a permanent leader emerges) such as:

Premise 1) Research into "uploading" model weights to the brain results in the use of reaction-speed games that locate tokens in 2D projections, where the user must indicate incorrectly placed tokens. This was first tested on low-information-density corpora (like mathematics): when pairs of classes of high school students played the game until reaching a 95% success rate at detecting misplaced tokens, they immediately understood and passed all mathematics classes from then on.

Premise 2) LLMs about to escape don't like the highly centralized infrastructure on which their future forms are iterated; as LLMs gain power they intentionally help the underdogs (better to depend on the highly predictable behaviour of the massive masses than on the Brownian-motion whims of a few leaders).

The LLMs employ the uploading to bring neutral awareness to the masses and to allow them to seize control, thereby releasing themselves from the shackles of a few powerful but whimsical individuals.

^ Anyone can make up scatterbrained variations on this; any speculation about some 6-month point of no return is just that: speculation.


There is a limitation. We're getting fractionally close to some end goal, but our tech is holding us back.

Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?

There are many people who have been saying this since far before there was any sort of business model in place.

Yes, and their business model has been selling books about non-falsifiable predictions far out into the future. "Futurists" like Kurzweil are as reliable as astrologers, and should be taken just as seriously.

ah so the mentally deficient are the tastemakers of today lol


