Hacker News | giobox's comments

The optional Dirac Live firmware/licence for the miniDSP is an extra $199, so it's really $425.

I have one and personally didn't bother; I did the usual UMIK-1 + REW process to create the room correction.

> https://www.minidsp.com/products/dirac-series/index.php?opti...


I wouldn’t consider this the end of the matter, and given the past few years' experience with Meta, yet more layoffs are absolutely possible.

Related to the Quest: the Horizon Worlds team was largely let go (around 1,000 employees) earlier in the year, and those cuts are not part of this latest 10 percent.


Docker Compose is brilliant while your stack remains on a single box, and it will scale quite nicely this way for some time for most applications, with minimal maintenance overhead.

My personal strategy has always been to start off in Docker Compose, and break out to a k8s configuration later if I have to start scaling beyond a single box.
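As an illustration of that starting point, a minimal Compose file for a hypothetical two-service stack (the image names, ports, and password are placeholders, not anything from the original comment):

```yaml
# docker-compose.yml — hypothetical single-box stack
services:
  web:
    image: ghcr.io/example/web:latest   # placeholder app image
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example        # use secrets for anything real
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

`docker compose up -d` brings the whole stack up on one host, and the same service boundaries later map fairly directly onto k8s Deployments and Services if you do outgrow the box.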


I just asked it to create a torque spec diagram of the suspension for my car, a subject I'm pretty familiar with. It amazingly drew everything correctly, displayed the correct torque figures and allowed me to click on individual components to zoom in further, providing more specs.

Genuinely one of the most impressive demos I've tried in a long time. I was able to use it almost like a living version of a classic illustrated Haynes workshop manual.


I asked it about designing a 12 V solar system for a garden shed and it got everything but the broadest of strokes wrong. It figured out there should be a solar panel, a solar charge controller, a battery and some loads, but the wiring was nonsensical, and when I drilled in on the solar charge controller settings etc. it completely fell apart. An absolute non-starter for any information you plan on depending on, but good entertainment value and impressive execution.

I have a Mac Pro 5,1 taken apart on my desk right in front of me. I asked it for a diagram of the 5,1 internals. While it was Mac Pro-ish looking, it was wrong about every visual element. The text fields looked right at first glance, but every click I did was basically all wrong too. Visually it looked cool, but this is actually the first time I've seen AI be consistently wrong since maybe 2023.

I have an old door in the back yard and have been planning to make a bike shelter this week, so I asked it to make me a plan. It drew a regular shed with an "upcycled door", but no sign of where a bike should fit into it. No bike would ever fit in that thing, and the only structure it showed how to construct didn't resemble the actual finished thing.

Like every other AI demo I've ever tried: impressive on the surface, but the system fundamentally doesn't understand what it is doing.


This is great, AI freeing us from bikeshedding.

I decided to test it out myself.

Went to the website, typed in "Jeep Wrangler JK engine bay with components labeled" (Since I'm intimately familiar with JK engine bays). Seems like a pretty analogous test to what you did, if anything an even easier test.

Let's see what we get .. a very nice looking diagram of a wrangler engine bay with components labeled, looks good.

But wait ..

- The brake fluid reservoir is on the wrong side of the engine bay

- Where the brake fluid reservoir is, it's labeled as the coolant overflow tank, and while the actual coolant overflow tank does exist in the diagram, it has no label.

- The battery is on the wrong side of the engine bay.

- The top of the front grill is labeled as the "oil filter cap".

- The oil fill cap is in the wrong place.

- Half of the battery is labeled as the fuse box, when the fuse box is correctly shown, but unlabeled, on the other side of the engine bay.

- It shows two different windshield washer reservoirs next to each other.

I could keep going on ...

Now I tried clicking on the incorrectly labeled coolant overflow reservoir and it switches to a new page which now shows a completely different looking coolant overflow, but now it's at least located in the correct place in the engine bay.

But of course it doesn't look remotely like the actual coolant overflow container. It also shows the radiator cap as on the top of the coolant reservoir, when in reality it is very much on the top of the radiator itself.

Like .. I can find fault with every aspect of it. But of course, if you didn't actually know much about the topic it'd all look fairly believable. The story of LLMs basically.


It does poorly on creative concepts as well.

I attempted to explore the works of Kinoko Nasu/TYPE-MOON through its characters and the relationships across works and it was mostly nonsense. Sure it had some broad relations correct, but it presented a tiny set of meaningful characters and only attempted to touch Fate/Stay-Night and Tsukihime.

Even more damning was that it produced garbled text for a few of the textual representations and often even if the lettering was clean, the grammar was off.


To be fair, disentangling even just the Fate series is nearly impossible even for humans.

Now that you mention it, I didn't try "Metal Gear". Now that would be a ride.

Are TYPE-MOON relationship diagrams the new pelican benchmark?

I had a tab on nuclear reactors open, so I typed in "Pressurized Water Reactor". The result, while very visually appealing, is completely nonsensical (it connected the high/low pressure coolant loops together) and would definitely explode.

https://imgur.com/a/DEb3oD4


Do we ever simply accept that LLMs weren't made for this kind of detail-oriented work? I can't imagine something like this ever being anything other than a toy which can't be trusted.

Will Silicon Valley executives ever accept this reality? If we acquiesce and admit that LLMs are a good tool for prototyping and boilerplate-reduction, but not finished products-- is that when the bubble finally bursts?


I think the unfortunate fact is that most jobs in the world do not require accuracy, so an inaccurate result has a negligible impact compared to an accurate one.

I used to feel job safety in the knowledge that AI labs weren't likely to solve the hallucination problem. Then it dawned on me that they don't need to — they just need to reduce our collective expectations.


I predict that this illusion of "(in)accurate enough" will last long enough to trigger a cascading avalanche of failures across all fields of human endeavour, and I'd be pretty cautious about betting on a quick recovery, or even the survival of this civilization, after that.

Isn't that entirely analogous to our evolved and lived experiences?

We've never had to act with surgical precision except in matters of math/science/engineering.

Like how you fill your coffee cup to a level that's probably +/- 50 ml each morning.


No. For most of human evolution, we were hunter-gatherers. Imagine trying to hunt game with the accuracy of LLMs. You'll starve. Picking edible fruits from plants also requires precision, both in terms of the hand/eye coordination of actually picking it as well as in terms of knowing what's edible and what's poisonous.

When you fill up your coffee cup in the morning, I sure hope you aim accurately and don't pour half of it all over your desk. And don't even get me started on the process of making coffee that isn't completely unpalatable.


I also replied because I asked it about a Mac Pro case I had right in front of me: mostly right words, totally wrong visuals. And while I see what you mean by 'story of LLMs', I often ask LLMs about things I know, and for the last 12 months they've been pretty dang accurate. This AI visual example is the strongest 'it's just guessing' I've seen in years. For a demo it's still pretty cool, though. Not sure why OP exaggerated, or simply doesn't know his car as well as he thinks he does.

Does it make sense that maybe it has a model of the vehicle it can pull from its corpus wholesale but then the “guess the next letter” portion takes over for labeling and just guesses poorly?

I queried "your mom" and it created a historical social timeline of motherhood superimposed with a placenta. I approve

I just wrote “sex” and it gave a biology lesson on the topic!

Since Ecco the Dolphin just had two remasters and a new game announced, I decided to ask it to show me a map of the first stage of Tides of Time. Should be easy: it just has to search for it and then generate something off of it. The stage is mostly empty too, just an open area, then a large opening with an upward current that leads to a separate bay with a warp ring. Three spaces, some dolphins and a circle.

It drew a diagram that has absolutely nothing to do with the actual stage, not even close, and told me a whole slew of completely wrong information. It shows a pod of dolphins that teach you to dash attack (you know it by default). It shows a power sonar crystal (the sonar is a default ability; there is a "power" sonar, I guess, but it is not obtained from crystals, and while the game features crystals, there are none until level 3 and they look nothing like the diagram's). It shows air pockets... which are just bubbles (in the game there are actually air-refilling bubbles, but "air pockets" would refer to a small bit of open air in an underwater tunnel, like the actual, you know, real-life geological feature). There are some medusas far off in the background of the image (they're yellow; the ones in the game are clear, and they are also not present until later levels). An exit cave leads to the Sea of Silence (an actual stage. Wrong game.). A random cave says "Health source" (???? You do heal by eating fish, but???). There is no warp ring.

So basically, the ONLY correct elements in the diagram are the presence of dolphins and the fact the diagram is labeled "Home Bay". Every single other element on this is wrong and would be wrong for all iterations of the Home Bay.

For a visual search tool, this sucks at visuals.


I asked for the classic "an astronaut riding a horse on the moon" and the horse had two heads. (Still pretty impressive mind you)

Interesting! To join the cavalcade of others sharing their experiences:

I first asked it "how big are geckos". It gave me a cool comparison diagram between three gecko extremes (leachianus, Jaragua dwarf gecko, and leopard gecko, if curious). The info all looked correct. Drilling into the Jaragua brought me to a less impressive page with utter gibberish text and duplicated info boxes. So it goes. I drilled further, but those were more esoteric topics I'm less versed in (lamellar setae), so I can't evaluate the accuracy without further research.

I also gave it something broader: "tokay gecko". More duplicate info boxes, and for some reason it "drew" two geckos on top of each other. Kind of cute, but tokays are extremely territorial, so happy cohabitation isn't their default (though it's not unheard of).

Still, despite the issues, I thought it was very neat.


While I'd be perfectly content with an IP67 iPhone with an interchangeable battery, the current iPhones are IP68, which is a significant step up in dust/water ingress protection (IP67 covers immersion to 1 m for 30 minutes; IP68 covers deeper or longer immersion per the manufacturer's spec). IP68 devices generally require a sealant, while IP67 normally doesn't, making it easier to do a battery hatch etc.

IP68 doesn't require a sealant if you just use enough pressure. Phones are just too thin to screw on the back plate and use a proper gasket. Which is stupid in the first place because most people then go and put a bulky cover on them.

and applying a sealant isn't per se the problem either

iff

- it's generally commercially available

- and re-applicable after replacement with just generic tools

- and removing the battery doesn't risk breaking your phone due to physical strong binding glue being used as sealant etc.

As a dumb example, you can design the phone as a sealed unit with the battery compartment being "outside" the seal. Then have the battery also sealed, and apply a bit of "sealant" (wax? glue?) on the electrical contacts breaking the seal on both sides. As the battery and battery compartment back only have to be waterproof, not "rigid", this probably fits "just fine" into most phones (except the most over-the-top slim ones).

Which is probably closer to the actual problem: things like phone makers over-obsessing with making phones slimmer at a sub-1 mm level... and then people putting "thick" cases on the phone anyway to protect it...


It's worth noting that there are now machines other than Apple's that combine a powerful SoC with a large pool of unified memory for local AI use:

> https://www.dell.com/en-us/shop/cty/pdp/spd/dell-pro-max-fcm...

> https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...

> https://frame.work/products/desktop-diy-amd-aimax300/configu...

etc.

But yes, a modern SoC-style system with large unified memory pool is still one of the best ways to do it.


My 1080 Ti is still working away in my kid's PC. If you connect a 1080p monitor, it will still hit 60 fps in almost everything.

The only thing that holds this card back now is the handful of titles that will not run unless ray-tracing support is present on the card - Indiana Jones and the Great Circle springs to mind.

I am very likely going to get a decade of use out of it across three different builds, one of the best technology investments I've ever made.


It really is an impressive bit of hardware. I finally pulled it out of my last system a year ago, but it was definitely holding its own up until that point.


There is a well-documented open-source alternative to Tailscale: Headscale. The Tailscale client is already open source; Headscale is an open-source, drop-in replacement for the control server (which isn't), and is fully compatible with Tailscale clients:

https://github.com/juanfont/headscale

If you can be bothered running the Headscale container, you generally don't need to pay for Tailscale. It's been pretty well supported and widely used for a number of years at this point. Tailscale even permits its own engineers to contribute to Headscale, as the company sees it as complementary to the commercial offering.
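For anyone wondering what "running the Headscale container" amounts to in practice, a rough sketch (the config volume path and domain are placeholders; check the Headscale docs for the current config format):

```shell
# Run the Headscale control server in a container
# (expects a headscale config file under ./headscale-config)
docker run -d --name headscale \
  -p 8080:8080 \
  -v "$PWD/headscale-config:/etc/headscale" \
  headscale/headscale:latest serve

# Then point a stock Tailscale client at your own control
# server instead of Tailscale's hosted one:
tailscale up --login-server https://headscale.example.com
```

The client-side switch is just the `--login-server` flag; everything else about the Tailscale client works as usual.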


> Headscale is ... drop in replacement

I've been really happy with headscale, but I wouldn't call it a complete drop in replacement as I would with vaultwarden. Some features (e.g. Mullvad integration, ACL tests, etc) are missing.

Upgrading also requires stepping through every minor version, or you run into DB migration issues, but that comes with the territory of running your own instance.

I would recommend folks look up if headscale suits their needs (like it did for me for many years) before switching over.


The Headscale API is very different from the Tailscale API, so if you're automating client setup it's not quite drop-in. Once a client is up, though, from what I've heard it's seamless.


Ugh, I hadn't heard the news about the LocalStack licensing changes. I had some great results building AWS services for local dev as well as CI/CD and testing in GH actions with LocalStack in previous jobs.

I secretly always hoped Amazon would buy out LocalStack and make it the official free local development environment for AWS work, but I guess it probably would reduce revenues spent on AWS based dev and test environments. The compatibility with the AWS CLI was mostly excellent in my experiences.


Unclear what LocalStack's end game is tbh. My company has an active enterprise license atm so their recent changes won't affect me in the short term at least. As of writing I'm still a happy user of LocalStack. Disappointed with their overall direction, but I hold no ill will and I'm sure they had their reasons. I wish them luck.

Hopefully this change was not just a short-term attempt to lock in current enterprise customers by shoring up existing income streams. That'll only work in the VERY short term.

It's not difficult to foresee the inevitable customer drain to free competitors, or to private one-shots easily produced by genAI from publicly available AWS SDK code. Maybe they're already feeling that pressure and that's all this change is. I hope not.

AFAICT, they have no appreciable moat to retain customers long term. For example, I have absolutely no interest in their "Pods" or even their console UI, so those aren't keeping me around forever. For their sakes, I hope they're still shopping themselves around and didn't take some VC poison pill with preconditions for killing the community edition. Really, it's anyone's guess though.


> The compatibility with the AWS CLI was mostly excellent in my experiences.

Interesting, I've had the opposite experience. Every single AWS service I've ever tried to build tests around with LocalStack has run into compatibility issues. Usually something works in LocalStack but fails when it hits the real endpoint.

I guess the CLI itself has mostly worked; it's more that the LocalStack services don't behave the way the real services are documented to work.
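For context, the usual way the stock AWS CLI is pointed at LocalStack is via its edge endpoint (port 4566 is LocalStack's default; the credentials are dummy values LocalStack doesn't validate):

```shell
# Dummy credentials — LocalStack accepts anything
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# Unmodified AWS CLI, redirected at the local emulator
aws --endpoint-url=http://localhost:4566 s3 mb s3://demo-bucket
aws --endpoint-url=http://localhost:4566 s3 ls
```

The CLI-level compatibility is exactly this: the same binary and flags, just a different endpoint. Any divergence shows up in how the emulated services behave behind it.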


Got any concrete examples? I've been happily using LocalStack for roughly a decade now and haven't run into a single compatibility issue, aside from the obvious absence of net-new services for the first N months after AWS product launches. Things like AppConfig, etc., but those gaps got filled in time. They clearly prioritized the 95%/most-used features of each service first, though. There's a long tail in some AWS services, as one might expect, and I've never used any of the more esoteric AWS feature sets of any of their services. Those are the things that tend to end up deprecated. So requiring those long-tail feature sets may be the simple answer to our very different experiences.


It also (criminally for an SSH tool) appears for now to only work when the server uses the SSH default port 22:

https://github.com/spatie/scotty/issues/1

Literally would be one of the first things I would have tested personally!

