Hacker News: kelseydh's comments

This flexibility can exist for office jobs, but customer-facing roles typically need somebody present for the whole shift.

Many customer-facing roles deal with dedicated buyers who are also paid to work 9-5, so there is no issue (false, but I'll ignore that your customers are often on a different continent).

If the customer is a retail customer, in the majority of cases you are open extended hours. Your employees either work a 6-2 shift and so have the evening free, or they work 2-10 and have all morning (often they are working a shorter shift).

The final group is doctors/dentists. Every boss knows you need to take an hour off to see them every few months and makes provision. They have to have this flexibility anyway, because sometimes people are sick or die overnight. Thus if it is critical for the job that people are present, you have extra people around to pick up the slack.


And businesses like that could create their schedules such that all employees would have some time for errands during normal working hours, but they usually don’t because it’s easier not to.

Chess anti-cheat now relies on analysing your moves and spotting the absence of mistakes. Not even grandmasters play tactically perfect games, so this works pretty well for finding cheaters. In theory FPS games could do the same to detect aimbotting.

I still don't understand why we aren't using server-side gameplay analysis for cheat detection. You could have some obvious inhuman-level gameplay heuristics for real-time kicks/timeouts during matches, plus post-game analysis by AI to flag for review, or outright automatically ban, gameplay that deviates from normal high-level players.
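As a rough illustration of what such a server-side heuristic could look like, here is a minimal sketch. The threshold values and the `AimSample` structure are purely hypothetical, not taken from any real anticheat system:

```python
# Hypothetical server-side heuristic: flag players whose view snaps
# faster than is humanly plausible on the same tick they land a hit.
# Thresholds are illustrative guesses, not tuned production values.

from dataclasses import dataclass

@dataclass
class AimSample:
    dt: float          # seconds since the previous tick
    yaw_delta: float   # degrees turned during this tick
    scored_hit: bool   # did this tick register a hit?

def flag_inhuman_aim(samples, max_human_speed=1500.0, min_suspect_ticks=3):
    """Count ticks where the player turned faster than max_human_speed
    (deg/s) and hit on the same tick; flag if it happens repeatedly."""
    suspect = sum(
        1 for s in samples
        if s.dt > 0 and abs(s.yaw_delta) / s.dt > max_human_speed and s.scored_hit
    )
    return suspect >= min_suspect_ticks
```

A real system would obviously need far more signals (accuracy distributions, reaction times, recoil patterns) and statistical baselines per rank, but the shape of the check is the same: compare observed inputs against the envelope of plausible human play.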

Games very much are using server-side statistical analysis for cheat detection. Valve made a presentation about it, and Epic has an API for feeding game state data to ML anticheat for aimbot detection (game-specific and in addition to their existing anticheat measures).

It’s just that it doesn’t work.


But why doesn't it work?

Either everyone on Earth who's working on this has a skill issue (which is probably hubris to assume?), or there isn't enough signal to distinguish a sufficiently humanized aimbot from human aim (note: Valve manages to screw up even here, with cheaters in Premier basically rage-aimbotting these days IIRC).

In addition, there’s not much these things can do against subtler stuff like ESP.


So now we're using an AI cheat snoop to detect the behaviours of AIs, which means the cheat AI will need to learn to avoid the tell-tale patterns the AI cheat snoop looks for, which means the AI cheat snoop will need to...

It won't close the skill gap entirely, but the more an AI aimbot degrades itself to mimic a human and beat cheat detection, the less advantage it gives the players using it.

It is scary how nuanced the cheating tools already are. Here is a video promoting cheat software, explaining how closely their aimbot system can be made to mimic real play: https://www.youtube.com/watch?v=hrBohlkHMjU


and will have to do something along those lines for online play.

I feel the third one so much with LLMs. But I get the sense younger generations aren't fans of where it's moving the world either.

"Despite its failure, the Great Pigeon Census of 1887 is remembered as a cautionary tale..."

This type of writing is considered non-encyclopedic by Wikipedia standards as it injects superficial analysis. The imitation articles would look better without it. Maybe train on this article? https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


Why is this example non-encyclopedic? It's an informative, falsifiable statement that could be supported by a citation, like here: https://en.wikipedia.org/wiki/Thongbu_Wainucha#:~:text=remem...

Does anybody have a demo of this technology in use? I'm very curious to see how it sounds in practice. Uncanny or hyperrealistic?

https://www.sanas.ai/#playground check the Accent Translation section

I ran into this (or a similar service) when cancelling comcast a few weeks ago. It worked _really_ well. It was slightly uncanny, but I think most people wouldn’t notice anything. It was only some awkward phrasing that made it obvious to me.

Curious: How could you differentiate it from a foreign-educated English-speaking human?

Found a video from a couple of years back using this tech. It wasn't Telus in the video, but they demonstrated it, and the change was subtle but definitely noticeable. Seeing as it was two years old, I am certain the technology has greatly improved since then.

I wonder about latency especially. Does the AI wait for sentences to finish?

On the flip side there are people who believe that LLM-assisted coding changes require attribution in git history.

As I've written elsewhere in the thread, having worked at a large enterprise in collaboration with Legal: if there isn't tracking of which contributions came from AI, it's harder to be protected legally by, e.g., Microsoft's indemnity clause if you're sued.

It's definitely helpful to know whether a PR was AI-assisted or not and the git attribution line is a simple and effective way of communicating that.

I also recommend specifying model name and version so the maintainer knows upfront the level of slop they are dealing with.


Really sucks we never got to see any of the prototypes or designs they built for it.


Absolutely! At the very minimum, an Apple EVSE would have been a shippable product. But no, Tim couldn't even get that after 10 years, thousands of dedicated employees, and hundreds of millions of dollars spent.


I needed a fuzzy string matching algorithm for finding the best name matches among a candidate list. I considered normalized Levenshtein distance but ended up using Jaro-Winkler. I'm curious if anybody has good resources on when to use each fuzzy string matching algorithm.


Levenshtein distance is rarely the similarity measure you need. Words usually mean something, and it's usually the distance in meaning you need.

As usual, examples from my genealogy hobby: many sites allow you to upload your family tree as a gedcom file and compare it to other people's trees or a public tree. Most of these use Levenshtein distance on names to judge similarity, and it's terrible. Anne Nilsen and Anne Olsen could be the same person, right? No!! These tools are unfortunately useless to me because they give so many false positives.

These days, an embedding model is the way to go. Even a small, bad embedding model is better than Levenshtein distance if you care about the meaning of the string.
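The false-positive problem described above is easy to demonstrate. The sketch below uses Python's stdlib `difflib.SequenceMatcher` as a stand-in for Levenshtein-style character similarity; the names are the hypothetical examples from the comment:

```python
# Character-level similarity scores "Anne Nilsen" vs "Anne Olsen" highly,
# even though Nilsen and Olsen are different surnames, i.e. these are
# almost certainly different people.
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [
    ("Anne Nilsen", "Anne Olsen"),   # different people, high char overlap
    ("Anne Nilsen", "Ane Nilssen"),  # plausibly the same person, spelling drift
]
for a, b in pairs:
    print(f"{a!r} vs {b!r}: {char_similarity(a, b):.2f}")
```

Both pairs score high, but only one pair is plausibly the same person; character similarity alone cannot tell the difference, which is the argument for matchers that understand name semantics.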


It depends on whether you're trying to correct for typos or do something semantic. Also, embedding distance is much, much more expensive.


There's a section in the docs of our FOSS record linkage software that covers this: https://moj-analytical-services.github.io/splink/topic_guide...


Levenshtein distance is often a poor way to fuzzy match or rank. I suspect that in JS, even the trie approach would incur significant GC/alloc thrashing, or the cost of building a huge trie index.

I tried fuzzy matching using a cleverly-assembled regexp approach, which works surprisingly well: https://github.com/leeoniya/uFuzzy


I would argue the opposite, with the 'often' doing some heavy lifting.

It is very likely that you have interacted with a Levenshtein-distance-based spell corrector (with many modifications), and I have touched that code. Used well, they can be very powerful.
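To show the shape of what this commenter is describing, here is a minimal Norvig-style corrector sketch: generate every string one edit away, then keep the most frequent known word. The tiny word-frequency table is made up for illustration; real correctors add many modifications (keyboard-aware costs, context, larger edit radii):

```python
# Minimal edit-distance-1 spell corrector sketch (Norvig-style).
# WORD_FREQ is an illustrative toy table, not a real corpus.

LETTERS = "abcdefghijklmnopqrstuvwxyz"
WORD_FREQ = {"the": 100, "there": 60, "they": 50, "their": 40, "hello": 10}

def edits1(word: str) -> set[str]:
    """All strings one edit (delete, transpose, replace, insert) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def correct(word: str) -> str:
    """Known words win; otherwise the most frequent known word one edit away."""
    if word in WORD_FREQ:
        return word
    candidates = [w for w in edits1(word) if w in WORD_FREQ]
    return max(candidates, key=WORD_FREQ.get, default=word)
```

Even this toy version illustrates the key engineering trick: rather than computing edit distance against the whole dictionary, it enumerates the (bounded) neighbourhood of the misspelling and intersects it with the dictionary.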


The United States has one of the highest incarceration rates in the world, with approximately 541 to 614 people imprisoned per 100,000 residents as of 2022–2026. While representing only 5% of the global population, the US holds roughly 20% of the world's prisoners, totalling over 1.8 million people.

For many crimes, the U.S. loves handing out eye-wateringly long sentences for offences that would result in a tenth of the prison time in other countries.


If you accrue a high score, Google should give you a plaque like they give to Youtubers with many subscribers.


