The memory controller sends the read to the DIMM that is not refreshing. It is invisible to software, except for the side-effect of having better performance.
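A toy sketch of that routing, with all timing numbers invented: two mirrored copies refresh on offset schedules, and the controller serves a read from whichever copy is not mid-refresh, so the stall is masked.

```javascript
// Toy model of staggered refresh across mirrored DIMMs.
// REFRESH_PERIOD and REFRESH_LEN are hypothetical tick counts, not real DDR timings.
const REFRESH_PERIOD = 10; // ticks between refreshes
const REFRESH_LEN = 2;     // ticks a refresh blocks a DIMM

// Is the copy with the given schedule offset refreshing at this tick?
function refreshing(tick, offset) {
  return (tick + offset) % REFRESH_PERIOD < REFRESH_LEN;
}

// A read stalls only if BOTH mirrored copies are refreshing at once.
// The copies are staggered half a period apart.
function readStalls(tick) {
  return refreshing(tick, 0) && refreshing(tick, REFRESH_PERIOD / 2);
}

let stalls = 0;
for (let t = 0; t < 100; t++) {
  if (readStalls(t)) stalls++;
}
console.log(stalls); // 0 — with staggering, one copy is always available
```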
Mirroring is more of a reliability feature though, no? From my understanding it's like RAID, where you keep multiple copies plus parity so uncorrectable errors aren't catastrophic. Makes sense for mainframes, which need to survive hardware failures.
Refresh avoidance is a tangential thing the memory controller happens to be able to do in a scheme like that, but you’d really have to be looking at it in a vacuum to bill it as a benefit.
Like I said, it’s all about cache. You’re not going to DRAM if you actually care about performance fluctuations at the scale of refresh stalls.
Clearly, hitting a cache would be the better outcome. The technique suggested here could only apply to unavoidably cold reads, some kind of table that's massive and randomly accessed. Assume it exists, for whatever reason. To answer your question, refresh avoidance is an advertised benefit of hardware mirroring. Current IBM techno-advertising that you can Google yourself says this:
"IBM z17 implements an enhanced redundant array of independent memory (RAIM) design with the following features: ... Staggered memory refresh: Uses RAIM to mask memory refresh latency."
I can google, thanks. My point is that nobody is buying mainframes with redundant memory to avoid refresh stalls. It’s a mostly irrelevant freebie on hardware you bought for fault tolerance.
Do you have evidence that this is a fact? Have you looked at the computing requirements documents for, say, stock exchanges?
I have it on good authority that stock exchanges ran on mainframes. They are essentially the counterparty (in a computing sense, not a financial sense) to each placed order.
If someone is willing to run a fiber-optic cable from Chicago to New York or New Jersey to exploit reduced propagation delay (admittedly a gain much larger than a refresh stall), wouldn't you think that they, or someone else, would also be interested in predicting computing stalls? An exchange would face at least a significant reputational risk if it could be exploited that way.
The low latency matching engines in colos run Linux these days, and we use microwave instead of fiber. Incoming orders are processed by hardware receive timestamp, so predicting jitter doesn’t give you an advantage. Clearing and settlement I’m not sure about, not latency critical though, mainframes wouldn’t surprise me there.
After we implemented advanced bot traffic detection and filtering, their reported traffic plummeted by 71%. [...]
But then the sales report came in. Their actual sales went up by 34%.
Their real conversion rate optimization (CRO) efforts had been working all along, but the results were buried under an avalanche of fake clicks. They were not bad at marketing; they were just spending thousands of dollars advertising to robots programmed never to buy anything. Their marketing ROI went from "terrible" to "excellent" overnight.
I don't understand how detecting bot traffic would directly lead to less ad spend.
Can you just tell e.g. Google Ads that you don't want to pay for certain clicks?
Did they modify their targeting to try to avoid bots?
I could imagine that blocking bot traffic would improve their retargeting and make sure the retargeting budget is spent on real people, leading to an increase in conversions.
What's the API here for Google Ads? How does their site report to Google Ads whether that was a good/bad user?
Is this done through conversion tracking? If so, why would you track anything but a completed purchase in the first place?
I think Google calls it remarketing, and it goes through Google Tag Manager. You can "tag" visitors however you want (duration, action, page scroll, etc.). It's just a JavaScript call to the API, which you can trigger however you like.
You wouldn't necessarily want to track conversions for retargeting, since depending on your product or service, a second purchase might be unlikely. But someone who checks out multiple product pages or articles on your site might be interested and buy in the near future. Those are, of course, also actions bots could easily perform.
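To make the pattern concrete: GTM picks up custom events pushed onto the global `dataLayer` array, and you choose when to push. The event name `engaged_visitor` and the thresholds below are hypothetical; the `dataLayer` is simulated here as a plain array so the sketch runs outside a browser.

```javascript
// In a browser this would be: window.dataLayer = window.dataLayer || [];
// Simulated here as a plain array for illustration.
const dataLayer = [];

// Tag a visitor as "engaged" once they meet site-defined thresholds
// (event name and thresholds are made up for this example).
function tagEngagedVisitor(pagesViewed, secondsOnSite) {
  if (pagesViewed >= 3 || secondsOnSite >= 30) {
    dataLayer.push({
      event: 'engaged_visitor', // GTM trigger would fire on this event name
      pagesViewed,
      secondsOnSite,
    });
  }
}

tagEngagedVisitor(4, 12); // qualifies: pushes the remarketing event
tagEngagedVisitor(1, 5);  // does not qualify: nothing pushed
console.log(dataLayer.length); // 1
```

In GTM you would then create a trigger that fires on the `engaged_visitor` custom event and attach the remarketing tag to it, so the audience is built from engagement signals rather than raw page views.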
Not sure if much serious research has been put into it. I would be skeptical that it deters them, because a lot of initial smoking happens in social situations where friends pass out individual cigarettes.
By the time someone buys their own pack they are probably hooked.
I suspect the obscene taxes pricing out young folks are one of the most effective strategies.
I doubt that this is a problem in need of a technical solution.
In any case, this system can easily be circumvented by emulating the key presses on that website.
The report only talks about validating the "fine-grained EP scheme" on Huawei hardware.