My Kyocera will work in orbit and withstand intense radiation. In fact, this very moment my new Duraforce Pro 3 is having fun in a launch-testing thermal/vac chamber.
Kyocera's 'flagship' is a high-reliability phone for absolute garbage environments.
Samsung's 'flagship' overheats and earns them class-action lawsuits.
Motorola's 'flagship' is a hinged throwback to the 90s.
Apple's 'flagship' is an overpriced piece of vendor lock-in.
Meanwhile my phone takes serious abuse and laughs at it. I've dropped it and watched it go more than 700 feet down the side of a mountain (Chambless Skarn) and BARELY chip the screen protector. Waterproofing still intact. Case barely scratched.
What you consider a flagship phone is a brittle piece of junk in my hands.
That's not a radiation-hardened chip, it's regular off-the-shelf consumer electronics. The "solar radiation" test they advertise is part of MIL-STD-810H. It tests whether the electronics survive regular sunlight on Earth. The only ionizing radiation this phone is rated for is UV light.
At least if it had registered memory there might be an argument that it has some radiation resistance, but no, it's plain old LPDDR4X.
Ulefone Armor 29 Ultra has the same MIL-STD-810H conformance with "radiation hardening", 16GB of RAM and a flagship Dimensity 9300+. Just not a removable battery.
Funny, because Qualcomm's page for the Snapdragon 7 Gen 1 (SM7450-AB) lists it as supporting only LPDDR5.
>also do you mean perhaps Strontium-90?
Nope. Strontium-60. 25 year half life compared to Sr-90's ~29. It's what we like to use in real space-environment testing on the ground. Nasty stuff.
>source?
You can actually probe your hardware and see what sort of ECC is enabled on an Android phone. In this case it's in-line ECC, which means some of the RAM is sacrificed for error correction instead of a dedicated extra chip (256 bits per block: 240 bits of data, 16 bits of error correction). What's awesome about that is that enabling ECC is simply a bit flip in firmware; you don't need extra RAM modules installed, because the installed memory can already do it.
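To put a number on the in-line ECC layout described above, here's a quick back-of-the-envelope sketch in Python. The 256/240/16 split comes from the comment; the 8 GiB phone is just an illustrative assumption on my part.

```python
# In-line ECC capacity cost, per the 256-bit block layout described above:
# 240 bits of data + 16 bits of error-correction codes per block.
BLOCK_BITS = 256
DATA_BITS = 240
ECC_BITS = BLOCK_BITS - DATA_BITS  # 16 bits of check codes per block

overhead = ECC_BITS / BLOCK_BITS          # fraction of RAM given up to ECC
usable_gib = 8 * (DATA_BITS / BLOCK_BITS) # hypothetical 8 GiB phone

print(f"ECC overhead: {overhead:.2%}")            # 6.25%
print(f"Usable on an 8 GiB phone: {usable_gib} GiB")  # 7.5 GiB
```

So the "sacrificed" RAM is a fixed 1/16 of capacity, which is why vendors can flip it on in firmware without any extra silicon.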
So you're suggesting we all just need to buy exclusively flip phones for a few years to send the market a signal that it wants replaceable batteries. Then the free market will do its thing and keep the engine of innovation running.
Speaking of which, does anyone want to do a list of "features added to smartphones over the last 10 years" vs "features removed from smartphones over the last 10 years" so we can see just what innovations are at risk?
I'm not suggesting anything, I'm simply offering the reality of the smartphone market. What you are suggesting is a contrived, exaggerated take of how markets function.
People generally like small, thin phones, as evidenced by the billions sold. It really isn't much more complicated than that.
Many flagship phones now promise 7 years of security updates. A battery that lasts 3-4 years covers only half of that window, and heavy users (1 cycle per day) will exhaust a typical cycle rating in under 2.75 years.
The battery doesn't cease functioning after 3-4 years. The benchmark says it should still have 80% capacity at that point.
It's also not really that expensive to have phone batteries replaced. Apple will do it for $120 including the battery for their flagship models that cost over $1000. Cheaper for lower end models.
I can't take any arguments seriously that claim these phones are becoming e-waste after 2.75 years. Battery replacement is a common process.
Then the law should just make sure that there's a second source at least for the batteries, that technicians have free access to disassembly instructions, and that it can be done without undue effort or risk.
Requiring common tools or technical skills to replace something that lasts 4 years is not enough of a hassle to justify enshittifying phone design, as long as you're not vendor-locked for the replacement, and a technician can do it in a reasonable amount of time, with reasonable tools, and without risking degrading the device's functionality.
I'm old enough to remember the old Nokia phones that had removable cases, removable batteries, and you would have upgrade envy for the last year of your 36-month cell service contract. Then we had WinCE and early Android devices and BlackBerries, which were pretty much the same.
> Cameras are free speech... individuals, companies, and communities should be at liberty to hire surveillance tech to protect their persons and their property.
At scale, corporate surveillance can effectively intermingle with, and/or become indistinguishable from, state surveillance. We see that happening today: why wiretap when Palantir exists?
Cameras may be speech, but surveillance has a chilling effect against it.
To me, the more interesting divergence in the discussion is over AI's capabilities.
AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.
Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.
I disagree that anyone should need LLMs for Blender, for example, because Blender is designed by people to be understood and used by people, even if it requires a learning curve. It sounds a bit dangerous to build new things we don't understand, or worse, reduce our understanding of what we currently use because (only after studying our use of the same technology) an LLM appears able to replicate it, mostly.
I'm reminded of Sam Altman's performative helplessness on Jimmy Kimmel, when he described being unable to raise a baby without ChatGPT. That's something I believe humanity has been capable of doing for a good portion of its existence, and not something we should give up to the hands of a yet-unproven, yet-unprofitable technology.
Surely there's a middle ground where improved APIs can be leveraged by people and LLMs alike while keeping those APIs approachable? Why is it necessary that changing the Python APIs would lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist, but this criticism seems grounded in execution concerns. I'm definitely not saying that they won't mess this up and make the APIs overly complex, I just don't think that's necessarily going to be the case.
Regarding whether AI can/could overcome the hurdle of human understanding: I'm not sure if that's really a hurdle. Let's say in theory, a system was crafted by AI to be interacted with exclusively by AI. Broadly, I assume the outcome of the system would be for people, and it would have some purpose or value. Now my question is: how do we verify it functions? If it is a black box that nobody understands, then we can't verify it at all, and we can't debug it if there's something wrong with it. We circle back to the human understanding issue.
(I'm sorry if my tangent about Altman was taken as a personal affront, as I did not mean it to be that. It just muddied the two interesting topics you brought up.)
Are old datacenter GPUs making more money than they were before? Various sources point to GPUs dying quickly (in 2024, a Google engineer suggested 3 years maximum), and even if they don't, newer chips cause rapid depreciation of older ones.[1]
AWS is still offering g4dn instances that run on NVIDIA T4 GPUs, which were first released in 2018. My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs because it’s not worth validating the numeric codes on new GPUs. People (especially journalists) underestimate how long these cards are economically useful.
$5B is part of a contract; the remaining $20B is just a non-binding statement that doesn't hold the same weight (but somehow commands the same media fanfare).
The slow realization this whole article was AI prompted was such a disappointment to me. I'm fascinated by the subject matter, and it seems like the person who prompted it is aware of specifics at least... But I also don't want to feed myself LLM-induced pollution that might make it into my own writing or thinking patterns.
Samsung was the last major brand in the US to have one, and they made the choice to remove it.