I think the pressure is just coming from behind the scenes.
The religious right knows many of their views are unpopular so they don't act in the open. They find underhanded ways to force their views onto us. Abortion bans wouldn't survive a simple up-or-down vote in almost any state, yet abortion bans are happening across the country.
The religious right really has their claws into this administration, and the far right has a much larger say in things than it seems like they would based on their proportional representation in the population. Things like gerrymandering and closed primaries don't help.
> It's not a sudden new thing. The financial theory seems to explain all the facts.
Literally all three of the examples you list were the direct result of lobbying from groups like Exodus Cry and Morality in Media. These campaigns had been in the works for years and were well known to people in the industry, who had been sounding the alarm all along.
It's maddening that people not only refuse to listen before the actions come down, but also still refuse to connect the dots even after they happen.
OK, I've checked that claim more carefully and confirmed it to my satisfaction. Comments retracted. I left my links up in the GP comment for context alongside yours.
> The religious right knows many of their views are unpopular so they don't act in the open. They find underhanded ways to force their views onto us. Abortion bans wouldn't survive a simple up-or-down vote in almost any state, yet abortion bans are happening across the country.
They do act out in the open! That's why it's so maddening to see how pervasive the belief is that this isn't a push from right-wing groups. They are extremely open about their goals and about their ideological alignment, and they have been at literally every step in the process.
They're telegraphing every single move in real-time. But for some reason, people just don't really want to believe it.
It turns out, the best way to get away with a heinous agenda is not to hide it, but to be completely open and direct about it. If you tell people exactly what you intend to do and it's horrific enough, they will refuse to believe it's true, because nobody would be that cartoonishly villainous, right?
The sex industry isn't the only area where that principle works, although (like with many "technologies") it was one of the first where it was successfully applied.
That's fine, but even among Android users, nobody buys these removable-battery phones. It's possible there's a disproportionate reservoir of consumers who will only buy an iPhone with a removable battery, but it would surprise me if the desire for a removable battery were strongly correlated with being locked into the Apple ecosystem. If anything, I would expect the propensity to desire removable batteries to be more strongly correlated with Android use.
There are a plethora of reasons to prefer one phone to another, and while removable-battery phones exist, if that's a strict criterion for you the market of available devices is extremely limited. Consumers don't have a real choice here.
I would expect that one of the main reasons people prefer non-removable-battery phones is the engineering tradeoffs inherent in making a phone with a removable battery. They will have strictly less choice on this axis when they no longer have the option to buy a non-removable-battery phone.
I think you are vastly overestimating how much consumers actually value phone thinness. The majority of consumers use phone cases (most modern phones have a camera bump designed specifically to sit better with a case), so I think what customers value most is lighter weight, not a smaller form factor. A replaceable battery does come with a slight compromise on weight, but stopping the endless chase for thinness has several engineering advantages when it comes to ports and cooling.
I don't think your speculation is completely unreasonable, but I just want to point out that consumer preference as revealed by current, actual reality only provides evidence in favor of my side of the argument. It's totally possible that the manufacturers are completely wrong about consumer preference and they are acting against their own interests by making the batteries non-replaceable, and somehow none of the manufacturers noticed this or were able to successfully take advantage of it to gain market share. But, I think that would be a pretty surprising thing if it turned out to be true.
Usually, in consumer electronics, the unencumbered market tends to gravitate toward what people actually want to buy. Totally possible this could be an exception to the rule, but I doubt it.
Such phones exist, for Android. Several companies make highly rugged phones. You can drop a Blackview BV7000 down a concrete staircase, watch it fall into the ocean at the bottom, have lunch, come back, and retrieve your phone from 40" of water, likely completely undamaged.
It's an extreme example, and way too bulky for most people, but the point is: "rugged cellphones" absolutely exist.
Even very young children with very simple thought processes, almost no language capability, little long term planning, and minimal ability to form long-term memory actively deceive people. They will attack other children who take their toys and try to avoid blame through deception. It happens constantly.
Dogs too; dogs will happily pretend they haven't been fed/walked yet to try to get a double dip.
Whether or not LLMs are just "pattern matching" under the hood they're perfectly capable of role play, and sufficient empathy to imagine what their conversation partner is thinking and thus what needs to be said to stimulate a particular course of action.
> Maybe human brains are just pattern matching too.
I don't think there's much of a maybe to that point, given where some neuroscience research seems to be going (or at least the parts I read, relating to free will being illusory).
My sense is that for some time, mainstream secular philosophy has been converging on a hard-determinism viewpoint, though I see the Wikipedia article doesn't really take a stance on its popularity, only laying out the arguments: https://en.wikipedia.org/wiki/Free_will#Hard_determinism
Are you trying to suppose that an LLM is more intelligent than a small child with simple thought processes, almost no language capability, little long-term planning, and minimal ability to form long-term memory? Even with all of those qualifiers, you'd still be wrong. The LLM is predicting what tokens come next, based on a bunch of math operations performed over a huge dataset. That, and only that. That may have more utility than a small child with [qualifiers], but it is not intelligence. There is no intent to deceive.
A small child's cognition is also "just" electrochemical signals propagating through neural tissue according to physical laws!
The "just" is doing all the lifting. You can reductively describe any information processing system in a way that makes it sound like it couldn't possibly produce the outputs it demonstrably produces. "The sun is just hydrogen atoms bumping into each other" is technically accurate and completely useless as an explanation of solar physics.
You are making a point in favor of my argument, not against it. I routinely make the same argument you do against people trying to over-simplify things. LLM hypesters frequently suggest that because brain activity is "just" electrochemical signals, there is no possible difference between an LLM and a human brain. This is, obviously, tremendously idiotic. I do believe it is within the realm of possibility to create machine intelligence; I don't believe in a magic soul or some other element that makes humans inherently special. However, if you do not engage in overt reductionism, the mechanism by which these electrochemical signals are generated is completely and totally different from the signals involved in an LLM's processing. Human programming is substantially more complex, and it is fundamentally absurd to think that it conveniently reduces to being exactly equivalent to the latest fad technology, or to assume that we've solved the secret of programming a brain when the programs we've written perform exactly according to their programming and no further.
Edit: Case in point, a mere 10 minutes later we got someone making that exact argument in a sibling comment to yours! Nature is beautiful.
Yes. I also don't think it is realistic to pretend you understand how frontier LLMs operate because you understand the basic principles of how the simple LLMs worked that weren't very good.
It's even more ridiculous than me pretending I understand how a rocket ship works because I know there is fuel in a tank, it gets lit on fire somehow, and the rocket is aimed with some fins...
The frontier LLMs have the same overall architecture as earlier models. I absolutely understand how they operate. I have worked in a startup wherein we heavily finetuned Deepseek, among other smaller models, running on our own hardware. Both Deepseek's 671b model and a Mistral 7b model operate according to the exact same principles. There is no magic in the process, and there is zero reason to believe that Sonnet or Opus is on some impossible-to-understand architecture that is fundamentally alien to every other LLM's.
Deepseek and Mistral are both considerably behind Opus, and you could not make deepseek or mistral if I gave you a big gpu cluster. You have the weights but you have no idea how they work and you couldn't recreate them.
> I have worked in a startup wherein we heavily finetuned Deepseek, among other smaller models, running on our own hardware.
Are you serious with this? I could go make a lora in a few hours with a gui if I wanted to. That doesn't make me qualified to talk about top secret frontier ai model architecture.
Now you have moved on to the guy who painted his Honda, swapped out some new rims, and put some lights under it. That person is not an automotive engineer.
I'm not talking about a lora, it would be nice if you could refrain from acting like a dipshit.
> and you could not make deepseek or mistral if I gave you a big gpu cluster. You have the weights but you have no idea how they work and you couldn't recreate them.
I personally couldn't, but the team behind that startup as a whole absolutely could. We did attempt training our own models from scratch and made some progress, but the compute cost was too high to seriously pursue. It's not because we were some super special rocket scientists, either. There is a massive body of literature published about LLM architecture already, and you can replicate the results by learning from it. You keep attempting to make this out to be literal fucking magic, but it's just a computer program. I guess it helps you cope with your own complete lack of understanding to pretend that it is magical in nature and can't be understood.
No, it's just obvious that there is a massive race going on with trillions of dollars on the line. No one is going to reveal the details of how they are making these AIs. Any public information that exists about them is way behind SOTA.
I strongly suspect that it is really hard to get these models to converge, though, so I have no idea what your team could theoretically have made, but it certainly would have been well behind SOTA.
My point is if they are changing core elements of the architecture you would have no idea because they wouldn't be telling anyone about it. So thinking you know how Opus 4.6 works just isn't realistic until development slows down and more information comes out about them.
Short term memory is the context window, and it's a relatively short hop from the current state of affairs to an MCP server that gives the model access to a big queryable scratch space where it can note down anything it thinks might be important later. This is similar to how current-gen chatbots take multiple iterations to produce an answer; they're clearly not just producing tokens right out of the gate, but rather using an internal notepad to iteratively work on an answer for you.
Or maybe there's even a medium term scratchpad that is managed automatically, just fed all context as it occurs, and then a parallel process mulls over that content in the background, periodically presenting chunks of it to the foreground thought process when it seems like it could be relevant.
All I'm saying is there are good reasons not to consider current LLMs to be AGI, but "doesn't have long term memory" is not a significant barrier.
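For what it's worth, the scratchpad idea above is easy to sketch. Everything here is made up for illustration (the `Scratchpad` class, the keyword-overlap scoring); a real MCP server would expose `write` and `query` as tools over the protocol and use proper retrieval (embeddings, etc.) instead of word overlap:

```python
# Minimal sketch of a queryable scratch space: the model writes notes
# during a session, and later retrieves the ones that look relevant.
# Keyword overlap is a crude stand-in for a real retrieval backend.

class Scratchpad:
    def __init__(self):
        self.notes = []  # persists across context windows

    def write(self, note: str) -> None:
        self.notes.append(note)

    def query(self, question: str, top_k: int = 3) -> list[str]:
        q_words = set(question.lower().split())
        # rank notes by how many words they share with the question
        scored = sorted(
            self.notes,
            key=lambda n: len(q_words & set(n.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

pad = Scratchpad()
pad.write("user prefers metric units")
pad.write("project deadline is March 3")
pad.write("the API key lives in the vault, not in .env")

print(pad.query("what units does the user prefer?", top_k=1))
```

The "background process mulling over content" variant would just run `query` periodically against the current conversation and inject the top hits into the foreground context.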
Intelligence is about acquiring and utilizing knowledge. Reasoning is about making sense of things. Words are concatenations of letters that form meaning. Inference is tightly coupled with meaning which is coupled with reasoning and thus, intelligence. People are paying for these monthly subscriptions to outsource reasoning, because it works. Half-assedly and with unnerving failure modes, but it works.
What you probably mean is that it is not a mind in the sense that it is not conscious. It won't cringe or be embarrassed like you do, it costs nothing for an LLM to be awkward, it doesn't feel weird, or get bored of you. Its curiosity is a mere autocomplete. But a child will feel all that, and learn all that and be a social animal.
Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
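For what it's worth, that toy rule is easy to pin down concretely. The name `weird_add` and the `a == 1` special case are mine, purely to make the made-up constraint explicit:

```python
# A made-up constraint: under these rules, "1 + x" evaluates to x.
# The point of the example is that the rule was never programmed in
# anywhere; we state it and then deduce its consequences.

def weird_add(a: int, b: int) -> int:
    if a == 1:
        return b      # the invented rule: 1 + x is x
    return a + b      # everything else keeps ordinary addition

assert weird_add(1, 2) == 2   # given
assert weird_add(1, 3) == 3   # given
assert weird_add(1, 4) == 4   # the deduced conclusion
```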
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence.
This is not even wrong.
>Probabilistic prediction is inherently incompatible with deterministic deduction.
And this is just begging the question again.
Probabilistic prediction could very well be how we do deterministic deduction - e.g. if the weights are strong enough and the probability path for those deduction steps is hot enough, that path is followed every time, even if the overall process is probabilistic.
Personally I think not even wrong is the perfect description of this argumentation. Intelligence is extremely scientifically fraught. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being “not even wrong”.
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4.
Human Intelligence is clearly not logic based so I'm not sure why you have such a definition.
>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Good thing LLMs can handle this just fine I guess.
Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past the initial years. You have a class of people that really think they know how intelligence works, but build it that way and it fails completely.
> One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had an LLM try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
> Good thing LLMs can handle this just fine I guess.
LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules.
>They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, nevermind children.
>LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly.
Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?
>They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules.
LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo).
I still have not been convinced otherwise that LLMs are just super fancy (and expensive) curve fitting algorithms.
I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parametric space.
Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon.
>Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon
I see your "flat plane of silicon" and raise you "a mush of tissue, water, fat, and blood". The substrate being a "mere" dumb soul-less material doesn't say much.
And the idea is that what matters is the processing - not the material it happens on, or the particular way it is.
Air molecules hitting a wall and coming back to us at various intervals are also "vastly different" from a "matrix multiplication routine on a flat plane of silicon".
But a matrix multiplication can nonetheless replicate the air-molecules-hitting-wall audio effect of reverberation on 0s and 1s representing the audio. We can even hook the result up to a movable membrane controlled by electricity (what pros call "a speaker") to hear it.
It is therefore annoying when people can't see that the point of the comparison is that an algorithmic model of a physical (or biological, same thing) process can still replicate some of its qualities, even if in much simpler form, in a different domain (0s and 1s in silicon and electric signals vs. the molecules of some material interacting).
Intelligence does not require "chemical and electrical exchanges in an body". Are you attempting to axiomatically claim that only biological beings can be intelligent (in which case, that's not a useful definition for the purposes of this discussion)? If not, then that's a red herring.
There is an element of rudeness to completely ignoring what I've already written and saying "you know [basic principle that was already covered at length], right?". If you want to talk about contributing to the discussion rather than being rude, you could start by offering a reply to the points that are already made rather than making me repeat myself addressing the level 0 thought on the subject.
Repeating yourself doesn't make you right, just repetitive. Ignoring refutations you don't like doesn't make them wrong. Observing that something has already been refuted, in an effort to avoid further repetition, is not in itself inherently rude.
Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology. For any given X, "AI can't do X yet" is a statement with an expiration date on it, and I wouldn't bet on that expiration date being too far in the future. This is a problem.
It is, in particular, difficult at this point to construct a meaningful definition of intelligence that simultaneously includes all humans and excludes all AIs. Many motivated-reasoning / rationalization attempts to construct a definition that excludes the highest-end AIs often exclude some humans. (By "motivated-reasoning / rationalization", I mean that such attempts start by writing "and therefore AIs can't possibly be intelligent" at the bottom, and work backwards from there to faux-rationalize what they've already decided must be true.)
> Repeating yourself doesn't make you right, just repetitive.
Good thing I didn't make that claim!
> Ignoring refutations you don't like doesn't make them wrong.
They didn't make a refutation of my points. They asserted a basic principle that I agreed with, but assumed that acceptance of that principle leads to their preferred conclusion. They made this assumption without providing any reasoning whatsoever for why that principle would lead to that conclusion, whereas I already provided an entire paragraph of reasoning for why I believe the principle leads to a different conclusion. A refutation would have to start from there, refuting the points I actually made. Without that you cannot call it a refutation. It is just gainsaying.
> Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology.
And here we go AGAIN! I already agree with this point!!!!!!!!!!!!!!! Please, for the love of god, read the words I have written. I think machine intelligence is possible. We are in agreement. Being in agreement that machine intelligence is possible does not automatically lead to the conclusion that the programs that make up LLMs are machine intelligence, any more than a "Hello World" program is intelligence. This is indeed, very repetitive.
You have given no argument for why an LLM cannot be intelligent. Not even that current models are not; you seem to be claiming that they cannot be.
If you are prepared to accept that intelligence doesn't require biology, then what definition do you want to use that simultaneously excludes all high-end AI and includes all humans?
By way of example, Conway's Game of Life uses very simple rules, and is Turing-complete. Thus, the Game of Life could run a (very slow) complete simulation of a brain. Similarly, so could the architecture of an LLM. There is no fundamental limitation there.
If you want to argue with that definition of intelligence, or argue that LLMs do meet that definition of intelligence, by all means, go ahead[1]! I would have been interested to discuss that. Instead I have to repeat myself over and over restating points I already made because people aren't even reading them.
> Not even that current models are not; you seem to be claiming that they cannot be.
As I have now stated something like three or four times in this thread, my position is that machine intelligence is possible but that LLMs are not an example of it. Perhaps you would know what position you were arguing against if you had fully read my arguments before responding.
[1] I won't be responding any further at this point, though, so you should probably not bother. My patience for people responding without reading has worn thin, and going so far as to assert I have not given an argument for the very first thing I made an argument for is quite enough for me to log off.
> Probabilistic prediction is inherently incompatible with deterministic deduction.
Human brains run on probabilistic processes. If you want to make a definition of intelligence that excludes humans, that's not going to be a very useful definition for the purposes of reasoning or discourse.
> What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Have you tried this particular test, on any recent LLM? Because they have no problem handling that, and much more complex problems than that. You're going to need a more sophisticated test if you want to distinguish humans and current AI.
I'm not suggesting that we have "solved" intelligence; I am suggesting that there is no inherent property of an LLM that makes them incapable of intelligence.
Are you ever concerned about the consequences of what you are making? No one really knows how this will play out and the odds of this leading to disaster are significant.
I just don't understand people working on improving ai. It just isn't worth the risk.
>I just don't understand people working on improving ai. It just isn't worth the risk.
A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.
Of course, I think about this at least once a week maybe more often. I think that the technology overall will be a great net benefit to humanity or I wouldn't touch it.
I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator right?”), median prosperity (the only truly functional distribution system we have figured out is labor), and loss of agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.
“It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.
I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.
I really think the doom consensus is largely an online phenomenon. We're in a tense period like the early 80s, and that would be true without AI in the mix, but I think it's a matter of perspective. We're certainly still way ahead of the 1910s and the 1940s, for instance (it's on us, btw, to make sure we don't fall back to that in time).
Every generation has its strains, and the internet just amplifies them because outrage is currency. Those strains are things you only start to notice as you get older, so they seem novel when, in the scheme of humanity, they're basically standard.
Fwiw, if the market had actually priced it in, it would be in freefall, since the market would shortly be irrelevant. We are due for a correction soon, though.
Internet discourse is a facsimile of real life and often not how real life operates in my experience.
So I see all the discourse around extremes on either end, and based on lived experience and working in the field, I think there's a much neater middle ground we'll ultimately arrive at, thanks to people working very hard to land the plane, so to speak.
I answered the more important question of a seemingly lost youngin and how to deal with the stress of inheriting a world in a bit of turmoil.
That said, we already see it trivially advancing math and science research as an assistive tool, software development, and more. Extrapolate that out a few more generations and it helps us unlock a whole bunch of things on the skill tree of life, so to speak.
Yes, doomerism is a symptom of severe doomscrolling addiction. All the people who talk like this spend all day on X. They sound like delusional drug addicts TBH.
The only thing seriously reducing trust in elections is anti-democratic politicians who will ALWAYS find a convenient reason to claim the election is rigged, and many of their followers will believe and propagate that lie to create distrust in the election.
There is really nothing we can do to satisfy these people except create some kind of structure they demand which will somehow be made to heavily lean in their favor. That is what will satisfy them. Nothing else will.
idk, if I were in control of a country in the EU I would realize, unfortunately for pretty much everyone on the planet, that we have made a drastic miscalculation by relying on the US so heavily for defense.
However, that is not something that can be reversed meaningfully in less than a decade. So for now, I would play the long game like Germany while working to get the EU to build up a military force large enough to significantly reduce our dependence on the US.
It's not as if the US hasn't repeatedly requested that European nations invest in their defense for the past few decades.
Looking at it dispassionately as a European living in the US, if you wanted to foment the sort of mistrust many Americans have of Europe, I don't think you could have created a more invidious policy.
Even though European defence investment was lacklustre - don't forget that, between the lines, those requests meant "buy US defence tech and stay dependent on the US in time of war".
Countries that have actually invested have the same problems - dependence on US tech and its unreliable leadership. Those who had stockpiles of American weapons (or even US components in mostly domestically made weapons) still need to coordinate with the US (I can't find it at the moment, but I definitely read about this: Sweden couldn't send weapons due to American components inside them).
France is mostly (totally??) independent of America in the matter of defence - and Americans hate the French for that. America really hated de Gaulle's wish for the military and political independence of Europe from America. But he was unsuccessful in his vision, essentially cementing this status quo: "Americans will keep military bases in European backyards, Europeans will be tame good boys, and Americans will provide security with a pinky promise" - the Truman Doctrine, I believe.
(West) Germany's extreme pacifism is also thanks to the USA's efforts not to repeat the Versailles treaty's failures and the rise of a new Hitler-like figure.
> if you wanted to foment the sort of mistrust many Americans have of Europe, I don't think you could have created a more invidious policy
Sounds like something from Project 2025 propaganda preparations.
I will remind you that only the USA has ever triggered NATO Article 5, and the whole of Europe came to help in its now infamous "war on terror" - even countries that weren't in NATO at the time (though obviously they were aligned and wanted to be there) - and lost lives there.
I would maybe have believed this statement if the current administration had gone 110% into isolationism, since their election slogan was "America First". At the time it was phrased as: they won't help Ukraine, NATO, or any other organisation or action happening outside the USA. Now it means: the USA will take anything by force, whether you like it or not.
Also, you want to eat your cake and have it too. You still want tens of thousands of soldiers and your bases in the EU, you want EU countries to invest in your defence sector (but pwease pwease don't get too independent, otherwise Uncle Sam will get angwy), yet you want to go to war against NATO countries, because Amerika stronk. Let's also not forget the very close cooperation and the access to local military bases that European counterparts give to Americans.
Many NATO countries in Europe have been steadily investing in defence for 10+ years (mostly since the 2014 Crimea annexation), and many more woke up after ruzzia's 2022 full-scale war on Ukraine.
I want the European part of NATO to be stronger and more decisive, and action is happening, but Europe still has democracy, not some weird authoritarian kakistocracy with an oligarchical flavour.
So let's not pretend that Europe should pay for the USA's wish for total hegemony, worldwide policing, and a global reserve currency. Europeans have lost their lives in the USA's wars and enabled this US vision of global hegemony for the last 70+ years.
These rambles prove to me yet again what an information bubble the USA lives in, one dictated by geriatric 80-year-olds still living 20+ years in the past inside their heads, and transmitted by the ignorant talking heads of the 24-hour news cycle.
It can be reversed in a year. In 1941 the US increased its production of tanks by 7x. In 1942 it increased production again by 4x. This idea that building industry takes decades needs to die a painful death.
There's a certain large European country with plenty of resources that is pretty famous for scaling its tank production just a couple years before the US did.
It is a real problem that AIs will basically confirm that most inquiries are true. Just asking a leading question often results in the AI confirming it or stretching reality to make the answer come out true.
If I ask if a drug has a specific side effect and the answer is no it should say no. Not try to find a way to say yes that isn't really backed by evidence.
People don't realize that when they ask a leading question that is really specific, in a way where no one has a real answer, the AI will try to find a way to agree, and this is going to destroy people's lives. Honestly, it already has.
> Almost all games these days are basically like a work in progress, so if you pirate them then the game doesn't stay up to date.
Which, as a mod author and consumer, isn't always a bad thing. More than once, I've had to stop just enjoying a game in order to patch my published mods because of some update that was automatically pushed out, which people have to accept in order to even boot a single-player game. Why? I don't know, but it's really annoying sometimes.
Besides, nowadays cracking groups release smaller patches too, so while you might not get the update the same hour it was published on Steam, usually within a week or two the same group that uploaded the original release has released another patch.
When you start a subscription, you're agreeing to pay X amount every Y period of time; you're not starting a new agreement every single Y period of time.
They can cancel the prior tier or bump up the price on renewal though. This is the problem with subscriptions, you become complacent and accept incremental changes until you finally notice that you’re being rinsed.
And actually some subscriptions can include unilateral price increases in the contract (a subscription is a contract) with early termination fees. It just isn’t commonly done because word gets around and you will lose business. You typically only see this in predatory industries where there are few alternatives and the service is necessary, like local waste management.
If the contract is unfair enough you can usually escape it in court or arbitration, but nobody wants to go through that.
No, that doesn't make sense at all. You've paid for consistent terms for that Y period of time. Not cancelling the subscription when it's up for renewal is an implicit agreement to any new terms. And I'm sure if you'd read those terms in the first place, you'd come to the same understanding.
(And it's not even that: the X you're charged is subject to change upon renewal!)
I'm not arguing that this is a good or bad thing, just pointing out the reality of every single subscription agreement I've signed up for online.
They can cancel the subscription if you don't agree to the new terms after they've fulfilled their contract. But they can't just change the terms of the agreement after it was made.
But doing so would mean risking the loss of customers who were just too lazy to cancel, so most businesses don't like it. (Spotify did cancel their old contracts, though, for people who hadn't agreed to the recent price hike.)
I think your question is reasonable, but no, I do not think a company gets to promote a service as having no ads as part of the sell, and then put ads in by default.
Not the person you're replying to, but it just feels like rent-seeking. Amazon is already a gigantic corporation, pretty much everyone spends lots and lots of money on Amazon, it just felt like a way to try and squeeze more money out of their existing customers.
ETA:
I mean, I'm sure there is some exception to this, but generally speaking everyone hates ads. Part of the reason that the whole "cable cutting" thing happened was because everyone hated paying a lot of money to some cable company just to be bombarded with advertisements. At least that's a big reason as to why I did it.
Now all these media companies realized that they can start shoving ads at us again and people will keep paying.
Obviously I'm not entitled to having media at a specific price indefinitely, but I'm perfectly allowed to not like it when companies engage in rent-seeking bullshit.
It wouldn't bother me as much if you could still buy media, but as far as I can tell most TV shows don't get Blu-ray releases anymore. The media companies realized that it's more profitable for them to make you pay for the same media forever instead of a lump cost, I guess preferably with you watching corporate brainwashing to buy products.
I suspect once the heat on this settles down, every streaming service is going to start forcing ads on us at all times, and then the only way to fight back on this will be bittorrent.
Or just stop watching. I seem to be out of tune with what people want in a TV show nowadays, I don't find much enjoyable. I accept there was never that much, but given how much content is produced now I would have expected more in my sweet spot.