message.is_blatant_spam # true if blatant spam
message.is_spam and not message.is_blatant_spam # true if possibly spam
not message.is_spam # true if not spam
But there's nothing stopping you from adding that middle one as another property, again without breaking compatibility:
@property
def is_possibly_spam(self) -> bool:
    return self.is_spam and not self.is_blatant_spam
The point is, you don't actually have to break compatibility here; you can just define more predicates to add the extra granularity without breaking the existing ones.
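A minimal sketch of the idea (the `Message` class and its constructor here are assumptions for illustration; the original only shows the predicates themselves):

```python
# Hypothetical Message class showing that adding a derived predicate
# leaves the existing ones untouched.
class Message:
    def __init__(self, is_spam: bool, is_blatant_spam: bool):
        self.is_spam = is_spam
        self.is_blatant_spam = is_blatant_spam

    @property
    def is_possibly_spam(self) -> bool:
        # New, derived predicate: spam, but not blatantly so.
        return self.is_spam and not self.is_blatant_spam

m = Message(is_spam=True, is_blatant_spam=False)
print(m.is_spam, m.is_blatant_spam, m.is_possibly_spam)  # True False True
```

Existing callers checking `is_spam` or `is_blatant_spam` see exactly the same values as before; only code that opts into the new property gets the finer granularity.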
Assuming US gallons, $8/US gallon works out as £1.60/litre. That sounds about right for current UK prices, depending on what and where you're buying it. (Yes, fuel is expensive here compared to the US; that's largely down to fuel duty and taxes.)
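The conversion is easy to sanity-check; the exchange rate below is an assumption (it varies daily), so the result only lands near £1.60 rather than exactly on it:

```python
# Rough check of the $/US gallon -> GBP/litre conversion.
usd_per_gallon = 8.0
litres_per_us_gallon = 3.78541   # exact definition of the US gallon
gbp_per_usd = 0.76               # assumed exchange rate

usd_per_litre = usd_per_gallon / litres_per_us_gallon
gbp_per_litre = usd_per_litre * gbp_per_usd
print(round(gbp_per_litre, 2))   # ~1.61 at this assumed rate
```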
A lot of people are now struggling to detect which images are AI generated, and inferring reality from illusions.
To an extent, this was already the case with many other things, including material expressly labelled as fiction. But recall the old line about fooling all of the people some of the time and some of the people all of the time: it is now easier to fool more people all of the time, and to fool all of the people an increasing fraction of the time.
This isn't limited to fake pics of kids, but kids are vulnerable and struggle to defend themselves, and in this context the tools faking them seem likely to increase rates of harm against them.
Is that caption going to be written as text in the image itself? People will just collect/share the image without any caption that was part of the webpage/PDF the image was originally embedded in.
The history of age of consent laws including Pitcairn Island, the observed results of sexualised deepfakes in classrooms by other students, and the observation that according to sexual therapists "fetishisation" is the development of a sexual response and conversion into a requirement over the course of repeated exposure rather than any innate tendency that a person is born with.
I read in a book about a study where they were able to condition people to be aroused by money by pairing it with sexual imagery. But the association faded pretty quickly afterwards.
This doesn't contradict what you are saying, and the study could be like most psychology studies (unreplicated), but it seems the impact is minor... though a minor effect across billions of people could still be terrible for a few of them.
Now, the implications of letting people generate pictures of children... Do I need to say more? Even then, I'm not sure of my opinion on this. No one is directly hurt by the generation of the images, but they "might could maybe possibly" cause the people generating them to act on things in real life.
When I was a teenager I used to make this argument for legalization of drugs. It wasn't the drugs that caused people to steal and murder, it was the human.
Now that I'm older, I can imagine consequences of a few bad apples pointing to AI as the starting point.
Search for widevine decrypt. You’ll find code and forums where at least some L3 (software) keys are publicly shared. For high resolution on some platforms, you need L1 keys, but as far as I understand the decryption process basically stays the same once you have a working key.
You won't find a ton of up-to-date info that would let you do the same - the scene groups hold their methods closely specifically because of this cat-and-mouse game.
I don't think it's unrelated at all. I saw the same picture and just closed the tab right away. Why should I read this article? The whole thing might be written by an LLM.
Your comment reminds me of people complaining about how using emoji in communications/text has become normalized. Generating images with AI is pretty fun and seems like an appropriate thing to do for a personal blog. As in, this is the exact sort of place where it's most appropriate.
It's not like this person was ever going to pay someone to make a cartoon drawing, so nobody lost their livelihood over it. Seems like a harmless visual identifier (one that helps you remember whether you've read the article if you stumble across it again later).
Is it really such a bad thing when people use generative AI for fun or for their hobbies? This isn't the New York Times.
This happened to me too (almost subconsciously, I might add). I'm actually not anti-AI at all, maybe just a bit uninterested in AI-made art, since I don't see much use for it beyond generating fun pictures of Golden Retrievers in silly situations. But this imitation-Ghibli art style is probably one of the least pleasing things to my eye that people love making: it's so round and edgeless, its colors are washed out in a very inoffensive way, and it doesn't even look like the source material.
I wouldn't be so aggrieved by it, I think, if there hadn't been that wave where everyone and their dog was making pictures in that style. Sorry, just a small rant tangentially related to the article, which is fine. :)
This is an overly broad generalisation - there are many managers who do their best to look after those under them first, rather than just focusing on getting higher up.
A lot of people here are criticising Nintendo not showing specific details here, seemingly forgetting a few key points:
A. The announcement is nothing more than a hype video; it obviously isn't intended to be the only marketing tool.
B. On the specifications front, Nintendo never focus on performance, and it's unlikely that will change now; their focus tends to be on games and features.